1
submitted 2 weeks ago by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

If you long-tap an image that someone sent, options are:

  • share with…
  • copy original URL
  • delete image

The URL offered is not a local path; it’s the network URL for fetching the image all over again. When you send outbound images, Snikket stores them in one place, but inbound images land somewhere else entirely. I found the spot once after a lengthy hunt but did not take notes, and I cannot find it now. It’s well buried. What a piece of shit.
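
A workaround sketch for hunting the location down again, assuming a shell on the device (e.g. Termux) and that Snikket writes inbound media somewhere under shared storage: have someone send you an image, then list everything modified in the last few minutes:

$ find /storage/emulated/0 -type f -mmin -5 2>/dev/null   # files touched in the last 5 minutes

If nothing turns up, the files are presumably in the app’s private storage under /data/data/, which is unreachable without root.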

2
submitted 3 weeks ago* (last edited 3 weeks ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

Those who condemn centralised social media naturally block these nodes:

  • #LemmyWorld
  • #shItjustWorks
  • #LemmyCA
  • #programmingDev
  • #LemmyOne
  • #LemmEE
  • #LemmyZip

The global timeline is the landing page on Mbin nodes. It’s swamped with posts from communities hosted in the above shitty centralised nodes, which break interoperability for all demographics that Cloudflare Inc. marginalises.

Mbin gives users a way to block specific magazines (Lemmy communities), but no way to block a whole node. So users face the very tedious task of blocking hundreds of magazines, which is effectively a game of whack-a-mole: whenever someone else on the Mbin node subscribes to a community on a CF/centralised node, the global timeline gets polluted with exclusive content and potentially many other users have to go find the block button.

Secondary problem (unblocking):
My blocked list now contains hundreds of magazines spanning several pages. What if LemmEE decides one day to join the decentralised free world? I would likely want to stop blocking all communities on that node. But unblocking is also very tedious, because you have to visit every blocked magazine and click “unblock”.

the fix


① Nix the global timeline. Lemmy also lacks whole-node blocking at the user level, but Lemmy avoids this problem by not even having a global timeline. Logged-in users see a timeline that’s populated only with communities they subscribe to.

«OR»

② Enable users to specify a list of nodes they want filtered out of their view of the global timeline.

3
submitted 3 weeks ago* (last edited 3 weeks ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

While composing this post, the Lemmy web client went to lunch. This is classic Lemmy behaviour when it hits a problem: no error, just an infinite spinner. After experimentation, it turns out it tries to be smart but fails when handling URLs written with the gemini:// scheme.

(edit) It’s probably trying to visit the link for the convenience feature of pre-filling the title. If it does not recognise the scheme, it should just accept the URL without trying to be fancy. It likely screws up on other schemes as well, like dict, ftp, news, etc.

The workaround is to embed the #Gemini link in the body of the post.
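
For what it’s worth, “just accept it” only needs a syntax check, not a fetch. Per RFC 3986 a scheme is a letter followed by letters, digits, “+”, “-” or “.”. A sketch of the check (the URL is made up):

$ url='gemini://example.org/some/page'
$ printf '%s\n' "$url" | grep -Eq '^[A-Za-z][A-Za-z0-9+.-]*:' && echo 'valid URI; accept as-is, skip the title pre-fill'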

4
submitted 3 weeks ago by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

I think the stock Lemmy client stops you from closing a browser tab while a message editor is open, to protect you from accidental data loss.

Mbin does not.

5
submitted 3 weeks ago* (last edited 3 weeks ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

The vast majority of the fediverse (particularly the threadiverse) is populated by people who have no sense of infosec or privacy, who run stock browsers over clearnet (e.g. #LemmyWorld users, the AOL users of today). They have a different reality than streetwise people. They post a link to a page that renders fine in the world they see, totally oblivious to the fact that they are sending the rest of the fediverse into an exclusive walled garden.

There is no practical way for streetwise audiences to signal “this article is exclusive/shitty/paywalled/etc”. Voting is too blunt an instrument and does not convey the problem. Writing a comment like “this article is unreachable/discriminatory because it is hosted in a shitty place” is high effort and overly verbose.

the fix


The status quo:

  • (👍/👎) ← no defined meaning; different people vote on their own invented basis

We need refined categorised voting. e.g.

  • linked content is interesting and civil (👍/👎)
  • body content is interesting and civil (👍/👎)
  • linked article is reachable & inclusive (👎)¹
  • linked content is garbage-free (no ads, popups, CAPTCHA, cookie walls, etc) (👍/👎)

¹ A thumbs up is not useful for inclusiveness because every webpage is reachable to someone, likely even a majority. Only the count of people excluded is worth having: a high number of people being able to reach a site in no way justifies the marginalisation of others, so it should just be a raw count of who is excluded. A server can work out from the other three voting categories the extent to which others can access a page.

From there, how the votes are used can evolve. A client can be configured to not show an egalitarian user exclusive articles. An author at least becomes aware that a site is not good from a digital rights standpoint, and can dig further if they want.

update


The fix needs to expand. We need a mechanism for people to suggest alternative replacement links, and those links should also be voted on. When a replacement link is more favorable than the original link, it should float to the top and become the most likely link for people to visit.

6
submitted 4 weeks ago* (last edited 4 weeks ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

Some will regard this as an enhancement request. To each his own, but IMO *grep has always had a huge deficiency when processing natural languages, due to line breaks. pdfgrep especially, because most PDF docs carry a payload of natural language.

If I need to search for “the.orange.menace” (dots are 1-char wildcards), of course I want to be told of cases like this:

A court whereby no one is above the law found the orange
menace guilty on 34 counts of fraud.

When processing a natural language, a sentence terminator is almost always a more sensible boundary. There’s probably no command older than grep that’s still in use today, so it’s bizarre that it has not evolved much. In the 90s there was a LexisNexis search tool which was far superior for natural language queries. E.g. (IIRC):

  • foo w/s bar :: matches if “foo” appears within the same sentence as “bar”
  • foo w/4 bar :: matches if “foo” appears within four words of “bar”
  • foo pre/5 bar :: matches if “foo” appears before “bar”, within five words
  • foo w/p bar :: matches if “foo” appears within the same paragraph as “bar”

Newlines as record separators are probably sensible for everything other than natural language. But for natural language, grep is a hack.
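
That said, GNU grep can already be coaxed into crossing line breaks: -z treats the input as NUL-delimited records (typically the whole file), and with -P the \s class also matches newlines. For PDFs, flattening the pdftotext output does the same job. A sketch, with article.txt and doc.pdf as stand-ins:

$ grep -Pzo 'the\s+orange\s+menace' article.txt
$ pdftotext doc.pdf - | tr '\n' ' ' | grep -Eio 'the.orange.menace'

Still a far cry from w/s and w/p, since neither invocation knows anything about sentence or paragraph boundaries.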

7
submitted 1 month ago* (last edited 1 month ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

I cannot believe how stupid Chromium is, considering it’s the king of browsers from a US tech giant. It’s another bug that should be embarrassing for Google.

If you visit a PDF, it fetches the PDF and launches pdf.js as expected. If you then use the download button within pdf.js, you would expect it to simply copy the already-fetched PDF from the cache to the download folder. But no: the stupid thing goes out on the WAN and redownloads the whole document from the beginning.

I always suspected this, but it became obvious when I recently fetched a 20 MB PDF from a slow server. It struggled for a while to get the whole thing just for viewing. Then, after clicking download within pdf.js, it was crawling along again from 1% progress.

What a stupid waste of bandwidth, energy and time.
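
A workaround sketch that pays the WAN cost only once: fetch with curl, then view the local copy (the binary name chromium and the URL variable are assumptions; the name varies by distro):

$ curl -L -o doc.pdf "$PDF_URL"   # the only WAN fetch
$ chromium doc.pdf                # opens the local copy; no re-download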

8
submitted 1 month ago* (last edited 1 month ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

cross-posted from: https://sopuli.xyz/post/12858874

When an image is posted by someone on a Cloudflared instance like the following:

  • #LemmyWorld
  • #ShitJustworks
  • #LemmyCA
  • #LemmyEE
  • #LemmyZip
  • #LemmyOne

the image is inaccessible to all demographics of people whom Cloudflare discriminates against, because images are not mirrored to federated nodes.

We expect corporations not to give a shit about marginalising people who are not profitable enough to care about. But when naive asshole users outnumber progressive egalitarians, it highlights a problem with the fedi, which still lacks the tooling needed to keep oppression at bay.

The six nodes listed above effectively host the AOL users of our time: people lacking the sophistication to detect and grasp eroded digital rights, with a degree of blindness toward, and lack of concern for, centralised corporate control.

Suggestions needed for Lemmy nodes that are defederated from the above listed six.

9
submitted 1 month ago* (last edited 1 month ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

Different apps expect passwords in the .netrc file to be quoted in different ways. E.g. fetchmail expects passwords to be quoted bash-style (quotes needed if there are special chars, and literal quotes must themselves be escaped), while cURL gives no special meaning to quotes and takes them literally if present.

Who to blame for this is a bit unclear, but I believe the original purpose of .netrc was for the standard CLI FTP program, so in principle everything should be aligned on that, IMO.

Some apps will complain if they spot a .netrc syntax they don’t like, as if they get to decide that -- even if the line it complains about is not the record the app is looking for. OTOH, it’s useful to know what an app accepts and rejects.
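
For illustration (the host and password are made up), here is one record read two ways:

machine mail.example.com login alice password "pa$$word"

If the quoting behaviours described above hold, fetchmail logs in with pa$$word while curl treats the quotes as literal characters of the password, so the same file cannot serve both. The only safe intersection is a password that needs no quoting at all.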

What a mess.

10
submitted 1 month ago by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

Updating my browser apparently caused extensions to be updated as well. Now uMatrix 1.1.2 is installed. The config box is very small compared to the available browser window area: you have to scroll horizontally to reach the columns on the right, and the name of the 3rd-party entity scrolls out of view. This makes altering the settings inconvenient and cumbersome.

I suppose this change was motivated by complaints that the config window was too large on small screens:

https://github.com/gorhill/uMatrix/issues/483
https://github.com/gorhill/uMatrix/issues/683

11
submitted 1 month ago* (last edited 1 month ago) by activistPnk@slrpnk.net to c/bugs@sopuli.xyz
  • broken: Ungoogled Chromium ver. 90.0.4430.212-1.sid1
  • works: Ungoogled Chromium ver. 112.0.5615.165-1

If anyone has problems getting Ungoogled Chromium (and likely Google’s Chromium as well) to work with Lemmy, note the versions above. The Lemmy web client is a dysfunctional disaster in the old version, but whatever the problem was, it’s fixed in recent versions.

12
submitted 2 months ago* (last edited 2 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

I installed #neonmodem by simply grabbing the tarball, which expands files directly into the $CWD instead of nesting them in a folder named after the app (a classic tarbomb). Not a big deal, but it gave a slight hint that this project might have quality issues.
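
The usual defence against that kind of tarbomb, with the tarball name assumed:

$ mkdir neonmodem && tar -xf neonmodem.tar.gz -C neonmodem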

This command executes just fine:

$ torsocks neonmodem connect --type lemmy --url https://sopuli.xyz

It’s irritating that it does not inform the user where the data is stored; that’s also undocumented. You have to guess how to use it, and it’s misleading: I think the connect command does not actually result in a connection being made, it apparently just stores the login creds.

Simply running it crashes instantly:

$ torsocks neonmodem
  panic: Error(s) loading system(s)

  goroutine 1 [running]:
  github.com/mrusme/neonmodem/cmd.glob..func1(0x1771140?, {0xe973eb?, 0x0?, 0x0?})
          /home/runner/work/neonmodem/neonmodem/cmd/root.go:128 +0x268
  github.com/spf13/cobra.(*Command).execute(0x1771140, {0xc00008c1f0, 0x0, 0x0})
          /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:944 +0x847
  github.com/spf13/cobra.(*Command).ExecuteC(0x1771140)
          /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:1068 +0x3bd
  github.com/spf13/cobra.(*Command).Execute(...)
          /home/runner/go/pkg/mod/github.com/spf13/cobra@v1.7.0/command.go:992
  github.com/mrusme/neonmodem/cmd.Execute(0xc0000061a0?)
          /home/runner/work/neonmodem/neonmodem/cmd/root.go:141 +0x3e
  main.main()
          /home/runner/work/neonmodem/neonmodem/neonmodem.go:13 +0x25

13
submitted 2 months ago* (last edited 2 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

The 112.be website drops all Tor traffic, which in itself is a shit show. No one should be excluded from access to emergency app info.

So this drives pro-privacy folks to visit http://web.archive.org/web/112.be/ instead, but that just gets trapped in an endless redirection loop.

Workaround: appending “en” breaks the loop. But that only works in this particular case; there are many redirection loops on archive.org, and 112.be is just one example.
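
The loop is easy to observe from the command line; with -I -L, curl prints the headers of every hop, so the repeating Location targets stand out:

$ torsocks curl -sIL --max-redirs 10 'http://web.archive.org/web/112.be/' | grep -i '^location:'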

Why posted here: archive.org has their own bug tracker, but if you create an account on archive.org they will arbitrarily delete the account without notice or reason. I am not going to create a new account every time there is a new archive.org bug to report.

14
submitted 2 months ago* (last edited 2 months ago) by coffeeClean@infosec.pub to c/bugs@sopuli.xyz

The cross-post mechanism has a limitation whereby you cannot simply enter a precise community to post to; users are forced to search and select. When searching for “android” on infosec.pub within the cross-post page, the list of possible communities is totally clusterfucked with shitty centralized Cloudflare instances (lemmy world, sh itjust works, lemm ee, programming dev, etc). The list of these junk instances is so long that !android@hilariouschaos.com does not make it onto the list.

The workaround is of course to just create a new post with the same contents. And that is what I will do.

There are multiple bugs here:
① First of all, when a list of communities is given in this context, the centralized instances should be listed last (at best) because they are antithetical to fedi philosophy.
② Subscribed communities should be listed first, at the top.
③ Users should always be able to name a community in its full form, e.g.:

  • !android@hilariouschaos.com
  • hilariouschaos.com/android

④ Users should be able to name just the instance (e.g. hilariouschaos.com) and the search should populate with subscribed communities therein.

15
submitted 2 months ago by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

Tedious to use. No way to import a list of URLs to download. Must enter files one by one by hand.

No control over when it downloads: it starts immediately whenever there is an internet connection. This can be costly for people on measured-rate internet connections. Stop and Go buttons are needed, and it should start in a stopped state.

When adding a new file to the list, the previous file shows a bogus “error” status.

Error messages are printed simply as “Error”. No information.

There is an embedded browser. What for?

When files are already present in the download directory because another app put them there, GigaGet lists those files at “100%”. How does GigaGet know that files another app put there are complete, when GigaGet does not even have a URL for them (thus no way to check the content-length)?

16
submitted 2 months ago by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

Navi is an app in F-Droid to manage downloads. It’s really tedious to use because there is no way to import a list of URLs: you either have to tap out each URL one at a time, or do a lot of copy-paste from a text file. Then it forces you to choose a filename for each download -- it does not default to the name of the source file.

bug 1


For a lot of files it gives:

Error: java.security.cert.CertPathValidatorException: Trust anchor for certification path not found.

The /details/ page for the broken download neglects to give the error message, much less what the error means.
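
That Java error typically means the server sent an incomplete certificate chain (a missing intermediate) or uses a CA the device does not trust; the app should say so. A way to inspect what a server actually sends (example.org stands in for a failing host):

$ openssl s_client -connect example.org:443 -servername example.org </dev/null 2>/dev/null | openssl x509 -noout -subject -issuer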

bug 2


Broken downloads are listed under a tab named “completed”.

bug 3


Every failed fetch generates notification clutter that cannot be cleaned up. I have a dozen or so notifications of failed downloads. Tapping the notification results in no action and the notification is never cleared.

bug 4


With autostart and auto connect both disabled, Navi takes the liberty of making download attempts as soon as there is an internet connection.

bug 5?


A web browser is apparently built-in. Does it make sense to embed a web browser inside a download manager?

17
submitted 3 months ago by activistPnk@slrpnk.net to c/bugs@sopuli.xyz

Images can be fully embedded inline directly in the HTML. Tor Browser displays them unconditionally, regardless of the permissions.default.image setting, which if set to “2” indicates images should not be loaded.

An example is demonstrated by the privacy-respecting search service called “dogs”:

If you search for a specific object like “sweet peppers”, embedded images appear in the results. This feature could easily be abused by advertisers. I’m surprised that it’s currently relatively rare.

It’s perhaps impossible to prevent embedded images from being fetched, because the image bytes are part of the HTML document itself and the standard does not announce the length of the base64 blob ahead of it. Thus there is no way for the browser to know at which position in the file to resume fetching.
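
To make the mechanism concrete, this sketch builds such a page; the image bytes travel inside the HTML response itself, so there is no separate request to block (pixel.png is a stand-in):

$ printf '<img src="data:image/png;base64,%s">' "$(base64 -w0 pixel.png)" > page.html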

Nonetheless, the browser does not know /why/ the user disables images. Some people do it because they are on measured-rate connections and need to keep their consumption low, like myself, and we are fucked in this case. But some people disable images just to keep garbage off the screen. In that case, the browser can (and should) respect their choice, whether the images are embedded or not.

There should really be two config booleans:

  • fetch non-local images
  • render images that have been obtained

The first controls whether the browser makes requests for images over the WAN. The second would just control whether the images are displayed.

18
submitted 3 months ago by activistPnk@slrpnk.net to c/bugs@sopuli.xyz

I was trying to work out how I managed to waste so much of my bandwidth allowance in a short time. With a Lemmy profile page loaded, I hit control-r to refresh while looking at the bandwidth meter.

Over 1 meg! wtf. I have images disabled in my browser, so it should only be fetching a small amount of compressed text. For comparison, loading ~25 IRC channels with 200-line buffers is 0.1 MB.

So what’s going on? Is Lemmy transferring thumbnails even though images are disabled in the browser config?
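
One way to check what the text alone costs, bypassing the browser entirely (the profile URL is a stand-in); requesting gzip without decompressing counts the on-the-wire bytes:

$ curl -s -H 'Accept-Encoding: gzip' 'https://sopuli.xyz/u/example' | wc -c

If that number alone approaches a megabyte, the HTML itself is bloated; if not, the meter is being eaten by something else (thumbnails, API calls).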

19
submitted 3 months ago* (last edited 3 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

I simply wanted to submit a bug report. This is so fucked up. The process so far:

① solved a CAPTCHA just to reach a registration form (I have image loading disabled, but the graphical CAPTCHA puzzle displayed anyway; wtf Firefox?)
② disposable email address rejected (so Bitbucket can protect themselves from spam but other people cannot? #hypocrisy)
③ tried a forwarding acct instead of a disposable one (accepted)
④ another CAPTCHA, this time Google reCAPTCHA. I never solve these because it violates so many digital rights principles and I boycott Google, but I made an exception for this experiment. The puzzle was empty because I disable images (can’t afford the bandwidth). Exceptionally, I enabled images and solved the piece of shit. Could not work out whether a furry cylindrical blob sitting on a sofa was a “hat”, but managed to solve enough puzzles.
⑤ got the green checkmark ✓
⑥ clicked “sign up”
⑦ “We are having trouble verifying reCAPTCHA for this request. Please try again. If the problem persists, try another browser/device or reach out to Atlassian Support.”

Are you fucking kidding me?! Google probably profited from my CAPTCHA work before showing me the door. That should be illegal. Really folks, a backlash of some kind is needed. I have my vision and still couldn’t get registered (from Tor). Imagine a blind Tor user.. or even a blind clearnet user going through this shit. I don’t think the first CAPTCHA (the one guarding the registration form) even had an audio option.

Shame on #Bitbucket!

⑧ attempted to e-mail the code author:

status=bounced (host $authors_own_mx_svr said: 550-host $my_ip is listed at combined.mail.abusix.zone (127.0.0.11); 550 see https://lookup.abusix.com/search?q=$my_ip (in reply to RCPT TO command))

#A11y #enshitification

20
submitted 3 months ago* (last edited 3 months ago) by coffeeClean@infosec.pub to c/bugs@sopuli.xyz

There used to be no problem archiving a Mastodon thread in the #internetArchive #waybackMachine. Now, on recent threads, it just shows a blank page:

https://web.archive.org/web/20240318210031/https://mastodon.social/@lrvick/112079059323905912

Or is it my browser? Does that page have content for others?

21
submitted 3 months ago* (last edited 3 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

If you’re logged out and reading a thread, you should be able to log in in another tab and then do a forced refresh (control-shift-R), and it should show the thread with logged-in controls. For some reason the cookie isn’t being passed, or (perhaps more likely) the cookie is insufficient because Lemmy uses some mechanism other than cookies.

Scenario 2:

You’re logged in and reading threads in multiple tabs. Then one tab spontaneously becomes logged out after you take some action. Sometimes a hard refresh (control-shift-R) recovers, sometimes not; it’s unpredictable. But note that the logged-in state is preserved in other tabs. So if several hard refreshes fail, I have to close the tab and use another tab to navigate to where I was. And it seems navigation is important: if I just copy the URL for where I was (same as opening a new tab), it’s more likely to fail.

In any case, there are no absolutes; the behaviour is chaotic and could be related to this security bug.

22
submitted 3 months ago* (last edited 3 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

People on a tight budget are limited to capped internet connections. So we disable images in our browser settings. Some environmentalists do the same to avoid energy waste. If we need to download a web-served file (image, PDF, or anything potentially large), we run this command:

$ curl -LI "$URL"

The HTTP headers should contain a content-length field. This enables us to know, before we fetch something, whether we can afford it (like seeing a price tag before buying something).

#Cloudflare has taken over at least ~20% of the web. It fucks us over in terms of digital rights in so many ways. And apparently it also makes the web less usable for poor people, in two ways:

  • Cloudflare withholds content-length information (see the check below)
  • Cloudflare blocks people behind CGNAT, which is commonly used in impoverished communities due to the limited number of IPv4 addresses.
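
A quick way to tell which case applies (Transfer-Encoding: chunked legitimately omits content-length) and whether Cloudflare is in the path (its CF-Ray header):

$ curl -sLI "$URL" | grep -iE '^(content-length|transfer-encoding|cf-ray):'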

23
submitted 3 months ago* (last edited 3 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

The problem:

  1. !cashless_society@nano.garden is created
  2. node A users subscribe and post
  3. node B users subscribe and post
  4. nano.garden disappears forever
  5. users on nodes A and B have no idea; they carry on posting to their local mirror of cashless_society.
  6. node C never federated with nano.garden before it was unplugged

So there are actually 3 bugs AFAICT:

  1. Transparency: users on nodes A and B get no indication that they are interacting with a ghost community.
  2. Broken comms: posts to the ghost community from node A are never sync’d, thus never seen by node B users; and vice-versa.
  3. Users on node C have no way to join the conversation because the search function only finds non-ghost communities.

The fix for ① is probably as simple as adding a field to the sidebar showing the timestamp of the last sync operation.

w.r.t. ②: presumably A and B do not connect directly, because each was federated only through the ghost node. So there is no way for node A posts to reach node B. Correct? Lemmy should be designed to accommodate a node disappearing at any time with no disruption to other nodes. Nodes A and B should synchronize directly.

w.r.t. ③, node C should still be able to join the conversation between A and B in the ghost community.

(original thread)

24
submitted 3 months ago* (last edited 3 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

There are “announcement” communities where all posts are treated as announcements. This all-or-nothing blunt choice at the time of community creation could be more flexible. In principle, a community founder should have four choices:

  • all posts are announcements (only mods can post)
  • all posts are discussions
  • (new) all posts are announcements (anyone can post)
  • (new) authors choose at posting time whether their post is an announcement or a discussion

This would be particularly useful if an author cross-posts to multiple communities but prefers not to split the discussion, in which case the carbon copies could use the announcement option (or vice versa).

There is a side-effect here with pros and cons. This capability could be used for good by forcing a conversation to happen outside of a walled garden. E.g. you post to a small free-world instance then crosspost an “announcement” in a walled garden like sh.itjust.works, then the whole discussion takes place in the more socially responsible venue with open access. OTOH, the same capability in reverse could also be used detrimentally, e.g. by forcing a discussion onto the big centralized platforms.

update


Perhaps the community creator should get a more granular specification. E.g., they might want:

  • Original posts → author’s choice
  • Cross-posts coming from [sh.itjust.works, lemmy.world] → discussions only
  • Cross-posts coming from [*] → author’s choice

25
submitted 3 months ago* (last edited 3 months ago) by freedomPusher@sopuli.xyz to c/bugs@sopuli.xyz

A moderator deleted one of my posts for being off topic. I received no notification. It’s mere chance that I realized my post was silently removed, at which point I checked the modlog, where a reason was given.

Users can filter the sitewide modlog on their own account to see the actions against them (great!) -- but there should also be a notification.


Bug reports on any software


When a bug tracker is inside the exclusive walled gardens of MS Github or Gitlab.com, and you cannot or will not enter, where do you file your bug report? Here, of course. This is a refuge where you can report bugs that are otherwise unreportable due to technical or ethical constraints.

⚠ Of course there are no guarantees it will be seen by anyone relevant. Hopefully some kind souls will volunteer to proxy the reports.
