this post was submitted on 08 Mar 2026
21 points (83.9% liked)

Opensource

A community for discussion about open source software! Ask questions, share knowledge, share news, or post interesting stuff related to it!

Can Mozilla just stop doing whatever they are doing?

I personally do not use Firefox (I use qutebrowser), but if it were not for a few other things, Firefox (or LibreWolf) would be my favorite browser. This does not directly affect me, but it still seems so dumb.

For more context on why this is not as good an idea as it may seem, look at the curl project and the barrage of false security reports it received. And curl is a CLI project that just handles many protocols and does practically no processing of its own. Firefox, like any modern browser, is basically an operating system, and even hardware of sorts (WASM lets you run arbitrary assembly compiled from systems programming languages).

If Firefox pays for it, they would lose a lot of money; if Anthropic does it for free, Anthropic gets clout (for supporting open projects) without doing anything that productive, and just puts more strain on Firefox devs to handle false-positive reports.

I am not a time traveller, so I cannot tell how this will go, but it does not seem that good.

top 8 comments
[–] mbeezy@sh.itjust.works 26 points 1 day ago (2 children)

I don't really see the problem here. If it doesn't divert too many resources and actually leads to bugs and exploits being fixed, why should it be a bad thing? With all the slop AI is being used for, this seems like a rather good use case.

[–] twinnie@feddit.uk 7 points 1 day ago (1 children)

What’s wrong with this? I’ve found AI to be really helpful for debugging. It doesn’t mean humans aren’t doing it too, but AI can spot things that people would miss.

[–] Quacksalber@sh.itjust.works 6 points 1 day ago

Especially when you combine it with the headline that 10% of all Firefox crashes are bit flips, which means their current review system already catches almost everything. Maybe a (well-trained) AI will be able to catch stuff humans simply cannot.
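(Editor's aside, not part of the original comment: a "bit flip" here means a single bit in RAM changing state, e.g. from a cosmic ray or faulty memory, so the corrupted value never existed in the source code and no review could have caught it. A minimal illustrative sketch, with made-up values:)

```python
# Hypothetical illustration: simulate one bit flipping in a stored value.
# The corrupted number was never written by any line of code.
stored = 1024                   # value the program actually wrote: 0b10000000000
corrupted = stored ^ (1 << 30)  # one high bit flips in memory
print(stored, corrupted)        # 1024 vs 1073742848
```

If `corrupted` were a buffer length or a pointer offset, the process would likely crash, which is how such events end up in crash telemetry with no corresponding bug in the code.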

[–] sga@piefed.social 1 point 1 day ago (1 child)

Please look at the curl project and the barrage of false security reports it received. It got so bad because they run a paid bug bounty: people would ask some LLM to find bugs, the LLM would conjure something out of thin air, and they would post it. Mozilla will now have to handle this junk.

[–] wizardbeard@lemmy.dbzer0.com 7 points 1 day ago (1 children)

Curl is getting a slew of amateur programmers throwing non-tuned AI at the project, just saying "go find problems", then throwing the output at curl as pull requests, when the pull creators have no ability to understand what the AI found or the code it generated. Curl never asked for this, and the reports aren't self-identified as AI-generated.

In contrast, Mozilla is actively working with Anthropic on this, which implies at least some coordination and intent. That would mean professionals from Anthropic and Mozilla fine-tuning these AIs to reduce false positives. The reports will also be clearly labeled as AI-generated. And if it results in needless busywork, Mozilla is free to cut the agreement at any time.

I'm not a particular fan of this either, and I think there's plenty of ground to cover first with less resource-intensive pattern-matching bug and error detection schemes, but this is absolutely not the same situation that happened to curl.

[–] sga@piefed.social 1 point 1 day ago

I mostly agree with you, but I just do not trust closed AI labs, sorry. And yes, most Western AI labs are closed (Meta and Google used to release open models, even Microsoft, but they do not anymore). If it were some open lab, Mozilla could set up their own inference with specialised tiny-to-mid-scale models. With closed labs, I do not expect Mozilla to be given access to the models to run themselves.

[–] towerful@programming.dev 5 points 1 day ago* (last edited 1 day ago)

For more context on why this is not as good an idea as it may seem, look at the curl project and the barrage of false security reports it received. And curl is a CLI project that just handles many protocols and does practically no processing of its own. Firefox, like any modern browser, is basically an operating system, and even hardware of sorts (WASM lets you run arbitrary assembly compiled from systems programming languages).

The difference is that Anthropic did due diligence. They narrowed the scope of their research project to just the JS engine. They then fully vetted one issue, had a complete minimal test case that would reproduce it, and contacted Mozilla with that.
Mozilla then said "send over the rest, vetted or not", and vetted them all themselves.
Mozilla's findings on the bugs Claude found were fed back to Anthropic to hone the model.

So it's completely different from someone yeeting a codebase into an LLM and reporting whatever it spews out.
Anthropic required Claude to actually produce an example exploit, and it REALLY struggled to do this. But it did get there eventually, for one bug, which they then reported.
Even then, the exploit was in the JS engine. It wouldn't have escaped the sandbox to actually be an issue. Classic defence in depth.

Shitty use of AI is fucking horrible. It wastes resources, it wastes time, it wastes energy.
I think this is a responsible use of an LLM in limited scope to produce an actually useful result.

Just like you've mentioned, I personally use Librewolf, as I find it to be a suitable alternative to Chrome and native Firefox.

Hard to say if anyone left at Mozilla isn't drinking the AI koolaid at this point unfortunately.