I don't really see the problem here. If it doesn't divert too many resources and it actually leads to bugs and exploits being fixed, why should it be a bad thing? With all the slop AI is being used for, this seems like a rather good use case.
What’s wrong with this? I’ve found AI to be really helpful for debugging. It doesn’t mean humans aren’t doing it too, but AI can spot things that people would miss.
Especially when combined with the headline that 10% of all Firefox crashes are bit-flips, which suggests their current review process already catches almost everything, maybe a (well-trained) AI will be able to catch things humans simply cannot.
please look at the curl project and the barrage of false security reports it received. it got so bad because they run a paid bug bounty: people would ask some llm to find bugs, the llm would conjure something out of thin air, and they would post that. mozilla will now have to handle this junk.
Curl is getting a slew of amateur programmers throwing non-tuned AI at the project, just saying "go find problems" and then submitting the results as pull requests, when the submitters have no ability to understand what the AI found or the code it generated. Curl never asked for it, and the reports aren't self-identified as AI-generated.
In contrast, Mozilla is actively working with Anthropic on this, which implies at least some coordination and intent. That would mean professionals from Anthropic and Mozilla fine-tuning these models to reduce false positives. The reports will also be clearly labeled as AI-generated. If it results in needless busywork, Mozilla is free to end the agreement at any time.
I'm not a particular fan of this either, and I think there's plenty of ground to cover first with less resource-intensive pattern-matching bug and error detection schemes, but this is absolutely not the same situation that happened to curl.
i mostly agree with you, but i just do not trust closed ai labs, sorry. and yes, most western ai labs are closed (meta and google used to release open stuff, even microsoft, but they do not do it anymore). if it were some open lab, mozilla could set up their own inference with specialised tiny- to mid-scale models. with closed labs, i do not expect them to give mozilla access to the models to run.
for more context as to why this is not as good an idea as it may seem, please look at the curl project and the barrage of false security reports it received. and curl is just a cli project that handles many protocols and does practically no processing. firefox, like any modern browser, is basically an operating system and even a hardware platform (in the form of wasm, which lets you run arbitrary assembly written in a systems programming language).
The difference is that Anthropic did due diligence. They narrowed their scope as a research project to just the JS engine. They then fully vetted one issue, had a complete minimal test case that would reproduce it, and only then contacted Mozilla with it.
Mozilla then said "send over the rest, vetted or not", who then vetted them all.
Mozilla's findings on the bugs Claude found were fed back to Anthropic to hone the model.
So it's completely different from someone yeeting a codebase into an LLM and reporting whatever it spewed out.
Anthropic required Claude to actually produce an example exploit, and it REALLY struggled to do this. But it did get there eventually, for one bug, which they then reported.
Even then, the exploit was in the JS engine. It wouldn't have escaped the sandbox to actually be an issue. Classic defence in depth.
Shitty use of AI is fucking horrible. It wastes resources, it wastes time, it wastes energy.
I think this is a responsible use of an LLM in limited scope to produce an actually useful result.
Just like you've mentioned, I personally use Librewolf, as I find it to be a suitable alternative to Chrome and stock Firefox.
Hard to say whether anyone left at Mozilla isn't drinking the AI Kool-Aid at this point, unfortunately.