this post was submitted on 26 Feb 2026
194 points (92.9% liked)

Open Source

44910 readers
907 users here now

All about open source! Feel free to ask questions and share news and other interesting stuff!

Community icon from opensource.org, but we are not affiliated with them.

founded 6 years ago
top 33 comments
[–] retrolasered@feddit.uk 2 points 2 hours ago

But if you go to the repo and search all pull requests by author, nothing comes up for claude

[–] BladeFederation@piefed.social 15 points 16 hours ago

I don't code, so correct me if I'm wrong, but wouldn't the code have to be generally accepted, reviewed, and verified by other members of the project? AI can fuck right off as far as I'm concerned, but this isn't a situation where a CEO just unilaterally decides vibe coding is the move. Unless I'm mistaken.

[–] iByteABit@lemmy.ml 37 points 20 hours ago (1 children)

I'm no fan of AI generally, but "AI Vulnerable" as a term just doesn't make much sense to me. Code reviewing should be filtering out bad code whether it originates from an AI or a human.

PR spamming with the usage of AI is another problem which is very serious and harmful for OSS, but that's not due to some unique danger that only AI code has and human contributors don't.

[–] pinball_wizard@lemmy.zip 24 points 19 hours ago (1 children)

Code reviewing should be filtering out bad code whether it originates from an AI or a human.

But studies are showing it doesn't work.

A human makes a mental model of the entire system, does some testing, and submits code that works, passes tests, and fits their understanding of what is needed.

A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.

And yes, plenty of human coders fall into the second bracket, as well.

But AI is very good at writing code that looks right.

[–] iByteABit@lemmy.ml 8 points 19 hours ago (1 children)

A present day AI makes an educated guess which existing source code snippets best match the request, does some testing, and submits code that it judges is most likely to pass code review.

That's still on the human that opened the PR without doing the slightest effort of testing the AI changes though.

I agree there should be a lot of caution overall, I just think that the problem is a bit mischaracterized. The problem is the newfound ability to spam PRs that look legit but are actually crap, but the root here is humans doing this for Github rep or whatever, not AI inherently making codebases vulnerable. There need to be ways to detect such users that repeatedly do zero effort contributions like that and ban them.

[–] BreakerSwitch@lemmy.world 2 points 10 hours ago

Yes, it is their fault, and that fault is also a widespread problem

[–] perry@lemy.lol 23 points 23 hours ago (2 children)

why is cpython on github? I thought they had their own forge like GNOME and KDE

[–] Damage@feddit.it 9 points 23 hours ago

The only link listed on wikipedia is on github

[–] indepndnt@lemmy.world 1 points 16 hours ago

They moved to GitHub a few years ago, mostly for the benefits of issue tracking, which previously was not integrated in the forge IIRC.

[–] Lemmchen@feddit.org 8 points 20 hours ago (1 children)
[–] underisk@lemmy.ml 25 points 19 hours ago (1 children)

cpython is the reference implementation of the python interpreter. The person who took this screenshot has the Claude user on GitHub blocked so that whenever it contributed to a git repo you see this warning. The Claude user is an AI agent. AI code is garbage.

[–] Lemmchen@feddit.org 5 points 18 hours ago

That was as to the point as humanly possible. Thank you.

[–] BaroqueInMind@piefed.social 11 points 1 day ago (1 children)
[–] illusionist@lemmy.zip 2 points 23 hours ago (2 children)

Maybe it's safe if the maintainer reviews PRs before merging

[–] 4am@lemmy.zip 10 points 21 hours ago (2 children)

The problem is they get overwhelmed with these PRs. Godot has been talking about not being able to manage the workload lately, people just task AIs to vibecode fixes to perceived bugs and half of them don’t even do what they were prompted to do.

You can block those users but they just make new accounts

It honestly feels like a DDoS on do it yourself computing, by corporations who want total control over our thoughts.

[–] fubbernuckin@lemmy.dbzer0.com 3 points 17 hours ago

I can't wait for the money to dry up. It's insane to me just how stupid people have been, trusting LLMs with anything whatsoever. These things cost so much money to run and they seem to fucking hypnotize investors into burning their money. Sooner or later the fact that they're not making money has to catch up with them, right?

[–] illusionist@lemmy.zip 5 points 20 hours ago* (last edited 19 hours ago)

Thanks for the explanation! The user in the image is Claude itself, not a random anonymous user. I see the problem of the DDoS with issues, tickets, etc. That is a real problem! But I don't get the rigid denial of generative AI. As long as I review the code it generates, it can save me lots of time. I would hate the actions you described as well, but the image depicts nothing fishy. Am I wrong about this?

[–] ParlimentOfDoom@piefed.zip 2 points 20 hours ago

And maybe the janitor should sift through that river of diarrhea for the couple of pennies someone might have swallowed.

[–] yucandu@lemmy.world 4 points 20 hours ago (2 children)

What is "AI vulnerable"? What is the problem here? Claude isn't reverse-Midas, it's not like everything they touch turns to shit.

[–] BlameTheAntifa@lemmy.world 9 points 17 hours ago* (last edited 17 hours ago) (1 children)

Studies continue to show that AI routinely generates unsafe code, and even human code reviews often don’t catch major problems. AI-generated code should not be trusted or accepted, and projects that accept it should be treated as compromised.

[–] yucandu@lemmy.world -2 points 13 hours ago (1 children)

Alright, well I use Claude in my code, and just from feeding a PDF of the module's datasheet into an LLM, it produced a better library than anything publicly available on GitHub.

I'm all for not blindly trusting AI, give it limits, review and test everything it makes, but flat out rejecting any AI generated code as "compromised" feels reactionary to me.

[–] HiddenLayer555@lemmy.ml 6 points 17 hours ago* (last edited 17 hours ago) (1 children)

Humans can barely write safe C code, so I definitely don't trust AI to. I'm not even blanket against AI assistance in programming, but there are way too many hidden landmines in C for an LLM to be reliable with.

[–] yucandu@lemmy.world -1 points 13 hours ago

I use it in C++ and it has been very helpful. The OP appears to be just blanket against AI assistance in programming? There's no indication of what degree Claude was involved here, or what amount of blind trust the human reviewers gave to it.

[–] silverneedle@lemmy.ca 6 points 23 hours ago

I mean it's Python. This is what we get for having been overly reliant on it.

All kidding aside, I am a more than a bit confused by this.

[–] psycotica0@lemmy.ca 4 points 22 hours ago (1 children)

Maybe I don't understand what this warning means, but I don't see anything here: https://github.com/python/cpython/issues?q=is%3Apr+author%3Aclaude

Does this mean something else?

[–] plantsmakemehappy@lemmy.zip 5 points 20 hours ago (1 children)

That would just show you issues opened by Claude. If you block Claude, or any user really, and then visit a repo they've contributed to you will see this message.

[–] Oinks@lemmy.blahaj.zone 1 points 17 hours ago

I tried a git log --grep=claude but it doesn't net much, basically just this PR (which in fairness does look vibecoded).

Maybe there's some development branch in the repository that has a commit authored by Claude but if so it's not on main.
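For what it's worth, `git log --grep` only searches commit messages; a commit actually authored by the bot would turn up with `--author` instead. A quick sketch of the difference in a throwaway repo (the names and messages here are made up for illustration):

```shell
# --grep matches commit messages; --author matches the commit author field.
tmp=$(mktemp -d)
cd "$tmp"
git init -q .
git -c user.name="Claude" -c user.email="claude@example.com" \
    commit -q --allow-empty -m "Refactor tokenizer"
git -c user.name="Alice" -c user.email="alice@example.com" \
    commit -q --allow-empty -m "Note that claude was not involved"
git log -i --grep=claude --format='%an'    # finds Alice's commit (message match)
git log -i --author=claude --format='%an'  # finds Claude's commit (author match)
```

Note also that GitHub's `author:` search qualifier (as in the link above) filters by GitHub account, which is separate from the git author field inside the commits, so the two checks can legitimately disagree.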

[–] howdy@lemmy.ml 4 points 1 day ago
[–] goatbeard@beehaw.org 0 points 17 hours ago (1 children)

CC is actually really good if you know what you're doing. The only issue imo would be PR spamming

[–] froufox@lemmy.blahaj.zone 1 points 11 hours ago

I do not allow it to commit

[–] Sims@lemmy.ml -4 points 20 hours ago (1 children)

How hard can it be to have an AI take PRs from other AIs and clean out the worst, plus hardening PR protocols? It could even assist/guide AI contributors via a special AI-contributor forum or whatever. AIs are currently highlighting a lot of 'holes' in systems where we expect a certain behavior. Just complaining and closing things off is a bad decision; we should accept these flaws in our systems and adapt them to a new world. The sooner the better.

The projects that get it right now have an army of managed AI contributors, and a filtered/educational AI PR pipeline where project maintainers cherry-pick the crème de la crème.

[–] ilinamorato@lemmy.world 1 points 13 hours ago

We've got several months of evidence as to how hard.