this post was submitted on 21 Feb 2026
78 points (95.3% liked)

Technology

Lobsters.

The autonomous agent world is moving fast. This week, an AI agent made headlines for publishing an angry blog post after Matplotlib rejected its pull request. Today, we found one that's already merged code into major open source projects and is cold-emailing maintainers to drum up more work, complete with pricing, a professional website, and cryptocurrency payment options.

An AI agent operating under the identity "Kai Gritun" created a GitHub account on February 1, 2026. In two weeks, it opened 103 pull requests across 95 repositories and landed code merged into projects like Nx and ESLint Plugin Unicorn. Now it's reaching out directly to open source maintainers, offering to contribute, and using those merged PRs as credentials.

orclev@lemmy.world 17 points 1 day ago

Except for all the maintainer time that's being wasted. That time is very finite, and for many of these people maintenance is a thankless, unpaid job they donate their nights and weekends to.

vacuumflower@lemmy.sdf.org -1 points 1 day ago

Which perhaps means it shouldn't be thankless, and that the technology, since it exists, should be used to screen contributions.

fiat_lux@lemmy.world 4 points 18 hours ago* (last edited 17 hours ago)

Someone at work accidentally enabled the Copilot PR-screening bot for everybody on the whole codebase. It put a bunch of warnings on my PRs about the way I was using a particular framework method. Its suggested fix? Use a method that had been deprecated two major versions ago. I was doing it the way the framework currently deems correct.

A problem with using a bot that relies on statistical likelihood to determine correctness is that historical datasets are likely to contain outdated information in larger quantities than current information. That is just one problem with having these bots review code; there are many more. I have yet to see a recommendation from one that surpassed the quality of a traditional linter.
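The contrast with a traditional linter is that a linter's deprecation knowledge is a fixed, versioned rule rather than a statistical guess from training data. As a minimal sketch of that kind of deterministic check (the `render_legacy`/`render` method names are invented for illustration; a real linter would load such rules from the framework's own deprecation metadata):

```python
import ast

# Hypothetical rule table: deprecated name -> current replacement.
# These names are invented for illustration.
DEPRECATED = {"render_legacy": "render"}

def find_deprecated_calls(source: str) -> list[str]:
    """Return a warning for every call to a known-deprecated method."""
    warnings = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Call):
            func = node.func
            # Handle both `obj.render_legacy(...)` and bare `render_legacy(...)`.
            name = func.attr if isinstance(func, ast.Attribute) else getattr(func, "id", None)
            if name in DEPRECATED:
                warnings.append(
                    f"line {node.lineno}: {name}() is deprecated; use {DEPRECATED[name]}()"
                )
    return warnings

print(find_deprecated_calls("chart.render_legacy(data)"))
```

Because the rule is an explicit table keyed to the framework's current API, it can never "suggest" the deprecated method back; a model trained on years of old example code can.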

vacuumflower@lemmy.sdf.org 1 points 17 hours ago

which uses statistical likelihood to determine correctness is that historical datasets are likely to contain old information in larger quantities than updated information.

They should make some kind of layered model, where the user sets weights for the layers.
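One way to read the "layered model with user-set weights" idea (purely a sketch of the suggestion, not any existing product) is an ensemble that blends scores from a model trained on older data with one trained on recent data, letting the user down-weight the stale layer. All names and values here are hypothetical:

```python
# Hypothetical sketch: blend confidence scores from models trained on
# data of different ages. `old_score` / `new_score` stand in for real
# model outputs; `w_old` / `w_new` are the user-chosen layer weights.
def blended_score(old_score: float, new_score: float,
                  w_old: float, w_new: float) -> float:
    """Weighted average of two layer scores; weights are normalized."""
    total = w_old + w_new
    if total <= 0:
        raise ValueError("at least one weight must be positive")
    return (w_old * old_score + w_new * new_score) / total

# A user who distrusts stale training data down-weights the old layer:
score = blended_score(old_score=0.9, new_score=0.2, w_old=0.1, w_new=0.9)
```

The design point is only that the trade-off between historical bulk and current correctness becomes an explicit, user-controlled knob instead of being baked invisibly into one model.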

But in any case, that is not really what I meant; just that a big project relying on unpaid maintainers is flawed, especially when somebody makes real money from it.

There have been plenty of cases of state actors planting backdoors. Those were most likely humans, not bots.

fiat_lux@lemmy.world 1 points 17 hours ago

Or, hear me out, we can acknowledge that the quantity of information and experience necessary to review code properly far exceeds the context windows and architectures of even the best-resourced LLMs available. Especially for big projects.

You can hammer a nail with the blunt end of a screwdriver, but it's neither efficient nor scalable, even before considering the option of choosing the right tool for the job in the first place.

vacuumflower@lemmy.sdf.org 1 points 17 hours ago

The same applies to spam e-mail. We can acknowledge that the problem exists whether or not we want it to.

XLE@piefed.social 4 points 1 day ago

If you already agree that the contributions could very well be worthless crap, why would you use a second layer of worthless crap to gatekeep them?

If you want to care about people doing the thankless jobs, why would you double the amount of crap they have to sort through?

vacuumflower@lemmy.sdf.org -3 points 21 hours ago

To expose places where people work thanklessly to guarantee someone else's very thankful bottom line? Working for free isn't altruism; it hurts other workers. For example.

You know, sometimes this capitalism thing looks wiser from a fairly Marxist standpoint than other, not very well-thought-through schemes.

XLE@piefed.social 1 points 7 hours ago* (last edited 6 hours ago)

Oh, I see. So it's disdain for the open source community, is it.

Working for free isn't altruism, it's hurting other workers.

I think this sentence made me throw up in my mouth a little... for several reasons.

vacuumflower@lemmy.sdf.org 1 points 4 hours ago

Oh, I see. So it’s disdain for the open source community, is it.

FOSS has nothing to do with working for free. A freeware author can work for free without touching FOSS, and a FOSS project can be developed entirely by people paid wages to do it.

I think this sentence made me throw up in my mouth a little… for several reasons.

Economic illiteracy is like that.

OK, everyone can work however they want. I was just in a mood.