this post was submitted on 31 Mar 2026
177 points (97.3% liked)

Fuck AI


Earlier today (March 31st, 2026), Chaofan Shou on X discovered something that Anthropic probably didn’t want the world to see: the entire source code of Claude Code, Anthropic’s official AI coding CLI, was sitting in plain sight on the npm registry, via a sourcemap file bundled into the published package.

I’ve maintained a backup of that code on GitHub here, but that’s not the fun part... Let’s dive deep into what’s in it, how the leak happened, and, most importantly, the things we now know that were never meant to be public...
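To see why a bundled sourcemap is equivalent to shipping the source itself: the Source Map v3 format has an optional `sourcesContent` array that embeds the full text of every original file, entry for entry alongside `sources`. If a build tool is configured to inline it and the `.map` file gets published, anyone can dump the originals back out. A minimal sketch (the function name and paths here are mine, not from the leaked package):

```python
import json
from pathlib import Path

def extract_sources(map_path: str, out_dir: str) -> list[str]:
    """Write out every original file embedded in a source map's sourcesContent."""
    smap = json.loads(Path(map_path).read_text())
    out = Path(out_dir)
    written = []
    # Per Source Map v3, "sources" lists original file names and the optional
    # "sourcesContent" embeds their full text, entry for entry.
    for name, content in zip(smap.get("sources", []), smap.get("sourcesContent") or []):
        if content is None:
            continue  # tools may embed null for files they chose to omit
        # Strip bundler URL prefixes and leading dots/slashes before writing.
        rel = name.replace("webpack://", "").lstrip("./")
        dest = out / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        dest.write_text(content)
        written.append(str(dest))
    return written
```

The fix on the publisher's side is equally simple: either don't inline `sourcesContent`, or exclude `.map` files from the npm tarball (e.g. via the `files` field in `package.json`).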

This is, without exaggeration, one of the most comprehensive looks we’ve ever gotten at how a production AI coding assistant works under the hood, through the actual source code.

A few things stand out:
- The engineering is genuinely impressive. This isn’t a weekend project wrapped in a CLI. The multi-agent coordination, the dream system, the three-gate trigger architecture, the compile-time feature elimination - these are deeply considered systems.
- There’s a LOT more coming. KAIROS (always-on Claude), ULTRAPLAN (30-minute remote planning), the Buddy companion, coordinator mode, agent swarms, workflow scripts - the codebase is significantly ahead of the public release. Most of these are feature-gated and invisible in external builds.
- The internal culture shows. Animal codenames (Tengu, Fennec, Capybara), playful feature names (Penguin Mode, Dream System), a Tamagotchi pet system with gacha mechanics. Some people at Anthropic are having fun...
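The "feature-gated and invisible in external builds" pattern described above can be sketched as follows. Note this is a runtime analogue with hypothetical flag names, not code from the leak; the post suggests the real system eliminates gated code at compile time, so the branches would not even exist in the shipped artifact:

```python
# Hypothetical feature registry; flag names are illustrative only.
INTERNAL_BUILD = False  # flipped to True when packaging internal builds

FEATURES = {
    "kairos": INTERNAL_BUILD,      # always-on mode
    "ultraplan": INTERNAL_BUILD,   # long-running remote planning
    "penguin_mode": INTERNAL_BUILD,
}

def feature_enabled(name: str) -> bool:
    """External builds see every gated feature as absent."""
    return FEATURES.get(name, False)

def run_planner(task: str) -> str:
    # In a compile-time scheme, this whole branch would be stripped
    # from external builds rather than merely skipped at runtime.
    if feature_enabled("ultraplan"):
        return f"ULTRAPLAN: extended remote plan for {task!r}"
    return f"standard plan for {task!r}"
```

The compile-time variant is stronger precisely because of leaks like this one: a runtime flag leaves the gated code inspectable in the published package, while build-time elimination keeps it out entirely.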

If there’s one takeaway here, it’s that security is hard...

Source: https://kuber.studio/blog/AI/...Entire-Source-Code-Got-Leaked... [web-archive]

---

I would argue it matters more, not less. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.

That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.

This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.

Source: https://github.com/instructkr/claw-code/.../2026-03-09-is-legal-...-erosion-of-copyleft.md

---

Related: https://github.com/instructkr/claw-code (Better Harness Tools; not merely an archive of the leaked Claude Code, but a project that gets real things done. Now being rewritten in Rust...)

[–] kingofras@lemmy.world 31 points 16 hours ago (3 children)

Tin foil hat on here: something is off at Anthropic.

The system outages are spiking. Two days ago their follow-up model to Opus, called Mythos, ‘leaked’, and now the CC source?

All of this within weeks of a public spat with the Pentagon, which was found to be using Anthropic products to engage in the most clueless war the USA has ever fought.

I’ve run out of tin foil, but it doesn’t add up.

[–] Tar_alcaran@sh.itjust.works 20 points 12 hours ago (1 children)

The entire LLM industry is falling apart.

They're not realizing the growth in compute they promised

They're not realizing the results they promised (duh)

Almost every company that tries to use LLMs is failing to implement them successfully, because they don't work.

Nobody is willing to pay what it actually costs, all current use is heavily subsidized so they're burning money fast

Investors are finally coming to their senses and asking what that strong smell of burning cash is.

Energy costs are rising, making LLMs even more insanely expensive.

The only thing they've still got going for them is that most so-called tech journalism is fucking awful and will just parrot LLM press releases without using their brain.

[–] kingofras@lemmy.world 4 points 7 hours ago

The growth in compute is on par with Moore’s law. There’s not much wrong there. The problem they are having is that they can’t scale a probabilistic system to have deterministic outcomes, which makes it economically suicidal for any company to invest in this technology.

Your point about the press is true for all journalism. Also not entirely their fault. Once captain bonespurs allowed prince bonesaw to get away with sawing journalists into pieces while they were still alive, journalists will think twice.

I don’t think investors are done with this. They’re just opportunistic as always; this war is changing their appetite a bit, but they’ll come back to it. The hype for LLMs isn’t going to stop anytime soon.

It’s a raw technology, like nuclear fission. You need a lot of auxiliary equipment, knowledge, and expertise before you can power a metropolis with it. It’ll take a decade before this technology matures.

[–] chirospasm@lemmy.ml 10 points 15 hours ago

This. I have been thinking about why 'now', of all times, too.

[–] mortalblade@lemmy.dbzer0.com 0 points 10 hours ago

Workers sabotaging their unethical workplace, maybe? Hope so.