this post was submitted on 31 Mar 2026
167 points (97.2% liked)

Fuck AI

6564 readers
1128 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Earlier today (March 31st, 2026), Chaofan Shou on X discovered something that Anthropic probably didn't want the world to see: the entire source code of Claude Code, Anthropic's official AI coding CLI, was sitting in plain sight on the npm registry via a sourcemap file bundled into the published package.

I’ve maintained a backup of that code on GitHub here, but that’s not the fun part... Let’s dive deep into what’s in it, how the leak happened, and, most importantly, the things we now know that were never meant to be public...

This is, without exaggeration, one of the most comprehensive looks we’ve ever gotten at how a production AI coding assistant works under the hood, through the actual source code.

A few things stand out:
- The engineering is genuinely impressive. This isn’t a weekend project wrapped in a CLI. The multi-agent coordination, the dream system, the three-gate trigger architecture, the compile-time feature elimination - these are deeply considered systems.
- There’s a LOT more coming. KAIROS (always-on Claude), ULTRAPLAN (30-minute remote planning), the Buddy companion, coordinator mode, agent swarms, workflow scripts - the codebase is significantly ahead of the public release. Most of these are feature-gated and invisible in external builds.
- The internal culture shows. Animal codenames (Tengu, Fennec, Capybara), playful feature names (Penguin Mode, Dream System), a Tamagotchi pet system with gacha mechanics. Some people at Anthropic are having fun...
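The "feature-gated and invisible in external builds" pattern mentioned above is typically implemented with build-time constants that the bundler folds away. A hypothetical sketch (the names `FEATURE_KAIROS` and `availableFeatures` are invented for illustration, not taken from the leaked code):

```javascript
// Hypothetical sketch of compile-time feature elimination. A bundler
// invocation like:
//   esbuild cli.js --bundle --minify --define:FEATURE_KAIROS=false
// substitutes the constant at build time, and dead-code elimination then
// strips the gated branch, so the feature's code never appears in the
// external bundle at all.
const FEATURE_KAIROS = false; // would be --define'd per build channel

function availableFeatures() {
  const features = ["core"];
  if (FEATURE_KAIROS) {
    features.push("kairos"); // eliminated from external builds
  }
  return features;
}
```

The catch, of course, is that this only hides features if the *unbundled* sources stay private, which is exactly what the sourcemap undid here.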

If there’s one takeaway from all this, it’s that security is hard...

Source: https://kuber.studio/blog/AI/...Entire-Source-Code-Got-Leaked... [web-archive]

---

I would go further. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.

That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.

This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.

Source: https://github.com/instructkr/claw-code/.../2026-03-09-is-legal-...-erosion-of-copyleft.md

---

Related: https://github.com/instructkr/claw-code (Better Harness Tools: not merely an archive of the leaked Claude Code, but also a project for getting real things done. Now being rewritten in Rust...)

[–] Spider89@lemmy.world 40 points 13 hours ago (3 children)

Why does the writeup sound like an LLM? I don't think people text like this..

[–] tyler@programming.dev 37 points 12 hours ago

Cause it is. They say they read all the source code, it’s 500k lines and it leaked yesterday. So either they’re lying, or they’re a robot. Or both. Likely both.

Their GitHub repo says they used AI tools to rewrite the entire codebase in Rust (avoiding legal issues I guess?) which does not give me confidence about their use of AI.

[–] artwork@lemmy.world 5 points 13 hours ago* (last edited 13 hours ago)

I do believe in people... yet who knows nowadays... The author of the MindDump blog states on their main website that they're 19:

//A 19-year-old AI developer & Perplexity Business Fellow...

Hey! I'm an AI dev & Tech Enthusiast from New Delhi, India
I'm studying Computer Science & AI from BITS Pilani and AI & Data Science from GGSIPU and building generative UI for LLMs @ PolyThink.

I've built and shipped 53+ projects (38+ AI Based) in the past year, run Projects to see some of my favourites (that I'm allowed to show)
I also write a blog called MindDump with 10-20k readers a month.
Also working on Democratizing private, local SLMs with superior context as your External Brain @ SecondYou, to know more read this tweet.

I design agentic LLM pipelines, post-train models, optimise local RAG systems and play around with resource constrained projects like The Backdooms and MiniLMs, but to know more about the languages and tools I know run Skills...

Fun Fact: I started programming when I was 12 on Roblox, became a Roblox millionaire at 14 as a freelance dev, at 16 I got into 3D modelling and won India's biggest student contest for it, but almost went to culinary school at 17 because I suck at math - glad it worked out though :)

Source: https://kuber.studio/