this post was submitted on 31 Mar 2026
99 points (96.3% liked)

Fuck AI


Earlier today (March 31st, 2026), Chaofan Shou revealed on X something that Anthropic probably didn’t want the world to see: the entire source code of Claude Code, Anthropic’s official AI coding CLI, was sitting in plain sight on the npm registry via a sourcemap file bundled into the published package.

I’ve maintained a backup of that code on GitHub here, but that’s not the fun part... Let’s dive deep into what’s in it, how the leak happened, and, most importantly, the things we now know that were never meant to be public...

This is, without exaggeration, one of the most comprehensive looks we’ve ever gotten at how a production AI coding assistant works under the hood, through the actual source code.

A few things stand out:
- The engineering is genuinely impressive. This isn’t a weekend project wrapped in a CLI. The multi-agent coordination, the dream system, the three-gate trigger architecture, the compile-time feature elimination - these are deeply considered systems.
- There’s a LOT more coming. KAIROS (always-on Claude), ULTRAPLAN (30-minute remote planning), the Buddy companion, coordinator mode, agent swarms, workflow scripts - the codebase is significantly ahead of the public release. Most of these are feature-gated and invisible in external builds.
- The internal culture shows. Animal codenames (Tengu, Fennec, Capybara), playful feature names (Penguin Mode, Dream System), a Tamagotchi pet system with gacha mechanics. Some people at Anthropic are having fun...
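The "compile-time feature elimination" mentioned above can be sketched roughly like this. This is a hypothetical illustration, not Anthropic’s actual code: the constant name, the mode names, and the bundler setup are all assumptions.

```typescript
// Hypothetical sketch of compile-time feature elimination. In a real
// build, a bundler such as esbuild would inject the constant via
// `--define:FEATURE_KAIROS=false`, turning the gated branch into dead
// code that minification strips from external builds entirely.
const FEATURE_KAIROS = false; // stand-in for the build-time define

function enabledModes(): string[] {
  const modes = ["interactive"]; // always shipped
  if (FEATURE_KAIROS) {
    // Unreachable when the define is false: external builds never even
    // contain this string, which is why the mode stays invisible.
    modes.push("kairos-daemon");
  }
  return modes;
}
```

The point of a build-time define over a runtime flag is that the gated code is absent from the shipped artifact, not merely disabled, so nothing can be toggled on (or grepped for) in the field.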

If there’s one takeaway here, it’s that security is hard...

Source: https://kuber.studio/blog/AI/...Entire-Source-Code-Got-Leaked... [web-archive]

---

I think it protected more than that. What the GPL protected was not the scarcity of code but the freedom of users. The fact that producing code has become cheaper does not make it acceptable to use that code as a vehicle for eroding freedom. If anything, as the friction of reimplementation disappears, so does the friction of stripping copyleft from anything left exposed. The erosion of enforcement capacity is a legal problem. It does not touch the underlying normative judgment.

That judgment is this: those who take from the commons owe something back to the commons. The principle does not change depending on whether a reimplementation takes five years or five days. No court ruling on AI-generated code will alter its social weight.

This is where law and community norms diverge. Law is made slowly, after the fact, reflecting existing power arrangements. The norms that open source communities built over decades did not wait for court approval. People chose the GPL when the law offered them no guarantee of its enforcement, because it expressed the values of the communities they wanted to belong to. Those values do not expire when the law changes.

Source: https://github.com/instructkr/claw-code/.../2026-03-09-is-legal-...-erosion-of-copyleft.md

---

Related: https://github.com/instructkr/claw-code (Better Harness Tools: not merely an archive of the leaked Claude Code, but a project that gets real things done. Now being rewritten in Rust...)

[–] SolarMonkey@slrpnk.net 12 points 3 hours ago* (last edited 2 hours ago) (1 children)

Inside assistant/, there’s an entire mode called KAIROS: a persistent, always-running Claude assistant that doesn’t wait for you to type. It watches, logs, and proactively acts on things it notices.

This is gated behind the PROACTIVE / KAIROS compile-time feature flags and is completely absent from external builds.

Oh hey, that’s creepy as shit, even if it’s not active (I assume that’s what the second paragraph means)

[–] artwork@lemmy.world 2 points 1 hour ago (1 children)

Referenced over 150 times in the source, KAIROS is an unreleased autonomous daemon mode where Claude operates as a persistent, always-on background agent. It receives periodic <tick> prompts to decide whether to act proactively, maintains append-only daily log files, and subscribes to GitHub webhooks.
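The tick-and-log loop described there could look something like this. To be clear, this is a sketch of the general pattern, not the leaked code: the function names, the decision heuristic, and the log format are my assumptions.

```typescript
// Hypothetical sketch of a tick-driven daemon. On each <tick>, the
// agent decides whether to act proactively, then records the tick in an
// append-only daily log file named after the date, e.g. 2026-03-31.log.
import { appendFileSync } from "node:fs";
import { join } from "node:path";

function dailyLogPath(dir: string, now: Date = new Date()): string {
  // One log file per calendar day (UTC date prefix of the ISO string).
  return join(dir, `${now.toISOString().slice(0, 10)}.log`);
}

function onTick(dir: string, observation: string): boolean {
  // Stand-in for the model call that decides whether to act proactively.
  const shouldAct = observation.includes("build failed");
  // Append-only: lines are only ever added, never rewritten.
  appendFileSync(
    dailyLogPath(dir),
    `${new Date().toISOString()} ${observation}\n`,
  );
  return shouldAct;
}
```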

KAIROS includes autoDream - a background memory consolidation process that runs as a forked subagent while the user is idle. The dream agent merges observations, removes contradictions, converts vague insights into absolute facts, and gets read-only bash access. A companion feature called ULTRAPLAN offloads complex planning to a remote cloud session running Opus 4.6 with up to 30 minutes of dedicated think time.
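A minimal sketch of that kind of memory consolidation, under my own assumptions (the `Observation` type and the merge rule are illustrative, not Anthropic’s):

```typescript
// Hypothetical sketch of dream-style memory consolidation. Observations
// are merged one-fact-per-subject, and later observations override
// earlier ones: a crude way to "remove contradictions" and turn a
// stream of notes into settled facts.
type Observation = { subject: string; fact: string; at: number };

function consolidate(observations: Observation[]): Map<string, string> {
  const memory = new Map<string, string>();
  // Sort by timestamp so the newest fact per subject wins.
  for (const o of [...observations].sort((x, y) => x.at - y.at)) {
    memory.set(o.subject, o.fact);
  }
  return memory;
}
```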

Source

[–] SolarMonkey@slrpnk.net 1 point 1 hour ago (1 children)

I’m not really sure what you are trying to tell me here..? I mean I read the entire article that you posted and it sort of explained the feature flag thing, so I’m good there, and the rest of this stuff was more or less in the article as well.

[–] artwork@lemmy.world 0 points 1 hour ago (1 children)

Sorry about that. The source is different; I posted it in case someone wants more detail on it.

[–] SolarMonkey@slrpnk.net 1 points 47 minutes ago

Gotcha, thanks for the clarification :)