this post was submitted on 31 Mar 2026
77 points (98.7% liked)

Technology

top 15 comments
[–] NocturnalMorning@lemmy.world 2 points 22 minutes ago

By 4:23 am ET, Chaofan Shou (@Fried_rice), an intern at Solayer Labs, broadcasted the discovery on X (formerly Twitter).

Ha, by an intern

[–] lmr0x61@lemmy.ml 13 points 1 hour ago (1 children)

Normally, I’d be reading about NPM security breaches and AI security breaches separately, but now I can get them in the same article! Truly amazing how technology has progressed.

[–] return2ozma@lemmy.world 3 points 1 hour ago

Fun times ahead!

[–] Encephalotrocity@feddit.online 74 points 3 hours ago (2 children)

Perhaps the most discussed technical detail is the "Undercover Mode." This feature reveals that Anthropic uses Claude Code for "stealth" contributions to public open-source repositories.

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

Laws requiring that AI usage be explicitly declared should have been put in place years ago.

[–] merc@sh.itjust.works 18 points 1 hour ago

The system prompt discovered in the leak explicitly warns the model: "You are operating UNDERCOVER... Your commit messages... MUST NOT contain ANY Anthropic-internal information. Do not blow your cover."

This is so incredibly stupid.

You've tried security.

You've tried security through obscurity.

Now try security through giving instructions to an LLM via a system prompt to not blow its cover.

[–] MangoCats@feddit.it 1 points 43 minutes ago (1 children)
[–] Encephalotrocity@feddit.online 1 points 29 minutes ago

If it were the law, then the AI itself would be coded to not allow going "undercover", and there would be legal consequences if caught. Torvalds' stance only matters for how things 'are', not how they 'could be'.

Would it be a cure-all? Of course not. Fraud still happens despite being illegal. But it's better than not being able to trust anything ever again.

[–] rimu@piefed.social 40 points 3 hours ago* (last edited 2 hours ago)

If you installed or updated Claude Code via npm on March 31, 2026, between 00:21 and 03:29 UTC, you may have inadvertently pulled in a malicious version of axios (1.14.1 or 0.30.4) that contains a Remote Access Trojan (RAT). You should immediately search your project lockfiles (package-lock.json, yarn.lock, or bun.lockb) for these specific versions or the dependency plain-crypto-js. If found, treat the host machine as fully compromised, rotate all secrets, and perform a clean OS reinstallation.

Lol 😂
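
(Not from the article — just a quick way to do the lockfile check described above. A minimal TypeScript sketch, assuming it's run from the project root with something like `npx tsx scan-lockfiles.ts`; the version numbers and the `plain-crypto-js` name come straight from the advisory quoted above, everything else is illustrative. A plain `grep` over the lockfiles, or `npm ls axios`, answers the same question without a script.)

```typescript
// scan-lockfiles.ts — rough first-pass scan for the compromised versions
// named in the advisory above. A hit means "inspect by hand", not a
// confirmed compromise (the bare version strings can match other packages).
import { existsSync, readFileSync } from "node:fs";

const needles = ["1.14.1", "0.30.4", "plain-crypto-js"]; // from the advisory
const lockfiles = ["package-lock.json", "yarn.lock", "bun.lockb"];

for (const file of lockfiles) {
  if (!existsSync(file)) continue;
  // bun.lockb is binary, but package names and versions still appear
  // as plain strings, so a substring search is enough for a first pass.
  const text = readFileSync(file, "latin1");
  const hits = needles.filter((n) => text.includes(n));
  if (hits.length > 0) {
    console.log(`${file}: matches ${hits.join(", ")} — check whether they belong to axios`);
  } else {
    console.log(`${file}: no matches`);
  }
}
```

If anything turns up, the advice above still applies: treat the machine as compromised and rotate secrets rather than just removing the package.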

[–] pelespirit@sh.itjust.works 12 points 3 hours ago* (last edited 3 hours ago) (1 children)

Like a healthy brain. And just like a healthy brain, it'll probably still hallucinate and make mistakes:

The leaked source reveals a sophisticated, three-layer memory architecture that moves away from traditional "store-everything" retrieval.

As analyzed by developers like @himanshustwts, the architecture utilizes a "Self-Healing Memory" system.
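
(The leak analysis only names these concepts; the actual layers and logic aren't public. Purely as a toy sketch of what layered memory with a self-checking "healing" pass could look like — every name below is invented for illustration, not taken from the leak:)

```typescript
// Toy illustration only — not the leaked design. The idea sketched here is
// layered storage plus a pass that re-verifies entries and drops stale ones,
// as opposed to "store-everything" retrieval.
type MemoryEntry = { key: string; value: string; lastVerified: number };

class LayeredMemory {
  private working: MemoryEntry[] = [];  // hypothetical layer 1: current task context
  private session: MemoryEntry[] = [];  // hypothetical layer 2: this session
  private longTerm: MemoryEntry[] = []; // hypothetical layer 3: persisted notes

  remember(key: string, value: string): void {
    this.working.push({ key, value, lastVerified: Date.now() });
  }

  // The "self-healing" pass: re-check each stored entry against some source
  // of truth and delete the ones that no longer hold.
  heal(verify: (entry: MemoryEntry) => boolean): void {
    for (const layer of [this.working, this.session, this.longTerm]) {
      for (let i = layer.length - 1; i >= 0; i--) {
        if (verify(layer[i])) layer[i].lastVerified = Date.now();
        else layer.splice(i, 1);
      }
    }
  }
}
```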

[–] Semi_Hemi_Demigod@lemmy.world 6 points 2 hours ago (2 children)

We’re gonna make AGI and realize that being stupid sometimes and making mistakes is integral to general intelligence.

[–] MangoCats@feddit.it 1 points 39 minutes ago

being stupid sometimes and making mistakes is integral to general intelligence.

Smart people figured this out a long time ago.

https://www.amazon.com/s?k=nassim+taleb+antifragile&adgrpid=187118826460

https://www.goodreads.com/en/book/show/18378002-intuition-pumps-and-other-tools-for-thinking

[–] Didntdoit71@feddit.online 2 points 40 minutes ago

Actually, the people in the know...already knew this. We've known for years. Mistakes are required for learning.

[–] spez@sh.itjust.works 7 points 3 hours ago (2 children)

I mean, it's not that big a deal. However, it would be another thing if the model itself leaked. Now that would be something.

[–] MangoCats@feddit.it 3 points 42 minutes ago

As they tell it, Claude Code is over 80% written by the models anyway...

[–] lexiw@lemmy.world 5 points 2 hours ago

The harness is as important as the model