this post was submitted on 15 Dec 2025
36 points (89.1% liked)

Fuck AI

5157 readers
1157 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
MODERATORS
 

Would you participate?

you are viewing a single comment's thread
view the rest of the comments
[–] Grimy@lemmy.world 4 points 3 weeks ago (1 children)

In a joint study with the UK AI Security Institute and the Alan Turing Institute, we found that as few as 250 malicious documents can produce a "backdoor" vulnerability in a large language model—regardless of model size or training data volume.

This is the main paper I'm referencing https://www.anthropic.com/research/small-samples-poison .

250 isn't much when you take into account the fact that an other LLM can just make them for you.

[–] onehundredsixtynine@sh.itjust.works 2 points 3 weeks ago (1 children)

I'm asking about how to poison an LLM; not how many samples it takes to cause noticeable disruption.

[–] Grimy@lemmy.world 1 points 3 weeks ago* (last edited 3 weeks ago)

Bro, it's in the article. You asked "how so" when I said it was easy, not how to.