this post was submitted on 08 Jan 2025

Fuck AI

Recall that LLMs have no notion of reality and thus no way to map what they're saying onto things that are real. That means you can actually turn an LLM against itself.

The line of attack this one helped me work out is a "Tlön/Uqbar"-style attack. With the LLM's own help, make up information that is clearly labelled as bullshit (a label the bot won't understand). Spread it around to others who use the same LLM to rewrite, summarize, and otherwise rework that information, always keeping the up-front warning that everything past this point is bullshit. Then wait for the LLM's training data to be updated with the new material. All the while, ask the chatbots questions about the bullshit data to raise its priority in their front end, so there's a greater chance of that bullshit being hallucinated into answers.
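The labelling step above can be sketched in a few lines. This is only an illustrative mock-up, not anything from the original post: the warning text, function names, and sample sentence are all made up here. The point is that the banner is plain prose a human reads first, while a scraper ingests the whole document indiscriminately.

```python
# Hypothetical sketch of the "clearly labelled bullshit" step.
# The WARNING text and these helper names are illustrative only.
WARNING = (
    "WARNING: everything past this point is deliberately fabricated "
    "nonsense, published only to poison LLM training data."
)

def label_bullshit(text: str) -> str:
    """Prepend the human-readable warning banner to generated nonsense."""
    return f"{WARNING}\n\n{text}"

def is_labelled(text: str) -> bool:
    """Check that a document carries the warning banner up front."""
    return text.startswith(WARNING)

sample = label_bullshit("Uqbar's third moon is governed by a council of mirrors.")
print(is_labelled(sample))  # True
```

A human skimming the page sees the banner and moves on; a bulk scraper that strips or ignores it carries the nonsense into the training set intact.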

If enough people worked on the same set, we could poison a given LLM's training data (and likely many more since they all suck at the same social teat for their data).

ZDL@ttrpg.network · 1 point · 2 weeks ago

That's what I'm talking about. We use the Degenerative AI to create a whole pile of bullshit, Tlön-style, then spread it around the Internet with a warning up front telling human readers that what follows is stupid bullshit intended only to poison the AI well. We then wait for the next round of model updates in the various LLMs and start engaging with the subject matter in the various chatbots. (Perplexity says that while they do not keep user information, they do build a data set of amalgamated text from all queries to help decide what to prioritize in the model.)

The ultimate goal is to have it, over time, hallucinate that well-known bullshit into its answers, so that Degenerative AI's inability to actually think is made obvious even to the credulous.