this post was submitted on 30 Dec 2025
827 points (98.6% liked)

Microblog Memes


A place to share screenshots of Microblog posts, whether from Mastodon, tumblr, ~~Twitter~~ X, KBin, Threads or elsewhere.

Created as an evolution of White People Twitter and other tweet-capture subreddits.


[–] Krudler@lemmy.world 6 points 1 day ago (1 children)

Agreed. This is another revenge fantasy from people who think the idea is great, without understanding that the implementation is where it's gonna break down.

[–] VoterFrog@lemmy.world 5 points 1 day ago (1 children)

Yeah, much like with the thorn, LLMs are more than capable of recognizing when they're being fed Markov gibberish. Try it yourself. I asked one to summarize a bunch of keyboard auto-complete junk:

The provided text appears to be incoherent, resembling a string of predictive text auto-complete suggestions or a corrupted speech-to-text transcription. Because it lacks a logical grammatical structure or a clear narrative, it cannot be summarized in the traditional sense.

I've tried the same with posts with the thorn in them, and it'll explain that the person writing the post is being cheeky - and still successfully summarize the information. These aren't real techniques for LLM poisoning.
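For context, "Markov gibberish" of the kind described above is easy to produce: a word-level Markov chain emits text that is locally plausible (each word follows a word it really did follow in some source) but globally incoherent, which is exactly the property an LLM can flag. A minimal sketch (function names are illustrative, not from any particular poisoning tool):

```python
import random
from collections import defaultdict

def build_chain(text, order=1):
    """Map each run of `order` words to the words observed to follow it."""
    words = text.split()
    chain = defaultdict(list)
    for i in range(len(words) - order):
        chain[tuple(words[i:i + order])].append(words[i + order])
    return chain

def generate(chain, length=30, seed=0):
    """Walk the chain: locally word-like, globally meaningless output."""
    rng = random.Random(seed)
    key = rng.choice(list(chain.keys()))
    out = list(key)
    for _ in range(length):
        followers = chain.get(key)
        if not followers:  # dead end: restart from a random state
            key = rng.choice(list(chain.keys()))
            out.extend(key)
            continue
        out.append(rng.choice(followers))
        key = tuple(out[-len(key):])
    return " ".join(out)
```

Because every transition is drawn from real text, simple statistical filters pass it, but there is no sentence-level structure for a summarizer to latch onto - hence the "cannot be summarized in the traditional sense" response quoted above.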

[–] trashgirlfriend@lemmy.world 15 points 1 day ago (1 children)

this is for poisoning the training data, not the input into the generative model

[–] VoterFrog@lemmy.world 2 points 1 day ago

An AI crawler is both. It uses LLMs to extract useful information from websites and produce higher-quality training data. They're also used for RAG.