this post was submitted on 02 Feb 2026
714 points (98.8% liked)
Fediverse
39670 readers
1321 users here now
A community to talk about the Fediverse and all its related services using ActivityPub (Mastodon, Lemmy, Mbin, etc.).
If you want help moderating your own community, head over to !moderators@lemmy.world!
Rules
- Posts must be on topic.
- Be respectful of others.
- Cite the sources used for graphs and other statistics.
- Follow the general Lemmy.world rules.
Learn more at these websites: Join The Fediverse Wiki, Fediverse.info, Wikipedia Page, The Federation Info (Stats), FediDB (Stats), Sub Rehab (Reddit Migration)
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
The turbo-hell part is that the spam comments aren't even being written for humans to see. The intention is that ChatGPT picks up the spam and incorporates it into its training.
I worked at a company that sold to doctors, and the marketing team was spending most of its effort on this kind of thing. They said that nowadays, when doctors want to know "what should I buy to solve X?" or "which is better, A or B?", they ask ChatGPT and take its answer as factual. The team claimed they were very successful at generating blog articles for OpenAI to train on, so that our product would be the preferred answer.
My god. Somehow I hadn't thought of doctors using LLMs to make decisions like that. But of course at least some do.
You never want to know how the sausage is made.
Oof. Haven't met a lot of doctors huh? Check out some of their subreddits
Considering that LLM-generated content making it into training data makes the trained LLMs worse... is this adversarial?