Absolutely not, but I would like to see how the study got this information out of the bot. Don't get me wrong, I have my own solid reasoning for why LLMs in toys are not okay, but it's disingenuous to say these toys are the problem if the researcher had to coax the dark info out of them.
The fact that the researcher could coax bad info out of it at all is a big problem.
That's kind of the existing issue I have with them. At their root, these LLMs are trained on the unfiltered internet and DMs harvested from social platforms. This means that regardless of how you use them, they all contain a sizable lexicon of explicit and abusive material. The only reason you don't see it in every single AI is that they put a bot between it and you that checks the messages and redirects the bad stuff. It's like putting a T. rex in your cattle pen and paying a guy to whack it or the cows if they get too close to each other.
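To make that "bot between it and you" idea concrete, here's a rough sketch of an output-moderation layer; the function names and the blocklist are made up for illustration, not taken from any actual product (real systems usually use a separate moderation model rather than a word list):

```python
# Hypothetical sketch: a guard layer that screens the model's reply
# before the user ever sees it. Names and terms are illustrative only.

BLOCKED_TERMS = {"weapon", "violence", "self-harm"}  # stand-in for a real classifier

def generate_reply(prompt: str) -> str:
    """Stand-in for a call to the underlying, unfiltered model."""
    return f"model output for: {prompt}"

def looks_safe(text: str) -> bool:
    """Crude safety check on the raw output."""
    lowered = text.lower()
    return not any(term in lowered for term in BLOCKED_TERMS)

def safe_reply(prompt: str) -> str:
    reply = generate_reply(prompt)
    if looks_safe(reply):
        return reply
    # Redirect instead of exposing the raw output -- the guy with the stick.
    return "Let's talk about something else!"

print(safe_reply("tell me a bedtime story"))
```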
The only way around this would be to manually vet everything fed into the LLM to exclude that material, and since the whole idea is already not turning a profit, the cost of doing so is far beyond what anyone is willing to pay. So I'm not impressed that this toy does exactly what you'd expect it to do under laboratory scrutiny. I'd be more impressed if they actually told people why this keeps happening instead of fear-mongering about it.
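For comparison, vetting at training time (instead of policing outputs) would look roughly like this; the sources, terms, and sample data here are made up to show the shape of it, and the real expense is in reviewing everything an automatic check can't decide:

```python
# Hypothetical sketch: filter the training corpus before the model ever sees it.
# Sources, terms, and documents are invented for illustration.

FLAGGED_TERMS = {"violence", "abuse"}
ALLOWED_SOURCES = {"licensed_kids_books", "kids_tv_scripts"}

def vet(document: dict) -> bool:
    """Keep a document only if its source is allow-listed and nothing is flagged."""
    text = document["text"].lower()
    return (document["source"] in ALLOWED_SOURCES
            and not any(term in text for term in FLAGGED_TERMS))

corpus = [
    {"source": "licensed_kids_books", "text": "The friendly dragon learned to share."},
    {"source": "forum_scrape", "text": "Unvetted internet text."},
]

training_set = [doc for doc in corpus if vet(doc)]
print(f"kept {len(training_set)} of {len(corpus)} documents")
```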
Not all LLMs are trained on unfiltered Internet and social media DMs, though. It would totally be feasible to license and train one only with children's media like PBS cartoons, books, etc.
This company just decided not to do that, which is the problem.
That's literally my point. It's wholly possible to make an LLM for this, but I suspect that when you look at the LLM in this toy, it will just be a bootleg version of ChatGPT.