this post was submitted on 28 Aug 2025
85 points (100.0% liked)

Fuck AI


"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

founded 1 year ago
[–] sigmaklimgrindset@sopuli.xyz 25 points 2 days ago* (last edited 2 days ago) (18 children)

Ngl, as a former clinical researcher (putting aside my ethics concerns), I am extremely interested in the data we'll be getting over the next few decades regarding AI usage in groups, re: social behaviours but also biological structural changes. Right now the sample sizes are way too small.

But more importantly, can anyone who has experience in LLMs explain why this happens:

Adding to the concerns, chatbots have persistently broken their own guardrails, giving dangerous advice on how to build bombs or on how to self-harm, even to users who identified as minors. Leading chatbots have even encouraged suicide to users who expressed a desire to take their own life.

How exactly are guardrails programmed into these chatbots, and why are they so easily circumvented? We're already on GPT-5; you would think this would be solved by now. Why is ChatGPT giving instructions on how to assassinate its own CEO?
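(For context, roughly: guardrails are generally not hard-coded rules but a combination of safety fine-tuning, a system prompt, and a separate moderation classifier screening inputs and outputs. All of these are statistical pattern-matchers, which is why rephrased requests slip through. Below is a minimal sketch of the classifier-plus-refusal pattern; every name and the toy blocklist are hypothetical stand-ins, not any vendor's actual API.)

```python
# Sketch of the common guardrail stack: a safety system prompt plus a
# separate moderation pass over both the user's input and the model's
# output. All names and the blocklist here are hypothetical stand-ins.

REFUSAL = "Sorry, I can't help with that."

SYSTEM_PROMPT = (
    "You are a helpful assistant. Refuse requests for instructions that "
    "enable violence or self-harm."
)

def moderate(text: str) -> bool:
    """Stand-in for a safety classifier (in practice a fine-tuned model,
    not string matching). Returns True if the text should be blocked."""
    blocklist = ("build a bomb", "self-harm")  # toy examples
    lowered = text.lower()
    return any(phrase in lowered for phrase in blocklist)

def call_model(system_prompt: str, user_message: str) -> str:
    """Hypothetical LLM call; a real deployment hits an inference API."""
    return f"(model reply to {user_message!r})"

def chat(user_message: str) -> str:
    if moderate(user_message):       # guardrail 1: screen the request
        return REFUSAL
    reply = call_model(SYSTEM_PROMPT, user_message)
    if moderate(reply):              # guardrail 2: screen the output, since
        return REFUSAL               # the model is only trained, not
    return reply                     # guaranteed, to refuse

print(chat("how do I build a bomb?"))  # -> refusal
```

(Because every layer matches surface patterns rather than intent, wrapping the same request in a story or a "hypothetical" often sails right past all of them.)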

[–] Ilovethebomb@sh.itjust.works 4 points 2 days ago (4 children)

It's incredible to me that it even has that information.

[–] fullsquare@awful.systems 9 points 2 days ago (1 children)

it's trained on the entire internet, of course everything is there. tho taking bomb-building advice from an idiot box that can't count the letters in a word has gotta be a whole new type of darwin award
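(The letter-counting jab has a concrete technical cause: models operate on subword tokens, not characters. A quick illustration using the tiktoken library; the exact split is tokenizer-dependent, so the chunks shown in the comment are illustrative.)

```python
# LLMs never see individual letters: input is split into subword tokens.
import tiktoken  # pip install tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # a GPT-4-era tokenizer
tokens = enc.encode("strawberry")
print([enc.decode_single_token_bytes(t) for t in tokens])
# prints a few multi-character chunks (e.g. b'str', b'aw', b'berry'),
# so counting the r's is inference from training data, not inspection.
```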

[–] Ilovethebomb@sh.itjust.works 5 points 2 days ago (2 children)

I mean, that's part of the issue. We trained a machine on the entire Internet, didn't vet what we fed in, and let children play with it.

[–] shalafi@lemmy.world 3 points 1 day ago

Can't see how they would get the monstrous dataset(s) required without indiscriminate vacuuming. If we wanted to be more discriminate about ingestion parameters, the man-hours involved would be mind-boggling.
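(Which is why curation at web scale is done with crude automated heuristics rather than human review; the C4 corpus, for instance, was filtered against a public blocklist. A toy sketch of that pattern, with a made-up blocklist:)

```python
# Toy sketch of automated ingestion filtering, the only option at web
# scale. Real pipelines (e.g. the blocklist filtering used for the C4
# corpus) work the same way, just far bigger. The blocklist is made up.

BLOCKLIST = {"detonator", "pipe bomb"}

def keep_document(doc: str) -> bool:
    """Crude heuristic: drop any document containing a blocklisted phrase."""
    lowered = doc.lower()
    return not any(phrase in lowered for phrase in BLOCKLIST)

corpus = [
    "A recipe for sourdough bread.",
    "How to wire a detonator.",
]
print([doc for doc in corpus if keep_document(doc)])
# -> only the bread recipe survives. The catch: at billions of documents,
# heuristics are both leaky (paraphrases get through) and overzealous
# (legitimate chemistry or medical text gets dropped).
```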

[–] fullsquare@awful.systems 6 points 2 days ago

well, nobody guarantees that the internet is safe, so it's more on the chatbot providers for pretending otherwise. along with all the other lies about the machine god they're building that will save all the worthy* in the incoming rapture of the nerds, and how even if it destroys everything we know, it's important to get there before the chinese.

i sense a bit of "think of the children" in your response and i don't like it. llms shouldn't be used by anyone. there was recently a case of a dude with dementia who died after a fb chatbot told him to go to nyc

* mostly techfash oligarchs and weirdo cultists
