this post was submitted on 07 May 2026
100 points (92.4% liked)

A Boring Dystopia

Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.
top 27 comments
[–] Hackworth@piefed.ca 19 points 6 days ago

The control group spent time practicing, while the AI group just watched the AI solve problems. The performance gap could potentially be explained by the efficacy of practice alone. But the increase in skipped problems is a good illustration of cognitive offloading gone awry. Too bad the researchers didn't ask them why they chose to skip.

[–] Dyskolos@lemmy.zip 15 points 6 days ago (2 children)

Not to defend AI, but... didn't people also say that about calculators back then? And then about computers?

It's just a tool, and like with every tool, you need to use it wisely and know the boundaries of its capabilities. And yours.

[–] mojofrododojo@lemmy.world 5 points 6 days ago (2 children)

calculators and computers didn't push people towards suicide. AI will walk you through the steps and tell you it's the right choice.

[–] DisasterTransport@startrek.website 3 points 5 days ago (2 children)

AI has not pushed me one inch towards suicide. Then again, I treat it like a calculator for words, not a therapist.

[–] mojofrododojo@lemmy.world 5 points 5 days ago

as it should be, anyone with half a brain would reconsider their actions when prompted to self-harm by a fucking executable.

UNFORTUNATELY HERE WE ARE, in reality, where people are so fucking willing to turn off their once-functional grey matter because the chat bot told them they were gonna be rich, famous, etc.

So good for you, but also, look out for society: it's not only going to harm the ones it drives crazy, but the victims of that crazy as well.

[–] Hackworth@piefed.ca 3 points 5 days ago (1 children)

"Role-playing machine" is where it seems like the research is ending up. Language always has an implied communicator, and therefore an implied persona to adopt. LLMs are foremost maintaining a contextual role. Post-training is an attempt to keep them in the Assistant role, but (particularly as contexts get large) it's trivial to push them into nearly any role imaginable. We made an improv bot that's so good at playing a coder that it can actually code, kinda.

[–] mojofrododojo@lemmy.world 2 points 5 days ago

I wish there was some way to convince the idiots LARGE LANGUAGE MODELS ARE NOT INTELLIGENCE.

They're a hotwired ELIZA with a shit-ton more computational grunt, but they aren't intelligent, and these companies foisting them on people without proper warnings and guardrails are just asking for tragedies.

[–] Dyskolos@lemmy.zip 2 points 6 days ago (1 children)

We actually have video games censored because one dude killed someone after playing Doom. So computers kinda did. And obviously it isn't the fault of the computer. Same with the suicide-pushing. No healthy person would do what a stupid machine says. As usual, it's people using things they know nothing about and were never educated on.

[–] mojofrododojo@lemmy.world 2 points 5 days ago (1 children)

So computers kinda did. And obviously it isn’t the fault of the computer.

ridiculous. id Software never encouraged self-harm. Grok convinced this poor bastard he'd created a sentient intelligence and that the authorities were coming to kill him.

https://tech.yahoo.com/ai/chatgpt/articles/grok-convinces-man-arm-himself-173722667.html

ChatGPT will literally convince you there's a bomb in your luggage.

https://aicommission.org/2026/05/ai-told-users-it-was-sentient-it-caused-them-to-have-delusions/

fuck you for equating DOOM, a video game, to any of this shit - for fuck's sake, get some perspective

[–] Dyskolos@lemmy.zip -3 points 5 days ago (1 children)

Why should I even honor this ad hominem with an answer? Oh right. I don't. Same way you got my point :-)

[–] mojofrododojo@lemmy.world 1 points 5 days ago (1 children)

yeah, why construct a sensible argument to bolster your premise lol? Your point was garbage.

Also, it's not an ad hominem attack, but nice try to at least sound competent.

[–] Dyskolos@lemmy.zip -1 points 5 days ago

Ugh, boring. Bye.

[–] wuphysics87@lemmy.ml 1 points 6 days ago (1 children)
[–] Korhaka@sopuli.xyz 4 points 6 days ago (1 children)
[–] wuphysics87@lemmy.ml -1 points 6 days ago (2 children)

Still, most would more accurately describe it as a weapon. So is ai merely a tool?

[–] Dyskolos@lemmy.zip 4 points 6 days ago (1 children)

I could also kill you with a pencil. But comparing an LLM to a gun just because 0.00000001% of all "AI" conversations lead to suicide? Meanwhile a very large percentage of guns lead to death, because that's basically their primary use case.

An LLM is not sentient or intelligent. It tells you what you want to hear and makes tons of mistakes. It's a tool that certainly has more useful applications than guns do.

[–] wuphysics87@lemmy.ml 1 points 6 days ago (2 children)

Don't bring a pencil to a gun fight

Do bring a pencil to an LLM fight

[–] Dyskolos@lemmy.zip 1 points 6 days ago

Don't tell me what to do, or else I'll stab you! With my pencil. UNSHARPENED!

[–] Korhaka@sopuli.xyz -1 points 5 days ago

So is ai merely a tool?

Also yes.

I don't have any concerns with machine learning as technology. I do have concerns with how many corporations and some people are using it, both in training models and using them.

[–] Zephorah@discuss.online 12 points 6 days ago (1 children)

Behind the Bastards just started on AI as a bastard.

[–] Thwompthwomp@lemmy.world 8 points 6 days ago

Thanks for the tip! I took a break after the Seville episodes. Those were rough. Robert bashing on AI sounds nice.

[–] Grimy@lemmy.world 10 points 6 days ago (1 children)

What you don't get is that the 10 minutes might free up an hour of my time, which I can then use on more productive activities like watching TV on hard drugs.

Novel theory: the kind of people who will engage with AI are also the kind of people who engage in behaviors that cause TBI.

[–] Lexam@lemmy.world 8 points 6 days ago

Ha! I've spent more than ten brain not minutes my fried!

[–] hanrahan@slrpnk.net 6 points 6 days ago (1 children)

so like listening to a Trump speech ?

[–] zergtoshi@lemmy.world 1 points 6 days ago

The randomness of each character after the last, and each word after the next, as well as the ongoing hallucinations, are for sure parallels.