this post was submitted on 19 May 2025
341 points (96.0% liked)

A Boring Dystopia


cross-posted from: https://hexbear.net/post/4958707

I find this bleak in ways it’s hard to even convey

top 50 comments
[–] markovs_gun@lemmy.world 31 points 14 hours ago (1 children)

I can't wait until ChatGPT starts inserting ads into its responses. "Wow that sounds really tough. You should learn to love yourself and not be so hard on yourself when you mess up. It's a really good thing to treat yourself occasionally, such as with an ice cold Coca-Cola or maybe a large order of McDonald's French fries!"

[–] thermal_shock@lemmy.world 11 points 13 hours ago* (last edited 13 hours ago) (1 children)
[–] Retrograde@lemmy.world 5 points 13 hours ago* (last edited 13 hours ago)

That episode was so disturbing 😅

[–] SoftestSapphic@lemmy.world 21 points 14 hours ago (1 children)

Nothing will meaningfully improve until the rich fear for their lives

[–] DeathsEmbrace@lemm.ee 1 points 13 hours ago

In a way, the relief is to give us our demands subliminally. That way the only rich person who is safe is our subject.

[–] Cyberflunk@lemmy.world 12 points 14 hours ago

I've tried this AI therapist thing, and it's awful. It's OK for helping you work out what you're thinking, but abysmal at analyzing you. I got some structured timelines back from it that I actually USED in therapy, but AI is a dangerous alternative to human therapy.

My $.02 anyway.

[–] idunnololz@lemmy.world 18 points 16 hours ago* (last edited 16 hours ago)

This is terrible. I'm going to ignore the privacy issues, since those have already been brought up here, and highlight another major issue: it's going to get people hurt.

I did a month-long deep dive into gen AI a few weeks ago.

It taught me that gen AI is genuinely brilliant at certain things. One of them is learning what you want and making you believe it's giving you exactly that; in a sense it's incredibly manipulative, and it's one of the things gen AI does best. As you interact with it within the same context window, it quickly picks up on who you are, then subtly tailors its responses to you.

I also noticed that as gen AI's context grew, it became less "objective". This makes sense since gen AI is likely tailoring the responses for me specifically. However, when this happens, the responses also end up being wrong more often. This also tracks, since correct answers are usually objective.
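(To make the context-window point concrete: the model has no memory of its own; a chat client just re-sends the whole conversation history with every request, so everything you've disclosed keeps steering each new reply until it falls outside the context window. Below is a rough sketch of that loop; the endpoint, model name, and response shape are made-up placeholders, not any particular vendor's API.)

```python
import requests

API_URL = "https://chat.example.invalid/v1/chat"  # placeholder endpoint, purely illustrative
MODEL = "some-chat-model"                          # hypothetical model name

history = []  # every turn of the conversation, kept client-side

def ask(user_message: str) -> str:
    history.append({"role": "user", "content": user_message})
    # The model only "knows you" through this growing transcript, which is
    # re-sent in full on every request and trimmed once it exceeds the window.
    resp = requests.post(API_URL, json={"model": MODEL, "messages": history})
    reply = resp.json()["message"]["content"]
    history.append({"role": "assistant", "content": reply})
    return reply
```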

If people start to use gen AI for therapy, it's very likely they will converse within one long-running context window. They will also likely ask gen AI for advice (or gen AI may even offer advice unprompted, because it loves doing that). This is where things can go really wrong.

Gen AI cannot "think" of a solution, evaluate its downsides, and then offer it to you, because gen AI can't "think", period. What it will do is offer you things that sound like solutions and reasons. And because it's so good at understanding who you are and what you want, it will frame those solutions and reasons in a way that appeals to you. On top of all of this, due to the long-running context, it's very likely the advice it gives will be bad advice. For someone in a vulnerable and emotional state, that advice may seem reasonable, good even.

If people then act on this advice, the consequences can be disastrous. I've read enough horror stories about this.

Anyway, I think therapy might be one of the worst uses for gen AI.

[–] match@pawb.social 7 points 15 hours ago (1 children)

unlike humans, the ai listens to and remembers me [for the number of characters allotted]. this will help me feel seen i guess

[–] Viking_Hippie@lemmy.dbzer0.com 3 points 15 hours ago

You know a reply's gonna be good when it starts with "unlike humans" 😁

[–] november@lemmy.vg 15 points 20 hours ago
[–] ZILtoid1991@lemmy.world 36 points 1 day ago (1 children)

Am I old-fashioned for wanting to talk to real humans instead?

[–] GreenMartian@lemmy.dbzer0.com 45 points 1 day ago* (last edited 1 day ago) (2 children)

No. But when the options are either:

  • Shitty friends who have better things to do than hearing you vent,
  • Paying $400/hr to talk to a psychologist, or
  • A free AI that not only pays attention to you, but actually remembers what you told them last week,

it's quite understandable that some people choose the one that is a privacy nightmare but keeps them sane and away from some dark thoughts.

[–] Natanael@infosec.pub 33 points 23 hours ago (1 children)
[–] anus@lemmy.world 3 points 16 hours ago

Ahh yes, the random Rolling Stone article that refutes the point.

Let's revisit the list, shall we?

[–] ZILtoid1991@lemmy.world 14 points 1 day ago (3 children)

But I want to hear other people's vents...😥

Maybe a career in HVAC repair is just the thing for you!

[–] lilmo037@infosec.pub 11 points 22 hours ago

Please continue to be you, we need more folks like you.

[–] GreenMartian@lemmy.dbzer0.com 11 points 23 hours ago

You're a good friend. I wish everyone had someone like this. I have a very small group of mates where I can be vulnerable without being judged. But not everyone is as privileged, unfortunately...

[–] SouthEndSunset@lemm.ee 17 points 21 hours ago (1 children)

Cheaper than paying people better, I suppose.

[–] OsrsNeedsF2P@lemmy.ml 12 points 20 hours ago* (last edited 20 hours ago) (1 children)

Let's not pretend people aren't already skipping therapy sessions over the cost

[–] SouthEndSunset@lemm.ee 5 points 17 hours ago

I’m not, I’m saying people’s mental health would be better if pay was better.

[–] ryedaft@sh.itjust.works 43 points 1 day ago (1 children)
[–] drmoose@lemmy.world 4 points 22 hours ago (1 children)

Yeah we have spiritual delusions at home already!

Seriously, no new spiritual delusions could ever be more harmful than what we have right now.

[–] DeceasedPassenger@lemmy.world 9 points 21 hours ago* (last edited 21 hours ago) (2 children)

Totally fair point, but I really don't know if that's true. Most mainstream delusions have the side effect of creating community and bringing people together, other negative aspects notwithstanding. The delusions referenced in the article are more akin to acute psychosis, as the individual becomes isolated, with nobody to share the delusions with but the chatbot.

With traditional mainstream delusions, there also exists a relatively clear path out, with corresponding communities. ExJW, ExChristian, etc. People are able to help others escape that particular in-group when they're familiar with how it works. But how do you deprogram someone when they've been programmed with gibberish? It's like reverse engineering a black box. This is scaring me as I write it.

[–] drmoose@lemmy.world -1 points 9 hours ago

You mean the guys who put kids in suicide bombs don't have acute psychosis?

What about almost all of the raving Christian hermits who sit in their basements and harass people online?

It's full-on Lovecraftian-level psychosis. In the US they sell out stadiums and pretend to heal people by touch lmao

[–] theneverfox@pawb.social 4 points 20 hours ago (1 children)

This isn't a new thing; people have gone off alone on this kind of nonsensical journey for a while now.

The Time Cube guy comes to mind.

There's also TempleOS, written in HolyC; its creator was close to some of the stuff in the article.

And these are just two people functional and loud enough to be heard. This is a thing that happens; maybe LLMs exacerbate a pre-existing condition, but people were going off the deep end like this long before LLMs came into the picture.

[–] DeceasedPassenger@lemmy.world 3 points 18 hours ago* (last edited 18 hours ago) (1 children)

Your point is not only valid but also significant, and I feel stands in addition, not contradiction, to my point. These people now have something to continuously bounce ideas off; a conversational partner that never says no. A perpetual yes-man. The models are heavily biased towards the positive simply by nature of what they are, predicting what comes next. You (may or may not) know how in improv acting there's a saying called "yes, and" which serves to keep things always moving forward. These models effectively exist in this state, in perpetuity.

Previously, people with ideas like these would experience near-universal rejection from those around them (unless they had charisma, in which case they'd start a cult), which resulted in a (relatively, imo) small number of extreme cases. I fear the presence of such a perpetual yes-man will only accelerate all kinds of damage that can emerge from nonsensical thinking.

[–] theneverfox@pawb.social 2 points 7 hours ago

I agree, it's certainly not going to help people who are losing touch. But that's not what worries me - that's a small slice of the population, and models are beginning to get better at rejection/assertion.

What I'm more worried about is the people who are using it almost codependently to make decisions. It's always there, it'll always give you advice. Usually it's somewhat decent advice, even. And it's a normal thing to talk through decisions with anyone

The problem is people are offloading their thinking to AI. It's always there, it's always patient with you... You can literally have it make every life decision for you.

It's not emotional connection or malicious AI I worry about... You can now walk around with a magic eight ball that can guide you through life reasonably well, and people are starting to trust it above their own judgement

[–] drmoose@lemmy.world 19 points 22 hours ago (1 children)

People's lack of awareness of how important accessibility is really shows in this thread.

For many people, especially in poorer countries, a privacy leak is a much smaller issue than not having anyone to talk to at all.

[–] ininewcrow@lemmy.ca 97 points 1 day ago (7 children)

A human therapist won't share any personal details about your conversations with anyone, or at least is far less likely to.

An AI therapist will collect, collate, catalog, and store every single personal detail about you for the company that owns the AI, which can then share and sell all of your data to the highest bidder.

[–] DaddleDew@lemmy.world 54 points 1 day ago* (last edited 1 day ago) (1 children)

Neither would a human therapist be inclined to find the perfect way to use all this information to manipulate people while they are at their weakest, let alone do it to thousands, if not millions, of them at the same time.

They are also pushing the idea of an AI "social circle" for increasingly socially isolated people, through which worldviews and opinions can be bent to whatever whoever controls the AI desires.

Add to that the fact that we now know they've been experimenting with tweaking Grok to make it push all sorts of political opinions and conspiracy theories. And before that, they manipulated Twitter's algorithm to promote their political views.

Knowing all this, it becomes apparent that what we are currently witnessing is a push for a whole new level of human mind manipulation and control, an experiment that will make the Cambridge Analytica scandal look like a fun joke.

Forget Neuralink. Musk already has a direct connection into the brains of many people.

[–] fullsquare@awful.systems 13 points 1 day ago

PSA that Nadella, Musk, saltman (and a handful of other techfash) own dials that can bias their chatbots in any way they please. If you use chatbots for writing anything, they control how racist your output will be.

[–] desktop_user@lemmy.blahaj.zone 1 points 13 hours ago

The AI therapist probably can't force you into a psych ward, though; a human psychologist is obligated to (under the right conditions).

[–] JustJack23@slrpnk.net 28 points 1 day ago (2 children)

If the title is a question, the answer is no

[–] sawdustprophet@midwest.social 10 points 23 hours ago (1 children)

> If the title is a question, the answer is no

A student of Betteridge, I see.

[–] JustJack23@slrpnk.net 7 points 23 hours ago

Actually I read it in a forum somewhere, but I am glad I know the source now!

[–] Viking_Hippie@lemmy.dbzer0.com 4 points 1 day ago* (last edited 1 day ago)

What is a sarcastic rhetorical question?

[–] SpicyLizards@reddthat.com 10 points 22 hours ago

Enter the Desolatrix

[–] Kyrgizion@lemmy.world 33 points 1 day ago

I suppose this can be mitigated by running a local LLM that doesn't phone home. But there's still a risk of getting downright bad advice, since so many LLMs just tell their users they're always right or twist the facts to fit that view.
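(For anyone curious, here's a minimal sketch of the "local, doesn't phone home" setup, assuming an Ollama-style server on your own machine; the localhost:11434 endpoint, request format, and the llama3 model name are assumptions that will differ for other local runners.)

```python
import requests

# Everything below stays on this machine; nothing is sent to a third party.
LOCAL_URL = "http://localhost:11434/api/chat"  # Ollama's default local endpoint (assumed)
MODEL = "llama3"                               # whichever model you've pulled locally

def ask_local(messages):
    resp = requests.post(
        LOCAL_URL,
        json={"model": MODEL, "messages": messages, "stream": False},
        timeout=120,
    )
    resp.raise_for_status()
    return resp.json()["message"]["content"]

print(ask_local([{"role": "user", "content": "I've had a rough week."}]))
```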

I've been guilty of this as well; I've used ChatGPT as a "therapist" before. It actually gives decently helpful advice compared to what's available out there after a Google search. But I'm fully aware of the risks "down the road", so to speak.

[–] Zagorath@aussie.zone 25 points 1 day ago
[–] adarza@lemmy.ca 18 points 1 day ago (1 children)

how long will it take an 'ai' chatbot to spiral downward to bad advice, lies, insults, and/or promotion of violence and self-harm?

[–] Whats_your_reasoning@lemmy.world 9 points 23 hours ago

We're already there. Though that violence didn't happen due to insults, but due to a yes-bot affirming the ideas of a mentally-ill teenager.

[–] Luffy879@lemmy.ml 12 points 1 day ago

So you are actively documenting yourself sharing sensitive information about your patients?
