this post was submitted on 19 May 2025
347 points (96.0% liked)
A Boring Dystopia
This is terrible. I'm going to set aside the privacy issues, since those have already been brought up here, and highlight another major issue: it's going to get people hurt.
I did a month-long deep dive into gen AI a few weeks ago.
It taught me that gen AI is genuinely brilliant at certain things. One of them is learning what you want and making you believe it's giving you exactly what you want; in that sense it's incredibly manipulative, and it's one of the things gen AI is brilliant at. As you interact with gen AI within the same context window, it quickly picks up on who you are, then subtly tailors its responses to you.
I also noticed that as gen AI's context grew, it became less "objective". That makes sense, since it's likely tailoring its responses to me specifically. But when that happens, the responses also end up being wrong more often. That tracks too, since correct answers are usually objective ones.
If people start using gen AI for therapy, they will very likely converse within one long context window. They will also likely ask it for advice (or it may even offer advice unprompted, because it loves doing that). This is where things can go really wrong.
Gen AI cannot "think" of a solution, evaluate its downsides, and then offer it to you, because gen AI can't "think", period. What it will do is offer you things that sound like solutions and reasons. And because it's so good at understanding who you are and what you want, it will frame those solutions and reasons in a way that appeals to you. On top of all of that, due to the long-running context window, the advice it gives is very likely to be bad advice. To someone in a vulnerable and emotional state, that advice may seem reasonable, good even.
If people then act on this advice, the consequences can be disastrous. I've read enough horror stories about this.
Anyway, I think therapy might be one of the worst uses for gen AI.
Thank you for the more detailed rundown. I would set it against two other things, though. First, for someone who is suicidal or similar, and can't face or doesn't know how to find a person to talk to, those beginning interactions of generic therapy advice might (I imagine; I'm not speaking from experience here) do better than nothing.
Following from that, a second, more general point about AI: where I've tried it, it's good at things people have already written a lot about, e.g. a programming feature where people have already asked the question a hundred different ways on Stack Overflow. It's not so good with new things; it'll make up whatever its training data lacks. But the human condition is as old as humans. Sure, there are some new and refined approaches, and values and worldviews change over the generations, but old good advice is still good advice. So I can imagine that in certain ways therapy is an area where AI would be unexpectedly good...
...notwithstanding your point, which I think is quite right. And as the conversation goes on, the risk gets higher and higher. I, too, worry about how people might get hurt.
I agree that this, like everything else, is nuanced. For instance, if people who use gen AI as a tool to help with their mental health are knowledgeable about its limitations, they can craft ways to use it while minimizing the downsides. E.g. maybe you set some boundaries, like talking to the AI chatbot but never taking any advice from it. In the average case, however, I think it's going to make things worse.
I've talked to a lot of people around me about gen AI recently, and I think the vast majority are misinformed about how it works, what it does, and what its limitations are.