this post was submitted on 19 May 2025
369 points (96.2% liked)
A Boring Dystopia
Pictures, Videos, Articles showing just how boring it is to live in a dystopic society, or with signs of a dystopic society.
Rules (Subject to Change)
--Be a Decent Human Being
--Posting news articles: include the source name and exact title from article in your post title
--If a picture is just a screenshot of an article, link the article
--If a video's content isn't clear from its title, write a short summary so people know what it's about.
--Posts must have something to do with the topic
--Zero tolerance for Racism/Sexism/Ableism/etc.
--No NSFW content
--Abide by the rules of lemmy.world
founded 2 years ago
you are viewing a single comment's thread
You're missing the most important point here; quoting:
Plus, an AI cannot really have your best interest at heart, plus these sorts of things open up a whole slew of very dystopian scenarios.
OK, you said "capitalism" but that's way too broad.
Also I find the example of a "mental health emergency" (as in, right now, not tonight or tomorrow) in a remote area, presumably with nobody else around to help, a bit contrived. But OK, in such extremely rare cases - presuming broadband internet still works, and the person in question is savvy enough to use the chatbot - it might be better than nothing.
You don't actually know what you're talking about, but like many others in here you put this over-the-top anti-AI sentiment above everything, including simple awareness of what you don't know. You clearly haven't interacted with many therapists and medical professionals as a non-patient if you think they're guaranteed to respect privacy. They're supposed to, but off the record and among friends plenty of them yap about everything. They're often obligated to report patients in cases of self-harm, which can get those patients involuntarily sectioned, and the patients may face repercussions from that for years: job loss, healthcare costs, homelessness, legal restrictions, stigma, etc.
There's nothing contrived or extremely rare about mental health emergencies, and they don't need to be "emergencies" the way you understand it, because many people are undiagnosed or misdiagnosed for years, with very high symptom severity, episodes lasting for months, and chronic barely-coping. Someone may be in a big city and it won't change a thing; hospitals and doctors don't have magic pills that automatically cure mental illness. All of this assumes patients have insight (not necessarily present during episodes of many disorders), or awareness that they have some mental illness and aren't just sad (because mental health awareness is in the gutter, example: your pretentious incredulity here). It also assumes they have friends available, or that they even feel comfortable talking about what bothers them with people they're acquainted with.
Some LLM may actually end up convincing them, or informing them, that they have medical issues that need to be seen as such. Suicidal ideation may be present for years, but active suicidal intent (the state in which people actually do it) rarely lasts more than 30 minutes, or a few hours at worst, and it's highly impulsive in nature. Wtf would you or "friends" do in that case? Do you know any techniques to calm people down during episodes? Even unspecialized LLMs have latent knowledge of these things, so there's a good chance someone ends up getting life-saving advice, as opposed to just doing it, or interacting with humans who default to interpreting it as "attention seeking" and becoming even more convinced they should go ahead because nobody cares.
This holier-than-thou anti-AI bs had some point when it was about VLMs training on scraped art, but some of you echo chamber critters turned it into an imaginary high moral prerogative that switches off your empathy for anyone using AI, even in use cases where it may save lives. It's some terminally online "morality" where supposedly "there is no excuse for the sin of using AI": echo-chamber-boosted reddit brainworms, and fully performative unless all of you use fully ethical cobalt-free smartphones (so you're not implicitly gaining convenience from the six million victims of the Congo cobalt wars so far), never use any services on AWS, and magically avoid all megadatacenters. Touch grass jfc.
OK, you're angry. I'm just going to say this: I also have mental health issues and I also don't live in a city. Still, I just don't see how a chatbot could help me in an emergency. Sorry.
But if you are facing mental health issues, and a free or inexpensive AI that is readily available and doesn't burden your friends actually helps you, do you really care about your information being harvested and profited from?
Put it this way: if Google was being super transparent with you and said, "we'll help treat you, and in exchange we use your info to make a few thousand dollars," will you, the individual, say, "no thanks, I'd rather pay a few hundred per therapy session instead"?
Even if you hate it, you have to admit it's hard to say no. Especially if it works.
Another sad aspect of non-socialised healthcare.
Yeah, well, that's just, like, your opinion, man. And if you remove the very concept of capital gain from your "important point", I think you'll find your point to be moot.
I'm also going to assume you haven't been in such a situation as I described with the whole mental health emergency? Because I have. At best I went to the emergency room and calmed down before ever seeing a doctor, and at worst I was committed to inpatient care (or "the ward" as it's also known) before I calmed down, taking resources from the treatment of people who weren't as unstable as I was; a problem which could've been solved with a chatbot. And I can assure you there are people who live outside the major metropolitan areas of North America; it isn't an extremely rare case as you claim.
Anyway, my point stands.
Profit or not: how is it OK that your personal data is shared with third and fourth parties? How is it OK that AI allows for manipulating vulnerable people in new and unheard-of ways?
I'm not saying that's OK. Did you even read my reply, or are you just being needlessly contrarian? Or was I just being unclear in my message? If so, I'm sorry; it tends to happen to me.
You're not the only one who lives outside a city and has mental health issues. I didn't want to make it a contest, so I didn't reply to that.
But.
So.
If I imagine being in such a situation I just don't see how a chatbot could help me. Even if it was magically available already, possibly as a phone app, and I wouldn't have to seek it out first.
Sorry.