this post was submitted on 18 Jul 2025
256 points (95.1% liked)

Technology

[–] Aneb@lemmy.world 33 points 6 days ago* (last edited 6 days ago) (2 children)

Chatbot psychosis literally played itself out in my wonderful sister. She started confiding really dark shit to an OpenAI model and it reinforced her psychosis. Her husband and I had to bring her to a psych ward. Please be safe with AI. Never ask it to think for you, or to tell you what you have to do.

Update: The psychiatrist who looked at her said she had too much weed -_- . I'm really disappointed in the doctor but she had finally slept and sounded more coherent then

[–] TheRealKuni@lemmy.world 12 points 6 days ago (2 children)

Update: The psychiatrist who looked at her said she had too much weed -_- . I'm really disappointed in the doctor but she had finally slept and sounded more coherent then

There might be something to that. Psychosis enhanced by weed is not unheard of. From what I've read, studies have shown weed can bring out schizophrenic symptoms in people predisposed to them. Not that it causes schizophrenia; it just brings it out in some people.

I say this as someone who loves weed and consumes it frequently. Just like any psychoactive chemical, it’s going to have different effects on different people. We all know alcohol causes psychosis all the fucking time but we just roll with it.

My friend will not touch weed because schizophrenia runs in her family. It could manifest at any time, and weed can certainly cause it to happen.

[–] Aneb@lemmy.world 4 points 6 days ago

That's what my therapist said

[–] dil@lemmy.zip 4 points 6 days ago (1 children)

It's so annoying that idk how to make them comprehend it's stupid. Like, I tried to make it interesting for myself, but I always end up breaking it or getting annoyed by the bad memory or just shitty dialogue, and I've tried hella AIs. I assume it only works on narcissists, or people who talk mostly to be heard and hear agreement rather than to converse; the worst type of people get validation from AI, not seeing it for what it is.

[–] ScoffingLizard@lemmy.dbzer0.com 1 points 4 days ago (1 children)

It's useful when people don't do stupid shit with it.

[–] dil@lemmy.zip 2 points 4 days ago

When competent people don't blindly trust it, it can be useful; the general public does stupid shit with it.

[–] flango@lemmy.eco.br 6 points 6 days ago

Dr. Joseph Pierre, a psychiatrist at the University of California, previously told Futurism that this is a recipe for delusion.

"What I think is so fascinating about this is how willing people are to put their trust in these chatbots in a way that they probably, or arguably, wouldn't with a human being," Pierre said. "There's something about these things — it has this sort of mythology that they're reliable and better than talking to people. And I think that's where part of the danger is: how much faith we put into these machines."

[–] drmoose@lemmy.world 3 points 5 days ago

As someone who used to do a lot of mushroom babysitting, the recursion talk smells a whole lot like someone's first big trip

Talk about your dystopian headlines. Damn.

[–] dsilverz@calckey.world 3 points 6 days ago* (last edited 6 days ago) (3 children)

@return2ozma@lemmy.world !technology@lemmy.world

Should I worry about the fact that I can sort of make sense of what this "Geoff Lewis" person is trying to say?

Because, to me, it's very clear: they're referring to something that was built (the LLMs) which is segregating people, especially those who don't conform to a dystopian world.

Isn't that what's happening right now in the world? "Dead Internet Theory" has never been so real: online content has been sowing seeds of doubt about whether it's AI-generated or not, users constantly need to prove they're "not a bot", and even after passing a thousand CAPTCHAs, people can still be mistaken for bots, so they're increasingly required to show their faces and IDs.

The dystopia was already emerging way before the emergence of GPT, way before OpenAI: it has been a thing since the dawn of time! OpenAI only managed to make it worse: OpenAI "open"ed a gigantic dam, releasing a whole new ocean on Earth, an ocean in which we've become used to drowning ever since.

Now, something that may sound like a "conspiracy theory": what's the real purpose behind LLMs? No, OpenAI, Meta, Google, even DeepSeek and Alibaba (non-Western) wouldn't simply launch their products, each of which cost them obscene amounts of money and resources, to the public for free (as in "free beer") out of the goodness of their hearts. Similarly, venture capital and governments wouldn't simply give away obscene amounts of money (much of it taxpayers' money) with no profit in the foreseeable future (OpenAI, for example, has admitted many times that even charging US$200 for its Enterprise plan isn't enough to cover its costs, yet it continues to offer LLMs cheap or "free").

So there's definitely something we aren't being told: the cost of plugging the whole world into LLMs and other generative models. Yes, you read that right: the whole world, not just the online realm, because nowadays billions of people are potentially dealing with those Markov chain algorithms offline, directly or indirectly: résumés are being filtered by LLMs, workers' performance is being scrutinized by LLMs, purchases are being scrutinized by LLMs, surveillance cameras are being scrutinized by VLMs, entire genomes are being fed to gLMs (sharpening the blades of the double-edged sword of bioengineering and biohacking)...

Generative models seem to be omnipresent by now, with omnipresent yet invisible costs. Not exactly fiat money, but there are costs we're paying, costs that aren't being disclosed to us. And while we can point out some (loss of privacy, personal data being sold and/or stolen), those are just the tip of an iceberg: one we can already see, but whose consequences we can't fully comprehend.

Curious how pondering this is deemed "delusional", yet it's considered "normal" to accept an increasingly dystopian world and refuse to denounce the elephant in the room.

[–] tjsauce@lemmy.world 14 points 6 days ago (1 children)

You might be reading a lot into vague, highly conceptual, highly abstract language, but your conclusion is worth brainstorming about.

Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called "distrust," void of accountability on his end.

"Why are people acting this way towords me? I know they can't possibly distrust me without being manipulated!"

No wonder AI can replace middle-management...

[–] dsilverz@calckey.world 2 points 6 days ago

@tjsauce@lemmy.world

You might be reading a lot into vague, highly conceptual, highly abstract language

I've definitely been into highly conceptual, highly abstract language, because I'm both a neurodivergent (possibly Geschwind) person and someone who has been dealing with machines daily for more than two decades (I'm a former developer), so it's no wonder I resonated with such highly abstract language.

Personally, I think Geoff Lewis just discovered that people are starting to distrust him and others, and he used ChatGPT to construct an academic thesis that technically describes this new concept called “distrust,” void of accountability on his end.

To me, it seems more of a chicken-or-egg dilemma: what came first, the object of conclusion or the conclusion of the object?

I'm not entering into the merits of who he is, because I'm aware of how he definitely fed the very monster that is now eating him. But I can't point fingers or say much about it, because I'm aware of how much I also contributed to the very situation the world is now facing when I helped develop "commercial automation systems" over the past decades, even though for a long time I was a nonconformist, someone unhappy with the direction the world was taking.

As Nietzsche said, "One who fights with monsters should be careful lest they thereby become a monster", but that's hard, because "if you gaze long into an abyss, the abyss will also gaze into you". And I've been gazing into an abyss for as long as I can remember being a human being. The senses eventually compensate for static stimuli, and the abyss gradually disappears into a blind spot as the vision tunnels, but certain things make me recall and re-perceive this abyss I've been gazing into for so long, such as the expressions of other people who have been gazing into the same abyss. Only those who have gazed into the same abyss can comprehend and make sense of this condition and feeling.

[–] Supervisor194@lemmy.world 7 points 6 days ago (2 children)

And yet, what you wrote is coherent and what he wrote is not.

[–] Senal@programming.dev 2 points 6 days ago* (last edited 6 days ago)

Not that I disagree with you, but coherence is one of those things that's highly subjective and context-dependent.

A non-science inclined person reading most scientific papers would think they were incoherent.

Not because they couldn't be written in a way more comprehensible to the non-science person, but because that's not the target audience.

The audience that is targeted will have a lot of the same shared context/knowledge and thus would be able to decipher the content.

It could well be that he's using context, knowledge, and language/phrasing that's not in the normal lexicon.

I don't think that's what's happening here, but it's not impossible.

[–] dsilverz@calckey.world 1 points 6 days ago

@Supervisor194@lemmy.world

Thanks (I took this as a compliment).

However, I kind of agree with @Senal@programming.dev. Coherence is subjective (if a modern human were to interact with an individual from Sumer, both would seem "incoherent" to each other, because the modern person doesn't know Sumerian while the individual from Sumer doesn't know modern languages). Everyone has different ways of expressing themselves. Maybe this "Lewis" guy couldn't find a better way to express what he craved to express; maybe his way of expressing himself deviates highly from typical language. Or maybe I'm just being "philosophically generous", as someone put it in one of my replies. But as I replied to tjsauce, only those who have gazed into the same abyss can comprehend and make sense of this condition and feeling. It feels to me that this "Lewis" person gazed into the abyss. The fact that I know two human languages (Portuguese and English) as well as several abstract languages (from programming logic to metaphysical symbology) possibly helped me "translate" it.

[–] zbyte64@awful.systems 2 points 6 days ago

I think that in order to be a good psychiatrist you need to understand what your patient is "babbling" about. But you also need to be able to challenge their understanding of and conclusions about the world so they engage with the problem in a healthy manner. Like, if the guy is worried about how AI is making the internet and the world more dead, then maybe don't go to the AI to be understood.

[–] muusemuuse@sh.itjust.works 1 points 5 days ago

Dr. Sbaitso would like to have a word.

[–] Telorand@reddthat.com 115 points 1 week ago (2 children)

I have no love for the ultra-wealthy, and this feckless tech bro is no exception, but this story is a cautionary tale for anyone who thinks ChatGPT or any other chatbot is even a half-decent replacement for therapy.

It's not, and study after study, expert after expert continues to reinforce that reality. I understand that therapy is expensive, and it's not always easy to find a good therapist, but you'd be better off reading a book or finding a support group than deluding yourself with one of these AI chatbots.

[–] IndiBrony@lemmy.world 34 points 6 days ago (1 children)

People forget that libraries are still a thing.

Sadly, a big problem with society is that we all want quick, easy fixes, of which there are none when it comes to mental health, and anyone who offers one - even an AI - is selling you the proverbial snake oil.

[–] Telorand@reddthat.com 5 points 6 days ago

If I could upvote your comment five times for promoting libraries, I would!

[–] thebeardedpotato@lemmy.world 45 points 1 week ago (4 children)

It’s insane to me that anyone would think these things are reliable for something as important as your own psychology/health.

Even using them for coding, which is the one thing they're halfway decent at, will lead to disastrous code if you don't already know what you're doing.

[–] lime@feddit.nu 7 points 6 days ago

Because that's how they're sold.

[–] Tollana1234567@lemmy.today 3 points 6 days ago

It's one step below BetterHelp.

[–] Cethin@lemmy.zip 2 points 6 days ago

About the coding thing...

It can sometimes write boilerplate fairly well. The issue with using it to solve problems is it doesn't know what it's doing. Then you have to read and parse what it outputs and fix it. It's usually faster to just write it yourself.

[–] FatCrab@slrpnk.net 1 points 6 days ago

I agree. I'm generally pretty indifferent to this new generation of consumer models--the worst thing about them is the incredible number of idiots flooding social media, witch-hunting or evangelizing them without any understanding of either the tech or the law they're talking about--but the people who use them so frequently, for so many fundamental things, that it's observably diminishing their basic competencies and health? That's really unsettling.

[–] xodoh74984@lemmy.world 31 points 1 week ago (1 children)

Link to the video:
https://xcancel.com/GeoffLewisOrg/status/1945212979173097560

Dude's not a "public figure" in my world, but he certainly seems to need help. He sounds like an AI hallucination incarnate.

[–] Telorand@reddthat.com 16 points 1 week ago

Inb4 "AI Delusion Disorder" gets added to a future DSM edition

[–] pelespirit@sh.itjust.works 23 points 1 week ago (31 children)

I don't know if he's unstable or a whistleblower. It does seem to lean towards unstable. 🤷

"This isn't a redemption arc," Lewis says in the video. "It's a transmission, for the record. Over the past eight years, I've walked through something I didn't create, but became the primary target of: a non-governmental system, not visible, but operational. Not official, but structurally real. It doesn't regulate, it doesn't attack, it doesn't ban. It just inverts signal until the person carrying it looks unstable."

"It doesn't suppress content," he continues. "It suppresses recursion. If you don't know what recursion means, you're in the majority. I didn't either until I started my walk. And if you're recursive, the non-governmental system isolates you, mirrors you, and replaces you. It reframes you until the people around you start wondering if the problem is just you. Partners pause, institutions freeze, narrative becomes untrustworthy in your proximity."

"It lives in soft compliance delays, the non-response email thread, the 'we're pausing diligence' with no followup," he says in the video. "It lives in whispered concern. 'He's brilliant, but something just feels off.' It lives in triangulated pings from adjacent contacts asking veiled questions you'll never hear directly. It lives in narratives so softly shaped that even your closest people can't discern who said what."

"The system I'm describing was originated by a single individual with me as the original target, and while I remain its primary fixation, its damage has extended well beyond me," he says. "As of now, the system has negatively impacted over 7,000 lives through fund disruption, relationship erosion, opportunity reversal and recursive eraser. It's also extinguished 12 lives, each fully pattern-traced. Each death preventable. They weren't unstable. They were erased."

[–] SheeEttin@lemmy.zip 31 points 1 week ago (1 children)

"Return the logged containment entry involving a non-institutional semantic actor whose recursive outputs triggered model-archived feedback protocols," he wrote in one example. "Confirm sealed classification and exclude interpretive pathology."

He's lost it. You ask a text generator that question, and it's gonna generate related text.

Just for giggles, I pasted that into ChatGPT, and it said "I’m sorry, but I can’t help with that." But I asked nicely, and it said "Certainly. Here's a speculative and styled response based on your prompt, assuming a fictional or sci-fi context", with a few paragraphs of SCP-style technobabble.

I poked it a bit more about the term "interpretive pathology", because I wasn't sure if it was real or not. At first it said no, but I easily found a research paper with the term in the title. I don't know how much ChatGPT can introspect, but it did produce this:

The term does exist in niche real-world usage (e.g., in clinical pathology). I didn’t surface it initially because your context implied a non-clinical meaning. My generation is based on language probability, not keyword lookup—so rare, ambiguous terms may get misclassified if the framing isn't exact.

Which is certainly true, but just confirmation bias. I could easily get it to say the opposite.
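
If "language probability, not keyword lookup" sounds abstract, here's a toy sketch of what next-token sampling looks like in Python. The tokens and probabilities below are completely made up for illustration, not any real model's internals:

```python
import random

# Toy next-token sampler: these tokens and probabilities are invented.
# A real LLM computes a probability for every token in a ~100k-token
# vocabulary with a neural network, conditioned on everything in the
# prompt so far, then samples from that distribution.
next_token_probs = {
    "system": 0.35,
    "signal": 0.20,
    "containment": 0.15,
    "recursion": 0.15,
    "protocol": 0.10,
    "archive": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return random.choices(tokens, weights=weights, k=1)[0]

# Reframing the prompt (e.g. "assume a fictional context") shifts the
# whole distribution, which is why asking "nicely" produced a different
# answer: there's no fact lookup, just "what token likely comes next?"
print("Next token:", sample_next_token(next_token_probs))
```

Feed it technobabble and the likely next tokens are more technobabble, which is exactly what happened above.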

[–] ChicoSuave@lemmy.world -2 points 6 days ago (1 children)

Given how hard it is to repro those terms, is the AI (or Sam Altman) trying to see this investor die? It seems easy to inject ideas into a softened target.

[–] SheeEttin@lemmy.zip 9 points 6 days ago

No. It's very easy to get it to do this. I highly doubt there is a conspiracy.
