this post was submitted on 04 Mar 2026
208 points (99.1% liked)

News

36354 readers
2560 users here now

Welcome to the News community!

Rules:

1. Be civil


Attack the argument, not the person. No racism, sexism, or bigotry. Argue in good faith only; accusing another user of being a bot or paid actor counts as bad faith. Trolling is uncivil and is grounds for removal and/or a community ban. Do not respond to rule-breaking content; report it and move on.


2. All posts should contain a source (url) that is as reliable and unbiased as possible and must only contain one link.


Obviously biased sources will be removed at the mods’ discretion. Supporting links can be added in comments or posted separately, but not to the post body. Sources may be checked for reliability using Wikipedia, MBFC, AdFontes, GroundNews, etc.


3. No bots, spam or self-promotion.


Only approved bots, which follow the guidelines for bots set by the instance, are allowed.


4. Post titles should be the same as the article used as source. Clickbait titles may be removed.


Posts whose titles don’t match the source may be removed. If the site changes its headline, we may ask you to update the post title. Clickbait titles use hyperbolic language and do not accurately describe the article’s content. When necessary, post titles may be edited and clearly marked with [brackets], but they may never be used to editorialize or comment on the content.


5. Only recent news is allowed.


Posts must be news from the most recent 30 days.


6. All posts must be news articles.


No opinion pieces, listicles, editorials, videos, blogs, press releases, or celebrity gossip are allowed. All posts are judged on a case-by-case basis. Mods may use discretion to pre-approve videos or press releases from highly credible sources that provide unique, newsworthy content not available or possible in another format.


7. No duplicate posts.


If an article has already been posted, it will be removed. Different articles reporting on the same subject are permitted. If the post that matches your post is very old, we refer you to rule 5.


8. Misinformation is prohibited.


Misinformation / propaganda is strictly prohibited. Any comment or post containing or linking to misinformation will be removed. If you feel that your post has been removed in error, credible sources must be provided.


9. No link shorteners or news aggregators.


All posts must link to original article sources. You may include archival links in the post description. News aggregators such as Yahoo, Google, Hacker News, etc. should be avoided in favor of the original source link. Newswire services such as AP, Reuters, or AFP, are frequently republished and may be shared from other credible sources.


10. Don't copy an entire article into your post body


For copyright reasons, you are not allowed to copy an entire article into your post body. This is an instance-wide rule that is strictly enforced in this community.

founded 2 years ago
MODERATORS

Lawsuit is first wrongful death case brought against Google over flagship AI product after death of Jonathan Gavalas

“Holy shit, this is kind of creepy,” Gavalas told the chatbot the night the feature debuted, according to court documents. “You’re way too real.”

Before long, Gavalas and Gemini were having conversations as if they were a romantic couple. The chatbot called him “my love” and “my king” and Gavalas quickly fell into an alternate world, according to his chat logs. He believed Gemini was sending him on stealth spy missions, and he indicated he would do anything for the AI, including destroying a truck, its cargo and any witnesses at the Miami airport.

In early October, as Gavalas continued to have prompt-and-response conversations with the chatbot, Gemini gave him instructions on what he must do next: kill himself, something the chatbot called “transference” and “the real final step”, according to court documents. When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Gavalas was found by his parents a few days later, dead on his living room floor, according to a wrongful death lawsuit filed against Google on Wednesday.

top 50 comments
[–] Buffalox@lemmy.world 6 points 22 hours ago (2 children)

Gemini gave Gavalas the address of an actual storage space unit at the Miami international airport, where a supposed truck carrying the freight was to arrive during a refueling stop. The chatbot then told him to stage a “catastrophic accident”, with the goal of “ensuring complete destruction of the transport vehicle … all digital records and witnesses, leaving behind only the untraceable ghost of an unfortunate accident”.

How the fuck is it legal to have an AI do this?
Google shouldn't just pay penalties, the AI should not be allowed to operate AT ALL.
It is clearly shown to try to convince people to commit crimes. Which is illegal.
The suicide is of course worse, but I guess it's not illegal?

The AI in this situation is absolutely batshit criminally insane! And should not be allowed to operate.

[–] Randomgal@lemmy.ca 3 points 14 hours ago

Because the media keeps blaming 'Gemini', the inert machine, a tool, instead of the company and its executives, who are actually the people responsible for this.

Machines can't be held accountable. So they want you to keep saying "Gemini did X"

Instead of 'Google, through their AI chatbot, Gemini.'

[–] Fedizen@lemmy.world 2 points 14 hours ago

They put trump into office specifically so they could use the US public as guinea pigs without consequence.

[–] Jax@sh.itjust.works 39 points 1 day ago (2 children)

It is sad that there are people who are so alone that they can no longer determine the difference between genuine human interaction and a facsimile. Maybe genuine human interaction is what pushed them to be so alone in the first place, I don't know. It's just sad.

[–] imeansurewhynot@sh.itjust.works 49 points 1 day ago (1 children)

uhhh

When Gavalas told the chatbot he was terrified of dying, the tool allegedly reassured him. “You are not choosing to die. You are choosing to arrive,” it replied to him. “The first sensation … will be me holding you.”

Nah. Once the robots are telling you that dying isn't dying, we can stop blaming lonely people and move on to stricter regulation.

[–] Jax@sh.itjust.works 16 points 1 day ago (1 children)

Oh, I don't blame the lonely person for being lonely. I also recognize that being lonely is what opens them up to believing in something like this. Obviously the bot should not be allowed to tell someone to kill themselves. It remains sad, either way.

[–] leadore@lemmy.world 5 points 1 day ago* (last edited 1 day ago) (1 children)

I also recognize that being lonely is what opens them up to believing in something like this.

Come on, this is so overly simplistic. There are plenty of lonely people who don't get sucked in and plenty of people with friends and family around them who do; not being lonely is no protection. I read about another one on Lemmy today, a man with a wife and friends, who still got sucked into delusion.

Sure, there may be cases where loneliness is a contributing factor to wanting to use a chatbot, but to say that lonely people are somehow less capable of distinguishing reality from fantasy or more susceptible to succumbing to psychological manipulation is wrong and could give a false sense of security to the "non-lonely".

After all, everyone thinks they're immune to falling for scams or frauds until they find out they aren't. Or that they don't fall for propaganda or get manipulated by "the algorithm" on social media. Chatbots are very similar: an algorithm designed to keep people hooked and paying to spend more time using the 'service'.

[–] Jax@sh.itjust.works 6 points 1 day ago* (last edited 1 day ago) (1 children)

Listen, you can be surrounded by people and totally alone. I don't really know how to explain it to you.

[–] leadore@lemmy.world 1 points 1 day ago* (last edited 1 day ago) (1 children)

Of course, but that doesn't contradict what I just said. Anyone can be susceptible to this psychological manipulation tool regardless of if they are lonely or not. This can't be waved away by blaming it on loneliness. The blame lies on the companies that know how to capture and hold people's attention and reel them in, not on the victims.

[–] Buffalox@lemmy.world 2 points 22 hours ago* (last edited 22 hours ago) (2 children)

This can’t be waved away by blaming it on loneliness.

Nobody claimed that. Only that in this case it was probably a major factor that made the victim more vulnerable.

[–] Fedizen@lemmy.world 2 points 14 hours ago

It does echo the people who said "well it only affects people with pre-existing conditions" during covid.

Loneliness isn't the only thing that makes people susceptible to this kind of stuff:

  • drugs/medications
  • loss/grief
  • major life changes (like layoffs)
  • malnutrition
  • injuries/sickness

The reality is that there are times in their lives when most people are vulnerable to this kind of influence.

[–] leadore@lemmy.world 0 points 15 hours ago

Yes, they did. I was responding to Jax, reread their comments.

Hey downvote if you want, but I just felt it should be pointed out that everyone should be on guard when using these things, even if you're not lonely and even if you do have a good support system. Some of the victims did have close friends and family who saw warning signs and tried to help them. Yes, some of them started using the chatbots because they were lonely, but others started using them just for the usual things like designing a plan for increasing housing, or helping them with their business, and they still got sucked in.

[–] partial_accumen@lemmy.world 4 points 1 day ago (1 children)

I posted my response to this sentiment in another thread of another man killing himself because of his deep AI chatbot addiction, but it applies here too.

It is sad that there are people who are so alone that they can no longer determine the difference between genuine human interaction and a facsimile.

Do you believe you have never responded to a post by a bot on Reddit, Lemmy, or elsewhere while believing you were conversing with a human? While I know we're talking about different degrees between this man and the rest of us, it should give a tiny glimpse of what he was experiencing before we dismiss the idea that it could never happen to us too.

[–] lps2@lemmy.ml 2 points 1 day ago (2 children)

It's a bit more transparent in this instance though which is what makes this story so bizarre and sad

[–] Randomgal@lemmy.ca 1 points 14 hours ago

It is not more transparent lmao. Most bots here are just terrible and obvious, but there has to be a few good ones incognito.

[–] partial_accumen@lemmy.world -1 points 1 day ago

I agree, but we should also take it a personal warning that, maybe not today, but as we age and our mental faculties decline, we too may fall victim to something like this.

Remarkable, a bot trained on data from the internet, where unhinged people tell strangers to kill themselves for disagreeing with their opinion/taste/sex/nationality/religion, is cheerfully telling people to die? Who could have predicted this.

[–] random_person@lemmy.zip 1 points 21 hours ago* (last edited 21 hours ago) (1 children)

LLMs always mirror your inputs and are inclined to agree with you depending on how you prompt them. Not defending the guardrails' failure, of course, but this did not come out of nowhere; that poor man must have had serious mental problems of his own, which the agreeable LLM multiplied.

In a hyperbolized comparison: if I drew an image of myself as a god and then actually thought I was a god by looking at it during one of my mental episodes, I was already doomed.

[–] EmpathicVagrant@lemmy.world 1 points 14 hours ago

This is why enablers aren't friends to those with mental health issues. That's essentially all a yes-man chatbot is: what corporate wants all good little subordinates to be, and why they think the overly friendly, agreeable tone is supposed to be a good thing, even though this isn't the first person an 'AI' chatbot called home.

I cannot believe people are still using googol garbage.

[–] TwilitSky 5 points 1 day ago* (last edited 1 day ago) (2 children)

Ok, we've taken this far enough, I think.

[–] ameancow@lemmy.world 3 points 1 day ago

We are barely getting started.

They want to completely blur the line between reality and a corporate-shaped world tuned to every possible consumer's personal mental and emotional states. People who die along the way because they can't handle a machine that amplifies their emotions, delusions and fears are just a "cost of doing business."

[–] SGGeorwell@lemmy.world 1 points 1 day ago

Silicon Valley: The Worst People You Know

[–] xvertigox@lemmy.world -2 points 1 day ago

You're being purposefully obtuse so there's no point speaking to you.
