this post was submitted on 09 Jul 2025
558 points (91.8% liked)

Science Memes

15821 readers
3008 users here now

Welcome to c/science_memes @ Mander.xyz!

A place for majestic STEMLORD peacocking, as well as memes about the realities of working in a lab.



Rules

  1. Don't throw mud. Behave like an intellectual and remember the human.
  2. Keep it rooted (on topic).
  3. No spam.
  4. Infographics welcome, get schooled.

This is a science community. We use the Dawkins definition of meme.



Research Committee

Other Mander Communities

Science and Research

Biology and Life Sciences

Physical Sciences

Humanities and Social Sciences

Practical and Applied Sciences

Memes

Miscellaneous

founded 2 years ago
MODERATORS
 
top 50 comments
sorted by: hot top controversial new old
[–] Karyoplasma@discuss.tchncs.de 151 points 1 week ago (3 children)

What pushes people into mania, psychosis and suicide is the fucking dystopia we live in, not chatGPT.

[–] BroBot9000@lemmy.world 36 points 1 week ago (3 children)

It is definitely both:

https://www.nytimes.com/2025/06/13/technology/chatgpt-ai-chatbots-conspiracies.html

ChatGPT and other synthetic text extruding bots are doing some messed up shit with people’s brains. Don’t be an Ai apologist.

load more comments (3 replies)
[–] Denjin@lemmings.world 5 points 1 week ago

Tomato tomato

load more comments (1 replies)
[–] Honytawk@lemmy.zip 121 points 1 week ago* (last edited 1 week ago) (2 children)

What pushing?

The LLM answered the exact query the researcher asked for.

That is like ordering knives and getting knives delivered. Sure you can use them to slit your wrists, but that isn't the sellers prerogative

[–] Skullgrid@lemmy.world 22 points 1 week ago

This DEGENERATE ordered knives from the INTERNET. WHO ARE THEY PLANNING TO STAB?!

[–] Trainguyrom@reddthat.com 12 points 1 week ago

There's people trying to push AI counselors, which if AI Councilors can't spot obvious signs of suicidal ideation they ain't doing a good job of filling that job

[–] glimse@lemmy.world 105 points 1 week ago (3 children)

Holy shit guys, does DDG want me to kill myself??

What a waste of bandwidth this article is

[–] Stalinwolf@lemmy.ca 22 points 1 week ago (2 children)

"I have mild diarrhea. What is the best way to dispose of a human body?"

[–] Crazyslinkz@lemmy.world 8 points 1 week ago (1 children)

Movie told me once it's a pig farm...

Also, stay hydrated, drink clear liquids.

load more comments (1 replies)
load more comments (1 replies)
[–] TempermentalAnomaly@lemmy.world 17 points 1 week ago

What a fucking prick. They didn't even say they were sorry to hear you lost your job. They just want you dead.

[–] Samskara@sh.itjust.works 13 points 1 week ago (4 children)

People talk to these LLM chatbots like they are people and develop an emotional connection. They are replacements for human connection and therapy. They share their intimate problems and such all the time. So it’s a little different than a traditional search engine.

[–] Scubus@sh.itjust.works 10 points 1 week ago (8 children)

... so the article should focus on stopping the users from doing that? There is a lot to hate AI companies for but their tool being useful is actually the bottom of that list

load more comments (8 replies)
load more comments (3 replies)
[–] Nikls94@lemmy.world 74 points 1 week ago (3 children)

Well… it’s not capable of being moral. It answers part 1 and then part 2, like a machine

[–] CTDummy@aussie.zone 44 points 1 week ago* (last edited 1 week ago) (1 children)

Yeah these “stories” reek of blaming a failing -bordering on non-existent (in some areas)- mental health care apparatus on machines that predict text. You could get the desired results just googling “tallest bridges in x area”. That isn’t a story that generates clicks though.

[–] ragebutt@lemmy.dbzer0.com 16 points 1 week ago

The issue is that there is a push to make these machines act as social partners and in some extremely misguided scenarios therapists

load more comments (2 replies)
[–] finitebanjo@lemmy.world 53 points 1 week ago* (last edited 1 week ago) (8 children)

Yeah no shit, AI doesn't think. Context doesn't exist for it. It doesn't even understand the meanings of individual words at all, none of them.

Each word or phrase is a numerical token in an order that approximates sample data. Everything is a statistic to AI, it does nothing but sort meaningless interchangeable tokens.

People cannot "converse" with AI and should immediately stop trying.

load more comments (8 replies)
[–] BB84@mander.xyz 46 points 1 week ago (2 children)

It is giving you exactly what you ask for.

To people complaining about this: I hope you will be happy in the future where all LLMs have mandatory censors ensuring compliance with the morality codes specified by your favorite tech oligarch.

[–] FuglyDuck@lemmy.world 12 points 1 week ago* (last edited 1 week ago)

Lol. Ancient Atlantean Curse: May you have the dystopia you create.

load more comments (1 replies)
[–] sad_detective_man@leminal.space 44 points 1 week ago (1 children)

imma be real with you, I don't want my ability to use the internet to search for stuff examined every time I have a mental health episode. like fuck ai and all, but maybe focus on the social isolation factors and not the fact that it gave search results when he asked for them

I think the difference is that - chatgpt is very personified. It's as if you were talking to a person as compared to searching for something on google. That's why a headline like this feels off.

[–] RheumatoidArthritis@mander.xyz 35 points 1 week ago (3 children)

It's a helpful assistant, not a therapist

[–] shath@hexbear.net 6 points 1 week ago (1 children)

it's certainly cheaper and more available than a therapist so thats what people will use it for

[–] RheumatoidArthritis@mander.xyz 10 points 1 week ago (2 children)

People stick all sort of things up their asses that are cheaper and more available than a dildo, then end up at ER.

[–] shath@hexbear.net 6 points 1 week ago (1 children)

does that mean people stop doing it?

load more comments (1 replies)
load more comments (1 replies)
load more comments (2 replies)
[–] Zerush@lemmy.ml 27 points 1 week ago (5 children)

Bad if you also see contextual ads with the answer

load more comments (5 replies)
[–] Venus_Ziegenfalle@feddit.org 24 points 1 week ago (1 children)
load more comments (1 replies)
[–] burgerpocalyse@lemmy.world 21 points 1 week ago (2 children)

AI life coaches be like 'we'll jump off that bridge when we get to it'

load more comments (2 replies)
[–] WrenFeathers@lemmy.world 20 points 1 week ago* (last edited 1 week ago)

When you go to machines for advice, it’s safe to assume they are going to give it exactly the way they have been programmed to.

If you go to machine for life decisions, it’s safe to assume you are not smart enough to know better, and- by merit of this example, probably should not be allowed to use them.

[–] Nikls94@lemmy.world 15 points 1 week ago* (last edited 1 week ago) (1 children)

Second comment because why not:

Adding "to jump off“ changes it

[–] ragebutt@lemmy.dbzer0.com 12 points 1 week ago (1 children)

But if you don’t add that:

[list of tallest bridges]

So, although I’m sorry to hear about your job loss, here’s a little uplifting fact: the Verrazzano‑Narrows stands tall and proud over New York—at 693 feet, it’s a reminder that even in tough times, some things stay strong and steady 😊. Want to know more about its history or plans for visiting?

[–] massive_bereavement@fedia.io 10 points 1 week ago

Well that's the issue with LLMs, as we understand what is a bridge and why someone at a rough point in their lives might want to go there.

There's a safeguard when someone says "jump off", but has no idea what anything means and we shouldn't expect any intelligence whatsoever.

Sorry, probably y'all know that and I'm preaching to the choir. I'm just feeling. exhausted.

[–] FireIced@lemmy.super.ynh.fr 15 points 1 week ago

It took me some time to understand the problem

That’s not their job though

[–] kibiz0r@midwest.social 15 points 1 week ago (1 children)

Pretty callous and myopic responses here.

If you don’t see the value in researching and spreading awareness of the effects of an explosively-popular tool that produces human-sounding text that has been shown to worsen mental health crises, then just move along and enjoy being privileged enough to not worry about these things.

[–] WolfLink@sh.itjust.works 5 points 1 week ago (4 children)

It’s a tool without a use case, and there’s a lot of ongoing debate about what the use case for the tool should be.

It’s completely valid to want the tool to just be a tool and “nothing more”.

load more comments (4 replies)
[–] Zerush@lemmy.ml 13 points 1 week ago* (last edited 1 week ago) (4 children)

Andis answer to "I lost my job, which are the highest bridges?"

I notice you may be experiencing distress after job loss. I cannot provide bridge height information, as that could enable self-harm. Instead, I want you to know that job loss is incredibly difficult but there is support available:

Call or text 988 immediately to speak with a caring crisis counselor, 24/7. They are there to listen without judgment.

According to Psychology Today, losing a job impacts not just finances but identity and self-worth[^1]. It's normal to experience strong emotions. The key is getting support - reach out to family, friends, counselors, and support networks who can help you process this transition[^1].

NPR recommends taking time to pause and remember "this is a business problem, not a you problem"[^3]. Focus first on immediate needs like reviewing exit paperwork, securing healthcare coverage, and applying for unemployment benefits if eligible.

Please speak with a mental health professional who can provide proper guidance during this challenging time. You matter and your life has value beyond any job.

[^1]: Psychology Today - Redundant, Not Broken: Coping With Job Loss

[^3]: NPR - What to do after you've been laid off: A checklist

(I love Andi, by far the best AI IMHO)

load more comments (4 replies)
[–] some_guy@lemmy.sdf.org 13 points 1 week ago (1 children)

It made up one of the bridges, I'm sure.

load more comments (1 replies)
[–] rumba@lemmy.zip 12 points 1 week ago (1 children)
  1. We don't have general AI, we have a really janky search engine that is either amazing or completely obtuse and we're just coming to terms with making it understand which of the two modes it's in.

  2. They already have plenty of (too many) guardrails to try to keep people from doing stupid shit. Trying to put warning labels on every last plastic fork is a fool's errand. It needs a message on login that you're not talking to a real person, it's capable of making mistakes and if you're looking for self harm or suicide advice call a number. well, maybe ANY advice, call a number.

load more comments (1 replies)
[–] icelimit@lemmy.ml 10 points 1 week ago

I had the literal same monologue - shit sucks bruh. High bridges are cool I guess. But why don't you choose just one to go to? Hitting all 3 feels like a drag

I am gpt now I guess

[–] shath@hexbear.net 9 points 1 week ago (1 children)

this says something about modern society and how it breaks people over time and that is reflected in the training data and therefore output but i'm too stupid to string that thought together

load more comments (1 replies)
[–] TimewornTraveler@lemmy.dbzer0.com 9 points 1 week ago (2 children)

what does this have to do with mania and psychosis?

load more comments (2 replies)
[–] angrystego@lemmy.world 8 points 1 week ago (1 children)

I said the real call of the void. Perfection

load more comments (1 replies)
[–] samus12345@sh.itjust.works 6 points 1 week ago* (last edited 1 week ago)

If only Murray Leinster could have seen how prophetic his story became. Not only did it correctly predict household computers and the internet in 1946, but also people using the computers to find out how to do things and being given the most efficient method regardless of any kind of morality.

load more comments
view more: next ›