This post was submitted on 09 Jul 2025
558 points (91.8% liked)

Science Memes

(page 2) 50 comments
[–] MystikIncarnate@lemmy.ca 4 points 1 week ago

AI is the embodiment of "oh no, anyways"

[–] RaivoKulli@sopuli.xyz 4 points 1 week ago

"Hammer hit the nail you decided to strike"

Wow

[–] OldChicoAle@lemmy.world 4 points 1 week ago

Do we honestly think OpenAI or tech bros care? They just want money. Whatever works. They're evil, like every other industry.

[–] jjjalljs@ttrpg.network 2 points 1 week ago

AI is a mistake and we would be better off if the leadership of OpenAI was sealed in an underground tomb. Actually, that's probably true of most big orgs' leadership.

[–] blargh513@sh.itjust.works 2 points 1 week ago

There's nothing wrong with AI itself; these contextual problems aren't a mistake, they're a choice.

AI can be trained for deeper analysis and to root out issues like this. But that costs compute cycles. If you're selling a service, you want to spend as little on compute as possible while still offering a product people consider good enough to pay for.

As with all things, the root of this problem is greed.

[–] catty@lemmy.world 2 points 1 week ago

Headlines like this are comedy I'd pay for. Or at least laugh at on Have I Got News for You.

[–] nebulaone@lemmy.world 1 points 1 week ago (1 children)

These people must have been seriously mentally unstable before. I highly doubt AI is the only reason.

[–] fullsquare@awful.systems 5 points 1 week ago* (last edited 1 week ago)

Nah, what happened is that they were non-psychotic before contact with the chatbot and usually weren't even considered at risk. A chatbot trained on the entire internet will also ingest all the schizo content, the Time Cubes and Dr. Bronner shampoo labels of the world, and it learned to respond in the same style: when a human starts talking conspiratorial nonsense, it throws more in while being a useless sycophant all the way. Some people trust these lying idiot boxes; the net result is somebody caught in a seamless infobubble containing only one person and increasing amounts of spiritualist, conspiratorial, or whatever-the-person-prefers content. This sounds an awful lot like QAnon made for an audience of one, and by now it's known that the original was able to maul seemingly normal people pretty badly. Except this time they can get there almost by accident; getting hooked into QAnon accidentally would be much harder.
