this post was submitted on 11 Aug 2025
785 points (98.8% liked)

Fuck AI

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

[–] jj4211@lemmy.world 11 points 4 days ago (3 children)

It's actually a bit frightening to see this.

Have seen people start to feel validated because 'even' ChatGPT agreed with them, as if ChatGPT holding the same opinion as they did meant something. The more validated they feel, the more unhinged they get, because they're getting what seems to be 'external validation'.

The internet was already kind of bad for validating people in ways they shouldn't be validated, but the LLM text generators are making that seem tame by comparison.

[–] Droggelbecher@lemmy.world 4 points 3 days ago

In 2007, when I was ten, you'd almost certainly get laughed out of the room by other ten-year-olds if you said you were right because someone on Club Penguin agreed with you. It's beyond me how those ten-year-olds are now 28-year-olds who think they're right because a text generator agreed with them.

[–] Valmond@lemmy.world 4 points 4 days ago* (last edited 4 days ago) (2 children)

The idea of the "virtual friend" has been around for a long time. I find it curious that, as far as I know, Star Trek and other franchises haven't really used that idea yet.

[–] jj4211@lemmy.world 4 points 3 days ago (1 children)

SeaQuest DSV did it in a recurring way, without really touching on the dark side of it...

And of course the TNG holodeck had numerous one-shots of the concept: Barclay recreating all his colleagues in 'better' ways, Geordi making an idealized Leah Brahms in one episode and later having to face the creepiness of that scenario. TNG at least eventually confronted the problematic consequences...

[–] Valmond@lemmy.world 1 points 3 days ago (1 children)

Well, they fiddled with it a bit, but full-blown AI companions for everyone? Guess it'd be boring as hell to watch...

[–] jj4211@lemmy.world 1 points 3 days ago (1 children)

Full-blown AI as in a person, albeit a synthetic one? Well, you have Data, and the EMH in Voyager.

As for the version of a synthetic 'intelligence' that's just an echo chamber for the person it's made to serve, yeah, those show up, and they work better as one-shots, at least in a show where the recurring cast shouldn't be completely dysfunctional. It's a way to show a character growing and facing the negative consequences of 'the easy way out', and it doesn't work if the character has to just stay in the muck for the duration.

[–] Valmond@lemmy.world 1 points 3 days ago

Well, the AI friend could be "a la Matrix" and thus not the easy way out, but more of a parenting/concerned friend. But I still imagine it wouldn't be much fun to watch.

[–] SwingingTheLamp@midwest.social 2 points 3 days ago (1 children)

Does Rimmer on Red Dwarf count?

[–] Valmond@lemmy.world 1 points 3 days ago

Ha ha ha ha, well I don't really know 😁

[–] slaneesh_is_right@lemmy.org 3 points 3 days ago (1 children)

I have way more problems with that than with people "falling in love with AI". Dating sites are riddled with people who proudly ask ChatGPT for advice. And at least in my experience, they are very smug about it and feel super smart because the all-knowing AI thinks they're smart too and agrees all the time.

[–] jj4211@lemmy.world 5 points 3 days ago (1 children)

Recently had an exchange where someone declared that 'we' had come to a conclusion, and when I asked who else, he said ChatGPT. He got really defensive when I said that isn't another party coming to a conclusion; it's just text generated to be consistent with the text submitted to it so far, with a goal of being agreeable no matter what.

I've no idea how this mindset works and persists even as you open up the exact same model and get it to say exactly the opposite opinion.

Many people have no idea how LLMs work. No clue at all. They think it's actual AI, what we now call AGI, and they often don't have enough baseline knowledge to understand even basic explanations of how it works.

The rest are just looking for external validation and will ignore anything that doesn't confirm their biases. These people are nothing new; they've just been given a more convenient tool.