this post was submitted on 04 Apr 2026
155 points (98.7% liked)

Fuck AI

6638 readers
971 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago

Test subjects who consulted AI were overwhelmingly willing to accept its answers without scrutiny, whether correct or not.

all 21 comments
[–] hostileempathy@lemmy.zip 13 points 1 day ago

Not surprising. We always want to take the path of least resistance. I mean, 20 years ago you could access the world's information via the internet, but you had to know how to search for things. We slowly went from that to "I'll just Google it" and "Well, Google says." Now that we have LLMs (which IMHO are mostly just fancier, faster Google search), we have people legitimately saying "let me ask ChatGPT" and "ChatGPT says."

I think there is a positive future for LLMs and true AI, but it's not right now, and it definitely is not in the hands of capitalists.

[–] Madrigal@lemmy.world 33 points 2 days ago (1 children)

A lot of people never adopted logical thinking in the first place.

[–] Zacryon@feddit.org 9 points 1 day ago (1 children)

Which makes careless use of big babble machines even more problematic.

[–] hperrin@lemmy.ca 30 points 2 days ago (3 children)

I mentioned that I don’t use AI today, and the person I was talking to was really surprised. They didn’t understand how I could not use AI for anything.

[–] cheers_queers@lemmy.zip 14 points 2 days ago (1 children)

it's almost like the world has been turning for millions of years without it

[–] hperrin@lemmy.ca 8 points 2 days ago (1 children)

They were like, “so you don’t Google anything?”

Well, first, yes, I don’t use Google. But second, you can turn AI off! xD

[–] cheers_queers@lemmy.zip 4 points 2 days ago

that's crazy.

[–] FlashMobOfOne@lemmy.world 3 points 1 day ago

Part of why I support age-gating some things is the scary stories I hear from schoolteachers I know: whole classrooms of kids who have trouble concentrating on anything for more than 60 seconds, or the question they hear every day: "If AI can do this, why do I have to learn it in the first place?"

(And don't come at me about age gating because I don't care to argue about it.)

[–] valkyre09@lemmy.world 2 points 1 day ago (1 children)

I saw somebody at work upload a firewall config XML and start querying whether stuff was blocked. I actually thought it was a pretty clever use of it.

I probably wouldn’t trust it to write a config and upload it back, but as an assistant for an untrained eye it was pretty solid.

I’ve also used copilot for silly things like

“Take these 10 lines of process steps, make them sound professional and format them for easy reading”.

Stuff like that isn’t my job, but when it lands on my desk it’s a quick way to get it done and get back to what I’m supposed to be focusing on.

This is a long way of saying, there are definitely use cases, but nobody’s being replaced.

[–] okamiueru@lemmy.world 6 points 1 day ago* (last edited 1 day ago)

I saw somebody at work upload a firewall config XML and start querying whether stuff was blocked. I actually thought it was a pretty clever use of it.

I would find it somewhere between worrisome and you-should-lose-your-job, depending on how important that firewall is. This might seem exaggerated, but if your colleague had shown that config to a child and then asked them yes-and-no questions, a game the child happily played along with, I would consider that exactly as reasonable, and exactly as responsible, as asking an LLM. Imagine someone doing this for an important firewall config... and taking the child's answers at face value. It's not unreasonable to conclude that this person is grossly unqualified and showing a dangerous lack of judgment.

And that's just the issue of using a bullshit generator as a source of truth. If the firewall config could be considered sensitive information, uploading it to a third party would be grounds for dismissal for entirely separate reasons.
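For what it's worth, "is this blocked?" is the kind of question you can answer deterministically in a few lines instead of asking a bullshit generator. A minimal sketch in Python, assuming a simplified, hypothetical rule schema (real firewall products each use their own XML format, so the tags and attributes here are invented for illustration):

```python
# Deterministically answer "is this port blocked?" from a firewall XML config.
# The <firewall>/<rule> schema below is hypothetical, not any real product's.
import xml.etree.ElementTree as ET

CONFIG = """
<firewall>
  <rule action="block" proto="tcp" dest_port="23"/>
  <rule action="block" proto="tcp" dest_port="445"/>
  <rule action="allow" proto="tcp" dest_port="443"/>
</firewall>
"""

def is_blocked(xml_text: str, proto: str, port: int) -> bool:
    """Return True if the first matching rule blocks the given proto/port."""
    root = ET.fromstring(xml_text)
    for rule in root.findall("rule"):  # first-match-wins, like most firewalls
        if rule.get("proto") == proto and rule.get("dest_port") == str(port):
            return rule.get("action") == "block"
    return False  # assume a default-allow policy for this sketch

print(is_blocked(CONFIG, "tcp", 445))  # True
print(is_blocked(CONFIG, "tcp", 443))  # False
```

Same answer every time, no hallucinations, and the config never leaves the machine.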

[–] BananaOnionJuice@lemmy.dbzer0.com 16 points 2 days ago (3 children)

But LLMs are bad at math and logic.

[–] Tyrq@lemmy.dbzer0.com 15 points 2 days ago (1 children)

Yeah, that's kinda the problem: they couldn't think for themselves, and would rather trust a hallucinating autocomplete program.

[–] Dumhuvud@programming.dev 4 points 1 day ago

would rather trust a hallucinating autocomplete program

I mean, outsourcing your thinking would still negatively affect your cognitive capabilities even if you were to rely on something actually intelligent.

[–] Naich@piefed.world 10 points 1 day ago

And facts. They are very good at sounding confident though.

[–] Zacryon@feddit.org 5 points 1 day ago

Extend and assist. Not externalize.

It's fine to use as an assistive tool. But outsourcing thinking is problematic. Unfortunately, this outcome was likely.

Thinking is hard.

[–] orioler25@lemmy.world 2 points 1 day ago

Yeah, what a new and scary thing that started with AI and nothing else ever. Jfc, AI has been the best thing for liberal moralistic arguments about social degeneracy since social media.

[–] imjustmsk@lemmy.ml 1 points 1 day ago