this post was submitted on 29 Sep 2025
385 points (98.0% liked)

Fuck AI

4709 readers
1314 users here now

"We did it, Patrick! We made a technological breakthrough!"

A place for all those who loathe AI to discuss things, post articles, and ridicule the AI hype. Proud supporter of working people. And proud booer of SXSW 2024.

AI, in this case, refers to LLMs, GPT technology, and anything listed as "AI" meant to increase market valuations.

founded 2 years ago
[–] PrettyFlyForAFatGuy@feddit.uk 2 points 2 months ago (2 children)

A few developers I know are very impressed by Claude.

[–] 01189998819991197253@infosec.pub 9 points 2 months ago (2 children)

I, too, can copy and paste from StackOverflow.

[–] TomArrr@lemmy.world 4 points 2 months ago

Yes, but you'll know if you're copying from the answer, or the question.

[–] kogasa@programming.dev 2 points 2 months ago (1 children)

Even if that were literally what it did, having a StackOverflow button would be pretty cool

[–] Evotech@lemmy.world 4 points 2 months ago (2 children)

It works well for small programs and boilerplate, but you need to know what you're doing to guide it. It can very often get stuck in a rabbit hole.

[–] baggachipz@sh.itjust.works 2 points 2 months ago

I’ll be honest, it’s probably wasted more time than it’s saved me. I only trust it to format files and find where things might be used in the code base. So, you know, plain-language pretty print and grep.

[–] PrettyFlyForAFatGuy@feddit.uk 2 points 2 months ago (1 children)

I concur; it has sent me on multiple wild goose chases when debugging. It is extremely confident, especially when it's wrong.

It's pretty good at fleshing out documentation, and if you're naming variables properly it's pretty good at gleaning what you're trying to do and autocompleting on a small scale.

> It is extremely confident especially when it's wrong

This reminded me of a recent study I saw posted about the accuracy of AI and attempts to remove hallucinations. One of its conclusions, besides that it's impossible to stop them from hallucinating, was that the benchmarks companies use to grade an AI's quality, and users' expectations, reward confidence in an answer more than the answer's accuracy.