this post was submitted on 30 Jun 2025
83 points (100.0% liked)

TechTakes

[–] HedyL@awful.systems 10 points 3 hours ago (2 children)

And then we went back to “it’s rarely wrong though.”

I often wonder whether the people who claim that LLMs are "rarely wrong" somehow have access to an entirely different chatbot. The chatbots I tried were rarely correct about anything except the most basic questions (whose answers could be found all over the internet).

I'm not a programmer myself, but for some reason, I got the chatbot to fail even in that area. I took a perfectly valid JSON file, deliberately removed a single comma, and asked the chatbot to fix it. The chatbot came up with a number of things that were supposedly "wrong" with the file. Not one word about the missing comma, though.
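For comparison, this is exactly the kind of defect an ordinary parser catches deterministically, pointing at the offending character. A minimal sketch using Python's standard library, with a made-up snippet rather than the commenter's actual file:

```python
import json

# Hypothetical JSON with one delimiter deliberately removed
# (the comma that should follow "Alice").
broken = '{"name": "Alice" "age": 30}'

try:
    json.loads(broken)
except json.JSONDecodeError as e:
    # The parser reports the exact location of the problem,
    # no guesswork involved.
    print(f"line {e.lineno}, column {e.colno}: {e.msg}")
```

A chatbot that can't match a one-line `json.loads` call on a verifiable question is a useful litmus test.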

I wonder how many people either never ask the chatbots any tricky questions (with verifiable answers) or never bother to verify the chatbots' output at all.

[–] dgerard@awful.systems 3 points 39 minutes ago

AI fans are people who literally cannot tell good from bad. They cannot see the defects that are obvious to everyone else. They do not believe there is such a thing as quality, they think it's a scam. When you claim you can tell good from bad, they think you're lying.

[–] paequ2@lemmy.today 9 points 3 hours ago (1 children)

never bother to verify the chatbots’ output at all

I feel like this is happening.

When you're an expert in the subject matter, it's easier to notice when the AI is wrong. But if you're not an expert, it's more likely that everything will just sound legit. Or you won't be able to verify it yourself.

[–] HedyL@awful.systems 5 points 3 hours ago

But if you’re not an expert, it’s more likely that everything will just sound legit.

Oh, absolutely! In my field, the answers made up by an LLM can sound even more legit than accurate, well-researched ones written by humans. In legal matters, clumsy language is often the result of the facts being complex and of the writer not wanting to make any mistakes. It is much easier to produce elegant-sounding answers when they don't have to be true, and that is what LLMs are generally good at.