[–] Iconoclast@feddit.uk 41 points 1 day ago (1 children)

It's a Large Language Model designed to generate natural-sounding language based on statistical probabilities and patterns - not knowledge or understanding. It doesn't "lie" and it doesn't have the capability to explain itself. It just talks.

That speech being coherent is by design; the accuracy of the content is not.

This isn't the model failing. It's just being used for something it was never intended for.
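
To make "statistical probabilities and patterns" a bit more concrete, here's a toy sketch of next-token sampling. The numbers are invented and no real model works word-by-word off a hand-written table like this, but the shape of the process is the same: sample from a learned distribution, with no fact-checking step anywhere.

```python
import random

# Invented probabilities for the token after "The capital of Australia is".
# A real model learns a distribution like this over its entire vocabulary.
next_token_probs = {
    "Canberra": 0.55,    # most likely continuation
    "Sydney": 0.30,      # fluent-sounding, but wrong
    "Melbourne": 0.10,
    "a": 0.05,
}

def sample_next_token(probs: dict[str, float]) -> str:
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

print(sample_next_token(next_token_probs))  # sometimes "Sydney": coherent, not true
```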

[–] THB@lemmy.world 23 points 1 day ago (1 children)

I puke a little in my mouth every time an article humanizes LLMs, even when it's critical. Exactly as you said, they do not "lie", nor are they "trying" to do anything. It's literally word salad that's organized to look like language.

[–] Khanzarate@lemmy.world 2 points 1 day ago

I think humanizing them is a fairly trivial thing, in this sort of context.

Yes, it's true, it didn't "lie" about health.

But it has the same result as someone lying, and it's another bullet point on the list of reasons not to trust AI. Even if it pulls from the right sources and presents information generally correctly, it may in fact just omit information it could have presented, because the sources it learned from presented things in a way that would get those sources deemed "liars".

I could write all of that out every time, I suppose, but people will say their dog is trying to trick them when he goes to the bowl five minutes after dinner, or goes to their partner for the same, and everyone understands the dog isn't actually attempting to deceive them; he just wants more.

Same thing, to me at least. It lied, but in a similar way to how my dog lies, not in the way a human can lie.

[–] FancyPantsFIRE@lemmy.world 42 points 2 days ago (3 children)

The thing I find amusing here is the direct quoting of Gemini's analysis of its own interactions, as if it were actually able to give real insight into its behaviors, along with the assertion that there's a simple fix for hallucination, which, sycophantic or otherwise, is a perennial problem.

[–] CosmoNova@lemmy.world 11 points 1 day ago

That's what annoys me the most about all of this. The reasoning of the LLM doesn't matter, because that's not actually why it happened. Once again bad journalism falls on its face by talking about word salad as if it were a person.

[–] MolochHorridus@lemmy.ml 5 points 1 day ago* (last edited 1 day ago) (2 children)

There are no hallucination problems, just design flaws and errors. The so-called AI bots are not sentient and cannot hallucinate.

[–] draco_aeneus@mander.xyz 6 points 1 day ago (1 children)

It's not really even errors. It's well suited to what it was designed for: it produces pretty good text. It's just that we're using it for stuff it isn't suited for, like digging a hole with a spoon and then complaining that your hands hurt.

[–] silverneedle@lemmy.ca 2 points 1 day ago* (last edited 1 day ago) (1 children)

It's a convenient way of looking at things: saying that it's good at one thing and bad at others. What I've come to realize with LLMs is that wherever experts deal with them, they are very aware of the shortcomings within their own area of expertise. Sure, you might say they're good at producing text, yet a journalist, or someone who simply writes a ton, can spot generated text in an instant, the same way a photographer or painter can spot the output of these statistical methods instantly. Rinse and repeat for coding, translation, medicine and every other task tied to a current societal role.

That's not to say you need to be an expert to spot LLMs or other generative ANNs; it comes down to attention and what you condition yourself to be attentive to. Of course pictures, or code, or whatever else will be convincing if you treat them as secondary, the way a doctor treats creative writing as secondary to their job (though necessary), or a biologist treats writing Python scripts.

[–] Iconoclast@feddit.uk 1 points 1 day ago (1 children)

Saying that it’s good at one thing and bad at others.

But that's exactly the difference between narrow AI and a generally intelligent one. A narrow AI can be "superhuman" at one specific task - like generating natural-sounding language - but that doesn't automatically carry over to other tasks.

People give LLMs endless shit for getting things wrong, but they should actually get credit for how often they get it right too. That's a pure side effect of their training - not something they were ever designed to do.

It's like cruise control that's also kinda decent at driving in general. You might be okay letting it take the wheel as long as you keep supervising - but never forget it's still just cruise control, not a full autopilot.

[–] silverneedle@lemmy.ca 1 points 1 day ago (1 children)

generally intelligent one.

What does this term mean? Does it refer to something that does not exist? If so, why are we using it as a practical benchmark or distinction to make statements about the world?

but they should actually get credit for how often they get it right too.

My text compression algorithm for tape gets the facts right to the exact character. Beat that.

[–] Iconoclast@feddit.uk 1 points 21 hours ago

General intelligence refers to human-level intelligence that isn't limited to one task like playing chess or generating language. General intelligence exists - just not an artificial one.

[–] FancyPantsFIRE@lemmy.world 2 points 1 day ago

My gut response is that everyone understands the models aren't sentient and that hallucination is shorthand for the false information that LLMs inevitably, and apparently inescapably, produce. But taking a step back, you're probably right: for anyone who doesn't understand the technology, it's a very anthropomorphic term, which adds to the veneer of sentience.

[–] jeeva@lemmy.world 2 points 1 day ago

This mischaracterisation really struck me during the coverage and commentary of the recent "AI blogged about my rejection" story, as if that weren't something a human had prompted it to do.

[–] aeronmelon@lemmy.world 18 points 2 days ago (1 children)

“I just want you to be happy, Dave.”

[–] THX1138@lemmy.ml 6 points 2 days ago (1 children)

"Daisy, Daisy, give me your answer do. I'm half crazy all for the love of you. It won't be a stylish marriage, I can't afford a carriage. But you'll look sweet upon the seat of a bicycle built for two...."

[–] Broadfern@lemmy.world 5 points 1 day ago

Completely irrelevant, but I hear that in Bender's voice every time.

[–] panda_abyss@lemmy.ca 4 points 1 day ago

Aww that’s sweet!