A screenshot of this question was making the rounds last week, but this article covers testing against all the well-known models out there.

It also includes outtakes from the 'reasoning' models.

[–] DarrinBrunner@lemmy.world 42 points 2 months ago (5 children)

I think it's worse when they get it right only some of the time. It's not a matter of opinion; they shouldn't change their "minds".

The fucking things are useless for that reason: they're all just guessing, literally.

[–] merc@sh.itjust.works 5 points 2 months ago

It's not literally guessing, because guessing implies it understands there's a question and is trying to answer it. It's not even doing that. It's just generating words that you could expect to find nearby.
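
A toy sketch of what that means mechanically (the probabilities and the example prompt are invented for illustration, not from a real model): generation is just weighted sampling over "which token tends to come next", and nothing in it represents a question or an answer.

```python
import random

# Hypothetical next-token probabilities after the prompt
# "The capital of Australia is" -- numbers made up for illustration.
next_token_probs = {
    "Canberra": 0.55,
    "Sydney": 0.30,
    "Melbourne": 0.10,
    "a": 0.05,
}

def sample_next_token(probs):
    """Pick one token at random, weighted by its probability."""
    tokens = list(probs)
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

# Same prompt every time, different "answers", because each
# call is just a weighted dice roll over nearby words.
for _ in range(5):
    print(sample_next_token(next_token_probs))
```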

[–] XLE@piefed.social 3 points 2 months ago

Even if you retooled the LLM so it didn't randomize its output, it could still produce contradictory answers from a slightly reworded question. I'm talking about a misspelling, different punctuation, things that simply wouldn't cause a person to change their answer.

(And that's assuming a fresh session. Any previous conversation with it could have influenced the output as well. It's such a mess.)
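
A minimal sketch of why, using OpenAI's open-source `tiktoken` tokenizer (the question text is just an example): the model never sees your words, only token IDs, so a one-letter misspelling hands even a fully deterministic model a different input sequence to continue from.

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

# The same question, with and without a one-letter misspelling.
original   = enc.encode("What is the capital of Australia?")
misspelled = enc.encode("What is the capitol of Australia?")

print(original)
print(misspelled)
print(original == misspelled)  # False: different tokens in means a
                               # different computation out, even with
                               # randomness turned off
```

Previous conversation turns work the same way: they're just more tokens prepended to the sequence, shifting everything that comes after.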

[–] Tetragrade@leminal.space -4 points 2 months ago* (last edited 2 months ago) (1 children)

Same takeaway as the article (everyone read the article, right?).

Applying it to yourself, can you recall instances when you were asked the same question at different points in time? How did you respond?

[–] CileTheSane@lemmy.ca 0 points 2 months ago (1 children)

Having read the article (you read the article, right?), what gave you the impression the AI was asked the question at different points in time?

[–] Tetragrade@leminal.space 0 points 2 months ago* (last edited 2 months ago) (1 children)

The AI was asked the same question repeatedly and gave different answers, due to its randomised structure.

People will also often do this (I have, personally), but because our actions seem to be strongly influenced by time-dependent stuff (like sense perception and short-term memory contents), I'd expect you'd need to ask at different times.

[–] CileTheSane@lemmy.ca 1 points 2 months ago (1 children)

My answer to this question will not change if you ask me a year from now, because, as OP said, this is not a matter of opinion; there is a factually correct answer.

[–] Tetragrade@leminal.space -2 points 2 months ago* (last edited 2 months ago) (1 children)
[–] CileTheSane@lemmy.ca 1 points 2 months ago

Good talk, great contribution.