this post was submitted on 01 May 2026
201 points (93.1% liked)

Technology

84274 readers
3325 users here now

This is a most excellent place for technology news and articles.


Our Rules


  1. Follow the lemmy.world rules.
  2. Only tech related news or articles.
  3. Be excellent to each other!
  4. Mod approved content bots can post up to 10 articles per day.
  5. Threads asking for personal tech support may be deleted.
  6. Politics threads may be removed.
  7. No memes allowed as posts, OK to post as comments.
  8. Only approved bots from the list below, this includes using AI responses and summaries. To ask if your bot can be added please contact a mod.
  9. Check for duplicates before posting, duplicates may be removed
  10. Accounts 7 days and younger will have their posts automatically removed.

Approved Bots


founded 2 years ago
MODERATORS
you are viewing a single comment's thread
view the rest of the comments
[–] eager_eagle@lemmy.world 29 points 1 day ago (2 children)

Waste of energy. It's like asking a person to estimate a non-trivial angle. Either use a model trained for that task, or don't bother.

[–] Corkyskog@sh.itjust.works 28 points 1 day ago (1 children)

The point is they are advertising that these models can do it.

[–] eager_eagle@lemmy.world 0 points 23 hours ago* (last edited 23 hours ago) (1 children)

> You’d expect the same answer each time. It’s the same photo, the same model, the same question. But you won’t get the same answer.

I don't know which ads claim that, but anyone who knows the first thing about LLMs knows you don't get the same answer twice.

I'd have understood this expectation 5 years ago, when most people weren't familiar with LLMs, but come on... you don't need to feed it an image 500 times to see that.

[–] Sandbar_Trekker@lemmy.today 1 points 22 hours ago (1 children)

Technically, you can get the same answer twice from an LLM, but only when you control the full input. When a model samples its output, it draws from a random seed. If you run the model locally, you can pin that seed so a given question always produces the same answer.
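For what it's worth, the seeding idea can be sketched with a toy sampler (illustrative only: the vocabulary, weights, and function are made up, and real inference adds many more sources of variation):

```python
import random

def sample_tokens(vocab, weights, n, seed=None):
    """Sample n 'tokens' from a weighted vocabulary.

    With seed=None the output varies run to run, like a hosted LLM;
    with a fixed seed the output is fully reproducible.
    """
    rng = random.Random(seed)  # private RNG so global state doesn't leak in
    return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

# Hypothetical spread of answers to "estimate this angle"
vocab = ["45", "50", "60", "90"]
weights = [4, 3, 2, 1]

run_a = sample_tokens(vocab, weights, 5, seed=42)
run_b = sample_tokens(vocab, weights, 5, seed=42)
assert run_a == run_b  # same seed, same input: identical "answers"
```

Hosted APIs generally don't expose this level of control, which is the commenter's point below.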

[–] eager_eagle@lemmy.world 1 points 19 hours ago

Barely. Even with the code and the seeds, it's still a struggle. There are plenty of questions from people running PyTorch and TensorFlow models who can't reproduce their own results. Maybe you isolate enough variables that consecutive runs actually produce the same output, but the study is about commercial models. You'll never get deterministic output from those.
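One concrete reason pinning the seed isn't enough: parallel GPU reductions can sum the same numbers in a different order between runs, and floating-point addition isn't associative, so the results drift. A minimal illustration:

```python
# Float addition is not associative: summing the same values in a
# different order (as parallel GPU reductions may do) changes the result.
a, b, c = 0.1, 0.2, 0.3

left = (a + b) + c    # one reduction order
right = a + (b + c)   # another order

assert left != right  # the two orders round differently
```

Tiny differences like this get amplified over thousands of layers and sampling steps, which is why bit-exact reproducibility is hard even locally.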

[–] Alvaro@lemmy.blahaj.zone 9 points 20 hours ago

The point is that:

  1. It is being used for this, even though it is obviously not capable of giving a reliable and realistic answer
  2. It allows this usage, even though it is dangerous and not within its capabilities
  3. Each model gives answers that vary wildly, something a human wouldn't do. A human wouldn't randomly give you answers that differ by 10x for the same question.