this post was submitted on 03 Mar 2026
57 points (96.7% liked)
Health - Resources and discussion for everything health-related
4260 readers
136 users here now
Health: physical and mental, individual and public.
Discussions, issues, resources, news, everything.
See the pinned post for a long list of other communities dedicated to health or specific diagnoses. The list is continuously updated.
Nothing here shall be taken as medical or any other kind of professional advice.
Commercial advertising is considered spam and not allowed. If you're not sure, contact mods to ask beforehand.
Linked videos without original description context by OP to initiate healthy, constructive discussions will be removed.
Regular rules of lemmy.world apply. Be civil.
founded 2 years ago
MODERATORS
you are viewing a single comment's thread
You keep using that word. I do not think it means what you think it means.
I don’t think you do either, or at least you don’t understand LLMs.
LLMs are absolutely deterministic, but to make them useful we use sampling to focus or diversify results, and adjust the probability distribution etc. to control variability/randomness.
Without those variables (top-p and temperature, etc), transformers are indeed deterministic.
There are other aspects that have an influence, such as floating-point optimisations, hardware latency, etc. But the main reason you don't encounter deterministic LLMs day to day is that they're built not to be. The core of what makes an LLM an LLM is technically deterministic.
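To make that concrete, here's a minimal sketch (toy logits, not a real model) of the distinction: the forward pass produces scores deterministically, greedy decoding (argmax) over them is deterministic too, and randomness only enters when you sample from the distribution.

```python
import math
import random

def softmax(logits, temperature=1.0):
    # Scale logits by temperature, then normalize -- fully deterministic.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits for four candidate tokens.
logits = [2.0, 1.0, 0.5, 0.1]

# Greedy decoding: always pick the argmax -- same input, same output.
greedy = max(range(len(logits)), key=lambda i: logits[i])

# Sampled decoding: draw from the distribution -- output can vary
# run to run unless the RNG is seeded.
probs = softmax(logits, temperature=0.8)
sampled = random.choices(range(len(logits)), weights=probs, k=1)[0]
```

The nondeterminism lives entirely in that last draw; everything before it is a pure function of the input.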
I mean, that's kinda like saying a random number generator can be deterministic. It can be, but that's not how it's used.
Sure, LLMs can be deterministic, but they aren't in practice because it makes the results worse. If you prompt any production LLM with the same inputs, you aren't guaranteed the same outputs.
LLMs, like all computer software, are deterministic: they have a stable output for every input. LLMs as users use them have random parameters inserted to make them act nondeterministically, assuming that random input is itself nondeterministic.
You're being down voted because LLMs aren't deterministic, it's basically the biggest issue in productizing them. LLMs have a setting called "temperature" that is used to randomize the next token selection process meaning LLMs are inherently not deterministic.
If you set the temperature to 0, then it will produce consistent results, but the "quality" of output drops significantly.
If you give whatever random data source it uses the same seed, it will output the same thing.
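A quick sketch of that point (toy distribution, not a real model): with a fixed seed, even "random" weighted sampling produces the same token sequence every time.

```python
import random

def sample_tokens(probs, n, seed):
    # A seeded RNG makes the sampling step fully reproducible.
    rng = random.Random(seed)
    return [rng.choices(range(len(probs)), weights=probs, k=1)[0]
            for _ in range(n)]

probs = [0.5, 0.3, 0.2]  # hypothetical token distribution
run_a = sample_tokens(probs, 5, seed=42)
run_b = sample_tokens(probs, 5, seed=42)
# run_a == run_b: identical seed, identical "random" output.
```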
So question then, what parameter controls deterministic results for an LLM?
I honestly don't know. I think all that matters is the token window and a random seed used for a random weighted choice.
I encourage you to do some additional research on LLMs and the underlying mathematical models before making statements based on incorrect information.
The answer to this question was Temperature. It’s one of the many hyperparameters available to the engineer loading the model. Begin with looking into the difference between hyperparameters and parameters, as they relate to LLMs.
I’m one of the contributors to the LIDA cognitive architecture. This is my space and I want to help people learn so we can begin to use this technology as was intended - not all this marketing wank.
Listen, this is going to sound like a loaded inflammatory question and I don’t really know how to fix that over text, but you say you’re in the space and I’m genuinely curious as to your take on this:
Do you think it’s possible to build LLM technology in a way that:
The core problem with this technology is the misuse/misunderstanding that:
Thank you for coming to my autistic TED talk <3
Edit: Also, fantastic question and never apologize for wanting to learn; keep that hunger and run with it
Not who you asked but
https://lemmy.world/comment/22464598
Showing that someone hasn't answered your quiz question correctly isn't a great way to make an argument.
You’ve missed the point - I was responding to someone answering in an authoritative manner about something of which they were misinformed. I posed a question someone in the space would immediately know. The disappointing part is that simply pasting my question into any search engine or LLM would immediately have said “Temperature.”
This is a perfect example of how we’re using our brain less and less and simply relying on “something” else to answer it for us. Do your research. Learn and teach.
Nothing Kairos is saying is misinformation, though. Temperature applies randomness to a generated probability distribution over tokens. That doesn't mean the distribution wasn't generated deterministically, or that the randomness applied couldn't itself be deterministic. Their description of how it works is accurate; they don't need to prove their qualifications and knowledge of jargon for that to be a good argument, and by focusing on that aspect in a way that doesn't contradict the point, you are making a bad argument.
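To illustrate with toy numbers (hypothetical logits, not a real model): temperature reshapes the distribution deterministically; a low temperature sharpens it toward argmax, a high one flattens it toward uniform, and randomness only appears if you then draw from it.

```python
import math

def softmax(logits, temperature):
    # Temperature scaling is a deterministic transform of the logits.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    exps = [math.exp(x - m) for x in scaled]
    s = sum(exps)
    return [e / s for e in exps]

logits = [2.0, 1.0, 0.1]  # hypothetical scores from a deterministic forward pass

sharp = softmax(logits, temperature=0.1)   # near one-hot: effectively greedy
flat = softmax(logits, temperature=10.0)   # near uniform: very random when sampled
# Both distributions are computed deterministically; randomness enters only
# when a token is actually drawn from them.
```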
What's lost is the question of what determinism even means in this context, or why being deterministic would matter. It is unclear how being deterministic or not, by any definition, has anything to do with how good an LLM is at making correct medical decisions, as the person starting this comment chain was implying.