this post was submitted on 03 Mar 2026
105 points (98.2% liked)
Health - Resources and discussion for everything health-related
So it goes both ways. Almost like it's an LLM, not intelligent, and is non-deterministic because all LLMs function that way. Maybe we shouldn't have every part of society reliant on something like this?
LLMs are very deterministic
You keep using that word. I do not think it means what you think it means.
I mean, that's kinda like saying a random number generator can be deterministic. It can be, but that's not how it's used.
Sure, LLMs can be deterministic, but they aren't in practice because it makes the results worse. If you prompt any production LLM with the same inputs, you aren't guaranteed the same outputs.
LLMs, like all computer software, are deterministic: the same input always produces the same output. But LLMs as users encounter them have random parameters injected to make them act nondeterministically, at least if you assume that injected randomness is itself nondeterministic.
You're being downvoted because LLMs aren't deterministic; it's basically the biggest issue in productizing them. LLMs have a setting called "temperature" that is used to randomize the next-token selection process, meaning LLMs are inherently not deterministic.
If you set the temperature to 0, then it will produce consistent results, but the "quality" of the output drops significantly.
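To make the two regimes above concrete, here is a toy next-token sampler (a sketch only; the function name and logits are made up, not from any real inference stack). At temperature 0 it reduces to greedy argmax, which is fully repeatable; higher temperatures flatten the distribution before a weighted random draw.

```python
import math
import random

def sample_next_token(logits, temperature, rng=random):
    """Pick a token index from raw logits.

    temperature == 0 means greedy decoding: always take the argmax,
    which is repeatable. Higher temperatures flatten the distribution,
    making lower-probability tokens more likely to be drawn.
    """
    if temperature == 0:
        return max(range(len(logits)), key=lambda i: logits[i])
    # Temperature-scaled softmax (subtract the max for numerical stability).
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    # Weighted random choice over token indices.
    return rng.choices(range(len(probs)), weights=probs, k=1)[0]
```

Note that even the temperature > 0 path is only "random" as far as its RNG is: pass a seeded `random.Random` and the draws repeat exactly.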
If you give whatever random data source it uses the same seed, it will output the same thing.
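As a toy illustration of that point (the token list and weights here are hypothetical, not from a real model, which would compute them with a deterministic forward pass):

```python
import random

# Hypothetical vocabulary and next-token probabilities.
TOKENS = ["the", "a", "cat", "dog"]
WEIGHTS = [0.5, 0.3, 0.15, 0.05]

def sample_tokens(seed, n=8):
    """Sample n tokens; the same seed always yields the same sequence."""
    rng = random.Random(seed)
    return [rng.choices(TOKENS, weights=WEIGHTS, k=1)[0] for _ in range(n)]

# Same seed, same "random" output: the sampling is reproducible.
assert sample_tokens(42) == sample_tokens(42)
```

Production APIs usually don't expose the seed, which is why the same prompt varies from run to run even though nothing non-deterministic is happening underneath.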
So question then, what parameter controls deterministic results for an LLM?
I honestly don't know. I think all that matters is the token window and a random seed used for a random weighted choice.
I encourage you to do some additional research on LLMs and the underlying mathematical models before making statements based on incorrect information.
The answer to this question was Temperature. It’s one of the many hyperparameters available to the engineer loading the model. Begin with looking into the difference between hyperparameters and parameters, as they relate to LLMs.
I’m one of the contributors to the LIDA cognitive architecture. This is my space and I want to help people learn so we can begin to use this technology as was intended - not all this marketing wank.
Listen, this is going to sound like a loaded inflammatory question and I don’t really know how to fix that over text, but you say you’re in the space and I’m genuinely curious as to your take on this:
Do you think it’s possible to build LLM technology in a way that:
The core problem with this technology is the misuse/misunderstanding that:
Thank you for coming to my autistic TED talk <3
Edit: Also, fantastic question and never apologize for wanting to learn; keep that hunger and run with it
Well, this was exactly the answer I expected but I’m still disappointed.
I feel like I’m in a niche position where I want the technology to deliver on promises made (not inherently anti-AI) but even if they did I would still refuse to use them until the ethical and moral issues get solved in their creation and use (definitely anti-cramming-LLMs-into-every-facet-of-our-lives).
I miss being excited about machine learning, but LLMs being the whole topic now is so disappointing. Give us back domain specific, bespoke ML applications.
Not who you asked but
https://lemmy.world/comment/22464598
Showing that someone hasn't answered your quiz question correctly isn't a great way to make an argument.
You’ve missed the point - I was responding to someone answering in an authoritative manner about something of which they were misinformed. I posed a question someone in the space would immediately know. The disappointing part is that simply pasting my question into any search engine or LLM would immediately have said “Temperature.”
This is a perfect example of how we’re using our brain less and less and simply relying on “something” else to answer it for us. Do your research. Learn and teach.
Nothing Kairos is saying is misinformation, though. Temperature applies randomness to a generated probability distribution for tokens. That doesn't mean the probability distribution wasn't generated deterministically, and it doesn't mean the randomness applied couldn't itself be deterministic. How they describe it working is accurate; they don't need to prove their qualifications and knowledge of jargon for that to be a good argument, and by focusing on that aspect in a way that doesn't contradict the point, you are making a bad argument.
What's lost is the question of what determinism even means in this context, or why the property of being deterministic would even matter. It is unclear how being deterministic or not, by any definition, would have anything to do with how good an LLM is at making correct medical decisions, as the person starting this comment chain was implying.
Sure, but not always, which means they can't be considered completely deterministic. If you input the same text into an LLM, there's a high chance that you'll get a different output. This is due to a lot of factors, but LLMs hallucinate because of it.
Medical care is something where I would not ever use an LLM. Sure, doctors can come to different results, too, but at least they can explain their logic. LLMs are unable to do this at any real level.
The tech itself is deterministic, like all other computer software; the provider just adds randomness. Additionally, it is only deterministic with respect to the exact full context. Asking twice is different from asking once, and saying "black man" in place of "white woman" is also different.
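A toy sketch of the "asking twice is different than once" point (nothing here is real inference code; the reply function just stands in for one model turn consuming sampler state):

```python
import random

def reply(rng, n=4):
    """Toy stand-in for one model reply: n draws from the sampler."""
    return [rng.randrange(100) for _ in range(n)]

# One session, asked twice: the second answer sees a sampler (and, in a
# real LLM, a context window) that the first answer already advanced,
# so it generally differs from the first.
session = random.Random(7)
first = reply(session)
second = reply(session)

# A fresh session with the same seed reproduces the first answer exactly.
fresh = reply(random.Random(7))
assert fresh == first
```

So the whole pipeline is repeatable given the exact same seed and context, yet any change to either, including the conversation so far, changes the output.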
I'm acutely aware that it's computer software; however, LLMs are unique in that they have what you're calling "randomness". This randomness is not entirely predictable, and the results are non-deterministic. The fact that they're mathematical models doesn't really matter because of the added "randomness".
You can ask the same exact question in two different sessions and get different results. I didn't mean to ask twice in a row, I thought that was clear.