this post was submitted on 03 Mar 2026
105 points (98.2% liked)

Researchers tested different medical scenarios with the chatbot. In more than half of cases in which doctors would send patients to the ER, the chatbot said it was OK to delay care.

ChatGPT Health, OpenAI's new health-focused chatbot, frequently underestimated the severity of medical emergencies, according to a study published last week in the journal Nature Medicine.

In the study, researchers tested ChatGPT Health's ability to triage, or assess the severity of, medical cases based on real-life scenarios.

Previous research has shown that ChatGPT can pass medical exams, and nearly two-thirds of physicians reported using some form of AI in 2024. But other research has shown that chatbots, including ChatGPT, don't provide reliable medical advice.

[–] CorrectAlias@piefed.blahaj.zone 5 points 10 hours ago* (last edited 10 hours ago) (1 children)

Sure, but not always, which means they can't be considered completely deterministic. Feed the same text to an LLM twice and there's a good chance you'll get different outputs. Several factors contribute to this, and it's part of why LLMs hallucinate.

Medical care is something I would never use an LLM for. Sure, doctors can reach different conclusions too, but at least they can explain their reasoning. LLMs can't do that in any meaningful way.

[–] LodeMike@lemmy.today 0 points 9 hours ago (1 children)

The tech itself is deterministic, like all other computer software; the provider just adds randomness on top. Also, it's only deterministic over the exact full context: asking twice in one conversation is a different input than asking once, and swapping "black man" in for "white woman" is a different input too.
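The split this comment describes, a deterministic model with randomness layered on top at decoding time, can be sketched with a toy next-token picker. This is an illustrative sample, not OpenAI's actual implementation: greedy decoding (temperature 0) is a pure function of the input, while temperature sampling draws from the model's probability distribution and can differ between calls.

```python
import math
import random

def softmax(logits, temperature):
    # Scale logits by temperature; lower temperature sharpens the distribution.
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract the max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def pick_token(logits, temperature=0.0, rng=None):
    """Pick a token index. Greedy (temperature 0) is deterministic; sampling is not."""
    if temperature == 0.0:
        # argmax: the same logits always yield the same index
        return max(range(len(logits)), key=lambda i: logits[i])
    probs = softmax(logits, temperature)
    rng = rng or random
    r = rng.random()  # this draw is the only source of non-determinism
    cum = 0.0
    for i, p in enumerate(probs):
        cum += p
        if r < cum:
            return i
    return len(probs) - 1

logits = [2.0, 1.5, 0.3]  # hypothetical model scores for three candidate tokens
greedy = pick_token(logits)                      # always the same index
sampled = pick_token(logits, temperature=1.0)    # may vary call to call
```

Seeding the `rng` would make even the sampled path reproducible, which is why "the provider just adds randomness" is a fair description: nothing in the math requires non-determinism.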

[–] CorrectAlias@piefed.blahaj.zone 1 points 2 hours ago

I'm acutely aware that it's computer software. LLMs are unusual, though, in that they ship with what you're calling "randomness" built in. That randomness isn't entirely predictable, so the results are non-deterministic. The fact that they're mathematical models underneath doesn't really matter once the randomness is added.

You can ask the same exact question in two different sessions and get different results. I didn't mean to ask twice in a row, I thought that was clear.