[–] Lyrl@lemmy.dbzer0.com 4 points 15 hours ago* (last edited 15 hours ago) (1 children)

My medical records are riddled with artisanal, fully human-generated hallucinations and missing information. The AI note takers, even with the rate of issues noted in the article, seem to have crossed the threshold to be an improvement on the current system. The records still suck at a similar level, but as you noted, they free up clinicians to add a few more minutes of engagement per patient visit.

[–] Stopwatch1986@lemmy.ml 1 point 13 hours ago

Yes, and you might also say that time-starved humans merely reviewing LLM output may produce more accurate reports than ones they would have to write from scratch in a rush. That holds until humans get complacent or are expected to do even more per minute. But there is a fundamental difference: unlike humans, LLMs don't understand context and don't do sanity checks. When they hallucinate, they can do so wildly, with no sense of the implications, but always with confidence.