this post was submitted on 15 May 2026
22 points (100.0% liked)

Technology


A nice place to discuss rumors, happenings, innovations, and challenges in the technology sphere. We also welcome discussions on the intersections of technology and society. If it’s technological news or discussion of technology, it probably belongs here.

Remember the overriding ethos on Beehaw: Be(e) Nice. Each user you encounter here is a person, and should be treated with kindness (even if they’re wrong, or use a Linux distro you don’t like). Personal attacks will not be tolerated.

This community's icon was made by Aaron Schneider, under the CC-BY-NC-SA 4.0 license.

founded 4 years ago
top 9 comments
[–] Gaywallet@beehaw.org 3 points 1 day ago* (last edited 1 day ago) (1 children)

What the hell is this article? It has no source, it reads like a half-baked thought, and it's all of three short paragraphs long. This is low-effort engagement bait at best.

I tried googling what the hell this might be based on, and found this article. It appears to be a review conducted by the office of the auditor general (full report can be found here). The audit was of the process for the request for bids for the scribe system - that is to say, the 'pre-approved' vendors. There is nothing about whether any of this software is actually used, let alone how it is used.

Like yes, it's important to be looking at this, and it's good that the auditor is telling the government to improve its RFB process to better screen these tools, but this article makes it out like actual doctors are using this software and blatantly using it in ways that would harm patients. That's just not true.

Frankly speaking, I should probably just remove this article entirely as it's half-baked at best, AI slop at worst, but I'm going to leave it up because hopefully folks will see something like this and stop reacting to a headline immediately, and instead take a closer look at articles that are shared as engagement bait.

[–] Fifrok@discuss.tchncs.de 4 points 14 hours ago

That audit you found is linked in the article though, at the start of the second paragraph. This is still slop, but at least it has a source I guess.

[–] Piatro@programming.dev 10 points 1 day ago

The "make shit up" machine was found to be making shit up? Huh, if only we could have predicted this!

[–] bold_omi@lemmy.today 1 points 23 hours ago
[–] ech@lemmy.ca 10 points 1 day ago

It's definitely making things up. That's how they work.

[–] Stopwatch1986@lemmy.ml 1 points 1 day ago* (last edited 1 day ago) (1 children)

A policy I saw coming out of an NHS (UK) department mandated 'human-in-the-loop', which is essentially what the article mentions at the end. The risk is that over time clinicians may become complacent with 'good enough' and not bother to review thoroughly. And it may be easy to spot mistakes, but not necessarily omissions, unless you keep your own notes. More so after a long session, although medical appointments are typically short and focused.

On a positive note, in my experience clinicians using LLMs do indeed spend more time engaging with service users. In an ideal world they would be given the time to both engage and take notes, but that is not going to happen.

[–] Lyrl@lemmy.dbzer0.com 4 points 12 hours ago* (last edited 12 hours ago) (1 children)

My medical records are riddled with artisanal, fully human-generated hallucinations and missing information. The AI note takers, even with the rate of issues noted in the article, seem to have crossed the threshold of being an improvement on the current system. The records still suck at a similar level, but as you noted, it frees up clinicians to add a few more minutes of engagement per patient visit.

[–] Stopwatch1986@lemmy.ml 1 points 10 hours ago

Yes, and you might also say that time-starved humans just reviewing LLM output may produce more accurate reports than writing them from scratch in a rush. That holds until humans get complacent or are expected to do even more per minute. But there is a fundamental difference: unlike humans, LLMs don't understand context and don't do sanity checks. When they hallucinate, they can do so wildly, without a sense of the implications, but always with confidence.

[–] Mothra@mander.xyz 1 points 1 day ago

Fortunately last time I saw my doctor I saw her type everything herself as I spoke.