Technology
This is the official technology community of Lemmy.ml for all news related to the creation and use of technology, and to facilitate civil, meaningful discussion around it.
Ask via DM before posting product reviews or ads; otherwise, all such posts are subject to removal.
Rules:
1: All Lemmy rules apply
2: Do not make low-effort posts
3: NEVER post naziped*gore stuff
4: Always post article URLs or their archived-version URLs as sources, NOT screenshots. This helps blind users.
5: Personal rants about Big Tech CEOs like Elon Musk are unwelcome (this does not include posts about their companies affecting a wide range of people)
6: No advertisement posts unless verified as legitimate and non-exploitative/non-consumerist
7: Crypto-related posts, unless essential, are disallowed
Medicine relies on verification. AI operates without that.
AI would be terrible in medicine.
The Gospel is a good example, although I'd argue it's intentionally used for that purpose: that, and so that no person can be held to account for their decisions.
I agree that in actual use, medicine needs to verifiably work. I believe "AI", if you wanna call it that, probably has its place in effectively speedrunning theoretical testing and brute-forcing results that would take humans much longer to even think of.
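A minimal sketch of what that brute-forcing could look like, under heavy assumptions: the scoring function below is a toy stand-in (a real pipeline would call a trained model or simulation), and the candidate pool is invented. The key point is that the machine only produces a shortlist for human verification, never a final answer.

```python
import heapq

def binding_score(candidate: str) -> float:
    """Hypothetical scoring stand-in: a real pipeline would call a trained
    model or physics simulation here. This toy version rewards two motifs."""
    return sum(candidate.count(motif) for motif in ("AG", "CT")) / max(len(candidate), 1)

def screen_candidates(candidates, top_k=3):
    """Brute-force every candidate and keep the best-scoring few.
    The output is a shortlist for human verification, not a result."""
    return heapq.nlargest(top_k, candidates, key=binding_score)

# Toy sequence space standing in for millions of real candidates.
pool = ["AGCT", "TTTT", "AGAG", "CTCT", "GGGG", "AGCTAG"]
shortlist = screen_candidates(pool, top_k=2)
```

The design choice worth noting: the cheap, exhaustive loop is exactly where machines beat humans on speed, but nothing past the shortlist is automated.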
The problem arises when people trust whatever the machine spits out. But that's not a new problem with AI; it's a general problem with any form of media.
AI is a tool. Just like all tools, it's only as good as the tool that's using it.
And the material it has to work with, which for AI is gathered information.
Yep, exactly.
As a doctor who’s into tech, before we implemented something like AI-assisted diagnostics, we’d have to consider what the laziest/least educated/most tired/most rushed doctor would do. The tools would have to be very carefully implemented such that the doctor is using the tool to make good decisions, not harmful ones.
The last thing you want is a doctor blindly approving an inappropriate order suggested by an AI, without applying critical thinking, and harming a real person because the machine generated a factually incorrect output.