Breast Cancer (mander.xyz)
[-] Maven@lemmy.zip 71 points 4 months ago

Another big thing to note: we recently had a different but VERY similar headline about an AI that found typhoid early and was able to point it out more accurately than doctors could.

But when they examined the AI to see what it was doing, it turned out it was weighing the specs of the machine used to do the scan... An older machine means the area was likely poorer, and therefore more likely to have typhoid. The AI wasn't pointing out whether someone had typhoid; it was just telling you whether they were in a rich area or not.
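
A minimal sketch of that failure mode (invented numbers and scikit-learn; nothing here is from the actual study): confound scanner age with prevalence, and a model that never sees a single symptom still scores well.

```python
# Hypothetical illustration: a classifier that "detects typhoid" while
# actually learning only a proxy for area wealth. All numbers invented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 2000

# Confound: poorer areas have older scanners AND higher prevalence.
poor_area = rng.random(n) < 0.5
scanner_age = np.where(poor_area, rng.normal(15, 3, n), rng.normal(3, 2, n))
has_typhoid = rng.random(n) < np.where(poor_area, 0.30, 0.02)

# Train on scanner age alone -- no medical signal whatsoever.
X = scanner_age.reshape(-1, 1)
X_tr, X_te, y_tr, y_te = train_test_split(X, has_typhoid, random_state=0)
clf = LogisticRegression().fit(X_tr, y_tr)

probs = clf.predict_proba(X_te)[:, 1]
print("AUC from scanner age alone:", roc_auc_score(y_te, probs))
# Well above 0.5: the model looks diagnostic, but it's a wealth detector.
```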

[-] KevonLooney@lemm.ee 19 points 4 months ago

That's actually really smart. But that info wasn't given to doctors examining the scan, so it's not a fair comparison. It's a valid diagnostic technique to focus on the particular problems in the local area.

"When you hear hoofbeats, think horses not zebras" (outside of Africa)

[-] chonglibloodsport@lemmy.world 8 points 4 months ago

AI is weird. It may not have been given that information explicitly; instead, it could be an artifact in the scan itself due to the different equipment. Like if one scan was lower resolution than the others, but you resized all of the scans to be the same size as the lowest one, the AI might be picking up on the resizing artifacts, which are not present in the natively low-resolution scan.
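
A toy sketch of that artifact idea (random arrays standing in for scans, PIL + numpy): a resampled image carries a measurably different high-frequency signature than a native one.

```python
# Toy illustration: an image resized to a common size has different
# high-frequency content than one that was natively that size.
import numpy as np
from PIL import Image

rng = np.random.default_rng(1)
native_low = rng.integers(0, 256, (128, 128), dtype=np.uint8)  # old machine
high = rng.integers(0, 256, (512, 512), dtype=np.uint8)        # new machine
downsized = np.asarray(Image.fromarray(high).resize((128, 128), Image.BILINEAR))

def high_freq_energy(img):
    # Mean absolute neighbour difference: a crude texture/sharpness measure.
    return np.abs(np.diff(img.astype(float), axis=1)).mean()

print("native :", high_freq_energy(native_low))  # raw sensor-style texture
print("resized:", high_freq_energy(downsized))   # smoothed by resampling
# The gap is exactly the kind of incidental signal a model can latch onto.
```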

[-] KevonLooney@lemm.ee 3 points 4 months ago

I'm saying that info is readily available to doctors in real life. They are literally in the hospital and know what the socioeconomic background of the patient is. In real life they would be able to guess the same.

[-] Maven@lemmy.zip 2 points 4 months ago

The manufacturing date of the scanner was actually saved as embedded metadata in the scan files themselves. None of the researchers considered that to be a thing until after the experiment, when they found it was THE thing the model was looking at.
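
For the curious, a sketch of auditing and stripping device metadata before training, assuming the scans are DICOM files read with pydicom. Which exact tag carried the date isn't stated above, so these are just standard device fields, and the file names are made up.

```python
# Sketch only: inspect and strip device-identifying DICOM headers so a
# model can't learn from them. Tag choice here is an assumption.
import pydicom

ds = pydicom.dcmread("scan_0001.dcm")  # hypothetical file name
print(ds.get("Manufacturer"), ds.get("ManufacturerModelName"))

pixels = ds.pixel_array  # train on pixels only, never on the header

for tag_name in ("Manufacturer", "ManufacturerModelName", "DeviceSerialNumber"):
    if hasattr(ds, tag_name):
        delattr(ds, tag_name)
ds.save_as("scan_0001_clean.dcm")
```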

[-] Tja@programming.dev 1 points 4 months ago

It's still quite a statement that it had a better detection rate than doctors.

What is more important, saving lives or not offending people?

[-] Maven@lemmy.zip 2 points 4 months ago

The thing is, though... it had a better detection rate ON THE SAMPLES THEY HAD, but because it wasn't actually detecting anything other than wealth, there was no way for them to trust it would stay accurate.

[-] Tja@programming.dev 2 points 4 months ago

Citation needed.

Usually detection rates are reported on a new set of samples; on the samples used for training, an overfit model's detection rate would be close to 100% anyway.

[-] 0ops@lemm.ee 3 points 4 months ago* (last edited 4 months ago)

Right, there's typically separate "training" and "validation" sets for a model to train, validate, and iterate on, and then a totally separate "test" dataset that measures how effective the model is on similar data that it wasn't trained on.

If the model gets good results on the data it trained and validated on but worse results on the held-out test dataset, that typically means it's "overfit". Essentially, the model started memorizing frivolous details specific to the data it saw during training; those details improve evaluation results on that specific data, but do nothing, or even hurt the results, on the test set and any other data that wasn't part of training. Basically, the model failed to abstract what it's supposed to detect, only managing good results through brute memorization.
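
As a concrete sketch of that split (synthetic data, scikit-learn), an unconstrained decision tree makes the memorization gap visible:

```python
# Sketch of the train/validation/test split described above, with an
# over-fit model as the illustration. Data is synthetic.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 20))
y = (X[:, 0] + 0.5 * rng.normal(size=1000)) > 0  # weak true signal + noise

# 60/20/20: train / validation (for tuning) / test (touched once, at the end).
X_train, X_tmp, y_train, y_tmp = train_test_split(X, y, test_size=0.4, random_state=0)
X_val, X_test, y_val, y_test = train_test_split(X_tmp, y_tmp, test_size=0.5, random_state=0)

# An unconstrained tree memorizes its training set...
tree = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)
print("train:", tree.score(X_train, y_train))  # ~1.0: memorized
print("val  :", tree.score(X_val, y_val))      # noticeably lower
print("test :", tree.score(X_test, y_test))    # the honest number
```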

I'm not sure that's quite what's happening in Maven's description, though. If it's real, my initial thoughts are an unrepresentative dataset plus a model that never reached high accuracy to begin with. I buy that there's a correlation between machine specs and positive cases, but I'm sure it's not a perfect correlation; like Maven said, poorer areas get new machines sometimes. If the model's accuracy was never high to begin with, that correlation may just be the model's best guess. And even though I'm sure it would always take machine specs into account as long as they're part of the dataset, if actual symptoms correlate more strongly with positive diagnoses than machine specs do, then I'd expect the model to evaluate primarily on symptoms, and thus be more accurate. Sorry this got longer than I wanted.
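
One way to test that hunch on synthetic data (scikit-learn's permutation_importance; every feature and number below is invented): give the model both a symptom feature and a machine-spec feature, then measure which one it actually leans on.

```python
# Hypothetical check: does the model rely on symptoms or machine specs?
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 3000
symptom = rng.normal(size=n)
sick = symptom + 0.3 * rng.normal(size=n) > 1.0            # symptoms drive the label
scanner_age = np.where(sick, 12, 5) + rng.normal(0, 4, n)  # only a loose proxy

X = np.column_stack([symptom, scanner_age])
X_tr, X_te, y_tr, y_te = train_test_split(X, sick, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

imp = permutation_importance(model, X_te, y_te, n_repeats=10, random_state=0)
for name, score in zip(["symptom", "scanner_age"], imp.importances_mean):
    print(f"{name}: {score:.3f}")
# When symptoms correlate more strongly with the label than the machine
# spec does, the symptom importance dominates, matching the reasoning above.
```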

[-] Tja@programming.dev 0 points 4 months ago

It's no problem to have a longer description if you want to get the nuance across. I think that's a good description and a fair set of assumptions. Reality is rarely as black and white as reddit/lemmy wants it to be.

[-] Maven@lemmy.zip -1 points 4 months ago

What if one of those lower-income areas decides the machine is too old and replaces it with a brand new one? Now every single case is a false negative, because of how heavily that feature was weighted in the system.

The data they had collected followed that trend, but there's no reason to think it'll last forever or remain consistent, because it isn't about the person; it's just about class.
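
That failure mode is easy to simulate (made-up numbers, scikit-learn): train where scanner age tracks the label, then swap in new machines at deployment.

```python
# Hypothetical sketch of the scanner-upgrade failure: the shortcut holds
# during training, then the distribution shifts and accuracy collapses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000
sick = rng.random(n) < 0.3
old_age = np.where(sick, 14, 4) + rng.normal(0, 2, n)  # shortcut holds here
model = LogisticRegression().fit(old_age.reshape(-1, 1), sick)

# Same patients, but the area bought a brand-new scanner:
new_age = rng.normal(1, 0.5, n).reshape(-1, 1)
print("accuracy after the upgrade:", model.score(new_age, sick))
# Drops to ~70% (it just predicts "healthy" for everyone): every sick
# patient becomes a false negative, exactly as described above.
```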

[-] Tja@programming.dev 0 points 4 months ago

The goalposts have been moved so far I need binoculars to see them now.
