ChatGPT plays doctor with 72% success
(www.axios.com)
Don't forget the inherent biases introduced by AI training! Women especially have a history of having their symptoms dismissed out of hand; if the LLM's training data encodes those biases, then combined with a bad diagnosis, women could be really screwed.
Similarly for people of different races/countries … it's not only that their conditions might vary and require more data; some communities don't visit or trust hospitals enough for their data to be collected into the training set in the first place. Or they can't afford to visit.
Sometimes, people from more vulnerable communities (e.g. LGBT) might prefer not to have such data collected at all, making the data even sparser.