The unintended consequences of AI in medicine

Some of the most promising applications of artificial intelligence are in medicine. Many tasks that are currently the domain of physicians, such as detecting cancer and recommending treatments, could become jobs for AI. Already, physicians get help from ChatGPT.
However, the fervor around AI has obscured several unintended consequences that deserve attention, and our research lab has been exploring them.
First, because no AI system is perfect, physicians who use AI will sometimes receive an incorrect output: a false positive or a false negative. In one study, we found that radiologists were less accurate in their own interpretations of X-rays when an AI system gave them inaccurate feedback about the images than when they read the X-rays without any AI guidance.
As we discussed this study with our physician colleagues, a common thread emerged. They worried that if AI flagged an abnormality and the physician overruled it, the physician could be at greater risk of losing a malpractice lawsuit. So we conducted a follow-up study to see whether this fear was warranted.
We asked people to imagine they were on a jury, and we presented them with a short description of a case in which a doctor was being sued because they failed to detect an abnormality, such as a brain bleed. Some participants were told that AI had detected the abnormality but that the physician overruled AI and said the image was normal; others were not given any information about AI. Just as our colleagues anticipated, hypothetical jurors were more likely to say that medical malpractice occurred when the physician contradicted AI.
At first glance, it might not seem bad for a doctor to err on the side of following an AI’s guidance when it finds a possible abnormality. Better for doctors to be safe than sorry, right? The problem is that any system designed to detect rare abnormalities will drastically “overcall”: The vast majority of times AI says a case is suspicious, the patient is actually healthy. That is partly because of the natural limits of what imaging can reveal, and partly a matter of simple statistics.
Consider breast cancer detection. For every 67 mammograms that one AI algorithm labeled as suspicious, only one turned out to actually be breast cancer. When far less than 1 percent of patients have an abnormality, as is the case for women who get routine screening mammograms, lots of false positives are almost inevitable, as the rough calculation below illustrates.
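To see why the arithmetic works out this way, here is a minimal sketch in Python. The prevalence, sensitivity, and specificity figures are illustrative assumptions, not the measured parameters of any particular algorithm; they were chosen so the output lands near the 1-in-67 figure above.

```python
# Back-of-the-envelope sketch of why a screening AI "overcalls".
# All three inputs are illustrative assumptions, not the measured
# parameters of any deployed algorithm.

prevalence = 0.005   # assume ~5 cancers per 1,000 routine screening mammograms
sensitivity = 0.90   # assume the AI flags 90% of true cancers
specificity = 0.70   # assume it correctly clears 70% of healthy scans

true_positives = prevalence * sensitivity                # cancers correctly flagged
false_positives = (1 - prevalence) * (1 - specificity)   # healthy scans flagged anyway

# Positive predictive value: of all flagged cases, how many are real cancers?
ppv = true_positives / (true_positives + false_positives)
print(f"Share of flagged cases that are real cancers: {ppv:.1%}")  # ~1.5%
print(f"Flagged cases per real cancer: {1 / ppv:.0f}")             # ~67
```

The exact inputs matter less than the shape of the calculation: when a condition is rare, the false positive term dwarfs the true positive term, no matter how accurate the algorithm is.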
But if physicians are punished for going against AI, they will be very reluctant to do so. And while a failure to detect a disease like breast cancer is the greater risk to patients’ health (by far), there is still a cost that comes with being sent for follow-up testing when nothing is wrong. That cost can be financial, logistical, and psychological. And doctors have no incentive to resist recommending follow-up testing that isn’t necessary. The doctors don’t have to pay for those extra tests, or take a day off work, or endure sleepless nights worrying about their own health.
The end result, we fear, is that AI will usher in an era of health care in which patients are frequently sent for one test after another, only to discover that there was no underlying problem. Society at large will also pay a price: The annual cost of diagnostic imaging is more than $100 billion, and more imaging may also mean higher deductibles and copays.
But we believe there’s a solution: Don’t give in to magical thinking about AI. It’s easy to believe that AI is perfect, even though AI — just like physicians themselves — will inevitably make mistakes.
Our research has shown that educating people on this fact — by simply informing patients about the percentage of times AI makes erroneous judgments in medical care — can make them less likely to pursue legal action and more sympathetic to a physician who is being sued for making a mistake after overruling AI. These AI error rates could be provided in electronic patient portals. And physicians who use AI to help with diagnosis could give patients a brief handout to explain what AI is, how it works, and how often it is wrong.
Not all patients will read this information. But some will, and that’s a good start.
We think AI will be an overall positive for medicine. However, it is not an unmitigated positive. We must think carefully about the issues it presents and be proactive in strategizing solutions.