Who's afraid of AI in the healthcare industry?

Knowing the ups and downs of AI in the medical world


At a glance

  • Feedback is essential to learn from mistakes, and mastery is achieved over many years of practice.



CLINICAL MATTERS

With the explosion of the use of artificial intelligence (AI) in everyday life, many people fear it is just a matter of time before AI takes over every single job that humans currently do. Already there are AI actors, composers, singers, and artists. Call centers are increasingly using AI to route calls and help customers troubleshoot their problems. AI assists scientists in searching for new molecules that may become better drugs or antibiotics. AI designs buildings and helps architects and engineers ensure these are safe and stable. AI helps predict the weather through different models. AI helps students with their research work, sometimes so much that teachers have to use AI to detect whether their pupils are using AI to cheat.

AI, when used properly, can indeed increase productivity and automate mundane tasks. However, there are many instances when human input is still needed, especially when the stakes are high. For instance, autonomous cars still occasionally run over pedestrians or cause accidents when their algorithms are overwhelmed or have no previous experience to draw upon. While computers are getting better and better, it is very difficult to assign accountability when there is no one behind the wheel. The fact that mistakes happen exposes the inherent weakness of AI: it is only as good as the information it is given. It can make good facsimiles of pictures and music, but all of its output is inherently derivative, and humans are still needed to come up with new ideas and concepts.

What about AI in medicine? Healthcare is an especially critical field because people’s lives are at stake. There are already numerous applications for AI, especially in diagnostics. A properly trained AI has been shown to outperform humans at detecting certain abnormalities on chest X-rays and in biopsy samples. However, a radiologist or pathologist still needs to review the results and correlate them with the clinical presentation; otherwise, the AI may overcall pneumonia on an X-ray or cancer on a biopsy slide. There are specific applications where AI can help doctors make decisions, but it is important that these situations are chosen carefully and that the AI itself is of excellent quality, with proper safety oversight.

Doctors are trained in pattern recognition. Learning to recognize patterns within complex systems, usually with incomplete or even conflicting data, is a field of science known as heuristics. Doctors get better over time because their experience becomes broader and they see the outcomes of their decisions. For instance, when I was a medical student and started seeing patients, I had a very limited range of differential diagnoses – the list of possible diagnoses that an undifferentiated patient might have. A person who saw me in the clinic when I was just starting out would tell me his or her symptoms, I would do a physical examination, and then I would generate my differentials. Because I had only limited experience, some of these differential diagnoses would be unlikely, and a more senior doctor reviewed those decisions. I then settled on a Primary Working Impression – basically my best guess as to what the patient had – and formulated a plan of action. These plans included diagnostics, and medicines if I thought the patient needed immediate treatment. The senior doctor (usually a resident who was also training) reviewed the plans and approved them. When the patient returned on follow-up, I got feedback: the diagnostics confirmed or refuted what I was thinking, and I saw whether my treatment worked.

Learning the ropes in medicine is risky without supervision, because if the student makes a mistake the patient can die. This is why, even as we learn, there are multiple layers of oversight from the resident, fellow, and consultant/attending physician. Feedback is essential to learn from mistakes, and mastery is achieved over many years of practice. In addition, due to the complexity of modern medicine, patients with multiple problems are seen by doctors from different specialties and subspecialties who treat the patient collaboratively. Without these safeguards, learning medicine can result in poor outcomes for patients.

Heuristics is also how machines learn in a neural network, but at a much higher speed. However, this all happens in a virtual space, and external feedback needs to be given to help the AI learn optimal answers to problems without risking patients. In a way, the AI becomes an extension and repository of knowledge, but it cannot validate its output without feedback from the end user. It is most useful in generating differential diagnoses that a medical student with limited experience may not even consider, and in evaluating thousands of scenarios and suggesting specific patterns for consideration. This helps the doctor be more thorough and consider other plausible scenarios. AI can also rank probabilities and help prioritize diagnostics and interventions if given the correct feedback.

Toward the end of my infectious diseases fellowship in the US, the Veterans Affairs Hospital in Cleveland where I did clinical work started using automated robots that brought medicines from the pharmacy to the nurses’ stations on different hospital floors. My consultant would deliberately try to confuse these robots by suddenly jumping in front of them and blocking their way. The robots would pause, back up, and try to go around him, but then he would move to where the robot was going. When the robot became totally confused, my consultant would laughingly explain that he was just standing up for the human race. I thought it was hilarious, but in the back of my mind, I was a bit worried that somehow this would come to pass.

After a lot of angst about whether the machines would decide to eliminate humans once they became sufficiently intelligent, akin to the Terminator movies, things are now starting to settle down. The future of medicine most definitely includes AI because it offers so many advantages and, if properly used, will make us better doctors. I look at AI as just another step in the evolution of the tools we use to do our jobs better. From the abacus to electronic calculators, to computers and smartphones, and now AI, technology has not proved an existential threat but has instead made us more productive. I, for one, am not too worried that these machines will put us out of a job. There is so much more to medicine than drawing blood for tests and giving medicine. In fact, patients frequently complain that doctors nowadays don’t spend enough time talking to them. Comforting patients and offering empathy are still the lifeblood of modern medicine. The one thing AI will never replace is the human touch.