When I was a bonnie young lad of just 16, I was lucky enough to score a meeting with Sandy Napel, PhD at Stanford University’s Radiological Sciences Laboratory (RSL). I demonstrated a prototype algorithm that automatically identified the position and course of arteries on CT scans. It was the first shot in a nearly decade-long academic research career that included early publications on using neural networks (the precursors to today’s LLMs). I was asked then, “Would AI replace doctors?” My answer would have been an emphatic NO. Humans still needed to figure out the “how” and “teach” the computer. Even with early neural networks, we had to engineer the data extensively before feeding it to the system for classification.
For years, “classification,” the science (and art) of getting a computer to recognize a cat in a picture, was de rigueur. A significant amount of human work had to go into every model to make it work. Successful companies like Hologic and R2 Technologies created computer-aided diagnosis (CAD) systems for breast imaging to detect cancers. As many radiologists will tell you, these tools were just “OK.” As late as 2015, well-designed studies concluded that AI offered no overall outcomes benefit and that it was a $400 million boondoggle for the U.S. health care system. Essentially, you bought a CAD system because you could charge for it, not because it conferred a real-world advantage.
Fast forward a decade, and deep learning neural networks, and now LLMs, paired with far more data available for training, can learn how to do something without humans teaching them. Even a novice can use and train an AI for their needs. The pace of development has increased exponentially, and so have AI’s capabilities. Reinforcement learning is making a comeback with LLMs, but it’s being used to incentivize better reasoning, which is one step above simply getting a question right.
So, we ask again. Will AI eventually replace doctors? The answer is YES, but perhaps not for the reason you think.
To understand why, we must first see that the very thing that makes humans unique as a species is that each human is unique. If we tried to predict what every neuron in our brain does at any moment in time, we would likely still be doing it when the universe ends. It’s not quite Heisenberg’s uncertainty principle, but it comes close. Yet, we have been trying for centuries to quash uniqueness. Everything from religion to our school system imparts a specific set of knowledge and ways of thinking. In medicine, we increasingly adhere to a specific set of guidelines that purport to improve population health based on the best available data, often ignoring subpopulation and individual variation.
You might be asking: Why is this a bad thing? Shouldn’t we improve population health as a whole? Yes, I fully agree with that. However, when you go to the doctor with a cough, you don’t go because you expect to be diagnosed with a cold. You go because you think the cough could be something more serious or rare. You don’t know what you don’t know. That’s why you go to the doctor. AI cannot (yet) conceive of what it doesn’t know because it lacks imagination. An AI-only approach leads to the stagnation of knowledge and experience. Humans provide variation and imagination, driving progress.
Here’s the kicker. In a sense, the replacement has already begun.
Hospitals, provider groups, and now insurance companies have already asked themselves the fundamental question: If all we’re doing is following a prescribed set of guidelines, and 90 percent of our patients come in with common conditions, who needs experience and knowledge? Also, in both the fee-for-service and the “value-based” capitation frameworks, reducing cost and increasing profit per patient is paramount. Radiologists read 80–100 scans a day without AI help, and primary care providers spend barely 15 minutes per patient. There simply isn’t enough time to bring experience to bear; getting through the work is hard enough. Hence, the industry has already decided to replace doctors with providers who have less experience, education, and breadth of knowledge. The game has been set up in such a way that we cannot show our value.
So, stop asking if AI will replace doctors, because we see the equivalent happening right before our eyes. We already see the results for patient outcomes, patient experience, and cost. Even now, patients who are difficult to treat or have an uncommon condition just bounce around the health care system without a diagnosis. The replacement has already begun because the required underlying tradeoffs have already been made, and the adverse incentive systems are already in place.
Only the tool has changed.
Now, it may look like I’ve painted a very dark picture, but don’t despair. Next week, we will explore how an AI-driven medical system might operate and how humans fit into that system. Fundamental changes from training to reimbursement are needed, and I look forward to exploring that with you.
Bhargav Raman is a physician executive.