I love technology. I love the health care AI I use every week, including Viz.ai, OpenEvidence, and Glass Health. I am generally a tech optimist. And even as cries of AI overhype begin creeping into the public discourse, I truly believe that AI will alter society and health care as fundamentally as the internet or the mobile phone did.
But I am convinced the whole industry is doing AI development backwards. We are building advanced toys for enterprise users (for me, my hospital, or the nurses I work with) when we ought to be building for patients.
Take a stroll through any medical conference these days and you are likely to see one exhibitor after another hawking tools that purport to bring the power of AI to doctors and nurses. Clinical decision support tools. Radiology AI that highlights possible abnormalities. AI for pathology. These are promised to make us more efficient, more accurate, more productive. Yet here is what many of these solutions completely fail to grasp: they are optimizing for a pre-AI health care system. They assume that the system as it is now (the patient completely reliant on the doctor as the gatekeeper of medical information and decision-making) is here to stay. They are building better stethoscopes when what we need is to build for the physician-patient relationship of the future.
Think about the workflow of clinical decision support tools, which nowadays are a dime a dozen and baked into most EMRs. A patient walks in with some symptoms. I review the case and ask OpenEvidence or Glass Health or the little chatbot in my EMR some questions. The AI provides recommendations. The doctor translates the advice and discusses it with the patient. The patient departs. This is black box health care AI, reserved for those with medical degrees.
AI stands ready to transform how we regulate, how we access, and how we pay for health care. The changes will be democratizing in ways that our enterprise-heavy approach does not even come close to considering. Patients are already employing AI to make sense of their symptoms, research conditions, and get second opinions. Anecdotally, I am as likely to have a patient come in having consulted ChatGPT as Dr. Google nowadays. We are throwing them into the whirlpool of complicated medical information without guardrails or the specialist knowledge that might make AI genuinely helpful.
In the meantime, we are developing complex tools that maintain patient reliance on doctor interpretation rather than empowering a more informed patient directly via AI. Regulation is changing, too. As it becomes clear that AI can be both safe and effective for patient-facing uses, we can expect new frameworks that enable patients to receive direct AI guidance in certain areas of health care. The FDA is already looking at ways to regulate software as a medical device that can work outside traditional clinical settings. It is nearly inevitable.
Constructing health care AI for patients at its core is not about replacing doctors; it is about redefining our role. Rather than being the sole point of access to medical knowledge, we become participants in patient care. Imagine patients having conversations with an AI that knows their symptoms, medical history, and concerns. The AI gives patients preliminary guidance and educational resources, and it helps them know when it is time to seek out human clinical intervention versus handling things themselves.
This is not a matter of replacing clinical judgment; it is about democratizing access to the tools of analysis that can be used to drive healthier decisions. When a patient can contextualize their symptoms, understand pertinent questions to ask and show up to a visit armed with AI-powered insights, I think the quality of that clinical exchange is likely to be orders of magnitude better.
I get why health care adopts enterprise AI solutions. These systems fit within our current workflows. They do not confront the fundamental power dynamics in health care. They make us feel smarter without forcing us to confront our own complicity in an AI-chauffeured future. But that resistance to patient-facing AI is a product of the same paternalistic tendencies that have long curtailed innovation in health care. The belief that patients are not up to the task of direct access to high-end medical analysis. The idea that only a doctor can understand the insights generated by AI. The fear that democratizing medical knowledge makes us less valuable.
These concerns are not entirely unfounded. Patient-facing health care AI needs a different kind of guardrail: clearer disclaimers and stronger systems for escalating to human clinicians when it should. But the answer is not to stop building for patients at all; it is to build better patient-facing tools with the right guardrails. And this is going to require physician involvement. Constructing health care AI directly for patients is not a danger to our profession, but rather a way to elevate it. When patients show up knowing more, knowing what to ask, and better understanding what we are talking about, then we can focus on doing what we do best: complex clinical reasoning, procedures, emotional support, and coordination of care.
And to be honest, there is some inevitability to this. The question that truly faces us is not whether patients will use AI for health care; it is whether we will help build that AI, specifically for their protection and benefit. The future of health care AI is not developing ever more advanced tools for doctors to wield on behalf of patients. It should be about enabling patients to access AI-powered insights directly, while allowing for the right level of clinical oversight and intervention.
The technology exists. The patient demand is evident.
As doctors, we can help lead that transformation by creating patient-facing AI with the expertise and protections it needs, or we can keep optimizing for a health care system that AI will soon render irrelevant. We have a choice to make, but patients cannot wait forever for us to make it.
Colin Son is a neurosurgeon.