Let us be honest about what is really happening: Automation is coming for medicine, and equity is still waiting in line. Across hospital systems, medical schools, and conference stages, a profession is holding its breath. The question consuming medicine right now is not about patients. It is about survival. About whether artificial intelligence (AI) will hollow out what took decades to build, whether the clinical judgment earned through sleepless residencies and hard-won experience will one day be rendered optional by a machine that never doubts itself and never burns out. It is a profoundly human anxiety. And it is not the only question worth asking.
While medicine turns inward, a quieter and more consequential question is going largely unanswered. The patients who were already navigating a system with longstanding gaps in access and representation (the uninsured, the undocumented, the digitally disconnected, the ones whose zip codes predict their outcomes better than their lab values) are watching the most transformative technological moment in health care history unfold. The question is whether it will finally include them. Or whether it will arrive, brilliant and gleaming, and pass them by once more. That question deserves full attention. And it is not getting enough.
The gap between innovation and inclusion
AI in health care is not the enemy. The possibilities are extraordinary. Earlier disease detection, reduced diagnostic error, clinical decision support that extends the reach of overstretched health systems, tools that could give a physician in an underserved community access to the same analytical power as a major academic medical center. These are not abstractions. They are within reach. And they represent a genuine opportunity to do something medicine has always promised and rarely delivered at scale: equitable care for every patient, not just the ones the system was designed around. But opportunity is not outcome. And in health care, innovation and inclusion have not always arrived together.
The mechanics matter here. Machine learning models learn from data, and American health care data is, in many ways, a precise record of American health care inequality. When training datasets do not reflect the full diversity of the patients they are meant to serve (people whose encounters with the system were briefer, later, and less thoroughly documented), the model does not know what it is missing. It performs with confidence, at scale, on a version of the patient population that was never complete to begin with. The goal was never to encode inequity. But intention and impact are not the same thing, and the gap between them is where vulnerable patients have always fallen through.
The hidden infrastructure of health equity
There is also the question of access, not to the AI itself, but to the infrastructure it assumes. Remote monitoring platforms, AI-powered patient portals, digital therapeutics, ambient documentation tools. These advances are real and valuable. They also require reliable internet, compatible devices, health literacy, and time. The communities carrying the heaviest burden of chronic disease are often the least resourced to benefit from the tools designed to address it. The digital divide and the health equity gap are not parallel problems. They are the same problem, and without deliberate intention, AI risks widening both simultaneously while promising to solve each.
None of this means slowing down. It means building better. Physicians are essential here, not as passive adopters of whatever system a hospital deploys, but as active voices in how these tools are designed, evaluated, and implemented. Those at the bedside observe what the algorithm cannot. Which patients never made it to the appointment the model assumed they attended. Which risk scores cannot account for the second job, the absent interpreter, the medical distrust earned across generations. That knowledge does not become less valuable in an AI-augmented world. It becomes more valuable, because it is exactly what responsible implementation requires.
Aligning artificial intelligence with intentional design
The identity crisis sweeping the medical community and the equity question are, at their core, asking the same thing: What is medicine really for? If the answer is that it exists to serve every patient, not just the ones whose data is clean and whose lives fit neatly into a training set, then that answer must live in the architecture of the tools being built. Not in a footnote. Not in a diversity statement appended after the fact. In the design, from the beginning.
This is medicine’s moment to align its tools with its highest intentions. AI gives health care professionals tools that have never existed before. The question is whether they will be aimed deliberately, at the gaps, at the margins, at the patients who have been waiting the longest for a system that finally sees them. Automation is coming for medicine. That is certain. Whether equity comes with it depends on the questions being asked right now, and whether the answers are demanded to include everyone.
Judith Eguzoikpe is a physician and public health advocate.