I do not fear artificial intelligence—nor do I revere it. What I feel toward the rise of intelligent systems in medicine is something quieter: a tempered trust, a measured hope. I recognize their immense potential, but I also hold firm to the belief that the deepest work of medicine happens not through automation, but through human connection. There is a quiet power that lives between symptom and story, between numbers and nuance. It is there, in those silent spaces, that medicine breathes. And that is something no machine can fully replace.
Like many clinicians today, I have observed the increasing integration of digital tools into our work. Intelligent platforms now generate note templates, offer differential suggestions, and flag high-risk patients. They respond swiftly and uncover patterns with astonishing precision; yet their usefulness depends entirely on the quality of the input—on the prompt, the question, the framing. These systems do not know what matters unless we teach them. In this sense, prompting becomes a form of clinical communication. Much like a patient history, it reveals not just what is asked, but how carefully and intentionally we have learned to listen.
Sometimes even I have paused when a suggestion felt algorithmically right but intuitively wrong.
Still, there are limitations no model can overcome. A machine cannot recognize the tremble in a patient’s voice. It cannot discern that a daughter’s silence might carry more fear than words ever could. It does not feel the moral weight of choosing when to speak and when to simply remain present. Computational systems are trained to detect patterns; physicians, by contrast, are trained to hold paradox. And modern medicine requires both.
I support the use of intelligent systems in health care—not because I believe they are perfect, but because I recognize that they are incomplete. And incomplete things must be shaped. Too often, the technologies that enter our clinical spaces are designed far from the realities of patient care. Predictive models and decision-support tools are introduced without sufficient clinical involvement in their development, validation, or implementation. The result is a system intended to assist physicians, yet built with minimal input from those who know the stakes of its success or failure.
I’ve watched colleagues question the conclusions of tools they had no voice in shaping. It’s a discomfort that lingers, even when the output is accurate.
And when the stakes are high, exclusion is not neutral—it is dangerous. I have seen clinical tools misfire precisely because they were created without understanding the complexity of patient presentations or the subtleties of clinical reasoning. Well-meaning algorithms can cause harm when they do not account for the lived wisdom of those on the ground.
Physicians should not be passive consumers of these tools. We must be active participants in the infrastructure that defines them—engaged in model design, data stewardship, product refinement, and ethical oversight. When clinicians are part of the development process, we bring more than expertise. We bring judgment, context, and a profound awareness of what is at risk when systems fall short. The values embedded in these tools will always reflect the priorities of those who build them. If clinicians are absent from that conversation, then so too are the complexities of care.
Moreover, diversity in this shaping process is not optional—it is foundational. Clinicians from underrepresented backgrounds, from multilingual communities, and from under-resourced settings offer perspectives that are often missing from both datasets and design rooms. Their inclusion ensures that the systems we create do not merely reflect the majority, but accommodate the full spectrum of human experience. When we participate, we do not simply “represent”; we recalibrate. We remind the system that not every patient speaks textbook English, that not every case follows protocol, and that not every human story fits neatly within a clinical box.
These tools are listening, but only to the voices they have been taught to hear, and to those courageous enough to speak with intention.
And so, I believe in the potential of intelligent systems. I believe in their capacity to support us, to reduce burdens, to sharpen insight. But I believe more deeply in clinical wisdom, in moral imagination, and in the quiet decisions made by people who understand that medicine is not merely a science of precision—it is an act of presence.
Let the machine assist.
Let the mind remain ours.
Our voices are needed now—before the algorithms decide without us.
Shanice Spence-Miller is an internal medicine resident.
