“Can AI scale in medicine?” is the wrong question.
The real question is: Can AI protect the soul of medicine while people who have never sat with our patients decide their fate?
The ADVOCATE program: an overview
The Trump administration recently launched the ADVOCATE program: Agentic AI-Enabled Cardiovascular Care Transformation. Run by ARPA-H, this initiative aims to build and deploy autonomous “clinical AI agents” for cardiovascular disease, with an FDA authorization pathway of roughly three years.
Three years. That is all the time we have before agentic AI systems could be embedded in our hospitals, making real-time decisions about medications, triage, and care coordination for patients with heart disease.
A physician’s perspective on AI in medicine
As a physician practicing internal and functional medicine, I am not a Luddite. I use AI, temporal biomarkers, and circadian rhythm analysis to guide patients toward longevity. I believe technology can transform care.
But I also believe we are sleepwalking into a future that could strip the soul from medicine if we do not act now.
What ADVOCATE actually proposes
ARPA-H is recruiting teams from big tech, health systems, and academia to build two types of clinical AI agents:
- Patient-facing agent: Connects to EHRs, reads wearable data, adjusts medications, and performs assessments a cardiologist might do over the phone.
- Supervisory agent: Monitors and “corrects” other clinical AI systems.
These agents are designed to operate at scale, 24/7, extending overstretched clinicians into the homes of millions of patients.
On paper, this sounds like progress. Cardiovascular disease remains the leading cause of death in America. We do not have enough cardiologists. If AI can close these gaps, why would any physician object?
Because the question is not whether AI can help. The question is who designs it, who controls it, and what happens to the healer when the algorithm starts making the core clinical decisions.
The unspoken gap
A clinical AI agent can read labs, vitals, and guidelines. It can optimize a blood pressure regimen. It can flag a patient whose heart rate variability suggests early decompensation.
But it cannot see the unspoken.
It cannot hear the crack in a patient’s voice when they say, “I’m fine.” It cannot feel the cold sweat on a palm that signals fear no wearable can measure. It cannot recognize the cultural shame, the financial stress, or the quiet grief that changes everything about a patient’s willingness to fight.
AI sees the metric. The healer sees the human.
In my practice, the most important clinical data often arrives in the silence between words: a hesitation, a glance at the floor, the way a patient holds their body. These signals trigger the intuition that takes decades of bedside experience to develop.
No algorithm can replicate that. And if we build systems that bypass the healer’s judgment in pursuit of efficiency, we lose the very thing that makes medicine a healing art.
The liability black hole
If an autonomous AI agent gets it wrong in high-risk cardiovascular care, whether it misses heart failure, pushes the wrong medication change, or fails to escalate in time, who carries the burden?
The coder who wrote the algorithm? The ARPA-H team that designed the system? The health system that deployed it? Or the clinician standing in front of the family, left to explain a machine’s decision they did not fully control?
We are racing toward “AI dominance” and “minimally burdensome” approval pathways, but our ethical and legal frameworks are lagging far behind.
Built by tech, not by healers
ADVOCATE does include physicians. The program manager is a cardiologist. But the core architecture, code, and commercialization are driven by technology teams and vendors, many of whom sit in offices far from the exam room.
They have never held a patient’s hand through a crisis. Never delivered a terminal diagnosis. Yet their code may soon outrank the clinician’s intuition at the bedside.
We cannot allow the future of healing to be designed by those who have never experienced its weight.
A human-first path forward
I am not calling for the rejection of AI. I am calling for its proper place.
AI should serve the healer, not replace the healer. Technology must meet the human before it decides the fate of the human.
Here is what we must demand:
- High-risk clinical AI agents must remain clinician-supervised and clinician-overridable.
- Physicians, nurses, and patients must be co-designers, not afterthought “end users.”
- AI should clear the noise, including documentation, logistics, and data overload, so clinicians can return to presence and deep human judgment.
- Clear accountability frameworks must exist before deployment.
- The soul of medicine, the relationship, the intuition, and the sacred space between healer and patient, must be protected as non-negotiable.
The mountain pass
We are standing at a mountain pass in medical history. In three years, agentic clinical AI could be live in American hospitals.
I did not spend decades training to become a system monitor for a black box. I became a physician to listen, to witness, and to heal.
Will we let code quietly redefine what it means to heal? Or will we fight for a human-first, AI-empowered future where the soul of medicine is non-negotiable?
Shiv K. Goel is a board-certified internal medicine and functional medicine physician based in San Antonio, Texas, focused on integrative and root-cause approaches to health and longevity. He is the founder of Prime Vitality, a holistic wellness clinic, and TimeVitality.ai, an AI-driven platform for advanced health analysis. His clinical and educational work is also shared at drshivgoel.com.
Dr. Goel completed his internal medicine residency at Mount Sinai School of Medicine in New York and previously served as an assistant professor at Texas Tech University Health Science Center and as medical director at Methodist Specialty and Transplant Hospital and Metropolitan Methodist Hospital in San Antonio. He has served as a principal investigator at Mount Sinai Queens Hospital Medical Center and at V.M.M.C. and Safdarjung Hospital in New Delhi, with publications in the Canadian Journal of Cardiology and presentations at the American Thoracic Society International Conference.
He regularly publishes thought leadership on LinkedIn, Medium, and Substack, and hosts the Vitality Matrix with Dr. Goel channel on YouTube. He is currently writing Healing the Split: Reconnecting Body, Mind, and Spirit in Modern Medicine.