AI is rolling out in medicine faster than most of us can process. Ambient scribes documenting visits. Clinical decision support algorithms. Automated prior authorizations. The promises are compelling: reduced clerical burden, more face time with patients, less burnout.
I wanted this. As a palliative care doctor and director of physician well-being at my institution, I’ve spent years watching colleagues drown in documentation and burn out from relentless task loads. When AI tools promised relief, I advocated for them.
And now it’s happening. My health system, like many across the country, is scaling AI scribes and other tools. Leadership is bringing well-being champions into the conversation, and they seem to genuinely want to help us do our jobs.
But something feels unsettled. And I’m not the only one feeling it.
The unasked question
Last week, I attended a virtual discussion on AI in health care with fellow palliative care clinicians. We all felt the tension between promise and threat. The promise is real: AI could free us from documentation drudgery. But the fear is also real. What if instead of giving us our time back, administrators demand we use that time to see more patients? Worse, what if institutions use AI not to support physicians but to reduce the need for us?
Then someone said it: “Hospice and palliative medicine is truly the human side of medicine.” That felt true. But it raised the central question: What is it that a human offers that AI can’t?
The discussion was robust. We’re empathetic communicators, but AI models can already mimic empathy. We think outside the box, and AI still struggles to improvise in the messy reality of bedside medicine. Then someone commented: “Presence. That’s what we offer. That’s what AI can never replace.”
That felt right. Sort of. But I left needing to think about the question a whole lot more.
Why this question matters
Without clarity on what makes us irreplaceable, we can’t advocate effectively for how AI should be implemented. We can’t recognize when efficiency gains come at the cost of what matters most. We can’t spot when we’re being asked to participate in our own displacement. And we can’t lead this transition instead of being swept along by it.
What’s really driving AI in health care
As we navigate becoming “augmented” by AI, it’s prudent to pause and be skeptical about what’s driving this surge. Venture capital has poured billions into health care AI companies. These aren’t nonprofits; they’re businesses that need to generate returns for investors.
The economics matter because they shape incentives. When vendors pitch AI tools to health systems, business cases typically center on ROI, operational efficiency, and productivity gains.
But it’s worth asking: Are the features being built optimized for physician well-being and patient outcomes? Or for demonstrable returns on investment? These aren’t necessarily incompatible goals, but they’re not automatically aligned either.
A concerning pattern
Early data shows ambient scribes can modestly reduce documentation time. But we should pay attention to what’s happening as AI gets deployed across other industries.
Research from Upwork found that while 96 percent of C-suite leaders expect AI to boost productivity, 77 percent of employees using AI say these tools have actually increased their workload. And 88 percent of the highest-performing AI users report significant burnout.
The efficiency gains aren’t translating into workers going home earlier. Instead, many report being asked to do more work as a direct result of AI. A World Economic Forum survey found that 40 percent of employers anticipate workforce reductions in areas where AI can automate tasks.
Health care isn’t exempt from these economic dynamics. We’ve seen this before with EHRs: they were supposed to give us more time with patients, but instead became a burnout driver optimized for billing, not care. The risk is that physicians become trainers for systems that will justify tighter staffing, higher patient volumes, and greater productivity expectations, all while we shoulder the liability and emotional labor that AI can’t automate.
The answer
In the days after that palliative care discussion, I realized something about presence: the impact of that uniquely human presence is bidirectional. It doesn’t only touch patients. Being with patients influences how doctors think, feel, and act. We have proximity: we see patients daily, know their stories, share in their hopes and fears. We care about what happens to them.
In a health care system increasingly driven by profit, human clinicians may be the only stakeholders positioned to choose a different mission: patients.
Why physicians are uniquely positioned to resist profit extraction in health care
An AI can’t choose patient welfare over profit. A human doctor can. An AI will execute the algorithm. A human doctor can say “no, this is wrong.”
You face consequences that create different incentives. You carry what happens emotionally, legally, professionally. Those stakes shape your decisions in ways shareholder value never will.
You can organize collectively. AI can’t unionize. AI can’t refuse. AI can’t build professional coalitions. You can.
Professional norms exist independent of corporate goals. The Hippocratic tradition, medical ethics, your professional identity: these give you a separate allegiance that competes with profit.
These distinguishing characteristics are powerful. But that power requires moral clarity about what, and who, you’re committed to.
Individual clarity, collective power
The institutional pace of AI implementation doesn’t allow for this kind of reflection. But that doesn’t mean the reflection isn’t necessary. It just means we must create that space for ourselves.
Coaching, group discussions, journaling, therapy (whatever you do to think through confusing things) can provide the clarity needed to navigate the sea change AI represents. These spaces help you discover your own answers about how to do good work in a bad system and develop the moral clarity and agency to act on those answers.
In a system where money has become the mission, maintaining your commitment to patients requires clarity about what you’re fighting for and strength to sustain that fight without burning out.
What parts of your work feel most human? What are you willing to automate and what must stay in human hands? How do you want to show up as AI changes your practice? And crucially: What do you offer that AI never will?
Getting clear on these questions matters for your own practice and well-being. But it also matters for something bigger. Individual physicians getting clear on what they’re protecting is the foundation for collective action.
The physicians who will effectively advocate for thoughtful AI governance are the ones who’ve articulated what they’re fighting for. The ones who will push back against productivity creep are the ones who know their own boundaries. The ones who will organize to ensure AI augments rather than displaces physician work are the ones who’ve done their own internal work first.
Christie Mulholland is a palliative care physician and certified physician development coach who helps physicians reclaim their sense of purpose and connection in medicine. Through her work at Reclaim Physician Coaching, she guides colleagues in rediscovering fulfillment in their professional lives.
At the Icahn School of Medicine, Dr. Mulholland serves as associate professor of palliative medicine and director of the Faculty Well-being Champions Program. Affiliated with Mount Sinai Hospital, she leads initiatives that advance physician well-being by reducing administrative burden and improving access to mental health resources.
Her recent scholarship includes a chapter in Empowering Wellness: Generalizable Approaches for Designing and Implementing Well-Being Initiatives Within Health Systems and the article “How to Support Your Organization’s Emotional PPE Needs during COVID-19.” Her peer-reviewed publications have appeared in Cancers and the Journal of Science and Innovation in Medicine.
She shares reflections on professional growth and physician well-being through Instagram, Facebook, and LinkedIn. Dr. Mulholland lives in New York City with her husband, James, and their dog, Brindi.