The denial came back in less than three seconds.
A physician had just submitted a renewal for a medication her patient had taken for years, one that kept her stable, out of the hospital, and able to function. She expected the usual wait time. Maybe an hour. Maybe a day.
Instead, an automated message appeared: “Denied: automated appropriateness determination.”
No reviewer. No rationale. No path for appeal. Only an algorithm, silent, opaque, and final.
This is the emerging reality many clinicians now face: Artificial intelligence has quietly taken a seat between the prescription and the pharmacy. And with it comes a profound shift in access, trust, and the psychology of clinical work.
When AI becomes a gatekeeper
AI has entered the health care ecosystem not with splashy announcements, but through administrative infrastructure. While diagnostic algorithms and predictive models get the attention, a far more consequential transformation is happening in prior authorization.
Payers are deploying machine learning tools that (see the sketch after this list):
- Parse documentation
- Compare cases to historical approval patterns
- Predict appropriateness
- Auto-deny based on model outputs
- Escalate specific cases using algorithmic rules
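To make those mechanics concrete, here is a minimal sketch of what such an auto-denial pipeline could look like. Every field name, threshold, and score in it is a hypothetical assumption for illustration, not any payer's actual system:

```python
# Hypothetical sketch of an automated prior-authorization pipeline.
# All names, thresholds, and rules here are illustrative assumptions.
from dataclasses import dataclass


@dataclass
class PriorAuthRequest:
    drug: str
    diagnosis_code: str
    notes: str  # parsed clinical documentation


def appropriateness_score(req: PriorAuthRequest) -> float:
    """Stand-in for a model trained on historical approval patterns."""
    # A real system would run NLP over the notes and compare the case
    # to past approvals; a fixed placeholder score keeps this runnable.
    return 0.42


APPROVE_THRESHOLD = 0.80    # assumed cutoffs, not real payer policy
ESCALATE_THRESHOLD = 0.50


def decide(req: PriorAuthRequest) -> str:
    score = appropriateness_score(req)
    if score >= APPROVE_THRESHOLD:
        return "approved automatically"
    if score >= ESCALATE_THRESHOLD:
        return "escalated to a human reviewer"
    # Below the lower threshold: no reviewer, no rationale, no appeal path.
    return "Denied: automated appropriateness determination"


print(decide(PriorAuthRequest("example drug", "example-code", "renewal, stable for years")))
```

The structural point is the last branch: it is the only path that reaches the patient without a human in the loop, and it is the one the physician in the opening vignette encountered.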
On paper, this is framed as efficiency. In practice, it represents a shift in power, one that is faster, less transparent, and significantly harder to challenge. And early evidence suggests we should proceed with caution.
Bias is already documented, and it is not subtle.
A landmark Science investigation revealed that a widely used population-health algorithm underestimated the needs of Black patients because it used prior health care spending as a proxy for illness severity. Black patients with the same risk score as white patients were significantly sicker, indicating that the model encoded bias directly into its logic.
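The mechanism is worth spelling out. A toy simulation (my construction, not the study's actual model) shows how a spending proxy penalizes a group whose access barriers suppress spending, even when illness burden is identical by design:

```python
# Toy simulation of the proxy problem described above. The groups,
# distributions, and access factor are illustrative assumptions.
import random

random.seed(0)


def simulate(access_factor: float, n: int = 10_000) -> list[tuple[float, float]]:
    """Return (true_illness, observed_spending) pairs for one group."""
    out = []
    for _ in range(n):
        illness = random.gauss(50, 10)       # same distribution in both groups
        spending = illness * access_factor   # spending = illness x access
        out.append((illness, spending))
    return out


group_a = simulate(access_factor=1.0)   # full access to care
group_b = simulate(access_factor=0.7)   # access barriers suppress spending

# A model that ranks "risk" by spending gives group B systematically
# lower scores at the same level of illness.
cutoff = sorted(s for _, s in group_a + group_b)[-2000]  # top 10% by spending
flagged_a = sum(1 for _, s in group_a if s >= cutoff)
flagged_b = sum(1 for _, s in group_b if s >= cutoff)
print(f"Flagged for extra care: group A={flagged_a}, group B={flagged_b}")
```

Both groups are equally sick by construction; the ranking diverges only because spending was allowed to stand in for need.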
The Agency for Healthcare Research and Quality raised similar concerns in its 2023 federal review, warning that health care algorithms can “embed or amplify” racial and ethnic disparities unless rigorously governed.
If algorithms misclassify risk based on biased data, what happens when the same systems determine whether patients receive medication? We risk hard-coding inequity into the very systems responsible for gatekeeping access.
Clinicians are already feeling the psychological cost
For years, clinicians have reported that prior authorization undermines their ability to care for patients. AI has intensified that strain. Physicians now describe:
- Moral injury: “I know what my patient needs, but something I can’t see or override says no.”
- Loss of agency: Automated denial pathways make it unclear who (if anyone) reviewed the case.
- Trust erosion: Patients assume the physician failed to prescribe appropriately, not that an algorithm denied access.
- Identity disruption: Clinical judgment is sidelined by systems clinicians cannot interpret or challenge.
This mirrors well-documented patterns in organizational psychology: When power shifts without transparency or psychological preparation, it creates transition fractures, burnout, and disengagement. AI didn’t create prior authorization problems. But it has accelerated them and changed the emotional landscape for clinicians.
The innovation-access gap
There is a growing paradox in health care. AI is accelerating pharmaceutical innovation, optimizing drug discovery, simulating trials, and advancing precision therapeutics. But the downstream systems that determine whether patients can access those same therapies are becoming more restrictive through automation.
The result is what I call the innovation-access gap: Innovation moves quickly. Access does not.
A therapy can be groundbreaking, but if an algorithm quietly flags it as unnecessary or non-standard, the innovation never reaches the patient. The consequences are profound, particularly for patients requiring oncology treatments, rare-disease therapies, and complex medication regimens.
This is no longer simply a system problem. It is a leadership problem.
The clinician-algorithm collision
One of the most painful dynamics physicians describe is the collision between professional judgment and algorithmic authority.
A clinician prescribes. Their name appears on the order. The patient trusts the clinician’s expertise. But when an automated denial arrives:
- The physician must defend a decision they didn’t make.
- The patient loses trust in the system.
- The clinician absorbs the emotional consequences of an algorithmic decision.
The physician-patient relationship, central to good medicine, becomes mediated by a black box no one can explain. This is a quiet but deeply harmful form of moral distress.
What health care leaders must do now
AI is not inherently harmful. The absence of governance, equity safeguards, and transparency is. Health care leaders, payers, and policymakers must insist on:
- Explainability: No denial should occur without an accessible explanation that clinicians can understand and contest.
- Human override authority: AI should inform decisions, not finalize them.
- Equity audits: Algorithms must be reviewed regularly to ensure no disparate impact across racial, ethnic, age, gender, or geographic lines (see the sketch after this list).
- Clinician involvement: AI models affecting access should be designed with direct input from frontline clinicians.
- Transparency with patients: Patients deserve to know when an algorithm plays a role in their care decisions.
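What might the equity-audit item look like in practice? A minimal sketch follows, borrowing the four-fifths rule from employment law as one possible disparity test; the rule, the group labels, and the sample data are all assumptions for illustration:

```python
# Minimal sketch of an equity audit over approval decisions.
# The four-fifths rule and the sample data are illustrative
# assumptions, not a regulatory standard for prior authorization.
from collections import defaultdict


def approval_rates(decisions: list[tuple[str, bool]]) -> dict[str, float]:
    """decisions: (group, approved) pairs -> approval rate per group."""
    counts: dict[str, list[int]] = defaultdict(lambda: [0, 0])
    for group, approved in decisions:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: ok / total for g, (ok, total) in counts.items()}


def disparate_impact_flags(rates: dict[str, float], ratio: float = 0.8) -> list[str]:
    """Flag groups whose approval rate falls below `ratio` of the best rate."""
    best = max(rates.values())
    return [g for g, r in rates.items() if r < ratio * best]


# Hypothetical audit sample: (group, approved)
sample = [("A", True)] * 85 + [("A", False)] * 15 + \
         [("B", True)] * 62 + [("B", False)] * 38
rates = approval_rates(sample)
print(rates)                          # {'A': 0.85, 'B': 0.62}
print(disparate_impact_flags(rates))  # ['B'] -> 0.62 < 0.8 * 0.85
```

An audit like this is cheap to run on every model release; the hard part is requiring that a flag actually pauses deployment rather than being noted and ignored.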
Without these safeguards, AI risks magnifying existing inequities and worsening clinician burnout, patient frustration, and systemic distrust.
Conclusion: integrity, not efficiency, must lead
AI can reduce administrative burden. It can expedite approvals. It can support consistency and reduce friction. But if deployed without accountability, explainability, and equity checks, it becomes a lock on the pharmacy door.
Used wisely (with transparency and human-centered governance), AI can be the key that unlocks access rather than restricts it. Technology alone will not determine the outcome. Leadership will.
The gate is shifting. The guard must be ready.
Tiffiny Black is a health care consultant.