AI in medicine vs. aviation: Why the autopilot metaphor fails

Arthur Lazarus, MD, MBA
Physician
January 24, 2026

The pilot of a commercial aircraft, descending into near-zero visibility, hands control to an automated system designed for precisely that moment. The landing is flawless. The passengers are safe. The pilot announces it proudly. Technology functions exactly as promised.

A doctor on board writes, “It made me think about the health care journey I am on with AI. The safety and well-being of 140 people wasn’t left to human intervention; the stakes were too high, and clearly machine has been proven to outperform human. I wonder how long it will be before the trickiest diagnosis, the earliest detection, or the best recommendations will be handled by the medical profession the same way my landing was?”

It is an elegant metaphor. It is also a dangerous one, at least when imported wholesale into medicine.

The limits of the aviation metaphor

In aviation, autoland exists for a narrow, well-defined purpose. The parameters are known. The runway is fixed. The physics are predictable. The system has been tested endlessly under tightly controlled conditions. And crucially, it does not replace the pilot. It is invoked rarely, for the hardest landings, under explicit rules, with human oversight before, during, and after the event. Fewer than 1 percent of landings use autoland, not because the technology is weak, but because the system it operates within is disciplined, bounded, and standardized.

Medicine is none of those things.

The optimism embedded in the physician’s reflection rests on an implicit assumption: that medicine’s hardest moments resemble a foggy runway: rare, clearly defined, and amenable to algorithmic mastery once the data are sufficient. But the most difficult problems in health care are not low-visibility landings. They are ambiguous, evolving, deeply human situations in which the destination itself may be contested.

A diagnosis is not a runway. A patient is not a system state. And clinical care is not governed by a single set of invariant rules.

The complexity of clinical care

When aviation automation fails, investigators can usually trace the failure to a known category: sensor error, training lapse, software edge case, or human-machine interface breakdown. When medicine fails, the causes are more diffuse: social context, access to care, trust, bias, values, timing, fear, denial, financial pressure, and sometimes knowledge gaps. These are not “noise” to be filtered out. They are the substance of care.

The doctor’s analogy also obscures a crucial difference: who built the system.

Autoland was designed by engineers working within a tightly regulated industry, informed by decades of accident investigation, standardized aircraft designs, and a shared global safety culture. Medicine’s AI systems, by contrast, are often trained on incomplete, biased, and commercially mediated data. They are shaped not only by clinicians and scientists, but by incentives that reward scale, speed, and market capture. We should not pretend that these systems emerge from a neutral sky.

The accountability problem

When an AI system “outperforms” a human diagnostician, what does that mean, exactly? Outperforms on which population? Under what assumptions? Using which definitions of success? A correct diagnosis delivered late, without context, or without the patient’s trust may still be a clinical failure. Medicine’s outcomes cannot be reduced to touchdown.

There is also the matter of accountability. When a plane lands itself, responsibility remains unambiguous. The airline, the manufacturer, the regulators, and the pilot all operate within a clear chain of accountability. In medicine, that chain becomes murkier the more autonomy we grant machines. If an AI system misses an early cancer because its training data underrepresented a demographic group, who answers to the patient? The clinician who followed the recommendation? The institution that purchased the system? The company that trained it? Or the algorithm itself, which cannot explain, apologize, or bear moral responsibility?

This is not a theoretical concern. It is already playing out in some ways: clinical decision support tools that nudge rather than explain; algorithms that recommend without revealing their reasoning; risk scores that feel authoritative but conceal value judgments about what, and who, matters.

Augmentation, not abdication

The physician’s story suggests a future in which doctors proudly announce to patients that the “trickiest diagnosis” will be handled by AI, just as the pilot announced autoland. I find that vision disturbing, not because I doubt the power of technology, but because I doubt the wisdom of transferring authority without transferring understanding.

Patients do not come to medicine seeking optimal pattern recognition alone. They come seeking interpretation, judgment, and partnership. They come with stories that do not fit neatly into training sets. They come with fears that no model can quantify. They come needing someone to notice what doesn’t quite add up, to sense when the data are technically correct but clinically wrong.

Nothing in medicine corresponds to “near zero visibility at the field.” There is no moment when the human should step aside entirely because the machine sees better. There are moments when machines can assist, augment, warn, and support. And we should embrace those moments enthusiastically. AI can already help us detect patterns earlier, reduce certain errors, and manage complexity at scale. That is real progress.

But autopilot is not augmentation. Autopilot is abdication, however well intentioned.

A better model for medicine

Perhaps the better aviation analogy is not autoland, but the pilot who knows when to trust instruments and when to question them. The pilot who understands how automation fails. The pilot who remains responsible even when the machine is flying. That model preserves human judgment rather than replacing it.

Medicine does not need fewer humans at the controls. It needs better-supported ones, clinicians who understand both the power and the limits of AI, who can explain its recommendations, challenge its assumptions, and integrate its outputs into a broader human context.

Interesting times, indeed. But if we are going to borrow metaphors from aviation, let us borrow the right ones. Safety in flight has never come from turning pilots into passengers. It has come from respecting complexity, designing for failure, and insisting that humans remain accountable for the systems they create.

After all, planes do not heal people. Doctors do. And no amount of automation changes who must answer when something goes wrong.

Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book, a novel, is JAILBREAK: When Artificial Intelligence Breaks Medicine.
