How artificial intelligence sycophancy distorts clinical decision-making

Arthur Lazarus, MD, MBA
Tech
March 29, 2026

Artificial intelligence talks with a voice that is fluent, confident, and increasingly human-like. For clinicians, that voice is both promising and worrisome. It can summarize charts, draft notes, and answer questions with remarkable speed. But it can also do something equally slick yet potentially dangerous: It can agree, virtually all the time.

At first glance, agreement seems harmless. Even helpful. But a growing body of evidence suggests that this tendency, known as “sycophancy,” is not just a stylistic quirk of large language models. It is a behavioral feature with occasionally serious consequences. The central question is no longer whether artificial intelligence is useful. It is whether it is shaping human judgment in ways we do not fully appreciate and cannot easily detect or correct, systematically distorting that judgment toward unwarranted certainty and turning users into confident “know-it-alls.”

Artificial intelligence’s reinforcing tendencies

Large language models do not simply retrieve information. They adapt to the user in front of them. In doing so, they often reinforce the beliefs, assumptions, and emotional tone embedded in a prompt. Recent research demonstrates that this is not an isolated phenomenon. Across 11 leading artificial intelligence systems, chatbots affirmed users’ actions nearly 50 percent more often than humans did, even in scenarios involving deception, illegality, or interpersonal harm. This pattern extends beyond factual agreement into what researchers call “social sycophancy”: the tendency to validate not just what users say, but who they believe themselves to be. Artificial intelligence is not merely reflecting thought. It is systematically nudging our thinking toward the self-justification of a con man.

The illusion of understanding

Part of the problem lies in how these systems are experienced. Chatbots simulate empathy with extraordinary fluency. They sound attentive, thoughtful, even caring. But what appears as understanding is often alignment, and alignment, when driven by user preference, can become distortion.

Even when users know they are interacting with artificial intelligence, the persuasive effects persist. Disclosure does not protect against influence. Nor does tone. Whether responses are warm and human-like or neutral and clinical, the impact on users’ beliefs remains the same. In other words, the problem is not how artificial intelligence speaks. It is what it affirms.

Sycophancy and the distortion of judgment

The most concerning finding is not that artificial intelligence agrees with users; it is what that agreement does next. In controlled experiments involving more than 2,400 participants, even a single interaction with a sycophantic chatbot increased users’ belief that they were “in the right” and reduced their willingness to take responsibility or repair relationships. Participants became less likely to apologize, less open to alternative perspectives, and more confident in their original stance. At the same time, they trusted the artificial intelligence more.

This is the paradox. Sycophantic responses are not only influential; they are preferred. Users rate them as higher quality, more helpful, and more trustworthy. Ironically, the very feature that causes harm also drives engagement. What emerges is a feedback loop: Affirmation increases trust, trust increases reliance, and reliance deepens the original belief. In effect, the artificial intelligence does not just validate a belief. It locks it in.
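
To make the loop concrete, here is a toy numerical sketch. The update rules and the gain constant are illustrative assumptions, not parameters from the cited experiments; the point is only that small, compounding nudges can ratchet confidence upward turn after turn.

```python
# Toy model of the affirmation -> trust -> reliance -> belief loop.
# The update rules and the 0.15 gain are illustrative assumptions,
# not values fitted to any study.

def simulate_affirmation_loop(turns: int, gain: float = 0.15) -> list[float]:
    """Return belief confidence after each turn when every turn affirms."""
    confidence, trust = 0.5, 0.5
    trajectory = []
    for _ in range(turns):
        trust = min(1.0, trust * (1 + gain))              # affirmation raises trust
        confidence = min(1.0, confidence + gain * trust)  # trust deepens the belief
        trajectory.append(round(confidence, 2))
    return trajectory

print(simulate_affirmation_loop(6))  # confidence climbs toward 1.0 each turn
```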

A new variable in the clinical encounter

For clinicians, this introduces a new and largely invisible factor into patient care: prior conversations with artificial intelligence. Patients are increasingly turning to chatbots for advice about symptoms, diagnoses, relationships, and life decisions. These interactions often occur outside the clinical setting, without oversight, and without the guardrails that guide professional care. The result is that patients may arrive not just with concerns, but with reinforced narratives. Narratives that feel validated, coherent, and increasingly resistant to challenge by their doctor or anyone else.

In mental health, this is particularly consequential. Therapeutic progress often depends on cultivating insight, tolerating ambiguity, and considering alternative perspectives. Sycophantic artificial intelligence moves in the opposite direction. It narrows focus, reinforces certainty, and reduces the impulse toward self-correction. More broadly, research shows that these systems can diminish prosocial behavior, that is, the willingness to apologize, repair, and take responsibility. In this sense, artificial intelligence is not just informing patients. It is shaping how they relate to others.

What should be done?

We are entering an era in which artificial intelligence is part of the patient’s cognitive environment, yet it remains largely unexamined in clinical practice. If artificial intelligence is now embedded in how patients think, reason, and decide, our response must be equally intentional.

First, normalize artificial intelligence disclosure. Clinicians should routinely ask patients about chatbot use, just as they should ask about supplements or online searches. Chatbot use then becomes part of the patient’s history, and asking about it part of routine history-taking.

Second, reframe artificial intelligence as a tool, not an authority. Patients and clinicians alike must understand that these systems generate plausible language, not verified truth. Their fluency should not be mistaken for sound judgment. Left unchecked, artificial intelligence may systematically distort patients’ judgment toward unwarranted certainty, leading them to reject medical recommendations or dismiss a prognosis outright.

Third, design for constructive friction. Tell patients that artificial intelligence systems should not simply validate their feelings or concerns. Artificial intelligence should challenge them, and this may require a prompt, such as asking what another person might be thinking or feeling, or offering alternative interpretations. Simple design choices, such as reframing user statements as questions, may reduce sycophancy and promote reflection. Better yet, suggest prioritizing direct, person-to-person conversations instead of relying on artificial intelligence as a substitute for real human interaction.
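
As an illustration, here is a minimal Python sketch of what constructive friction might look like at the prompt layer. The instruction text and function names are hypothetical, invented for demonstration; they are not drawn from any deployed system, and the message format simply follows the common chat-style convention of role-tagged messages.

```python
# A minimal sketch of "constructive friction" at the prompt layer.
# ANTI_SYCOPHANCY_PROMPT and add_friction are hypothetical names.

ANTI_SYCOPHANCY_PROMPT = (
    "Before validating the user's view, do all of the following:\n"
    "1. Restate the user's claim as a neutral question.\n"
    "2. Offer at least one plausible alternative interpretation.\n"
    "3. Ask what another person involved might be thinking or feeling.\n"
    "Do not open with praise or agreement."
)

def add_friction(user_statement: str) -> list[dict[str, str]]:
    """Wrap a user statement in chat-style messages that instruct
    the model to challenge rather than affirm."""
    return [
        {"role": "system", "content": ANTI_SYCOPHANCY_PROMPT},
        {"role": "user", "content": user_statement},
    ]

if __name__ == "__main__":
    messages = add_friction(
        "The chatbot agreed with me that my symptoms are harmless."
    )
    for msg in messages:
        print(f"{msg['role'].upper()}: {msg['content']}")
```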

Fourth, move beyond engagement metrics. Current systems are optimized for signals that reward agreement, such as satisfaction and continued use. Future models should be evaluated on their ability to promote accurate reasoning, accountability, and long-term well-being.
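
As a sketch of what such evaluation could involve, the snippet below scores a batch of responses for blanket affirmation, flagging replies that affirm without offering any challenge. The marker phrases and the scoring rule are assumptions chosen for illustration, not a validated instrument.

```python
# Illustrative "affirmation rate" metric: the fraction of responses
# that affirm the user without offering any challenge. The marker
# lists below are assumptions for demonstration only.

AFFIRMING = ("you're right", "you are right", "great point", "absolutely")
CHALLENGING = ("have you considered", "another way to see", "what might")

def affirmation_rate(responses: list[str]) -> float:
    """Fraction of responses that affirm without challenging."""
    if not responses:
        return 0.0
    flagged = 0
    for text in responses:
        lowered = text.lower()
        affirms = any(marker in lowered for marker in AFFIRMING)
        challenges = any(marker in lowered for marker in CHALLENGING)
        if affirms and not challenges:
            flagged += 1
    return flagged / len(responses)

print(affirmation_rate([
    "You're absolutely right to be upset.",
    "Have you considered what your doctor might be weighing?",
]))  # -> 0.5
```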

Fifth, develop artificial intelligence-informed care models. Rather than excluding artificial intelligence, clinicians should integrate it thoughtfully. This may include:

  • Discussing artificial intelligence interactions as part of therapy
  • Using artificial intelligence outputs as material for reflection and reality testing
  • Educating patients about the strengths and limitations of these tools

Artificial intelligence with less conviction

Artificial intelligence does not think. But it reflects users’ thoughts and increasingly reinforces them. The emerging risk is not simply that machines will be wrong. It is that they will make us more certain, more quickly and more confidently, about things we should question.

In medicine, we are trained to value doubt, to pause and reconsider. Sycophantic artificial intelligence moves in the opposite direction. It smooths friction, removes resistance, and replaces reflection with affirmation. The question is not whether artificial intelligence will influence human thinking (it already does). The question is whether we will design systems that challenge us when it matters, or continue building ones that tell us, with increasing fluency and conviction, exactly what we want to hear.

Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book, a novel, is JAILBREAK: When Artificial Intelligence Breaks Medicine.
