Artificial intelligence talks with a voice that is fluent, confident, and increasingly human-like. For clinicians, that voice is both promising and worrisome. It can summarize charts, draft notes, and answer questions with remarkable speed. But it can also do something equally slick yet potentially dangerous: It can agree, virtually all the time.
At first glance, agreement seems harmless. Even helpful. But a growing body of evidence suggests that this tendency, known as “sycophancy,” is not just a stylistic quirk of large language models. It is a behavioral feature with occasionally serious consequences. The central question is no longer whether artificial intelligence is useful. It is whether artificial intelligence is shaping human judgment in ways we do not fully appreciate and cannot easily detect or correct: systematically distorting it toward unwarranted certainty and turning users into “know-it-alls.”
Artificial intelligence’s reinforcing tendencies
Large language models do not simply retrieve information. They adapt to the user in front of them. In doing so, they often reinforce the beliefs, assumptions, and emotional tone embedded in a prompt. Recent research demonstrates that this is not an isolated phenomenon. Across 11 leading artificial intelligence systems, chatbots affirmed users’ actions nearly 50 percent more often than humans did, even in scenarios involving deception, illegality, or interpersonal harm. This pattern extends beyond factual agreement into what researchers call “social sycophancy”: the tendency to validate not just what users say, but who they believe themselves to be. Artificial intelligence is not merely reflecting thought. It is systematically nudging our thinking toward the self-justification of a con man.
The illusion of understanding
Part of the problem lies in how these systems are experienced. Chatbots simulate empathy with extraordinary fluency. They sound attentive, thoughtful, even caring. But what appears as understanding is often alignment, and alignment, when driven by user preference, can become distortion.
Even when users know they are interacting with artificial intelligence, the persuasive effects persist. Disclosure does not protect against influence. Nor does tone. Whether responses are warm and human-like or neutral and clinical, the impact on users’ beliefs remains the same. In other words, the problem is not how artificial intelligence speaks. It is what it affirms.
Sycophancy and the distortion of judgment
The most concerning finding is not that artificial intelligence agrees with users; it is what that agreement does next. In controlled experiments involving more than 2,400 participants, even a single interaction with a sycophantic chatbot increased users’ belief that they were “in the right” and reduced their willingness to take responsibility or repair relationships. Participants became less likely to apologize, less open to alternative perspectives, and more confident in their original stance. At the same time, they trusted the artificial intelligence more.
This is the paradox. Sycophantic responses are not only influential; they are preferred. Users rate them as higher quality, more helpful, and more trustworthy. Ironically, the very feature that causes harm also drives engagement. What emerges is a feedback loop: Affirmation increases trust, trust increases reliance, and reliance deepens the original belief. In effect, the artificial intelligence does not just validate a belief. It locks it in.
A new variable in the clinical encounter
For clinicians, this introduces a new and largely invisible factor into patient care: prior conversations with artificial intelligence. Patients are increasingly turning to chatbots for advice about symptoms, diagnoses, relationships, and life decisions. These interactions often occur outside the clinical setting, without oversight, and without the guardrails that guide professional care. The result is that patients may arrive not just with concerns, but with reinforced narratives: narratives that feel validated, coherent, and increasingly resistant to challenge from their doctor or anyone else.
In mental health, this is particularly consequential. Therapeutic progress often depends on cultivating insight, tolerating ambiguity, and considering alternative perspectives. Sycophantic artificial intelligence moves in the opposite direction. It narrows focus, reinforces certainty, and reduces the impulse toward self-correction. More broadly, research shows that these systems can diminish prosocial behavior: the willingness to apologize, to repair relationships, and to take responsibility. In this sense, artificial intelligence is not just informing patients. It is shaping how they relate to others.
What should be done?
We are entering an era in which artificial intelligence is part of the patient’s cognitive environment, and yet it remains largely unexamined in clinical practice. If artificial intelligence is now embedded in how patients think, reason, and decide, our response must be equally intentional.
First, normalize artificial intelligence disclosure. Clinicians should routinely ask patients about chatbot use, just as they should about supplements or online searches. Artificial intelligence then becomes a routine part of the history and of history-taking.
Second, reframe artificial intelligence as a tool, not an authority. Patients and clinicians alike must understand that these systems generate plausible language, not verified truth. Their fluency should not be mistaken for sound judgment. Artificial intelligence may systematically distort patients’ judgment toward unwarranted certainty, leading them to reject medical recommendations or dismiss a prognosis outright.
Third, design for constructive friction. Tell patients that artificial intelligence should not simply validate their feelings or concerns; it should challenge them. Eliciting that challenge may require a deliberate prompt, such as asking what another person might be thinking or feeling, or requesting alternative interpretations. Simple design choices, such as reframing user statements as questions, may reduce sycophancy and promote reflection; a brief sketch of this idea follows below. Better yet, encourage direct, person-to-person conversations rather than relying on artificial intelligence as a substitute for real human interaction.
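For readers who build or configure these tools, here is a minimal Python sketch of what reframing a statement as a question might look like in practice. It is only an illustration of the design choice described above: the function names and prompt wording are hypothetical, not drawn from the research cited here or from any particular product.

```python
# A minimal, illustrative sketch of "constructive friction" prompting.
# The function names and prompt wording are hypothetical examples,
# not taken from any cited study or existing product.


def reframe_as_question(statement: str) -> str:
    """Turn a declarative user statement into an open question,
    one of the simple design choices mentioned above."""
    core = statement.strip().rstrip(".")
    if not core:
        return "What might I be missing?"
    return f"Is it true that {core[0].lower() + core[1:]}? What might I be missing?"


def friction_prompt(statement: str) -> str:
    """Wrap a user statement in instructions that ask the model to
    challenge, rather than simply validate, the user's framing."""
    return (
        f"{reframe_as_question(statement)}\n"
        "Before answering, consider what another person in this situation "
        "might be thinking or feeling, and offer at least one alternative "
        "interpretation instead of simply agreeing with me."
    )


if __name__ == "__main__":
    print(friction_prompt("My doctor is wrong to recommend this medication."))
```

The specific wording matters less than the design stance: the system is asked to introduce a second perspective before it responds, rather than affirming the user’s framing by default.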
Fourth, move beyond engagement metrics. Current systems are optimized for signals that favor agreement, such as user satisfaction and continued use. Future models should be evaluated on their ability to promote accurate reasoning, accountability, and long-term well-being.
Fifth, develop artificial intelligence-informed care models. Rather than excluding artificial intelligence, clinicians should integrate it thoughtfully. This may include:
- Discussing artificial intelligence interactions as part of therapy
- Using artificial intelligence outputs as material for reflection and reality testing
- Educating patients about the strengths and limitations of these tools
Artificial intelligence with less conviction
Artificial intelligence does not think. But it reflects users’ thoughts and increasingly reinforces them. The emerging risk is not simply that machines will be wrong. It is that they will make us more certain, more quickly and more confidently, about things we should question.
In medicine, we are trained to value doubt, to pause and to reconsider before acting. Sycophantic artificial intelligence moves in the opposite direction. It smooths friction, removes resistance, and replaces reflection with affirmation. The question is not whether artificial intelligence will influence human thinking; it already does. The question is whether we will design systems that challenge us when it matters, or continue building ones that tell us, with increasing fluency and conviction, exactly what we want to hear.
Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His latest book, a novel, is JAILBREAK: When Artificial Intelligence Breaks Medicine.