Artificial intelligence has already entered the exam room. You may or may not have noticed. In my own practice, I sometimes use an ambient scribe during patient visits. Not for every encounter, but for some. When it works well, it can be incredibly helpful. The system listens to the conversation and generates a draft of the clinical note while the visit is still fresh, which lets me focus on my patient rather than on typing at my computer. On a busy day, that can make a real difference.

Like many physicians, I have spent years finishing notes long after the last patient has left. Documentation has slowly taken over more and more of the day, especially since the introduction of EMRs. When AI works as intended, it can take some of that burden off our shoulders and allow us to focus more fully on the patient sitting in front of us. So I do see the benefits.

But using these tools has also taught me something important. AI can produce polished documentation very quickly, and sometimes it is impressively accurate. Other times, it introduces small details that were never actually said in the room, almost as if it is trying to fill empty space. And when the clinic is moving quickly, those subtle differences are easy to miss. This is where I begin to feel uneasy. The moment a physician signs a chart that contains AI-generated content, the responsibility for everything in that note belongs to the physician. As these tools become more common in everyday practice, it is worth pausing to think carefully about what that means.
The real promise of AI in the clinic
Much of the excitement around artificial intelligence in health care is understandable. Our system is strained, and this technology offers hope. Many of the most useful applications are not about replacing physicians or making diagnoses. Instead, they focus on the parts of medicine that have become most exhausting for clinicians: documentation, coding, inbox management, and record review. AI tools can now draft notes, summarize long message threads from patient portals, and help organize complex medical records. For physicians buried under administrative work, that promise can feel like a lifeline.

In my experience, shorter AI summaries are safer. Brief drafts are easier to review and edit before they become part of the permanent record, while longer automated notes can hide subtle errors. Like any clinical tool, artificial intelligence works best when physicians understand how to use it thoughtfully and recognize its limitations. I have even participated in early beta testing of AI documentation tools for physicians and helped develop pediatric templates for one of these systems. Working on the templates made it clear just how helpful these tools can be, but also how much vigilance they require. Yet even when used carefully, these systems introduce a new type of problem that many physicians have not yet been trained to recognize.
When the medical record gets something wrong
The mother of one of my patients recently shared a story that made me pause. Her own physician had begun using an ambient scribe during visits. Later, when she needed to see a specialist, she requested a copy of her medical records. While reviewing the documentation, she noticed something that immediately caught her attention: her medical history listed a diagnosis of breast cancer. The problem was that she had never had breast cancer. Somehow the diagnosis had appeared in the note generated during the visit. When she raised the issue with her physician, it became clear that the information had not been intentionally added, but it had also not been caught before the chart was finalized. Now it existed in her medical record.

At first glance, this might seem like a small documentation error, but medical records travel. They are sent to specialists, referenced in future visits, and used to guide clinical decisions. An incorrect diagnosis, once written into the chart, can follow a patient for years if it is missed.

Situations like this highlight an important reality about AI documentation tools. They can generate language quickly and often convincingly, but they can also introduce details that were never actually part of the conversation. In the world of artificial intelligence, these kinds of errors are sometimes called hallucinations: the system generates information that sounds plausible but is not actually true. And while artificial intelligence can generate language that sounds convincing, it cannot verify clinical truth.
The phrase physicians should pay attention to
Many companies that develop AI tools for health care describe their systems as operating with “a physician in the loop.” At first glance, that phrase sounds reassuring. It suggests that physicians remain central to the process. But it also quietly defines where responsibility lies. An AI system may draft the documentation or summarize the encounter. The physician reviews the note, signs it, and places it into the medical record. From a legal and professional standpoint, that means the physician ultimately owns the output. The technology may help produce the documentation, but the responsibility for its accuracy still belongs to the clinician who signs the chart.
Why physicians need to be part of the governance conversation
Artificial intelligence is moving into medicine quickly, often faster than physicians have been trained to evaluate it. Hospitals and health systems are adopting AI-driven tools that promise efficiency and workflow improvements. Many of these tools may help physicians practice more effectively, but thoughtful adoption requires oversight. Physicians should feel comfortable asking important questions about the technologies they are being asked to use:
- What data trained this system?
- How often does it make mistakes?
- Is it summarizing existing information or generating new content?
- How is patient data protected?
- Ultimately, who is responsible when something incorrect enters the medical record and goes unnoticed?
These are not simply technical questions. They are questions of clinical governance. Physicians have always played a role in evaluating new technologies before they become standard tools in patient care. Artificial intelligence should be no different. In fact, these tools should be designed with physicians, not simply evaluated by us. Our perspective matters because we understand the realities of clinical practice in ways that technology developers and administrators often do not.
A future worth guiding
Artificial intelligence may become one of the most powerful technologies introduced into medicine in our lifetime. If used thoughtfully, it has the potential to reduce documentation burden, streamline communication, and give physicians something many of us feel we have slowly lost over the years: time and attention for the patient sitting in front of us.

But tools this powerful require thoughtful use. AI can draft a note. It can summarize a chart. It can organize information faster than any of us could on our own. What it cannot do is take responsibility for the care of a patient. That responsibility still belongs to the physician, which is why physicians cannot afford to ignore the governance conversation around these tools. We need to understand how they work, where they fail, and how they should be used safely in clinical practice.

This also raises an important question for the future of AI in medicine. If physicians remain responsible for the final chart, should technology developers share some responsibility when these systems introduce errors? Thoughtful governance may require shared accountability, not to discourage innovation, but to encourage the development of safer and more reliable tools.

Artificial intelligence will continue to enter the exam room whether we participate in shaping it or not. The question is whether physicians will simply use these tools, or whether we will help guide how they are built, implemented, and held accountable. Because at the end of the day, when the chart is signed and the patient walks out the door, the responsibility does not belong to the algorithm. It belongs to us.
Elizabeth Vainder is a pediatrician.