It was a cold January night, around two in the morning on my third consecutive 34-hour shift as a senior pediatric resident at a major freestanding children’s hospital in the northeast. My floor had absorbed four admissions in the last hour. The last, our eleventh of the night, was a teenage boy with new-onset type 1 diabetes presenting in diabetic ketoacidosis. By the time the call came through from the emergency department, his labs were trending in the right direction. His anion gap had closed. He met floor criteria. On paper, he was improving. My intern was buried in paperwork, so I went down to the emergency department to do the assessment myself. I pulled back the curtain.
He was awake but sleepy. He could tell me his name, the date, and where he was. In the chart this gets documented as “alert and oriented times three.” But his eyes were glazed in a way that had nothing to do with the hour. There was a slight slur to his speech, barely perceptible. And a stillness that did not fit the clinical picture I had been told over the phone. Something was wrong. I did not have a number or word for it. I just knew.
The pediatric intensive care unit fellow and emergency department attending pushed back. Of course he looks tired, he has been sitting here for hours. They were not wrong. That just was not the right answer. I stood my ground. I cited what I could, the speed of correction, the risk of cerebral edema, but what I had felt in the moment of pulling back that curtain was something prior to language. Something that lived upstream of the chart.
Lost in translation
Every time a physician walks out of a patient’s room, something gets lost. The encounter has to be translated. Think of a song that has followed you through your life. Hear it unexpectedly in a grocery store and something happens before you have processed a single lyric: your body responds, a feeling arrives, a memory surfaces with eerie fidelity. Now read those lyrics on a page. It is not the same. It was never going to be the same. The lyrics are a compression: portable, shareable, indexable. The meaning lives in everything woven around them, not in the words themselves. The chart works the same way. A room full of sensory information, a body telling its story through a hundred subtle signals. Some of it never gets documented at all; the rest gets compressed into a note.
The philosopher Michael Polanyi spent years trying to articulate why so much of what we know resists being put into words. His conclusion was deceptively simple: We know more than we can tell. A significant portion of what an experienced clinician knows cannot be fully captured in language at all. It lives somewhere else, in pattern and perception. This internal library, assembled from everything perception catches that language never reaches, is what cognitive scientists call a mental model and what artificial intelligence researchers call a world model. It is the hardest kind of knowledge to build, precisely because it cannot be assembled from language alone. The chart is the lyrics. The clinical encounter is the song. And what experienced clinicians carry into every room, what takes years to build, is the capacity to hear the difference.
What artificial intelligence is actually training on
A large language model is trained on text. It learns the statistical relationships between words and concepts, and can summarize, synthesize, and recombine linguistic knowledge at a scale no human can match. But it has no sensory experience. No embodied history. It has a model of how humans describe the world in words, not a model of the world itself. The experienced clinician is a world model. The large language model is a language model. And clinical medicine is at risk of automating the second while calling it a replacement for the first.
What clinical artificial intelligence is training on is the chart, the compressed shadow of the clinical encounter. Not the room. Not the signal. The translation. A system trained on translations is not learning the original language; it is learning how that language gets described by people who spoke it. And the description is always incomplete, because language was never designed to hold the most consequential things, the things that live in the body, in perception, in the felt sense of a room. The chart is not the patient. And an artificial intelligence that has read every chart ever written has still never been in the room.
That cost is already visible. In a stress test published in Nature Medicine, a consumer-scale large language model health tool under-triaged more than half of true emergencies, directing patients with diabetic ketoacidosis to routine follow-up rather than the emergency department. It often recognized the danger in its own reasoning and still reassured the patient. The chart, in other words, looked fine.
What this means
This is not an argument against artificial intelligence in medicine. The pattern recognition that clinical artificial intelligence performs across large populations is genuinely beyond human capability. It is already saving lives. But capability is not the same as comprehension.
The most dangerous moment in the adoption of any powerful technology is not when it fails obviously. It is when it succeeds well enough that we stop questioning what it cannot see. When the artificial intelligence flags the right diagnosis often enough that we begin to trust the chart more than the room. When we optimize workflows around what the algorithm needs (more structured data, more consistent documentation, more language) and quietly, without ever making a conscious decision, begin to devalue the thing that cannot be structured or documented or fed into a model.
Before morning signout I walked down to the second floor to check on him. The pediatric intensive care unit was quiet. Except at the far end of the hall, where the largest room was lit up and humming with the particular controlled urgency that means something has happened. Barriers had been pulled to create a makeshift sterile field. There were lines going in. A breathing tube. Sedation drips keeping him still and comfortable while his brain, swollen quietly inside his skull, was given every chance to recover. His anion gap had closed. The numbers had looked fine. The chart had said he was improving.
I did not have language for what had happened in that curtained bay in the emergency department. I just knew that something in that room had spoken before the chart did. And that I had been trained, slowly and without realizing it, to listen. That is what we are at risk of losing. Not to bad intentions or careless engineering, but to an honest but consequential confusion between the lyrics and the song. Between a model of language and a model of the world. The signal was always there. The question is whether we still have the ears to hear it.
Garrett Terracciano is an anesthesiologist.