The patient arrived in the ER obtunded, breathing deep and fast, blood sugar over 400 mg/dL. Classic diabetic ketoacidosis. She had been symptomatic for three days.
Three days earlier, she had asked ChatGPT about her symptoms: excessive thirst, frequent urination, fatigue. The AI told her it was “probably related to dehydration and stress” and recommended increasing fluid intake and rest. It suggested calling her doctor “if symptoms don’t improve in a few days.”
She followed that advice. By day three, she was in DKA.
ChatGPT gave reasonable-sounding guidance. It wasn’t hallucinating. The advice would have been fine for someone who was actually just dehydrated and stressed.
But here is what the AI didn’t detect: the fruity smell of ketones on her breath, the Kussmaul respirations, the subtle confusion indicating altered mental status. These were the signals screaming “DKA” to any physician who examined her.
The AI read her words. It couldn’t assess her biology.
This is the sensing gap. And it is the fundamental limitation of medical AI that nobody is talking about.
The architecture of blindness
I build AI systems for medical education while maintaining an active surgical practice. I live on both sides of this collision. And I can tell you: The gap isn’t about AI being “not smart enough.” It is about AI being architecturally blind.
Humans have millions of sensory receptors constantly sampling the environment. Your patients walk into your office broadcasting thousands of signals through skin color, respiratory pattern, gait, affect, body language, and dozens of other channels you process without conscious thought.
When you see a patient who is diaphoretic and clutching their chest, you don’t need lab values to know something is wrong. Your velociraptor brain, refined by 3.8 billion years of evolution, has already activated its threat-detection system.
AI has zero sensors. It processes text. When a patient types “chest pain,” the AI reads those ten characters. You see pale skin, poor perfusion, respiratory distress, and fear.
You are not smarter than the AI. You are sensing in ways the AI architecturally cannot.
Why this matters now
Here is the data that should concern you: 60 percent of patients now Google their symptoms before appointments, and that figure predates ChatGPT. Now they are asking conversational AI that sounds confident, provides detailed explanations, and never says “I don’t know.”
Last month in my PCP’s practice:
- Patient delayed stroke evaluation because AI pattern-matched to “migraine.”
- Patient stopped essential medication because AI missed interaction context with renal insufficiency.
- Patient brought AI-generated treatment plan that would have caused acute renal failure.
Also last month:
- Patient’s AI research led to earlier cancer diagnosis.
- Patient’s prepared questions made our appointment three times more efficient.
- Patient left the visit understanding their condition better than after a typical appointment.
Same technology. Different outcomes. The variable? Appropriate use.
The personal stake
I am not building this from academic interest. My wife is navigating her second cancer recurrence in a year. She practically lives on ChatGPT, researching treatments, understanding side effects, preparing questions for oncology visits. I am a facial plastics guy. I know nothing about ovarian cancer. I use Claude because I am sophisticated, I guess?
Sometimes the AI helps her become a more informed patient. Sometimes it generates anxiety with incomplete information. Sometimes it misses critical context her oncologist would catch immediately.
This taught me something important: The answer isn’t “don’t use AI” or “AI will solve everything.” The answer is “here is how to use AI appropriately while understanding its limitations.”
Neither patients nor physicians have received guidance for this. So I built it.
A framework for both sides
I just launched aiintheexamroom.com, a free curriculum with two pathways. One teaches patients when to trust AI, when to call 911, and how to spot hallucination. The other teaches physicians how to integrate AI-informed patients into practice without losing authority or increasing liability.
Here is what works in actual exam rooms:
For the patient encounter:
Start with curiosity: “Did you look this up online or ask any AI about it?”
Most patients will be honest. Some will be sheepish. Immediately normalize it: “That is completely normal. What did it tell you?”
This accomplishes four things:
- Makes AI use explicit instead of hidden.
- Shows you are not threatened by technology.
- Reveals their actual concern (search query = fear).
- Gives you diagnostic information.
Then validate or correct: “AI got the pattern right. Now let me add what AI couldn’t detect…”
Narrate your examination:
“I am checking your skin color; you are not pale or clammy, which is reassuring. When I press on your chest wall here, I can reproduce your exact pain. That is something AI literally cannot do remotely. This is why you need a human exam.”
Make the sensing gap explicit. Patients need to understand that the AI processes their text while you assess thousands of biological signals.
Teach the framework:
“AI is for information and preparation, not diagnosis and treatment. Use it to understand your condition and prepare questions. Then come see me for examination and clinical judgment.”
This positions AI as a tool, not a replacement. It maintains your authority while acknowledging the reality that patients will use these tools regardless of your opinion.
The documentation reality
Here is what nobody wants to say out loud: When AI-informed patients make medical decisions and those decisions cause harm, you are liable. Not OpenAI. Not Anthropic. Not Google.
You carry the malpractice insurance. You face the consequences.
So document appropriately:
- “Patient reported consulting AI regarding symptoms.”
- “Patient’s understanding based on [source] reviewed and corrected.”
- “Physical exam findings not available to AI: [specific findings].”
This protects you three ways: It shows you addressed AI information (not negligent), documents your unique value-add (examination findings), and establishes you as the final decision-maker (clear liability chain).
What patients need to know
The curriculum teaches patients five essential questions to ask any medical AI:
- “What are you basing this on?” (source quality)
- “What can you NOT detect remotely?” (sensing gap)
- “What would require emergency evaluation?” (red flags)
- “What are you uncertain about?” (humility check)
- “What should I ask my actual doctor?” (human-in-loop)
Question #2 has saved lives. When patients ask, “What can’t you detect remotely?” a good AI will list: diaphoresis, perfusion, respiratory effort, mental status, skin color. Patients realize: “Wait, I have several of those. Maybe I need evaluation, not reassurance.”
The velociraptor test
I tell patients: “Until AI has to wrestle a velociraptor for dinner, it will never have the contextual awareness evolution gave you.”
When your body says “THREAT,” not just “I am worried” but that deep evolutionary alarm, that is not anxiety. That is threat detection debugged by billions of years of natural selection.
AI’s pattern recognition was debugged by… training on text for a few years.
In conflicts between evolutionary threat detection and algorithmic pattern matching, bet on evolution.
This especially matters for parents. Maternal instinct is threat detection refined by millions of years of “if your baby dies, your genes don’t continue.” Trust that over any algorithm.
The bottom line
Fighting AI in medicine is pointless. It is already here. Patients are using it with or without our blessing.
Our choice isn’t “AI or humans.” Our choice is “appropriate integration or dangerous chaos.”
The sensing gap is fundamental and unfixable. AI doesn’t need better training to detect diaphoresis or perfusion. It needs sensors. Which it doesn’t have.
Pattern recognition without environmental sensing is hollow. Intelligence cannot be separated from sensing.
This is why your examination matters. This is why telemedicine has limits. This is why AI cannot replace clinical judgment.
The future of medicine is humans using AI as a tool while maintaining the irreplaceable value of biological sensing.
Build the bridge. Teach the framework. Make AI work for patient care instead of against it.
And remember: You can smell ketoacidosis. AI cannot. That will remain true no matter how sophisticated the algorithms become.
John C. Ferguson is a cosmetic surgeon.