He spent his youth memorizing lungs. That was how he learned to read chest X-rays: not by chasing abnormalities, but by studying thousands of perfectly normal films until his eyes could sense when something was ever so slightly wrong. “If you don’t know normal,” he would tell residents, “you’ll never understand abnormal.”
He was a chest physician, not a radiologist, yet his skill with chest imaging became legendary. At Taipei Veterans General Hospital, he was the undefeated champion of chest X-ray interpretation, the clinician other clinicians turned to when a film contained a shadow too subtle for most eyes.
After retirement, he continued serving: part-time clinics, community volunteering, and teaching whenever someone asked. Medicine, to him, was not employment. It was responsibility. It was memory. That is why what happened recently shook him so deeply.
He opened a chest X-ray on a new AI-based viewing system. A bright red dot covered the area he needed to see.
The AI had flagged a “suspected consolidation.” Fine. A suggestion is acceptable. The problem was that the overlay could not be moved, removed, or dimmed. He tried every menu. Nothing worked.
The nurse apologized. “The system doesn’t let us change it. And the AI-generated report prints automatically. The doctor just signs.”
A man who once taught generations to see subtle pathology now found himself unable to view the raw anatomy beneath an algorithm’s guess. The film was no longer his to interpret. “If I can’t see the original,” he said quietly, “how can I know what’s true?”
The deeper pain came later.
He observed younger physicians reading films. They did not methodically scan the costophrenic angles. They did not examine retrocardiac spaces. They did not trace bronchovascular markings. Their eyes went straight to the red dot.
He felt an ache he did not expect at this stage of life. “Maybe I’m old,” he said. “Or maybe medicine really has changed.”
Then he added something he had never told anyone before: “Judgment comes from memory, correct memory. If your first memory is a shortcut, your future decisions will always be warped.”
Cognitive science explains his discomfort:
- Automation bias makes clinicians accept algorithm suggestions too easily.
- Anchoring bias fixes the eye on the first highlighted region.
- A salient overlay hijacks selective attention, narrowing the visual search prematurely.
- Cognitive offloading weakens skill over time.
But deeper than these theories is the truth he spent a lifetime teaching: Radiologic mastery is built on internalizing normality, not chasing abnormality.
That is the “correct bias” of medicine: a bias toward accuracy, a bias toward anatomy, a bias toward truth. Not a bias toward a machine’s first guess.
Outside the hospital, companion AIs now “remember” users’ routines, speech, and emotions. Inside the hospital, imaging AI begins to “remember” its own overlays and predictions. The machine’s memory grows stronger. The clinician’s memory grows weaker. That imbalance frightened him far more than the red dot itself.
He does not resist technology. He has lived through analog films, PACS transitions, digital archives, and speech-to-text systems. He welcomes tools that expand the human eye. But he will not accept a system that prevents the human eye from seeing.
“For medicine to stay medicine,” he said, “the physician must see first. The AI may comment second. Not the other way around.”
The solution is simple:
- Physicians must always be able to view the raw image.
- Overlays must be optional, adjustable, and removable.
- AI impressions must remain separate from clinical impressions.
- Clinicians must retain the autonomy to disagree without friction.
A red dot must never replace a lifetime of expertise.
He still volunteers. Still teaches. Still reads films with the same careful dignity. But when he walks into the reading room and sees younger doctors looking only at the overlay, he feels a quiet sadness. Not because he is aging. But because medicine may be forgetting something essential: Clinical memory must be built on truth, not shortcuts.
And no AI system, no matter how advanced, should ever stand between a physician and the image that needs to be seen.
Gerald Kuo, a doctoral student in the Graduate Institute of Business Administration at Fu Jen Catholic University in Taiwan, specializes in health care management, long-term care systems, AI governance in clinical and social care settings, and elder care policy. He is affiliated with the Home Health Care Charity Association and maintains a professional presence on Facebook, where he shares updates on research and community work. Kuo helps operate a day-care center for older adults, working closely with families, nurses, and community physicians. His research and practical efforts focus on reducing administrative strain on clinicians, strengthening continuity and quality of elder care, and developing sustainable service models through data, technology, and cross-disciplinary collaboration. He is particularly interested in how emerging AI tools can support aging clinical workforces, enhance care delivery, and build greater trust between health systems and the public.