In a Black Mirror-esque video I watched recently on YouTube, a young man with a persistent cough notices a booth labeled “Instant Doctor” as he’s waiting in a train station, and decides to give it a try. The somewhat pleasant robot voice immediately recognizes him and dutifully (creepily?) reads off his age, height, and weight. She then tells him he has two diagnoses: the first, a mild bronchial infection, is treated easily with a medicated mist, delivered right there in the little doctor pod. Pretty cool. He feels great.
The second, however, is far worse: metastatic glioblastoma multiforme, a brain tumor. The robot voice then helpfully informs him that his life expectancy has been updated to reflect this diagnosis, and that he has about eight months left to live. But not to worry, they are sending him information about palliative care referrals. Oh, and they also took care of notifying his loved ones … via TikTok.
All the access to all the data in the world cannot make someone (or something) a good physician; only humanity can do that.
The recent conversation around ChatGPT passing a physician licensing examination has led those of us in the medical education and licensure world to consider how artificial intelligence (AI) could (and will) affect the way we teach and train physicians, and the way we assess their readiness to see patients.
Osteopathic physicians (DOs) have their own set of licensure exams, called the Comprehensive Osteopathic Medical Licensing Examination of the United States (COMLEX-USA), which assesses the competency of osteopathic medical students and recent DO graduates. The exam is designed through the lens of the osteopathic philosophy, which looks at the patient as a whole person, rather than a collection of symptoms, and is based on four main tenets.
The first tenet emphasizes the connection of body, mind, and spirit, and that last part, the focus on spirit, is where AI loses out. Osteopathic physicians are trained in patient-first care, taking into account the many different factors that could be contributing to a patient’s condition. You learn those factors only by building a rapport with your patient: learning their spirit, allowing them to voice their concerns, and inviting them to become a partner in their own health maintenance.
In the scenario above, the patient is given a diagnosis, and then sent on his way with no support or comfort. No humanity.
That’s not just me talking as a physician; research has shown that patients want a connection with their physician. One study published in Mayo Clinic Proceedings found that, according to patients, the ideal physician is someone who is confident, empathetic, humane, personal, forthright, respectful, and thorough. While AI might have the respectful and thorough parts covered, the first five are characteristics only a human can truly embody.
Even Ansible Health, a company that is currently using ChatGPT to help explain certain COPD concepts to patients, says it needs a trained human professional to review the AI platform’s work.
That said, there is certainly a place for AI in health care. The possibilities are numerous: diagnostic assistance, tracking standards of care, reaching people who have a difficult time accessing a physician, maintaining wellness, making sure medications are taken on time and correctly, making sure that care is seamless between providers, and many more.
AI could be of particular use to primary care providers, who are often bogged down with more administrative tasks than their peers; it could free them to make more time for patient care. For humanity, the art of medicine.
But even if AI could give a patient a diagnosis, there would be no humanity behind it. No way to comfort a patient, to carefully walk them through treatment options, to help them make an individualized decision about which one is right for them, or to listen to their concerns. Simply put: there is no substitute for the real thing.
Jeanne Sandella is a family physician.