The year is 2050. You enter the room, ready to speak with your next patient, a 60-year-old white male with recent episodes of chest pain when he climbs the stairs to his office. Before sitting down to speak with him, a monitor in the room pulls up his recent lab work and current medications. Your patient note is being filled in, but nobody is typing; the computer is avidly listening in on the conversation, graciously filling in the note and also suggesting a few physical exam maneuvers for you to perform based on the reported symptoms.
As you recommend a stress test and begin to educate the patient about statins, the computer pings you that the patient has a CYP3A mutation, requiring you to adjust the statin dose. By the time you shake hands and say goodbye to your patient, the billing process has been handled by the computer, and you are on your way to the next patient.
Are we really that far away from this?
The term “artificial intelligence” was coined by computer scientist John McCarthy in 1955, defined as a “machine with intelligent behavior such as perception, reasoning, learning or communication and the ability to perform human tasks.” While this may read like a term from a sci-fi novel, tens of billions of dollars are already being poured into AI research every year, and applications directly impacting clinical care are well underway.
Take the example of drug development: several years ago, a robot named Adam searched public databases to generate nine novel hypotheses regarding catalyzed reactions in the yeast Saccharomyces cerevisiae. Adam’s robotic counterpart, Eve, uncovered that triclosan, a common toothpaste ingredient, can be used to inhibit the DHFR enzyme. Another drug with the same mechanism of action is pyrimethamine, an antimalarial. The robots build their tree of knowledge by testing thousands of hypotheses and stringing the data together in a fraction of the time it would take a team of highly educated mortals.
To take another case, a recent study published in Nature Medicine showed that a Google algorithm trained on 42,000 patient scans performed better than six radiologists in diagnosing lung cancer. This is a promising alternative to current practice, in which lung cancer diagnosis carries a false positive rate of 97.5 percent.
For medical students at the beginning of their journey, the question is: how do we train future clinicians to help accelerate the transition to artificial intelligence without abrogating the human element of the patient-physician relationship?
Medical curricula should begin to incorporate educational modules that prepare the students of today for the day-to-day realities of clinical care two to three decades from now. A review of the (unfortunately brief) literature on this topic suggests some vital ingredients.
First, students should be given a basic overview of what AI is and of their role in using it appropriately. They should be taught to capitalize on the “4 Vs” of big data in health care:
- Volume (larger amount of data today than ever before)
- Variety (different sources of the data)
- Velocity (the increasing speed of data retrieval and storage)
- Veracity (the need to ensure that data is accurate)
These align well with the realities of future medical practice, in which care will be delivered in many locations and coordinated by multidisciplinary health teams.
Literacy takes precedence over proficiency: even if a physician does not know the nuts and bolts of how a particular AI system works, he or she should be able to convey the general idea of an algorithm to a patient in the same way as describing a surgery or procedure. Furthermore, learning how to ask the right questions of one’s AI platform can help guide clinical decision-making.
Second, students must grapple with the medical ethics of using AI. For example, if a machine such as Eve misdiagnoses a patient, where does the liability fall: on the hospital that owns the robot, on the physician for not second-guessing it, or on the AI manufacturer itself? There are also serious ethical concerns about health care disparities in the use of AI.
AI expert Emily Sokol notes that minority populations tend to be under-represented in health datasets. How concerned should physicians be about over- or under-diagnosis when guiding the health decisions of these populations?
Third, students should be taught leadership and decision-making in the responsible use of AI. Physicians are often touted as leaders of multi-professional care teams; soon, they will also lead non-human caretakers, straddling the divide among patient, provider, and machine.
The communication skills needed to explain the role of an AI system to a patient will be invaluable. Integrating concepts from cognitive psychology, along with simulations in which students communicate with an AI system, can help build those skills, and this should be done in a way that does not compromise the traditional values of empathy and compassion undergirding the patient-physician relationship.
Like AI itself, these three prescriptions for success are already being implemented on some level: Vijaya Kolachalama, an assistant professor at Boston University School of Medicine, designed and now teaches introductory data science concepts to medical students. The possibilities for incorporating AI education are endless. Dr. Kolachalama notes that such modules could form the basis of preclinical or fourth-year electives taught by a data scientist, or be integrated into dual-degree programs.
While medical curricula across the nation should be applauded for responding to the opioid epidemic, the call for improved patient hand-offs, and the need for cultural respect toward patients, the looming transition to digitized care cannot be ignored. Medical students were historically not prepared for the implementation of electronic health records; today, only a third of practicing doctors are happy with how their EHR works, and many continue to rely on paper records.
By gearing tomorrow’s physicians to be more aware and proactive about the coming revolution in health care, we can give modern medical education a fresh “reboot.”
Waqas Haque is a public health and medical student.