It has been an exciting several weeks for AI enthusiasts worldwide, particularly those in health care.
OpenAI recently released GPT-4, a major upgrade over ChatGPT. Microsoft announced that its Bing search engine and the Office 365 business applications Word, PowerPoint, and Excel will run on GPT-4. Google announced the PaLM API, which enables developers to experiment with large language models (LLMs), and MakerSuite, a tool for building quick prototypes.
These generative AI announcements from OpenAI, Microsoft, and Google have raised interest around the world. The latest AI news in health care has ignited conversations about AI replacing your doctors and the possibility of robot doctors.
Many question whether doctors and all health care practitioners need to be scared for their jobs in the future.
Recent research shows how AI models perform on the United States Medical Licensing Examination (USMLE). Google's Med-PaLM 2 performed at an "expert" doctor level, scoring 85 percent on a USMLE practice test. The models were tested against 14 criteria, including scientific factuality, precision, medical consensus, reasoning, bias, and harm, and were verified by clinicians worldwide. GPT-4 passed all three parts of the USMLE, but barely.
As a trained technologist with experience in AI-related technologies and now focused on innovation in health care, I do not believe AI is ready to be our only doctor.
At my hospital, vendors pitch the latest AI-driven technology solutions to solve different challenges in health care. Evaluation becomes difficult because the use of AI can obscure deficiencies in a solution, making it hard to gauge whether the problem is actually solved.
The use of AI in health care can be helpful in two realms: diagnostic AI and symptom checker AI.
Research on the diagnostic AI front shows AI can detect breast cancer that radiologists miss. It is likely that most image diagnosis will go the AI route, with radiologists giving final approval. This raises a concern for radiologists, as health systems may need fewer of them.
The symptom checker AI world is still dangerous terrain. It can be fatal to let AI diagnose medical issues without clinical supervision. While clinicians routinely ask patients to stop using "Dr. Google" to diagnose symptoms, it is becoming increasingly difficult to stop patients from using the internet to seek medical information based on their symptoms.
My primary care doctor recently ordered blood and urine tests for my annual physical. The test results showed a few values that were outside the normal range.
Google searches on those individual test values yielded a range of potential diseases and medical issues. It was concerning. But I trusted my doctor's expert opinion and waited. After reviewing the test results, my doctor informed me that the results were fine, considering my age and overall health.
An experienced and trained primary care doctor can never be replaced by an artificial computing entity. Still, generative AI can assist primary care doctors in creating after-care visit notes.
According to the Association of American Medical Colleges, physicians are learning to partner with Dr. Google. An informed patient can be helpful for a well-designed care plan. This becomes more important over time as search engines (Google and Bing) become fully driven by AI models. In the future, patients will be talking to AI bots to diagnose symptoms, search for health care providers, compare therapeutics, and report outcomes.
Based on my informal conversations, many doctors and nurses are not losing sleep over generative AI. Many concur that AI should be used for clinical or diagnostic support, but that patients should never use AI as their first line of clinical care.
AI can assist clinicians by reducing administrative burdens, such as the paperwork and prior authorization associated with care, that take time away from patients.
The U.S. Surgeon General has raised alarms over health worker burnout. A 2022 Mayo Clinic survey showed that doctors were exhibiting symptoms of burnout, with increasing paperwork cited as a major cause. AI can help reduce paperwork through better data interoperability and electronic assistance. Microsoft's latest DAX Express announcement is a move in this direction.
Prior authorization is a huge administrative burden that can harm patients. According to a 2022 American Medical Association survey, 88 percent of physicians report that the burden associated with prior authorization is high or extremely high, and 33 percent report that prior authorization has led to a serious adverse event for a patient in their care. Using AI in this process would greatly reduce the administrative burden on clinicians and help shorten care delays for patients.
The use of AI will only grow with time. Policymakers, administrators, practitioners, and all those in support roles in the health care industry need to find ways to partner with technology innovators to put AI to meaningful use: reducing administrative burden, eliminating repetitive tasks, improving workflows, aiding diagnostic support, and better educating patients.
Medical professionals need AI to better do what they do best: treat patients and improve health care for all.
Anil Saldanha is a hospital executive.