Amid the recent news about the murder of the UnitedHealthcare CEO, the issue of prior authorization has come to the forefront again. Insurance companies use prior authorization as an easy tool to control costs while presenting it as a genuine step in the health care delivery process. The recent trend is to use artificial intelligence (AI) for prior authorizations, since AI offers several advantages over human workers, further saving time and money.
Using AI for prior authorizations opens an entirely new Pandora's box in health care. In 2022, automating prior authorization with AI was reported to cut the manual workload by 50 to 75 percent. Natural language processing (NLP) can extract the relevant information from the mass of documentation submitted to insurance companies and formulate a decision. AI programs can also be trained on previous customer data and the company's past decision patterns, and the algorithms can be tweaked to approve or deny treatment or medication requests from physicians. That tweak can be a purely financial decision with nothing to do with health care delivery. AI can also, ostensibly, route queries to the appropriate reviewer faster.

A federal class action lawsuit filed in 2023 against UnitedHealth Group alleges a 90 percent error rate in the AI programs used for prior authorizations. The lawsuit also alleges that only 0.2 percent of policyholders appeal the denials, because many consider it a hassle and a fruitless effort. The company apparently used a prediction model called "nH Predict" to determine coverage for Medicare Advantage patients in post-acute care settings. According to the lawsuit, the model makes rigid and unrealistic predictions of recovery, often overruling the treating physicians' determinations. "nH Predict" is said to draw on data from a few million patients compiled over years, but the details of its machine learning algorithm are proprietary and hence not publicly known.
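To make the idea of "training on past decision patterns" concrete, here is a minimal, purely hypothetical sketch in Python. Nothing here describes any insurer's actual system; the features, the toy labeling rule, and the model choice are all assumptions for illustration. The point is that a model fitted to historical approve/deny decisions simply reproduces them at scale, financial slant included.

```python
# Hypothetical sketch: a model trained on a payer's past prior authorization
# decisions. All features, data, and the labeling rule are invented.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
n = 1000

# Features an NLP pipeline might extract from a request: patient age,
# diagnosis group, requested days of post-acute care, estimated cost.
X = np.column_stack([
    rng.integers(60, 95, n),         # age
    rng.integers(0, 5, n),           # diagnosis group
    rng.integers(1, 40, n),          # requested days of care
    rng.uniform(1_000, 30_000, n),   # estimated cost in dollars
])

# Labels are the company's historical decisions. Toy assumption: past
# reviewers tended to approve short, cheap stays and deny long ones.
y = (X[:, 2] < 14).astype(int)  # 1 = approved, 0 = denied

# The fitted model now reproduces that historical pattern on every new
# request, regardless of the clinical merit of any individual case.
model = GradientBoostingClassifier().fit(X, y)
print(model.predict([[78, 3, 25, 22_000]]))  # long stay -> [0], i.e., deny
```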
This issue brings up the "black box" problem of AI programs. While transparency is important for building trust, AI companies are encouraged to minimize it for various reasons: maintaining competitiveness, preserving copyright and intellectual property, reducing exposure to lawsuits, and preventing malicious attacks. As a result, almost all current AI programs are black boxes, and it is not entirely clear why they produce the results they do. Regulation seems to be the only way to force AI companies to increase the transparency and accountability of their programs. Even though jurisdictions such as the European Union are making some headway, globally, AI is likely to remain a black box for the near future.
Ethical issues surrounding AI programs have been addressed by initiatives such as the Asilomar AI Principles, which, in short, embrace human values, privacy, liberty, and the common good. In practice, however, the principles face considerable problems because they are interpreted differently across nations and political spectrums, and in the current environment, commercial goals overtake any ethical concerns about AI programs. One would like to assume that AI programs used in an arena as vital as health care reach their conclusions ethically.
Racial bias in AI-enabled programs in health care has already been documented by Obermeyer et al. in a paper published in the journal Science. Health systems use commercial prediction algorithms to identify and help patients with complex health needs. The authors found that a widely used program was biased against Black patients: at the same risk score, Black patients were considerably sicker than White patients. The bias arose because the algorithm used health care costs as a proxy for health needs; since less money is spent on Black patients with the same level of need, the cost-based score systematically understated their illness.
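The mechanism is easy to demonstrate. Below is a synthetic sketch of that label-choice problem, loosely modeled on the paper's finding; all numbers and group effects are invented. A model that predicts cost rather than need will, at any given risk score, make a lower-spending group look healthier than it actually is.

```python
# Synthetic sketch of label-choice bias: cost as a proxy for health need.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 50_000
sickness = rng.gamma(2.0, 2.0, n)       # true health need
group = rng.integers(0, 2, n)           # 1 = group with less access to care
spend_factor = np.where(group == 1, 0.7, 1.0)

# Equal need, lower spending for group 1 (e.g., barriers to access).
prior_cost = sickness * spend_factor + rng.normal(0, 0.3, n)
future_cost = sickness * spend_factor + rng.normal(0, 0.3, n)

# The "risk score" is predicted future cost, learned from claims history.
score = LinearRegression().fit(
    prior_cost.reshape(-1, 1), future_cost
).predict(prior_cost.reshape(-1, 1))

# Compare true sickness within a narrow band of identical risk scores:
band = (score > np.quantile(score, 0.89)) & (score < np.quantile(score, 0.91))
print("group 1:", sickness[band & (group == 1)].mean())  # noticeably sicker
print("group 0:", sickness[band & (group == 0)].mean())
```

At the same score, the lower-spending group is sicker, which is in essence what Obermeyer et al. found in the deployed algorithm.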
AI programs also face instability concerns and can deliver unpredictable results; large language models, in particular, are prone to "hallucinations." Using high-quality training data, setting clear boundaries for the use of AI models, and continuous refinement and revision have been suggested as ways to decrease hallucinations. However, such rigorous measures are lacking in the current AI models used in health care.
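What would "clear boundaries" look like in practice? One possibility, sketched below as a hypothetical design rather than any existing product, is a guardrail that never lets the model's denial be final: either the denial conflicts with published coverage limits, in which case the physician's order stands, or it is escalated to a human clinician.

```python
# Hypothetical guardrail: the model may recommend, but never finalize, a denial.
from dataclasses import dataclass

@dataclass
class Request:
    diagnosis: str
    physician_recommended_days: int

def model_recommendation(req: Request) -> str:
    """Stand-in for an opaque model's output (which may hallucinate or drift)."""
    return "deny" if req.physician_recommended_days > 14 else "approve"

# Human-authored, published coverage limits (invented values).
COVERAGE_LIMITS = {"hip replacement": 30, "stroke": 45}

def decide(req: Request) -> str:
    if model_recommendation(req) == "approve":
        return "approved"
    # Boundary 1: the model cannot overrule a physician within published limits.
    if req.physician_recommended_days <= COVERAGE_LIMITS.get(req.diagnosis, 0):
        return "approved (model overruled)"
    # Boundary 2: no automatic denial, ever; a clinician makes the final call.
    return "escalated to clinician review"

print(decide(Request("stroke", 21)))        # approved (model overruled)
print(decide(Request("knee surgery", 60)))  # escalated to clinician review
```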
A new bill in Congress, S. 4532, aiming to improve seniors' timely access to care, seeks more transparency for prior authorizations under Medicare Advantage plans. Under the bill, health insurers and other agencies contracting health care services with federal or state governments must make the absolute numbers and percentages of prior authorization denials available to the public. In the future, this kind of transparent data dissemination might force health insurance companies to tweak their algorithms and, hopefully, reduce prior authorization denials.
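The reporting itself is trivial, which is part of the argument for requiring it. Here is a sketch, with invented plan names and data, of the kind of aggregate disclosure the bill contemplates:

```python
# Toy aggregation of the counts and percentages a plan might publish.
from collections import Counter

decisions = [  # (plan, outcome) pairs, invented data
    ("Plan A", "denied"), ("Plan A", "approved"), ("Plan A", "denied"),
    ("Plan B", "approved"), ("Plan B", "approved"), ("Plan B", "denied"),
]

totals = Counter(plan for plan, _ in decisions)
denied = Counter(plan for plan, outcome in decisions if outcome == "denied")

for plan in sorted(totals):
    n, d = totals[plan], denied[plan]
    print(f"{plan}: {d} of {n} requests denied ({100 * d / n:.1f}%)")
```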
Even though CMS has released guidelines for using AI in procedures such as prior authorization and utilization management, as of now, the rule lacks rigor. While federal and state authorities struggle to regulate and direct AI ventures, the FDA has recently stepped into the arena: in May 2024, it released a list of 882 approved AI- or machine learning-enabled medical devices, which include predictive software programs. Given the proliferation of such AI programs and their widespread use in U.S. health care, monitoring and ensuring their quality and safety will be a challenge for any regulatory agency. But it needs to be done.
P. Dileep Kumar is a board-certified practicing hospitalist specializing in internal medicine. Dr. Kumar is actively engaged with professional associations such as the American College of Physicians, Michigan State Medical Society, and the American Medical Association. He has held a variety of leadership roles and has authored more than 100 publications in various medical journals and a book on rabies (Biography of Disease Series). Additionally, he has presented more than 50 papers at various national and international medical conferences. Several of his papers are widely cited in the literature and referenced in various textbooks.
Dr. Kumar has served on various hospital committees and has advanced knowledge of Centers for Medicare & Medicaid Services (CMS) initiatives such as meaningful use, value-based purchasing, and Accountable Care Organizations.
Furthermore, Dr. Kumar has served as a scientific peer reviewer for various medical journals, including the British Medical Journal, Annals of Internal Medicine, American Journal of Cardiology, Physician Leadership Journal, and European Journal of Clinical Microbiology & Infectious Diseases.