Well before the advent of ChatGPT, popular culture was exploring how technology might affect health care, often with a dystopian bent.
Take, for example, the 2013 sci-fi movie Elysium, set in 2154 (spoilers ahead). Matt Damon’s character, Max, is exposed to a lethal dose of radiation when his factory supervisor threatens to fire him if he doesn’t perform a dangerous task. Because he can’t afford to lose his job, Max does it, but when he steps into the room, his face tells us what will happen next. His health—and life—will be forfeited to the production line.
Cut to Max lying on a metal table, a robot looming over him. In an electronic voice, the robot/doctor tells him he has mere days to live. The impersonal message is punctuated when the robot dispenses medication to at least relieve the symptoms of radiation poisoning—but instead of placing the bottle of pills in Max’s outstretched hand, it dispassionately drops it onto his body. The robot then exits, leaving Max to grapple alone with his sudden change of fate and imminent death.
I thought of this movie when I read the April 28, 2023, article in JAMA Internal Medicine, “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum.”
Much attention is being paid to the study’s conclusion that the chatbot responses were rated as more empathetic. But it’s important to note that this was not a “chatbot versus human” comparison under real-world clinical conditions; the patient questions and physician responses were drawn from the online social media forum r/AskDocs on Reddit.
ChatGPT and other generative AI chatbots have, in recent months, captured the public’s imagination. The JAMA Internal Medicine study further captures our attention because its findings upset the longstanding popular culture paradigm of the dystopian robot doctor of the future, as portrayed in movies such as Elysium.
Let’s jump back to the movie for a minute. The elite live on the utopian space station Elysium, which orbits the Earth and provides a high-tech paradise away from the squalor of a crumbling and unhealthy planet. Any medical condition can be cured on Elysium by lying in a treatment chamber coded to the patient’s DNA. (The movie doesn’t explain how this tech works; it simply exists in this future, and we’re asked to accept it.) One can easily envision a kind, benevolent, ChatGPT-like digital doctor fitting into a future world like Elysium.
But the chambers exist only on Elysium and not on Earth. The implication is that the rich and powerful have selfishly chosen not to share the technology with the lowly working classes left planet-side, even for children dying of leukemia. And certainly not for Max.
The movie makes us consider our current world: where AI will fit into health care, and whether, as these tools progress, they will be available equally to all. On my recent rewatch, it struck me how the underlying theme seems even more urgent now than when the movie came out ten years ago: Is health care fundamentally a human right?
Will the future hold a utopian blend of human and machine intelligence as some thought leaders, such as Dr. Eric Topol, author of the landmark book The Patient Will See You Now, envision? Or, instead, a worsening health care divide along class lines as portrayed in dystopian movies such as Elysium?
Recent articles have exposed insurance companies’ use of AI algorithms not to improve the patient experience but to maximize profit—showing us this dystopia already exists. From AI-driven denials of coverage for seniors on Medicare Advantage plans to algorithm-driven denials at Cigna that bypassed physician review, current evidence shows that not everyone has altruistic goals for AI in the health care space.
The answers are unlikely to be simple. In my speculative novel, The Algorithm Will See You Now (recently published but written before the advent of ChatGPT), I explore how both utopian and dystopian outcomes might coexist. The book is not only about a possible future for AI in health care but also an exploration of how easily altruism can become warped in a profit-driven system.
I think we should be talking more about how, ultimately, the limitations of AI in health care may lie not in the technology but in whom we give the power to wield it. The physicians—or the insurance giants. The medical experts—or the C-suite.
We need to answer, as a society, whether we truly believe health care is a fundamental human right—and act accordingly.
Jennifer Lycette is a novelist, award-winning essayist, rural hematology-oncology physician, wife, and mom. Mid-career, Dr. Lycette discovered the power of narrative medicine on her path back from physician burnout and has been writing ever since. Her essays can be found in The Intima, NEJM, JAMA, and other journals. She can be reached on Instagram, LinkedIn, Facebook, and Mastodon.
Her books explore the overarching theme of humanism in medicine. Her first novel, The Algorithm Will See You Now (Black Rose Writing Press), a near-future medical thriller, is available now. Her second novel, The Committee Will Kill You Now, a prequel in the form of a near-historical medical suspense, is out 11/9/23 and available for preorder now in paperback and on Kindle.