The consequences of adopting AI in medicine

Jordan Liz, PhD
Tech
December 23, 2025

Across society, AI is being adopted at a staggering rate. In medicine in particular, AI is helping physicians analyze diagnostic tests, calculate disease risk, summarize patient history, and formulate treatment plans. Health care administrators are using AI to manage and optimize workflows. Insurance companies report that AI is improving customer service, expediting claims, and detecting fraud. AI-assisted medical research is discovering new drugs to combat Parkinson’s, Alzheimer’s, cancer, and cardiovascular diseases. The public is turning to AI to make more responsible and informed choices about their own health and wellness.

Rapid adoption of AI in medicine is driven by several factors: the desire to save lives, to make scientific progress, and to increase profits, as well as techno-optimism. Techno-optimism is the belief that new technologies will solve every material problem and usher in a better future. On this view, technology is no more than a set of tools. The hope at the core of techno-optimism is that, once we have enough tools, we will be able to fix everything.

Whether or not technology in general, or AI specifically, is the answer to every problem is hotly contested. As a philosopher of technology, I’d argue we need to pay greater attention to a more basic question, namely, is AI simply a tool? This might seem like an odd question at first. While AI may potentially achieve autonomy and even surpass human intelligence, it’s clearly not there yet.

The point rather is about the broader consequences of adopting and utilizing new technologies like AI. The philosopher Langdon Winner argues that certain technologies, like nuclear power plants, change the social, political, and economic landscape around them. A society that adopts nuclear power will have to establish strict centralized systems of control and technical expertise to operate it safely. Its citizens will also have to contend with the risk of nuclear meltdown. Given the high costs involved, once a nuclear power plant is built, switching to a different energy source becomes less likely. For Winner, nuclear power plants are more than tools; they are “similar to legislative acts or political foundings that establish a framework for public order that will endure over many generations.”

AI is similar in many respects. First, adopting AI may fundamentally change the way health care and medical research are conducted. If AI proves useful in discovering new drugs or curing diseases, then it’s unlikely to be abandoned. As AI advances, it may cause the role and responsibilities of human scientists to shift. Already, projects like the AI Scientist are attempting to build a system capable of conducting scientific research without human involvement. Doctronic, the AI doctor, has already helped over 18.4 million people understand their medical issues, refill their medications, and answer questions about their health and lifestyle. This will have implications for both current and future health care workers.

Second, AI raises new risks. Aside from data privacy and algorithmic bias, there are also issues of intelligibility and accountability. Responsible use of AI arguably requires understanding how the AI functions and generates its results. If so, physicians and researchers may need to become both medical and AI experts. This imposes an extra professional obligation, one that may be increasingly difficult to meet as AI continues to advance. Relatedly, there is the question of accountability. If a physician uses an algorithm to aid with a diagnosis, and the diagnosis is incorrect, who is to blame? The physician? The hospital? The programmer? The company that developed the AI? The AI itself? Sorting out these questions will require developing new agencies and policies to address them.

Third, for patients, AI-assisted health care presents new opportunities and challenges. AI may, for instance, improve accessibility, reduce costs, and advance health care equity. It may also create new issues regarding transparency. Patients may not know whether it’s a doctor or AI making medical decisions about their treatment. As AI is increasingly adopted within health care, this may become the new normal that patients will simply have to accept.

Fourth, AI imposes an inescapable choice. Whether AI proves useful to medicine or ultimately makes doctors worse at their jobs, the fact that AI exists means that physicians and medical researchers must decide whether or not to use it. If AI can effectively treat patients and develop new treatments, then not using it may constitute a moral failure. Yet rushing toward widespread adoption of AI is also morally murky, given the risks involved. Either way, a new moral dilemma is created for current and future generations.

While AI raises some novel challenges, these kinds of considerations are not entirely unique to it. Many technologies impose moral burdens upon us. Many technologies fundamentally change society. It may turn out that AI-driven science ultimately proves the techno-optimist correct and solves everything.

For the time being, we should remain cautious and critical. For Winner, when it comes to politics, most of us think change should be slow and incremental. Our attitudes are vastly different when it comes to new technologies because we only see them as tools. We often fail to consider the broader consequences, for ourselves and others, of adopting them. Yet, while we still have time, it may be worth considering the world that AI in medicine will create and what we can do to make sure it is a better one.

Jordan Liz is an associate professor of philosophy.
