Are the AI safeguards currently in place sufficient to prevent a doomsday scenario?

Arthur Lazarus, MD, MBA
Tech
December 2, 2024

My vote for Time’s Person of the Year is Artificial Intelligence (AI). I think AI is the most talked about and hyped (overhyped?) development of 2024, already transforming operations across numerous sectors, from manufacturing to financial services. In the health sector, AI has ushered in groundbreaking advancements in several areas, including psychotherapy, where it has begun to substitute for therapists, an ominous portent for physicians. AI systems that learn independently and autonomously – as opposed to iteratively – are the ones to keep an eye on.

Iterative learning and autonomous learning differ in terms of process and decision-making scope. Iterative learning involves a step-by-step process where an AI model is trained through repeated cycles or iterations. Each cycle refines the model based on errors or feedback from the previous iteration. This type of learning often involves human supervision, with periodic interventions to adjust hyperparameters, refine datasets, or evaluate outcomes. In a health care setting, iterative AI might be used in diagnostic tools that analyze imaging data, where radiologists provide feedback on the AI’s initial assessments, allowing the system to learn and improve its diagnostic accuracy.
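To make the distinction concrete, here is a minimal sketch of the iterative pattern in Python: a model is trained in repeated cycles, with each cycle folding in expert-corrected labels. The data, the review step, and the feedback mechanics are hypothetical stand-ins, not a real diagnostic pipeline.

```python
# A minimal, hypothetical sketch of iterative learning with human review.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))          # stand-in for imaging features
y_train = (X_train[:, 0] > 0).astype(int)     # stand-in reference labels

model = SGDClassifier(loss="log_loss", random_state=0)

for cycle in range(5):
    model.partial_fit(X_train, y_train, classes=[0, 1])
    # New cases arrive; the model proposes reads and a radiologist corrects them.
    X_new = rng.normal(size=(50, 10))
    proposed = model.predict(X_new)
    reviewed = (X_new[:, 0] > 0).astype(int)  # placeholder for expert-corrected labels
    print(f"cycle {cycle}: agreement with reviewer {np.mean(proposed == reviewed):.2f}")
    # The corrected cases become training data for the next cycle.
    X_train = np.vstack([X_train, X_new])
    y_train = np.concatenate([y_train, reviewed])
```

The defining feature is the human in the loop: the model never updates on data a reviewer has not vetted.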

In contrast, autonomous learning refers to an AI system’s ability to independently acquire knowledge or adapt its behavior in real time without explicit instructions or frequent human input. These systems are self-guided, seeking out and utilizing data or experiences on their own to enhance performance. They adapt to changing environments and can learn new tasks or optimize their performance in open-ended scenarios. Autonomous AI in health care could potentially manage routine tasks such as patient monitoring or medication management, making decisions based on clinical signs and symptoms. Robotic surgery systems, for example, can make real-time adjustments during procedures, utilizing AI to enhance precision and efficiency.
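By contrast, an autonomous system updates itself directly from its own data stream. The sketch below, again purely illustrative, shows a patient monitor that learns its own baseline from streaming vitals (via Welford’s online mean/variance update) and raises alerts with no human retraining step; the simulated heart-rate stream and the three-sigma alarm rule are assumptions made for the example.

```python
# A minimal, hypothetical sketch of autonomous (online) adaptation.
import numpy as np

rng = np.random.default_rng(1)
mean, var, n = 0.0, 0.0, 0

def update(x, mean, var, n):
    """Welford's online update of the running mean and variance."""
    n += 1
    delta = x - mean
    mean += delta / n
    var += (delta * (x - mean) - var) / n
    return mean, var, n

for t in range(1000):
    # Simulated heart-rate stream, with a physiological shift late in the run.
    hr = 70 + rng.normal(scale=5) + (20 if t > 800 else 0)
    mean, var, n = update(hr, mean, var, n)
    if n > 30 and abs(hr - mean) > 3 * np.sqrt(var):
        print(f"t={t}: alert, hr={hr:.0f} outside learned baseline "
              f"{mean:.0f} +/- {3 * np.sqrt(var):.0f}")
```

No label, reviewer, or retraining cycle appears anywhere: the system’s notion of “normal” is whatever it has learned from its own inputs.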

Both approaches are valuable and are often combined in practice. For instance, iterative learning might pre-train a model that subsequently engages in autonomous learning during deployment, fine-tuning its abilities based on real-world data. This combination allows for both structured development and dynamic adaptability.

A compelling example of combining iterative and autonomous AI in health care is the development and deployment of personalized medicine platforms, particularly in oncology. Iterative AI is first used to train models on large datasets comprising genetic information, treatment outcomes, and patient histories; once deployed, autonomous AI analyzes new patient data and recommends personalized treatment plans based on the insights derived from its extensive pre-training.
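A rough sketch of that two-phase pattern might look like the following. The data, feature names, and outcomes are all invented for illustration; this is not a real oncology platform.

```python
# Hypothetical sketch: iterative pre-training, then autonomous updating.
import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(2)

# Phase 1: iterative pre-training on historical records
# (stand-ins for genomic features and observed treatment response).
X_hist = rng.normal(size=(1000, 20))
y_hist = (X_hist[:, :3].sum(axis=1) > 0).astype(int)
model = SGDClassifier(loss="log_loss", random_state=0)
for epoch in range(10):                         # repeated supervised passes
    model.partial_fit(X_hist, y_hist, classes=[0, 1])

# Phase 2: the deployed model adapts as new patients arrive.
for batch in range(20):
    X_new = rng.normal(size=(25, 20))
    recommendation = model.predict(X_new)       # personalized treatment signal
    y_observed = (X_new[:, :3].sum(axis=1) > 0).astype(int)  # outcomes observed later
    model.partial_fit(X_new, y_observed)        # fine-tune on real-world data
    print(f"batch {batch}: recommendations matched outcomes "
          f"{np.mean(recommendation == y_observed):.2f}")
```

Structured development happens in phase one; dynamic adaptability, and most of the safety questions this essay raises, live in phase two.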

If you watch a lot of science fiction, as I do, then perhaps the fear of autonomous AI systems “taking over” and eliminating human functions – or humans themselves – feels both familiar and unsettling. It is a topic fueled not only by science fiction and fantasy but also by philosophical debate. Former Google chairman and CEO Eric Schmidt’s new book Genesis: Artificial Intelligence, Hope, and the Human Spirit has been described as “[a] profound exploration of how we can protect human dignity and values in an era of autonomous machines.” I’m worried about protecting our species – let alone our “spirit.”

Theoretically, several factors currently prevent doomsday scenarios. These can be divided into technical limitations, ethical safeguards, social structures, and systemic dependencies.

Technical limitations

Autonomous AI systems are highly specialized and lack general intelligence. While they excel in narrow tasks, they do not possess the creative, emotional, or abstract thinking capabilities required for broad, human-like cognition. Current AI systems operate within strict parameters, and their decision-making is bound by the data and algorithms they are trained on. Even advanced systems that can adapt or learn in real-time are limited in scope and do not have the capacity for complex, independent planning or motivation—essential components for “taking over.”

Ethical safeguards

AI development is guided by ethical principles, regulations, and oversight designed to prevent harm. Developers and governments are implementing frameworks such as AI ethics guidelines, explainability requirements, and safety measures to ensure AI systems act in accordance with human values. Examples include the European Union’s AI Act and AI ethical principles recommended by the U.S. Department of Defense and organizations like OpenAI (there are 200 or more guidelines and recommendations for AI governance worldwide). These guardrails aim to prevent misuse or unintended consequences.

Social structures

AI systems are tools created, owned, and operated by humans or organizations. They lack autonomy in the sense of independence from these structures. Governments, institutions, and corporations establish rules and maintain oversight over how AI is deployed, ensuring that it serves specific purposes and remains under human control. Social and political systems also resist relinquishing significant power to autonomous systems due to economic, ethical, and existential concerns.

Systemic dependencies

Autonomous AI systems depend on infrastructure, energy, and maintenance, all of which remain under human control. They cannot sustain themselves without these resources. Furthermore, AI systems often require human input or oversight for ongoing relevance and adaptation, particularly in unpredictable environments.

Preventing harm

The idea of AI systems intentionally “eliminating” humans assumes a level of sentience, malice, and motive that current AI lacks. AI systems do not have desires, self-preservation instincts, or moral reasoning. Any harm caused by AI arises from flawed design, inadequate safeguards, or malicious use by humans – not from the systems themselves. Efforts to mitigate such risks focus on robust design, rigorous testing, and accountability in AI deployment.

Future considerations

As AI evolves, ensuring its alignment with human values and keeping it under human control become increasingly critical. This is especially true for general AI, also known as Artificial General Intelligence (AGI): a type of AI that possesses the ability to understand, learn, and apply knowledge across a wide range of tasks at a level comparable to human intelligence. The development of AGI is a major goal of AI research, but it remains largely theoretical at this point, as current AI systems are specialized and lack the generalization capabilities of human cognition.

Public discourse, interdisciplinary collaboration, and regulatory oversight will play pivotal roles in preventing scenarios where AI could displace humans in destructive ways. While theoretical risks exist, the current state of AI lacks the capacity or motive for such dramatic outcomes. Continued vigilance in research, ethical frameworks, and societal control will be needed to ensure that AI systems augment human capabilities rather than threaten them.

To boldly go

If you are not convinced of that future reality, I suggest you watch the original Star Trek episode “The Ultimate Computer.” An advanced artificially intelligent control system, the M-5 Multitronic unit, malfunctions and wages real war rather than the simulated war games it was built for, putting the Enterprise and a skeleton crew at risk. Kirk disables M-5, but he must gamble that the humanity of an opposing starship captain will keep him from retaliating against the Enterprise. The Enterprise is spared. Kirk tells Mr. Spock that he knew the captain personally: “I knew he would not fire. An advantage of man versus machine.”

God help us should we lose that advantage.

Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia, PA. He is the author of several books on narrative medicine, including Medicine on Fire: A Narrative Travelogue and Story Treasures: Medical Essays and Insights in the Narrative Tradition.
