Artificial intelligence in medicine raises legal and ethical concerns

Sharona Hoffman, JD
Tech
September 29, 2019

The use of artificial intelligence in medicine is generating great excitement and hope for treatment advances.

AI generally refers to computers’ ability to mimic human intelligence and to learn. For example, by using machine learning, scientists are working to develop algorithms that will help them make decisions about cancer treatment. They hope that computers will be able to analyze radiological images and discern which cancerous tumors will respond well to chemotherapy and which will not.

But AI in medicine also raises significant legal and ethical challenges. Chief among these are concerns about privacy, discrimination, psychological harm, and the physician-patient relationship. In a forthcoming article, I argue that policymakers should establish a number of safeguards around AI, much as they did when genetic testing became commonplace.

Potential for discrimination

AI involves the analysis of very large amounts of data to discern patterns, which are then used to predict the likelihood of future occurrences. In medicine, the data sets can come from electronic health records and health insurance claims but also from several surprising sources. AI can draw upon purchasing records, income data, criminal records, and even social media for information about an individual’s health.
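The kind of prediction described above can be pictured as a score that weights signals from very different data sources. The following is a minimal sketch only; the feature names and weights are invented for illustration and do not come from any real clinical model.

```python
# Toy illustration: a linear risk score combining signals from
# disparate data sources. Feature names and weights are hypothetical.

def risk_score(features: dict, weights: dict) -> float:
    """Weighted sum of whatever signals the model was trained on."""
    return sum(weights.get(name, 0.0) * value for name, value in features.items())

# Hypothetical weights an algorithm might learn from historical data.
WEIGHTS = {
    "er_visits_last_year": 0.8,
    "tobacco_purchases_monthly": 0.5,   # purchasing records, not clinical data
    "income_decile": -0.2,              # socioeconomic data, not clinical data
}

patient = {"er_visits_last_year": 2, "tobacco_purchases_monthly": 4, "income_decile": 3}
print(round(risk_score(patient, WEIGHTS), 2))  # → 3.0
```

Note that two of the three inputs in this sketch are nonclinical, which is precisely why such scores can be built by entities far outside the health care system.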

Researchers are already using AI to predict a multitude of medical conditions. These include heart disease, stroke, diabetes, cognitive decline, future opioid abuse, and even suicide. As one example, Facebook employs an algorithm that makes suicide predictions based on posts with phrases such as “Are you okay?” paired with “Goodbye” and “Please don’t do this.”
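Phrase matching of the sort described above can be sketched very simply. This is an assumption-laden illustration, not Facebook's actual algorithm: the phrase list and the two-phrase threshold are invented for the example.

```python
# Minimal sketch of keyword-based flagging. The phrase list and the
# two-distinct-phrases rule are assumptions for illustration only.

CONCERN_PHRASES = {"are you okay", "please don't do this", "goodbye"}

def flag_for_review(post: str, thread_replies: list) -> bool:
    """Flag a post when it and its replies together contain
    at least two distinct concern phrases."""
    text = " ".join([post, *thread_replies]).lower()
    hits = {p for p in CONCERN_PHRASES if p in text}
    return len(hits) >= 2

print(flag_for_review("Goodbye, everyone.", ["Are you okay??"]))  # True
print(flag_for_review("Goodbye, summer vacation!", []))           # False
```

Even this trivial version shows why such systems misfire: "goodbye" appears in plenty of innocuous posts, so context and thresholds matter enormously.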

This predictive capability of AI raises significant ethical concerns in health care. If AI generates predictions about your health, I believe that information could one day be included in your electronic health records.

Anyone with access to your health records could then see predictions about cognitive decline or opioid abuse. Patients’ medical records are seen by dozens or even hundreds of clinicians and administrators in the course of medical treatment. Additionally, patients themselves often authorize others to access their records: for example, when they apply for employment or life insurance.

Data broker industry giants such as LexisNexis and Acxiom are also mining personal data and engaging in AI activities. They could then sell medical predictions to any interested third parties, including marketers, employers, lenders, life insurers, and others. Because these businesses are not health care providers or insurers, the HIPAA Privacy Rule does not apply to them. Therefore, they do not have to ask patients for permission to obtain their information and can freely disclose it.

Such disclosures can lead to discrimination. Employers, for instance, are interested in workers who will be healthy and productive, with few absences and low medical costs. If they believe certain applicants will develop diseases in the future, they will likely reject them. Lenders, landlords, life insurers, and others might likewise make adverse decisions about individuals based on AI predictions.

Lack of protections

The Americans with Disabilities Act does not prohibit discrimination based on future medical problems. It applies only to current and past ailments. In response to genetic testing, Congress enacted the Genetic Information Nondiscrimination Act. This law prohibits employers and health insurers from considering genetic information and making decisions based on related assumptions about people’s future health conditions. No law imposes a similar prohibition with respect to nongenetic predictive data.

AI health prediction can also lead to psychological harm. For example, many people could be traumatized if they learn that they will likely suffer cognitive decline later in life. It is even possible that individuals will obtain health forecasts directly from commercial entities that bought their data. Imagine obtaining the news that you are at risk of dementia through an electronic advertisement urging you to buy memory-enhancing products.

When it comes to genetic testing, patients are advised to seek genetic counseling so that they can thoughtfully decide whether to be tested and better understand test results. By contrast, we do not have AI counselors who provide similar services to patients.

Yet another concern relates to the doctor-patient relationship. Will AI diminish the role of doctors? Will computers be the ones to make predictions, diagnoses, and treatment suggestions, so that doctors simply implement the computers’ instructions? How will patients feel about their doctors if computers have a greater say in making medical determinations?

These concerns are exacerbated by the fact that AI predictions are far from infallible. Many factors can contribute to errors. If the data used to develop an algorithm are flawed – for instance, drawn from medical records that contain errors – the algorithm’s output will be incorrect. Therefore, patients may suffer discrimination or psychological harm when, in fact, they are not at risk of the predicted ailments.
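The point that flawed records produce flawed predictions can be made concrete with a deliberately simple toy model that memorizes the majority outcome for each feature value. All records below are invented; the model is a caricature, not a real method.

```python
# Toy illustration of "flawed data in, flawed prediction out": this
# model memorizes the most common outcome per feature value, so errors
# in the training records propagate directly into its predictions.

from collections import Counter, defaultdict

def train(records):
    """Map each feature value to its most common recorded outcome."""
    by_feature = defaultdict(Counter)
    for feature, outcome in records:
        by_feature[feature][outcome] += 1
    return {f: c.most_common(1)[0][0] for f, c in by_feature.items()}

clean = [("smoker", "high_risk"), ("smoker", "high_risk"), ("nonsmoker", "low_risk")]
# The same data with a transcription error in two records:
flawed = [("smoker", "low_risk"), ("smoker", "low_risk"), ("nonsmoker", "low_risk")]

print(train(clean)["smoker"])   # high_risk
print(train(flawed)["smoker"])  # low_risk: the record error becomes the prediction
```

Real models average over many more records, but the principle is the same: systematic errors in the underlying charts become systematic errors in the forecasts.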

A call for caution

What can be done to protect the American public? I have argued in past work for the expansion of the HIPAA Privacy Rule so that it covers anyone who handles health information for business purposes. Privacy protections should apply not only to health care providers and insurers, but also to commercial enterprises. I have also argued that Congress should amend the Americans with Disabilities Act to prohibit discrimination based on forecasts of future diseases.

Physicians who provide patients with AI predictions should ensure that patients are thoroughly educated about the pros and cons of such forecasts. Experts should counsel patients about AI just as trained professionals do about genetic testing.

The prospect of AI can overawe people. Yet, to ensure that AI truly promotes patient welfare, physicians, researchers, and policymakers must recognize its risks and proceed with caution.

Sharona Hoffman is a professor of health law and bioethics, Case Western Reserve University, Cleveland, OH. This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image credit: Shutterstock.com

Founded in 2004 by Kevin Pho, MD, KevinMD.com is the web’s leading platform where physicians, advanced practitioners, nurses, medical students, and patients share their insight and tell their stories.
