How responsible AI can benefit patients and clinicians

Kathy Ford
Tech
February 3, 2024

The impact of artificial intelligence (AI) is palpable across the health care spectrum, from aiding in early disease detection through image analysis to streamlining administrative tasks. Regulatory agencies recognize the need for rapid integration of health care AI solutions, as demonstrated by the FDA’s clearance of over 500 AI solutions classified as Software as a Medical Device (SaMD).

However, AI developers often prioritize speed over meticulous validation, which can compromise the depth of continuous performance monitoring and validation. Given the critical nature of medical decisions, health care AI has unique requirements.

Machine learning (ML) models can be fragile in the face of changing clinical practice and inevitable data drift. Reduced data quality over time and sub-par model outputs can cause patient harm. In addition, transferring a model from one hospital system to another can prove challenging due to the complexity of the data.

To derive value from AI and ML implementations, developers must use responsible AI that aligns with five fundamental principles: It must be useful, safe, equitable, secure, and transparent. Nowhere is this more important than in the treatment of patients with cancer.

1. Useful

AI solutions must be designed to address specific health care challenges and deliver meaningful improvements in patient care and operational efficiency.

A fundamental test of an AI model's usefulness is whether it applies to a specific clinical context and solves real-world problems. Usefulness should translate into the quadruple aim of improving population health, enhancing patient satisfaction, reducing costs, and improving clinician work-life balance.

Here are two ways responsible AI has proven useful:

Increase positive patient outcomes. Implementing a “closing the loop” strategy using predictive insights into emergency department (ED) visits and early interventions for symptomatic or at-risk cancer patients can reduce ED visits by 30%.

Improve clinician efficiency. The ability to analyze large volumes of data and provide insight is a valuable time-saving benefit that was previously impractical for clinicians to achieve on their own. With AI applied in the clinical setting, hidden trends in patient data are surfaced, allowing physicians to pre-empt adverse events while reducing the burden of gathering data.

These findings highlight the positive impact of AI-driven solutions on patient outcomes and overall health care experiences.

2. Safe

Patient safety is paramount. AI solutions must be rigorously tested and monitored to ensure they do not harm patients or introduce errors into clinical workflows.

Developers venturing into health care AI integration must understand the unique character of every hospital and its patient population. One approach to deliberate implementation of responsible AI is through extensive model validation during development, continuous performance monitoring, and swift issue resolution:

Extensive model validation. This process ensures high performance and fairness across sensitive demographic subgroups. It involves thorough testing and validation against diverse datasets to ensure models provide accurate and unbiased results for clinicians across different patient populations.
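Subgroup validation can be sketched in a few lines: score the model separately on each demographic subgroup and compare discrimination. This is a minimal illustration on synthetic data; the group labels, sample sizes, and the AUC-gap comparison are assumptions for the example, not any vendor's actual validation pipeline.

```python
import numpy as np

def auc(y_true, y_score):
    """Mann-Whitney AUC: probability a random positive case outranks a random negative."""
    pos, neg = y_score[y_true == 1], y_score[y_true == 0]
    return float((pos[:, None] > neg[None, :]).mean())

rng = np.random.default_rng(0)
n = 2000
risk = rng.normal(size=n)                        # model risk scores (synthetic stand-in)
y = (risk + rng.normal(size=n) > 0).astype(int)  # observed outcomes (synthetic)
group = rng.choice(["A", "B"], size=n)           # demographic attribute (hypothetical)

# Compare discrimination across subgroups; a large gap would flag potential bias
aucs = {g: auc(y[group == g], risk[group == g]) for g in ("A", "B")}
gap = abs(aucs["A"] - aucs["B"])
print(aucs, gap)
```

In practice the same comparison would be run for calibration and other metrics, and across every subgroup the deployment serves, not just two.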


Continuous performance monitoring. Automated alerting, data transformations, and ML algorithms should track the performance of the model in real-world clinical settings. Performance measures should include prediction volume, data drift, prediction drift, label drift, model drift, discrimination, and calibration.
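As an illustration of one of the measures listed above, data drift is commonly tracked with the population stability index (PSI), which compares a live feature distribution against its training-time reference; a frequent rule of thumb reads PSI above 0.1 as moderate drift and above 0.2 as significant. This is a generic sketch on synthetic data, with an assumed alert threshold, not the monitoring stack of any particular product.

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index between a reference and a live distribution."""
    edges = np.quantile(expected, np.linspace(0, 1, bins + 1))
    edges[0], edges[-1] = -np.inf, np.inf            # catch out-of-range live values
    e = np.histogram(expected, edges)[0] / len(expected)
    a = np.histogram(actual, edges)[0] / len(actual)
    e, a = np.clip(e, 1e-6, None), np.clip(a, 1e-6, None)
    return float(np.sum((a - e) * np.log(a / e)))

rng = np.random.default_rng(1)
baseline = rng.normal(0.0, 1.0, 5000)   # feature values at training time
stable = rng.normal(0.0, 1.0, 5000)     # live values, no drift
shifted = rng.normal(0.5, 1.0, 5000)    # live values after a mean shift

DRIFT_THRESHOLD = 0.1                    # assumed alerting threshold for this sketch
alert = psi(baseline, shifted) > DRIFT_THRESHOLD
```

An out-of-range PSI would then feed the alerting and root-cause workflow described below for issue resolution.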

Swift issue resolution. Should metrics fall out of range, timely interventions can maintain model integrity. When an out-of-range alert is received, a root-cause analysis can pinpoint the sources of problems and suggest decisive action, whether through updating data, fine-tuning algorithms, or retraining models, to rectify the issues and ensure AI systems consistently deliver safe, fair, and effective results.

3. Equitable

AI must be designed and evaluated to work effectively across diverse patient populations.

AI systems in health care should work fairly for everyone, regardless of race, gender, age, socioeconomic status, or any other demographic or clinical characteristics. Problems often originate from systematic biases present in the data used for training. In 2017, the National Academy of Medicine highlighted the fact that Black patients often receive inferior treatment compared with their Caucasian counterparts, even after controlling for variables such as class, comorbidities, health behaviors, and access to health care services.

The incidence of bias can be reduced by:

Engaging clinicians in product development. Involving nurses and clinicians with extensive industry experience in product design helps ensure solutions meet health care providers’ practical needs and expectations.

Conducting frequent user surveys. Qualitative and quantitative user interviews throughout a product's life cycle generate continuous feedback. By listening carefully, developers can address concerns promptly, make the necessary adjustments, and improve the overall user experience.

Auditing for bias and fairness. Using third-party resources to audit data and track the performance of AI models helps reduce bias at the data level and allows for quick intervention should the AI model drift from expected performance.

4. Secure

Health care data is sensitive and must be protected. AI systems must adhere to strict security standards to prevent unauthorized access and data breaches.

Compliance with SOC2 (Service Organization Control 2) and adherence to the Health Insurance Portability and Accountability Act (HIPAA) privacy and security requirements should be minimum standards for any AI developer. Those standards should also apply to all partners within the AI tech stack, including data storage providers, analytics platforms, and any other business associates.

Adherence to the following can help ensure security of AI products:

Data siloing. Data from each organization should be isolated to minimize the risk of data leakage between health care institutions. This reduces the likelihood of unauthorized access or unintentional data exposure, and it makes it harder for an attacker who breaches one organization to reach the data of others.

Continuous security testing. By conducting routine penetration testing and vulnerability assessments, health care AI products can fortify their defenses, implement timely security patches, and ensure that data remains secure. This approach safeguards patient information and reflects a commitment to responsible AI in health care.

Employee training and awareness. Nine out of 10 data breaches start with human error. A responsible AI developer should conduct comprehensive, frequent employee training to create a culture of data security awareness, reinforced by quarterly phishing simulations and follow-up training for employees who fall prey.

5. Transparent

Clinicians and patients must understand how AI decisions are made. Transparent AI systems are explainable, making their decision-making processes accessible and interpretable.

Transparent AI safeguards both patient care and clinical efficiency, making it a cornerstone of ethical AI use in health care.

AI systems should feature user-friendly interfaces that enable clinicians to grasp the rationale behind AI predictions. Further, AI outputs must be tailored to the clinician's needs, accompanied by context, and individualized for each patient.

Transparent AI should include:

Clear presentation within the clinician’s workflow. AI systems should simplify clinician decision-making, with algorithm, training data, and predictions available within customary workflows.

Visual representation of clinical basis. Visual data representations of relevance to each patient and impactful clinical factors can effectively communicate the primary patient characteristics that drive the risk assessment or diagnosis. This builds trust and allows clinicians to make more informed judgments about the relevance of AI-generated insights.

Prioritization of actionable insights. This approach allows clinicians to make timely and informed choices about patient care. Prominently displayed data — such as a risk score related to the likelihood of a particular cancer patient visiting the emergency department in the next 30 days or a risk index change score of patient status — can inform care decisions.
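One way to picture this prioritization is a clinician worklist ranked by the 30-day ED-visit risk score and its recent change, so the most actionable cases surface first. The field names and sample numbers below are purely illustrative, not drawn from any real system.

```python
from dataclasses import dataclass

@dataclass
class PatientRisk:
    patient_id: str
    ed_risk_30d: float    # modeled probability of an ED visit within 30 days
    risk_change: float    # change in risk index since the last assessment

# Illustrative values only
patients = [
    PatientRisk("pt-001", 0.12, 0.01),
    PatientRisk("pt-002", 0.48, 0.15),   # high risk and worsening
    PatientRisk("pt-003", 0.35, -0.05),
]

# Rank by absolute risk, then by how quickly the patient is deteriorating
worklist = sorted(patients, key=lambda p: (p.ed_risk_30d, p.risk_change), reverse=True)
print([p.patient_id for p in worklist])
```

Displaying such a ranking inside the customary workflow, alongside the drivers of each score, is what lets the insight inform a care decision rather than sit in a report.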

AI’s future should be responsible.

The responsible use of AI in health care should empower clinicians, rather than replace them. Health care’s transformation must follow responsible AI principles to ensure that the technology aligns with ethical and regulatory standards while maximizing its benefits for health care delivery and patient well-being.

By adhering to these principles, clinicians, AI developers, and regulators can collectively contribute to a system where technology enhances patient care, improves clinical efficiency, and upholds the highest standards of ethics and safety. This journey toward responsible AI in health care holds the promise of a healthier and more equitable future for all.

Kathy Ford is a health care executive.
