How responsible AI can benefit patients and clinicians

Kathy Ford
Tech
February 3, 2024

The impact of artificial intelligence (AI) is palpable across the health care spectrum, from aiding in early disease detection through image analysis to streamlining administrative tasks. Regulatory agencies recognize the need for rapid integration of health care AI solutions, as demonstrated by the FDA’s clearance of over 500 AI solutions classified as Software as a Medical Device (SaMD).

However, AI developers often prioritize speed over meticulous validation, shortchanging continuous performance monitoring. Given the critical nature of medical decisions, health care AI has unique requirements.

Machine learning (ML) models can be fragile: clinical practice changes, and data inevitably drifts. Data quality that degrades over time, and the sub-par model outputs that follow, can cause patient harm. In addition, transferring a model from one hospital system to another can prove challenging due to the complexity of the data.

To derive value from AI and ML implementations, developers must use responsible AI that aligns with five fundamental principles: It must be useful, safe, equitable, secure, and transparent. Nowhere is this more important than in the treatment of patients with cancer.

1. Useful

AI solutions must be designed to address specific health care challenges and deliver meaningful improvements in patient care and operational efficiency.

The fundamental test of an AI model's usefulness is whether, applied in a specific clinical context, it solves real-world problems. Usefulness should translate into the quadruple aim of improving population health, enhancing patient satisfaction, reducing costs, and improving clinician work-life balance.

Here are two ways responsible AI has proven useful:

Increase positive patient outcomes. Implementing a “closing the loop” strategy using predictive insights into emergency department (ED) visits and early interventions for symptomatic or at-risk cancer patients can reduce ED visits by 30%.

Improve clinician efficiency. The ability to analyze large swaths of data and provide insight is a valuable time-saving benefit that was previously impractical for clinicians to achieve on their own. Applied in the clinical setting, AI surfaces hidden trends in patient data, allowing physicians to pre-empt adverse events while reducing the burden of gathering data.

These findings highlight the positive impact of AI-driven solutions on patient outcomes and overall health care experiences.

2. Safe

Patient safety is paramount. AI solutions must be rigorously tested and monitored to ensure they do not harm patients or introduce errors into clinical workflows.

Developers venturing into health care AI integration must understand the unique character of every hospital and its patient population. One approach to deliberate implementation of responsible AI rests on extensive model validation during development, continuous performance monitoring, and swift issue resolution:

Extensive model validation. Implementing this process ensures high performance and fairness across sensitive demographic subgroups. This involves thoroughly testing and validating diverse datasets to ensure models provide accurate and unbiased results for clinicians across different patient populations.

Continuous performance monitoring. Automated alerting, data transformations, and ML algorithms should track the performance of the model in real-world clinical settings. Performance measures should include prediction volume, data drift, prediction drift, label drift, model discrimination, and calibration.

Swift issue resolution. Should metrics fall out of range, timely interventions can maintain model integrity. When an out-of-range alert is received, a root-cause analysis can pinpoint the sources of problems and suggest decisive action, whether through updating data, fine-tuning algorithms, or retraining models, to rectify the issues and ensure AI systems consistently deliver safe, fair, and effective results.
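As an illustration of the monitoring and alerting steps above, the sketch below compares a model's baseline score distribution against recent production scores using the population stability index (PSI), a common drift statistic. The function names and the 0.2 alert threshold are illustrative assumptions, not the author's implementation.

```python
import math

def psi(expected, actual, bins=10):
    """Population stability index between a baseline sample of model
    scores ("expected") and a recent production sample ("actual").
    Larger values indicate greater distribution shift."""
    lo = min(min(expected), min(actual))
    hi = max(max(expected), max(actual))
    width = (hi - lo) / bins or 1.0  # guard against a zero-width range

    def bin_fractions(sample):
        counts = [0] * bins
        for x in sample:
            counts[min(int((x - lo) / width), bins - 1)] += 1
        # Smooth empty bins slightly so log() is always defined.
        return [max(c / len(sample), 1e-6) for c in counts]

    e, a = bin_fractions(expected), bin_fractions(actual)
    return sum((ai - ei) * math.log(ai / ei) for ei, ai in zip(e, a))

def drift_alert(expected, actual, threshold=0.2):
    """Flag material drift; 0.2 is a common rule-of-thumb cutoff."""
    return psi(expected, actual) > threshold
```

In practice, the same check can run on input features (data drift), model outputs (prediction drift), and observed outcomes (label drift), with any out-of-range value routed into the alerting and root-cause analysis described above.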

3. Equitable

AI must be designed and evaluated to work effectively across diverse patient populations.

AI systems in health care should work fairly for everyone, regardless of race, gender, age, socioeconomic status, or any other demographic or clinical characteristic. Problems often originate from systematic biases in the data used for training. In 2017, the National Academy of Medicine highlighted that Black patients often receive inferior treatment compared with their Caucasian counterparts, even after controlling for variables such as class, comorbidities, health behaviors, and access to health care services.

The incidence of bias can be reduced by:

Engaging clinicians in product development. Involving nurses and clinicians with extensive industry experience in product design helps ensure solutions meet health care providers’ practical needs and expectations.

Conducting frequent user surveys. Qualitative and quantitative user interviews throughout a product's life cycle generate continuous feedback. By listening carefully, developers can address concerns promptly, make the necessary adjustments, and improve the overall user experience.

Auditing for bias and fairness. Using third-party resources to audit data and track the performance of AI models helps reduce bias at the data level and allows for quick intervention should the AI model drift from expected performance.
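A bias audit of the kind described above can be sketched as a per-subgroup performance comparison. The record format, the hit-rate metric, and the 0.1 gap tolerance here are illustrative assumptions; a production audit would also disaggregate measures such as sensitivity and calibration by subgroup.

```python
from collections import defaultdict

def audit_subgroups(records, max_gap=0.1):
    """Compare a model's hit rate across demographic subgroups.

    records: iterable of (group, y_true, y_pred) tuples.
    Returns per-group accuracy, the largest gap between groups, and
    whether that gap exceeds the tolerance (a signal to intervene)."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, y_true, y_pred in records:
        total[group] += 1
        correct[group] += int(y_true == y_pred)
    rates = {g: correct[g] / total[g] for g in total}
    gap = max(rates.values()) - min(rates.values())
    return rates, gap, gap > max_gap
```

Run routinely, a check like this lets an auditor catch a model whose performance has drifted apart across patient populations before the disparity reaches the bedside.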

4. Secure

Health care data is sensitive and must be protected. AI systems must adhere to strict security standards to prevent unauthorized access and data breaches.

Compliance with SOC2 (Service Organization Control 2) and adherence to the Health Insurance Portability and Accountability Act (HIPAA) privacy and security requirements should be minimum standards for any AI developer. Those standards should also apply to all partners within the AI tech stack, including data storage providers, analytics platforms, and any other business associates.

Adherence to the following can help ensure security of AI products:

Data siloing. Data from each organization should be isolated to minimize the risk of data leakage between health care institutions. This reduces the likelihood of unauthorized access or unintentional data exposure and makes it more difficult for an attacker who breaches one organization to reach others.

Continuous security testing. Routine penetration testing and vulnerability assessments let health care AI developers fortify their defenses, apply timely security patches, and ensure that data remains secure. This approach safeguards patient information and reflects a commitment to responsible AI in health care.

Employee training and awareness. Nine out of 10 data breaches start with a human mistake. A responsible AI developer should conduct comprehensive and frequent employee training to create a culture of data security awareness, reinforced by quarterly phishing simulations for every employee and follow-up training for those who fall prey.

5. Transparent

Clinicians and patients must understand how AI decisions are made. Transparent AI systems are explainable, making their decision-making processes accessible and interpretable.

Transparent AI safeguards both patient care and clinical efficiency, making it a cornerstone of ethical AI use in health care.

AI systems should feature user-friendly interfaces that enable clinicians to grasp the rationale behind AI predictions. Further, AI outputs must be tailored to the clinician's needs, accompanied by context, and individualized for each patient.

Transparent AI should include:

Clear presentation within the clinician's workflow. AI systems should simplify clinician decision-making, with the algorithm, its training data, and its predictions available within customary workflows.

Visual representation of clinical basis. Visual representations of patient-relevant data and the most impactful clinical factors can effectively communicate the primary patient characteristics driving a risk assessment or diagnosis. This builds trust and allows clinicians to make more informed judgments about the relevance of AI-generated insights.

Prioritization of actionable insights. This approach allows clinicians to make timely and informed choices about patient care. Prominently displayed data, such as a risk score for the likelihood that a particular cancer patient will visit the emergency department in the next 30 days, or a change in a patient's risk-index score, can inform care decisions.

AI’s future should be responsible.

The responsible use of AI in health care should empower clinicians, rather than replace them. Health care’s transformation must follow responsible AI principles to ensure that the technology aligns with ethical and regulatory standards while maximizing its benefits for health care delivery and patient well-being.

By adhering to these principles, clinicians, AI developers, and regulators can collectively contribute to a system where technology enhances patient care, improves clinical efficiency, and upholds the highest standards of ethics and safety. This journey toward responsible AI in health care holds the promise of a healthier and more equitable future for all.

Kathy Ford is a health care executive.
