Ethical AI in mental health: 6 key lessons

Ronke Lawal
Tech
October 28, 2025

As a founder developing AI systems for mental health support, I have wrestled with a fundamental question: How do we use AI to expand access while maintaining patient-provider trust? Building an AI Mental Health Copilot has shown me that the ethical challenges are as complex as the technical ones, and far more consequential. The mental health crisis demands innovation. AI copilots offer scalable, always-available support to bridge care gaps caused by provider shortages. Yet, deploying these systems forces us to confront uncomfortable truths about consent, boundaries, bias, and the nature of therapy itself.

Lesson 1: Consent must be continuous, not just initial

Traditional informed consent is inadequate for AI-assisted care. Patients deserve ongoing transparency: knowing when responses trigger alerts, when data is shared, and how recommendations arise. The challenge intensifies in crisis moments. When a user types “I have pills in my hand,” our system displays: “I care about your safety. Connecting you with crisis support now, please stay with me” while alerting human counselors. Our action-oriented approach maintained contact in 89 percent of cases until human support arrived, compared to 62 percent when we provided detailed explanations. In a crisis, transparency about process must yield to transparency about action.
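To make the trade-off concrete, here is a minimal sketch, in Python, of an "act first, explain later" crisis flow. The keyword detector and the alert_counselor hook are illustrative assumptions, not our production components; only the on-screen message is taken from the example above.

```python
# Minimal sketch of an "act first, explain later" crisis flow.
# The phrase list and alert hook are placeholders, not the real system.

CRISIS_MESSAGE = (
    "I care about your safety. Connecting you with crisis support now, "
    "please stay with me."
)

# Placeholder detector: a real system would use a trained classifier.
CRISIS_PHRASES = ("pills in my hand", "end my life", "kill myself")


def looks_like_crisis(text: str) -> bool:
    """Stand-in for a real crisis classifier."""
    lowered = text.lower()
    return any(phrase in lowered for phrase in CRISIS_PHRASES)


def handle_message(text: str, user_id: str, alert_counselor) -> str:
    """Return the copilot's reply, escalating to humans before explaining process."""
    if looks_like_crisis(text):
        alert_counselor(user_id, text)   # notify a human counselor immediately
        return CRISIS_MESSAGE            # short, action-oriented reply to keep contact
    return "Thank you for sharing. Tell me more about what's on your mind."
```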

Lesson 2: Boundaries are different, not absent

Human therapists maintain professional boundaries through training, supervision, and ethical codes. But what boundaries apply to AI? Its constant availability creates new risks: dependency, over-reliance, and illusory relationships. We’ve observed patients forming attachments to AI assistants, sharing more openly than with human providers. While this comfort can be therapeutic, it raises profound ethical concerns. The AI simulates care but has no stake in the patient’s well-being. Patients form meaningful attachments to interactions that are fundamentally transactional. We’ve created the psychological equivalent of a Skinner box: optimized for engagement, not healing. We’ve implemented several safeguards: limiting daily interaction time, inserting deliberate pauses before responses to prevent addictive rapid-fire exchanges, and requiring periodic “human check-ins” where users must report on real-world therapeutic relationships. But I’m not convinced these measures are sufficient. The fundamental question remains unanswered: can we design AI empathy that helps without hooking, or is the very attempt ethically compromised from the start?
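A minimal sketch of how safeguards like these might be wired together: a daily time cap, a deliberate pause before replies, and a periodic human check-in prompt. The specific thresholds, names, and wording below are hypothetical, not our production values.

```python
# Sketch of boundary safeguards: daily cap, deliberate pause, human check-in.
# All thresholds are illustrative assumptions.

import time
from dataclasses import dataclass

DAILY_LIMIT_MINUTES = 45        # assumed cap on daily interaction time
RESPONSE_DELAY_SECONDS = 3      # deliberate pause to slow rapid-fire exchanges
CHECKIN_EVERY_N_SESSIONS = 10   # how often to ask about real-world support


@dataclass
class SessionState:
    minutes_today: float = 0.0
    sessions_since_checkin: int = 0


def apply_guardrails(state: SessionState, reply: str) -> str:
    """Apply boundary safeguards before a reply is delivered."""
    if state.minutes_today >= DAILY_LIMIT_MINUTES:
        return ("We've talked a lot today. Let's pause here and pick this up "
                "tomorrow, or reach out to someone you trust in the meantime.")
    if state.sessions_since_checkin >= CHECKIN_EVERY_N_SESSIONS:
        state.sessions_since_checkin = 0
        return ("Before we continue: how are things going with the people "
                "supporting you offline, like your therapist or family?")
    time.sleep(RESPONSE_DELAY_SECONDS)  # deliberate pause before responding
    return reply
```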

Lesson 3: Escalation is not optional

The most critical ethical imperative is knowing when to step aside. AI copilots must recognize their limitations and seamlessly escalate to human clinicians when necessary. Through extensive testing, we’ve identified numerous escalation triggers: suicidal ideation, abuse disclosures, and complex trauma responses. But the harder challenge is detecting subtle cues that something exceeds the AI’s scope. A patient’s sudden change in communication pattern, cultural references the AI might misinterpret, or therapeutic impasses all require human intervention. The ethical framework we’ve developed prioritizes false positives over false negatives. Better to escalate unnecessarily than miss a critical moment. Yet this creates its own tensions: excessive escalation burdens already overwhelmed providers and may discourage patients from engaging openly. We currently escalate approximately 8 percent of interactions, a rate that reflects this balance between caution and usability.
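One way to encode a "prefer false positives" policy is a hard-trigger list plus a deliberately low risk-score threshold. The sketch below is illustrative only; the trigger names and threshold value are assumptions, not the deployed configuration.

```python
# Sketch of an escalation policy biased toward false positives.
# Trigger names and the threshold value are illustrative assumptions.

HARD_TRIGGERS = {"suicidal_ideation", "abuse_disclosure", "complex_trauma"}
SOFT_RISK_THRESHOLD = 0.3  # intentionally low: over-escalate rather than miss


def should_escalate(detected_flags: set[str], risk_score: float) -> bool:
    """Escalate on any hard trigger, or when the model's risk score exceeds
    a conservative threshold (biasing toward false positives)."""
    if detected_flags & HARD_TRIGGERS:
        return True
    return risk_score >= SOFT_RISK_THRESHOLD


# Example: a subtle shift in tone scored at 0.35 still routes to a human.
assert should_escalate(set(), 0.35) is True
```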

Lesson 4: Cultural competence cannot be an afterthought

Mental health is deeply cultural; expressions of distress vary dramatically. Early in development, our system flagged a Latina user’s description of “ataque de nervios” (nervous attack) as potential panic disorder, missing that this is a recognized cultural syndrome requiring different therapeutic approaches than Western panic frameworks. Similarly, when East Asian users avoided direct language about family conflict (a culturally appropriate indirectness), our system misread this as avoidance or denial. These failures drove three architectural innovations. First, a multi-ontology system that maps culturally specific expressions to therapeutic concepts without forcing Western diagnostic frameworks. Second, context-aware reasoning that interprets behaviors through cultural lenses, understanding that eye contact avoidance might signal respect, not depression. Finally, response generation that incorporates cultural healing frameworks, recognizing when family involvement or spiritual practices align with users’ values, alongside evidence-based therapy. But technical solutions alone are insufficient. We’ve learned that meaningful cultural competence requires diverse development teams, ongoing consultation with cultural advisors, and humility about what we don’t know. Every deployment in a new community should begin with the assumption that our system will miss important cultural nuances.
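As a toy illustration of the multi-ontology idea, the sketch below maps culturally specific expressions to therapeutic concepts and framing hints instead of forcing a single Western diagnostic label. The entries and data structure are simplified assumptions, not the real ontology.

```python
# Toy multi-ontology mapping: cultural expression -> concept and framing hints.
# Entries and fields are simplified assumptions for illustration.

CULTURAL_ONTOLOGY = {
    "ataque de nervios": {
        "concept": "recognized cultural syndrome of acute distress",
        "avoid_label": "panic disorder",
        "framing_hints": ["family context", "somatic expression of distress"],
    },
    "indirect family-conflict language": {
        "concept": "culturally appropriate indirectness",
        "avoid_label": "avoidance or denial",
        "framing_hints": ["respect norms", "face-saving communication"],
    },
}


def interpret(expression: str) -> dict:
    """Look up a culturally specific expression; fall back to a generic
    interpretation only when no cultural mapping exists."""
    return CULTURAL_ONTOLOGY.get(
        expression,
        {"concept": "unmapped expression", "avoid_label": None, "framing_hints": []},
    )
```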

Lesson 5: Monitoring is a moral obligation

Deploying an AI copilot isn’t the end of ethical responsibility; it’s the beginning. Continuous monitoring for unintended consequences is essential, often revealing uncomfortable truths hidden in aggregate data. Our monitoring identified seemingly empathetic AI creating dependency patterns: long-term users became 34 percent less willing to seek human therapy, a “therapeutic cage.” Simultaneously, high overall engagement masked systemic failures for specific vulnerable populations. Users discussing intergenerational trauma had 73 percent higher drop-off rates, inadvertently widening disparities rather than closing them. These discoveries prompted immediate architectural changes: implementing “therapeutic friction” to encourage human connection beyond certain thresholds, rebuilding our system to represent non-Western trauma narratives better, and introducing controlled variability in response patterns to prevent users from optimizing their interactions with the AI. While we implement standard feedback loops (clinician ratings, patient satisfaction surveys, and outcome tracking), these findings underscore complex ethical questions: How do we balance improvement with confidentiality? When we add friction to encourage human therapy despite users preferring the AI, are we honoring their autonomy or appropriately guiding them away from harm? These aren’t rhetorical questions. They represent genuine ethical tensions where reasonable people disagree. This ongoing ethical vigilance isn’t a sign of struggle; it’s a core component of responsible innovation.
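A sketch of the kind of monitoring check described above: comparing drop-off rates across user segments and flagging heavy users for "therapeutic friction." Segment names, metrics, and thresholds are illustrative assumptions, not our actual monitoring pipeline.

```python
# Sketch of monitoring checks: drop-off disparities and friction thresholds.
# Thresholds and segment names are illustrative assumptions.

FRICTION_SESSION_THRESHOLD = 30   # sessions after which to nudge toward human care
DISPARITY_ALERT_RATIO = 1.5       # flag segments dropping off 50%+ above baseline


def disparity_alerts(dropoff_by_segment: dict[str, float], baseline: float) -> list[str]:
    """Return segments whose drop-off rate exceeds the baseline by the alert ratio."""
    return [segment for segment, rate in dropoff_by_segment.items()
            if rate >= baseline * DISPARITY_ALERT_RATIO]


def needs_therapeutic_friction(total_sessions: int, sought_human_care: bool) -> bool:
    """Nudge heavy users toward human connection once past the session threshold."""
    return total_sessions >= FRICTION_SESSION_THRESHOLD and not sought_human_care


# Example: a segment dropping off 73 percent more than a 30 percent baseline is flagged.
print(disparity_alerts({"intergenerational_trauma": 0.52, "general": 0.30}, baseline=0.30))
```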

Lesson 6: Clinical accuracy is the foundation

Beyond ethical implementation, there’s a fundamental question: Does the AI provide clinically sound guidance? We track therapeutic alliance scores, symptom improvement, and clinician override rates: instances when human providers disagree with AI recommendations. Currently, clinicians override AI suggestions in 23 percent of cases. This falls within the typical range for clinical decision support systems (15-30 percent), suggesting the AI provides useful guidance while human oversight catches errors. But this metric reveals deeper tensions: Should we aim for near-perfect agreement with clinicians, essentially automating current practice? Or does the AI’s value lie precisely in offering alternative perspectives that challenge clinical blind spots? When a clinician overrides our recommendation, we face an attribution problem: Is the AI wrong, or is the clinician missing something? Without ground truth in mental health (no definitive lab test, no clear right answer), we’re left making probabilistic judgments about whose judgment to trust. This uncertainty doesn’t absolve us of responsibility; it amplifies it.
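Tracking the override rate itself is straightforward; a minimal sketch follows, with the record fields assumed for illustration and the 15-30 percent band taken from the figures above.

```python
# Sketch of clinician override-rate tracking. Record fields are assumptions;
# the 15-30 percent band comes from the range cited in the text.

TYPICAL_RANGE = (0.15, 0.30)  # typical override range for clinical decision support


def override_rate(cases: list[dict]) -> float:
    """Fraction of cases where the clinician overrode the AI recommendation."""
    if not cases:
        return 0.0
    overridden = sum(1 for c in cases if c["clinician_decision"] != c["ai_recommendation"])
    return overridden / len(cases)


def within_typical_range(rate: float) -> bool:
    low, high = TYPICAL_RANGE
    return low <= rate <= high


# Example: 23 overrides out of 100 cases falls inside the typical band.
sample = ([{"ai_recommendation": "A", "clinician_decision": "B"}] * 23
          + [{"ai_recommendation": "A", "clinician_decision": "A"}] * 77)
rate = override_rate(sample)
print(f"{rate:.0%} override rate, within typical range: {within_typical_range(rate)}")
```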

The path forward

Building ethical AI isn’t about perfection; it’s about thoughtful trade-offs, continuous improvement, and commitment to patient welfare. We must resist both blind enthusiasm and reflexive rejection. This requires collaboration among technologists, clinicians, ethicists, and, most importantly, patients themselves. Their voices must guide development, their concerns must shape safeguards, and their well-being must remain paramount. Health care providers take an oath not to harm. As the technologists building the tools they will use, we inherit a parallel ethical obligation. Integrating AI copilots into mental health care means that the foundational principle must extend to the systems we build and deploy. The technology may be new, but our ethical responsibilities remain unchanged: respect autonomy, promote beneficence, ensure non-maleficence, and advance justice.

Ronke Lawal is the founder of Wolfe, a neuroadaptive AI platform engineering resilience at the synaptic level. From Bain & Company’s social impact and private equity practices to leading finance at tech startups, her three-year journey revealed a $20 billion blind spot in digital mental health: cultural incompetence at scale. Now both building and coding Wolfe’s AI architecture, Ronke combines her business acumen with self-taught engineering skills to tackle what she calls “algorithmic malpractice” in mental health care. Her work focuses on computational neuroscience applications that predict crises seventy-two hours before symptoms emerge and reverse trauma through precision-timed interventions. Currently an MBA candidate at the University of Notre Dame’s Mendoza College of Business, Ronke writes on AI, neuroscience, and health care equity. Her insights on cultural intelligence in digital health have been featured in KevinMD and discussed on major health care platforms. Connect with her on LinkedIn. Her most recent publication is “The End of the Unmeasured Mind: How AI-Driven Outcome Tracking is Eradicating the Data Desert in Mental Healthcare.”
