Artificial intelligence in mental health has become an easy target. Headlines warn of hallucinating chatbots, reckless advice, and sensational claims that AI therapy is pushing vulnerable patients toward self-harm or suicide. The implication is clear: AI is dangerous, irresponsible, and incompatible with serious clinical care.
That conclusion is too simple and, in many cases, wrong.
The uncomfortable truth is that many patients are already receiving something far riskier than supervised AI therapy. They are receiving no therapy at all. Psychotherapy is expensive, inaccessible, or fragmented. Waitlists stretch for months. Sessions are brief. Continuity is rare. In that vacuum, patients turn to unstructured online content, social media, or anonymous forums that offer neither clinical framing nor accountability.
Against that reality, AI therapy is not a reckless experiment. It is a response to a system that has quietly failed to meet demand. I recommend AI-assisted therapy to some patients not as a replacement for treatment, but as a structured extension of it. The difference between harm and benefit is not the algorithm. It is supervision.
Supervision changes the risk
When patients use AI tools without guidance, the risks are real. When they use them within an ongoing therapeutic relationship, those risks change dramatically. I ask patients to work with a clinician first to clarify what they are struggling with. What is the problem they are actually trying to understand or change? Only then do they engage the AI. The output is not treated as truth, advice, or authority. It becomes material, something to examine, challenge, refine, or discard.
In follow-up visits, we review what the patient worked on. We correct distortions. We slow things down when the AI moves too fast. In this model, supervision is not an optional safeguard. It is the central therapeutic act.
Asking better questions
One of the most overlooked skills in therapy is learning how to ask better questions. Patients often know they are distressed but misidentify the cause. Anxiety is labeled as depression. Obsessions are framed as moral failure. Relationship conflicts are mistaken for personality flaws. Without proper framing, even the best AI will offer answers to the wrong questions.
Clinical guidance changes that. When patients are properly primed, AI tools become far more useful. They help patients reflect, organize thoughts, rehearse insights, and apply structured techniques between sessions. Without guidance, AI risks becoming another echo chamber. With guidance, it becomes a disciplined mirror.
Flexibility and bias
Many AI platforms can be adapted to different therapeutic models, including cognitive behavioral, psychodynamic, Gestalt, and mindfulness-based approaches. This flexibility allows patients to engage with therapeutic language that resonates with them rather than being limited by the orientation of a single provider. That alone challenges a long-standing assumption in mental health care: that one therapeutic voice should dominate the work.
Critics often focus on bias as a fatal flaw of AI therapy. What is discussed far less is the bias embedded in human therapy. Every therapist brings personal history, cultural assumptions, theoretical loyalties, and blind spots into the room. Entire treatments can quietly drift under the influence of a clinician’s unexamined beliefs. The difference is that human bias is rarely reviewed.
AI output can be reviewed. It can be questioned. It can be corrected in real time when a clinician is actively involved. In practice, that transparency can reduce harm rather than increase it.
The future is supervised
The real danger is not AI. The real danger is unsupervised therapy of any kind. Unsupervised AI can mislead. Unsupervised human therapy can do the same. The solution is not banning tools but integrating them responsibly.
When AI work is reviewed regularly in person, therapy becomes more continuous rather than compressed into a single weekly hour. Patterns emerge faster. Missteps are corrected sooner. Patients feel accompanied rather than outsourced.
AI will never replace the human elements of therapy: presence, judgment, accountability, and ethical responsibility. But dismissing it outright ignores both the realities patients face and the profession’s responsibility to adapt.
The future of mental health care will not be human versus machine. It will be defined by whether clinicians choose to supervise, guide, and take responsibility for the tools patients are already using.
Supervision is not optional. It is the key element.
Farid Sabet-Sharghi is a psychiatrist.