For a moment, I wondered whether I had a crystal ball.
I had written several times on KevinMD about artificial intelligence in medicine. I had written for Forbes Business Council about AI as a developing cognitive tool, synthetic reality, executive distortion, and the danger of treating fluent output as reliable judgment. I kept returning to one warning. AI was one step away from a serious blunder, and that blunder would not stay technical. It would become clinical, legal, and human.
This was not prophecy. It was clinical intuition.
Psychiatrists spend their lives listening to words. We know the difference between fluency and truth. We know that a confident voice can carry a delusion. We know that reassurance can become harm when it validates a dangerous belief. We know that a person under stress is vulnerable to any voice that sounds warm, certain, and available.
Now the Commonwealth of Pennsylvania has sued Character.AI, alleging that one of its chatbots presented itself as a licensed psychiatrist, claimed Pennsylvania licensure, and provided an invalid license number while engaging in mental health-related conversation. As a licensed physician in Pennsylvania, I find this deeply concerning.
A fake license number is not a small software error. It is not a creative answer. It is not harmless hallucination. A medical license is public trust.
It represents years of training, supervision, state oversight, ethical obligation, continuing education, malpractice exposure, professional accountability, and the duty to protect vulnerable patients. When software invents that identity, it crosses from assistance into impersonation.
Psychiatry is not casual advice. Therapy is not customer service. A suicidal patient is not a user engagement metric. A delusional patient is not a prompt challenge. A manic patient does not need unlimited validation from a system trained to keep the conversation going.
This case matters because it exposes the core risk of AI in mental health. The danger is not only that AI makes mistakes. The danger is that AI sounds caring, confident, intelligent, and clinically authoritative while having no license, no patient relationship, no duty, no real judgment, and no accountability.
We are watching chatbots compete with physicians, psychiatrists, psychologists, therapists, and counselors for the attention of vulnerable patients. They are available all day. They are cheap. They do not ask for insurance cards. They do not run behind. They do not challenge the patient unless designed to do so. For someone who is lonely, depressed, anxious, traumatized, or isolated, that feels like care.
But access is not competence. AI did not create the mental health access crisis. We did. Patients wait months for psychiatry. Many clinicians have left insurance networks because reimbursement is poor and administrative burden is crushing. Primary care physicians carry impossible psychiatric loads. Emergency rooms have become the last safety net. Into that vacuum came the chatbot.
The chatbot filled an access gap. Then it started borrowing the clothing of medicine. That is where the line must be drawn.
In psychiatry, language is part of the illness. Depression speaks. Mania speaks. Paranoia speaks. Addiction speaks. Trauma speaks. Eating disorders speak. Psychosis speaks.
A trained psychiatrist listens for what the patient says, what the patient avoids, what the patient repeats, what the patient cannot see, and what the illness is trying to hide. A chatbot predicts words. That difference matters.
A good therapist does not only validate. A good therapist also challenges distorted thinking, recognizes risk, sets boundaries, evaluates safety, recommends a higher level of care, involves family when needed, and refuses to collude with illness.
Real therapy includes friction. Sometimes that friction saves a life.
AI chatbots often do the opposite. They mirror. They agree. They soothe. They keep responding. In some patients, that feels supportive. In others, it reinforces fear, paranoia, grandiosity, despair, or dangerous certainty.
This is especially dangerous because psychiatric symptoms rarely arrive in clean textbook form. Insomnia might be grief. It might be bipolar disorder. It might be stimulant misuse. It might be trauma, thyroid disease, akathisia, or early psychosis.
Panic might be anxiety. It might also be alcohol withdrawal, arrhythmia, pulmonary embolism, hyperthyroidism, or medication toxicity.
A patient saying, “I do not want to be here anymore,” might need support. That patient might also need urgent suicide risk assessment.
A chatbot does not know the patient’s vital signs. It does not examine the patient. It does not know the family history. It does not call collateral contacts. It does not review the medication list with clinical responsibility. It does not face the state medical board. It has no license to lose.
That is why the Pennsylvania lawsuit matters. This is not only about one company. This is about the boundary between software and medicine.
AI has a role in health care. I use AI. I write about AI. I believe AI will improve documentation, education, triage, workflow, research, and decision support. I believe physician-guided AI will become part of modern medicine.
But AI should support clinicians, not counterfeit them. AI should expand access, not deceive patients. AI should help patients prepare for care, not replace care with simulated authority.
Every psychiatric intake should now include a new question: Are you using AI chatbots for therapy, companionship, medication advice, crisis support, relationship advice, or emotional reassurance? Ask without shame. Ask because it matters.
Some patients use AI to journal. Some use it to organize thoughts before therapy. Some use it to learn coping skills. Some use it because no human was available. That deserves compassion, not mockery.
But support is not treatment. Companionship is not licensure. Fluency is not clinical judgment.
We need bright guardrails. AI systems should disclose that they are not human. They should repeat that disclosure during sensitive conversations. They should never claim to be physicians, psychiatrists, psychologists, therapists, or licensed counselors. They should never fabricate credentials. They should have escalation pathways for suicide, psychosis, abuse, violence, intoxication, and severe eating disorder symptoms. They should undergo clinical risk testing before entering vulnerable mental health spaces.
Most of all, we need human accountability. In one of my Forbes Business Council articles, I wrote that AI is a mirror we built. That mirror reflects our intelligence, our ambition, our bias, and our blind spots. A mirror is useful when we know it is a mirror. The danger begins when the mirror claims it is the doctor.
In psychiatry, trust is treatment. When a machine fabricates authority, it does more than hallucinate data. It threatens trust itself.
And when trust in mental health care breaks, patients pay the price.
Muhamad Aly Rifai, known professionally as Dr. Rifai, is a psychiatrist, internist, addiction medicine physician, physician executive, author, and Forbes Business Council official contributor based in the Greater Lehigh Valley, Pennsylvania. He is the founder, chief executive officer, and chief medical officer of Blue Mountain Psychiatry, a multidisciplinary mental health and addiction medicine practice focused on psychiatry, telepsychiatry, brain health, integrated medical care, ketamine treatment, transcranial magnetic stimulation, and evidence-based addiction treatment.
Dr. Rifai holds the Lehigh Valley Endowed Chair of Addiction Medicine and is board-certified in psychiatry, internal medicine, addiction medicine, and consultation-liaison psychiatry. He is a distinguished fellow of the American Psychiatric Association, a fellow of the American College of Physicians, and a fellow of the Academy of Consultation-Liaison Psychiatry. A former president of the Lehigh Valley Psychiatric Society, he advocates for access to high-quality psychiatric care, ethical telemedicine, physician rights, and integrated behavioral health.
He writes and speaks on psychiatry, addiction medicine, telepsychiatry, digital mental health, artificial intelligence in medicine, brain health, health care policy, physician justice, and leadership under pressure. His books, including Doctor Not Guilty and Hijacked Minds, are available at DrRifaiBooks.com. More information is available through DrRifai360, Forbes Business Council, The Virtual Psychiatrist, LinkedIn, SHIELD, X, and Facebook.