Integrity and trust are foundational. But today, that trust is under assault, not from human error or negligence, but from the sophisticated yet disturbingly unreliable outputs of artificial intelligence (AI). What the media euphemistically calls “AI hallucinations” are not benign mistakes; they are dangerous fabrications that systematically undermine both clinical and legal standards.
As a psychiatrist, I confront the reality of hallucinations regularly. Patients vividly describe voices, visions, and sensations with profound distress. Hallucinations, in medical terms, represent profound and involuntary perceptual disturbances. In stark contrast, the inaccuracies spewed forth by large language models like ChatGPT are not involuntary misperceptions—they are systematic, plausible-sounding falsifications generated by probabilistic algorithms, devoid of ethical accountability.
A recent editorial by Robin Emsley in the journal Schizophrenia reveals the alarming truth: ChatGPT-generated references, presented with confidence and eloquence, are often wholly fabricated. Emsley described requesting literature references from ChatGPT to support his research into structural brain changes with antipsychotic treatment. He was initially impressed, but his enthusiasm quickly turned to dismay. Of the five citations provided, several were entirely fictitious or grossly inaccurate: one real reference was irrelevant, and three others simply didn’t exist. This was not an isolated case. Additional research highlighted even more disturbing statistics: of 115 medical citations generated by AI, 47 percent were entirely fabricated, another 46 percent were inaccurate, and only 7 percent were both accurate and authentic.
Emsley emphasizes that calling these inaccuracies “hallucinations” is misleading and diminishes the gravity of real clinical hallucinations. Instead, these are “fabrications and falsifications,” a term that correctly assigns moral and professional weight to the AI-generated falsehoods.
Medicine isn’t the only domain suffering from AI’s plausible fabrications.
Consider the notorious incident involving attorney Steven Schwartz, who submitted legal arguments citing six court cases invented entirely by ChatGPT. The court was stunned, Schwartz was sanctioned, and the incident sent shockwaves through the legal community. Schwartz’s defense, that he didn’t know AI could produce false references, highlights a deeper systemic problem. Legal professionals, trained rigorously in verifying sources, were caught off guard by the convincing fabrications of AI, endangering justice and undermining public trust. Some federal courts now outright ban AI-drafted pleadings.
We see a disturbing parallel between these AI fabrications and the phenomenon of clinical hallucinations following sensory alterations.
This parallel is vividly exemplified by the case of musical hallucinations after cochlear implantation. One woman experienced musical hallucinations following her implantation: initially gentle and unobtrusive, they became persistent, intrusive, and ultimately overwhelming. Strikingly, the music continued even when the implant was inactive, driven perhaps by the brain’s “parasitic memory” phenomenon, a desperate neurological attempt to fill a sensory void.
AI-generated fabrications, though algorithmic rather than neurological, similarly fill “informational voids,” creating plausible but false data to satisfy user queries. Unlike those involuntary neurological processes, however, AI-generated falsehoods arise from the inherent limitations of machine-learning models: a probabilistic approach that seamlessly blends bits of factual and false data.
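For readers curious about the mechanics, a deliberately simplified illustration may help. The sketch below is not a real language model; it uses a hard-coded, hypothetical probability table to show why a system that samples the statistically likeliest continuation can emit a fluent citation fragment without ever consulting a real source.

```python
# Toy illustration only: not a real language model.
# A hard-coded (hypothetical) probability table stands in for learned statistics.
# The point: sampling picks whatever continuation is most plausible,
# with no step that checks the result against an actual source.
import random

# Hypothetical learned probabilities for the two digits that follow "et al., 20"
next_token_probs = {
    "19": 0.35,
    "21": 0.30,
    "24": 0.20,
    "09": 0.15,
}

def sample_next_token(probs):
    """Pick a continuation weighted by probability; nothing here verifies facts."""
    tokens = list(probs.keys())
    weights = list(probs.values())
    return random.choices(tokens, weights=weights, k=1)[0]

fabricated_reference = "Smith et al., 20" + sample_next_token(next_token_probs)
print(fabricated_reference)  # fluent and plausible, but generated without any lookup
```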
These issues raise critical ethical and professional questions.
How much can we safely rely on AI in professions built on verifiable truth? How do we hold an algorithm accountable when it deceives? Unlike human malpractice, AI errors currently have no clear accountability. ChatGPT cannot be sanctioned, sued, or disbarred. The responsibility and risk fall solely on the humans who use it.
The Psychiatric News piece “Moving from ‘hallucinations’ to ‘fabrications’” emphasizes the ethical imperative to shift terminology. Labeling AI outputs as hallucinations inadvertently trivializes real psychiatric experiences. By explicitly calling them “fabrications,” we highlight the deliberate and structured nature of these errors, making clear that these inaccuracies must trigger rigorous verification and accountability.
The potential consequences of continuing to overlook these fabrications are severe.
Imagine a scenario where medical students, pressed for time and resources, rely on AI-generated references without verification, inadvertently propagating false information. Clinical guidelines could become contaminated by inaccuracies. Physicians may unwittingly compromise patient safety, guided by fictitious evidence.
Similarly, in legal contexts, fabrications could erode foundational precedents and jurisprudence. Cases relying on AI-generated content could jeopardize justice, sending innocent individuals to prison or letting guilty parties escape accountability based on phantom precedents.
What steps should the medical and legal communities take?
- Rigorous educational initiatives must inform professionals about AI limitations. Training should explicitly teach skepticism and the essential skill of cross-verifying AI-generated content.
- Stringent standards for disclosing AI use must become mandatory in publications and court submissions, akin to conflict-of-interest disclosures. Such transparency will foster accountability and caution.
- The AI industry must develop and enforce verification tools that proactively flag fabricated citations or case law, integrating ethical oversight directly into the technological framework. (A minimal sketch of such a citation check appears after this list.)
- We must advocate for clear terminology, distinguishing neurological “hallucinations” from intentional AI “fabrications,” maintaining both scientific integrity and ethical clarity.
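By way of illustration, the kind of automated check the third recommendation calls for need not be elaborate. The following is a minimal sketch, assuming the suspect reference carries a PubMed ID (the PMID below is hypothetical); it queries the public NCBI E-utilities esummary endpoint, while a production tool would also need to handle DOIs, case citations, rate limits, and richer metadata matching.

```python
# Minimal sketch of a citation-existence check against PubMed.
# Assumptions: the reference includes a PMID (hypothetical value below),
# and network access to the public NCBI E-utilities service is available.
import requests

ESUMMARY_URL = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esummary.fcgi"

def pmid_exists(pmid: str) -> bool:
    """Return True if PubMed has a record for this PMID."""
    resp = requests.get(
        ESUMMARY_URL,
        params={"db": "pubmed", "id": pmid, "retmode": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    entry = resp.json().get("result", {}).get(pmid, {})
    # E-utilities returns an "error" field for IDs it cannot find.
    return bool(entry) and "error" not in entry

if __name__ == "__main__":
    suspect_pmid = "12345678"  # hypothetical PMID lifted from an AI-generated reference
    if pmid_exists(suspect_pmid):
        print("Record found in PubMed; still verify title, authors, and journal.")
    else:
        print("No PubMed record; treat the citation as fabricated until proven otherwise.")
```

A check this simple catches only outright nonexistence; confirming that a real paper actually supports the claim attached to it still requires a human reader.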
Our trust in medicine and law has always hinged on accuracy and ethical standards. AI, for all its promise, currently challenges both. The damage already inflicted demands an immediate response. We cannot passively accept AI-generated fabrications under the comforting illusion that they are harmless “hallucinations.” Real hallucinations cause genuine human suffering; fabricated evidence leads directly to real-world harm and injustice.
We stand at an ethical crossroads. We must actively decide whether we are willing to surrender critical standards of truth and accountability to convenience and technological expediency. As medical professionals sworn to uphold truth and justice, we must resist this erosion vigorously. Let’s reclaim our responsibility. Let’s clearly distinguish hallucinations from fabrications. Let’s remain vigilant guardians of the truth in the age of AI. Our patients, our clients, and our professions depend on it.
Muhamad Aly Rifai is a nationally recognized psychiatrist, internist, and addiction medicine specialist based in the Greater Lehigh Valley, Pennsylvania. He is the founder, CEO, and chief medical officer of Blue Mountain Psychiatry, a leading multidisciplinary practice known for innovative approaches to mental health, addiction treatment, and integrated care. Dr. Rifai currently holds the prestigious Lehigh Valley Endowed Chair of Addiction Medicine, reflecting his leadership in advancing evidence-based treatments for substance use disorders.
Board-certified in psychiatry, internal medicine, addiction medicine, and consultation-liaison (psychosomatic) psychiatry, Dr. Rifai is a fellow of the American College of Physicians (FACP), the American Psychiatric Association (FAPA), and the Academy of Consultation-Liaison Psychiatry (FACLP). He is also a former president of the Lehigh Valley Psychiatric Society, where he championed access to community-based psychiatric care and physician advocacy.
A thought leader in telepsychiatry, ketamine treatment, and the intersection of medicine and mental health, Dr. Rifai frequently writes and speaks on physician justice, federal health care policy, and the ethical use of digital psychiatry.
You can learn more about Dr. Rifai through his Wikipedia page, connect with him on LinkedIn, X (formerly Twitter), Facebook, or subscribe to his YouTube channel. His podcast, The Virtual Psychiatrist, offers deeper insights into topics at the intersection of mental health and medicine. Explore all of Dr. Rifai’s platforms and resources via his Linktree.