AI isn’t hallucinating, it’s fabricating—and that’s a problem [PODCAST]

The Podcast by KevinMD
August 17, 2025

Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

Psychiatrist, internist, and addiction medicine specialist Muhamad Aly Rifai discusses his article, “In medicine and law, professions that society relies upon for accuracy.” He argues that labeling AI errors as “hallucinations” is a dangerous euphemism that trivializes real psychiatric conditions and downplays the serious threat these errors pose to professions built on trust. He insists on using the term “fabrications” to accurately describe the plausible-sounding but often entirely false information generated by large language models. Citing alarming examples, including a study where 47 percent of AI-generated medical citations were fake and a legal case built on invented precedents, Muhamad explains how these fabrications directly threaten patient safety and justice. With no clear accountability for algorithmic errors, he calls for urgent action, including rigorous education on AI’s limitations, mandatory disclosure of its use, and a commitment to terminology that reflects the ethical gravity of the problem.

Careers by KevinMD is your gateway to health care success. We connect you with real-time, exclusive resources like job boards, news updates, and salary insights, all tailored for health care professionals. With expertise in uniting top talent and leading employers across the nation’s largest health care hiring network, we’re your partner in shaping health care’s future. Fulfill your health care journey at KevinMD.com/careers.

VISIT SPONSOR → https://kevinmd.com/careers

Discovering disability insurance? Pattern understands your concerns. Over 20,000 doctors trust us for straightforward, affordable coverage. We handle everything from quotes to paperwork. Say goodbye to insurance stress – visit Pattern today at KevinMD.com/pattern.

VISIT SPONSOR → https://kevinmd.com/pattern

SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast

RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today we welcome back Muhamad Aly Rifai, psychiatrist and internist. Today’s KevinMD article is “In medicine and law: professions society relies upon for accuracy.” Muhamad, welcome back to the show.

Muhamad Aly Rifai: Thank you very much for having me to talk about this timely topic on accuracy and integrity in medicine and law.

Kevin Pho: All right, so tell us what this article is about.

Muhamad Aly Rifai: In this article, I write about how integrity and trust are foundational now in our society with all of this news that sometimes is not trustworthy. Sometimes things are not accurate, and trust is really under assault. We see that in the field of medicine, where there are a lot of things that are questionable.

There are a lot of things that are being reevaluated. I talk specifically about my own field, psychiatry, which is being uprooted completely. We are questioning whether antidepressants work, whether other medications work, and our foundational ideas about the pathogenesis of depression and anxiety.

We know, for example, that with Alzheimer’s dementia, some research has not been trustworthy, has been manipulated for many years, and has diverted our efforts in research and treatment. And in comes artificial intelligence, AI, which has created an even more significant crisis in terms of trust.

The media now calls these errors artificial intelligence hallucinations: AI hallucinations. Some have gone as far as calling them AI fabrications. These errors are dangerous, and we are seeing them on a regular basis in the fields of medicine and law.

Kevin Pho: So specifically in medicine, what are some examples of these fabrications or hallucinations that you are seeing?

Muhamad Aly Rifai: Sure. It’s quite interesting because AI just burst onto the scene, and we still have little understanding of how it works. We call them large language models. It’s a computer trying to mimic us humans, generating a product in response to our demands and our prompts. But we fail to realize that what it actually does is mimic our behavior.

And we humans invariably make false statements and incorrect statements sometimes. Sometimes we intentionally lie. In my field of psychiatry, I experienced individuals, for example, who have schizophrenia, who experience hallucinations, who have misperceptions of reality.

And even though that’s very rare, that kind of bled into the large language models. We’ve also seen that the inability to control artificial intelligence has really escalated that. Now, when you purchase or engage an AI model, they basically give you the disclaimer, “Oh, this AI is 90 percent hallucination-free.” That shouldn’t be the issue.

In medicine specifically, there was a recent event, after I wrote the article, where the Department of Health and Human Services had to retract a position statement because several of the papers it referenced were argued to be AI-hallucinated. The experiment I referenced in the article involved a schizophrenia researcher who wanted a question about schizophrenia research answered by ChatGPT, the most prominent large language model available. He was surprised that of the five scientific article references he requested, almost all were fabricated.

Two didn’t even exist. Two were actual papers, but the artificial intelligence model gave him an answer that said something different from what the referenced paper said. And one was a complete fabrication out of the blue.

So it’s quite interesting that we are seeing that in the field of medicine. Now we are seeing papers that have hallucinated scientific references, and that is bleeding even into position statements from the U.S. Department of Health and Human Services, where we saw that they actually retracted a position statement because it had AI-hallucinated scientific references.

Kevin Pho: So you’re seeing it, of course, in the areas of medical research where these citations are fabricated or hallucinated. Are you seeing your fellow colleagues perhaps going to ChatGPT and looking up medical information and getting wrong information back? Or are you seeing patients also using these large language models to ask questions and again, getting hallucinations or false information back?

Muhamad Aly Rifai: I’m actually seeing both. At least once or twice a day, I will see a patient who tells me they consulted ChatGPT about what is, undoubtedly, a complex medical problem, particularly since I deal with treatment-resistant depression.

And invariably they consult ChatGPT. Importantly, over the last three months ChatGPT has actually added a disclaimer: “Please check your information. ChatGPT may give you wrong answers.” That disclaimer was just added. So they recognize that they do not want liability for a patient going to ChatGPT, asking about a treatment, requesting it, getting it, and then trying to hold ChatGPT legally liable.

I’m also seeing colleagues experiencing this in the production of papers, in finding references, and in going to it for answers. If somebody consults ChatGPT on a complex issue, it is very important to actually check the references that ChatGPT is relying on.

Usually, if it’s a settled matter and ChatGPT is giving you, for example, a position statement from a professional organization, the answer is correct most of the time, but you still have to check. Sometimes it will add to or editorialize on the material and give you the wrong answer or the wrong conclusion: even though it listed the raw information correctly, it draws the wrong conclusion for the question.
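
[Editor’s note: To make the “check the references” step concrete, here is a minimal sketch of one way an end user might verify that a DOI cited by a language model actually resolves, using the public Crossref REST API. The function name, workflow, and example DOIs are illustrative assumptions, not a tool mentioned in this episode.]

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if the DOI resolves to a real record in Crossref.

    This only confirms that a work with this DOI exists; it does not
    confirm that the citation's title, authors, or claims match it.
    """
    resp = requests.get(
        f"https://api.crossref.org/works/{doi}",
        # Crossref asks clients to identify themselves; address is a placeholder.
        headers={"User-Agent": "citation-check/0.1 (mailto:you@example.org)"},
        timeout=10,
    )
    return resp.status_code == 200

# Hypothetical usage: check every DOI a model attached to its answer.
cited_dois = ["10.1000/fake-doi-from-llm", "10.1056/NEJMoa2034577"]
for doi in cited_dois:
    print(doi, "found" if doi_exists(doi) else "NOT FOUND - verify manually")
```

A check like this only catches references that do not exist at all; real papers that are misrepresented, like the ones described above, still require reading the source.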

Kevin Pho: In your article, you talk about an analogy between filling informational voids and the brain’s response to sensory loss. So talk more about that analogy from your perspective as a psychiatrist.

Muhamad Aly Rifai: Sure. I encountered that, actually, in a patient of mine—and there is literature on this—who was diagnosed with schizophrenia, though I doubt he has schizophrenia. He had hearing loss in childhood, at age two, and he received cochlear implants. The phenomenon is well known: individuals who have cochlear implants will experience hallucinatory experiences, or fabricated sensory input from the implant, because the implant’s software thinks there is a sound or a voice in the environment when there is actually no stimulus.

And so invariably, individuals who have cochlear implants will hear conversations in the background even when it is quiet and nothing is going on. It is the cochlear implant’s software trying to fill the void: it assumes it cannot simply be silence, that something must be going on, and so it hallucinates, or fabricates, some noise.

This patient was hospitalized psychiatrically many times before we figured out that this is a phenomenon of the cochlear implant. I talked with an audiologist and with his ENT, who worked on the cochlear implant, and they are trying to see whether any software adjustments can reduce this phenomenon.

So we are seeing that. Large language models—ChatGPT, Grok, Claude—are mimicking what humans do. They are not coming up with anything new. They are not making any discoveries. They are not like Newton or Einstein coming up with novel ideas. They are just mimicking our behavior, and sometimes humans lie.

Kevin Pho: So tell us about the path forward in terms of accountability. Clearly, these AI companies don’t want accountability for that, hence the disclaimers. So is the responsibility shifted more towards the end user in terms of being careful themselves with what they’re asking these ChatGPT models?

Muhamad Aly Rifai: Sure. Absolutely, the responsibility has shifted to the end user. There is the example I put in the article, and another example from the field of law, where lawyers filed a brief with the court, and it turned out that the brief contained hallucinated case references.

We’ve also seen another case recently where a lawyer submitted a brief that had AI-hallucinated case references. The court, without even checking, put out an order that cited the AI-hallucinated cases. It was only after the opposing attorney pointed out that these cases were hallucinated that the lawyer was sanctioned by the court and had to pay $2,500 for unprofessional conduct.

Now, there are federal district courts and state courts that have issued blanket statements about the use of large language models—AI—essentially barring the use of AI in court filings.

So it’s very important. It has shifted to the end user. And I can tell you, for example, U.S. physicians use AI for medical dictation, and invariably the AI is going to say something wrong. Invariably, there is going to be a malpractice lawsuit in which text inserted by the AI—something that was never actually said—comes up, and somebody is going to pay the price for that. So it has shifted to the end user, and the companies that make these AI models are just washing their hands of any liability.

Kevin Pho: Now, from a medical standpoint, what kind of advice do you have for physicians when they use these large language models in clinical practice or research? What kind of guidelines can you share?

Muhamad Aly Rifai: I think we need to develop stringent standards for disclosing that AI was used. Anybody who uses AI in any scientific production or article should disclose that AI was used in that article. I know, for example, that pictures published on the internet or on other platforms now carry an AI disclaimer saying the image was generated by AI. The industry also must participate.

They have to implement verification tools that flag fabricated citations—scientific articles and legal citations alike. They have to work actively to see how they can curb this phenomenon, because it has proliferated significantly across the well-known large language models that are available.

They also have to put a measurement on it. Just as we do with contaminated water or air quality, the AI companies have to publish disclosures: “OK, our AI now produces maybe 10 percent hallucinations.” You have to be careful with that figure, and it needs to be verified.

We also really have to advocate for clear terminology, distinguishing hallucinations from fabrications, and see how we can stem this phenomenon. Otherwise, those fabrications, hallucinations, and inaccuracies are going to invade our fields of medicine and law and lead to very bad consequences for our patients, for clients in the field of law, and for litigants. And there is going to be litigation coming pretty soon.

Kevin Pho: Now, as you know, the speed of innovation when it comes to AI has been exponential because the models that we’re using now are just so much better than when they first came out. You have these reasoning models that are passing the most difficult tests possible, and the benchmarks are only going to get better and better. Are we going to come to a point where we can stop worrying about hallucinations and trust what’s coming out of the AI, given how fast the improvement pace is?

Muhamad Aly Rifai: I don’t think so. I think that we really stand at an ethical crossroads. We must actively decide whether we are willing to surrender these critical standards of truth and accountability to convenience and technological expediency. We really need to put our foot down, for example, as medical professionals. We are sworn to uphold truth and justice, so we must resist that erosion.

We cannot simply trust, after we have seen that these models are capable of fabrication and hallucination, a claim that says, “OK, no, we fixed it.” It has to actually be fixed, because patients, clients in the field of law, and our professions really depend on that. They depend on us standing up and saying that we cannot continue to experience this. So the companies that make these models have to work actively on trying to stem this phenomenon.

Kevin Pho: We’re talking with Muhamad Aly Rifai, internist and psychiatrist. Today’s KevinMD article is “In medicine and law: professions society relies upon for accuracy.” Muhamad, let’s end with key messages that you want to leave with the KevinMD audience.

Muhamad Aly Rifai: Artificial intelligence and large language models are wonderful tools that aid us as humans and can be extremely productive. However, in the fields of medicine and law, we have to be very careful because these tools sometimes create fabrications, hallucinations, and misperceptions, and we have to advocate for truth and integrity in our fields.

So I advocate for standards to be created by the companies, by professional societies, by scientific journals, and by courts to regulate the utilization of artificial intelligence in those fields.

Kevin Pho: Muhamad, as always, thank you so much for sharing your perspective and insight, and thanks again for coming back on the show.

Muhamad Aly Rifai: Thank you very much for having me.

Tagged as: Health IT
