As a psychiatry resident, I have been closely following the proliferation of AI chatbots for therapy. The moment feels like the ground shifting beneath my feet, and I have only just started my career. In my clinical practice I feel unprepared to discuss how these new technologies are affecting my patients’ lives, but from the national media coverage I am certain that a substantial number of my patients are using generative AI for any number of reasons, including therapy.

On August 1, 2025, Governor JB Pritzker signed HB1806, the Wellness and Oversight for Psychological Resources Act, banning AI therapy in Illinois. The bill’s purpose is to protect consumers from “unlicensed or unqualified providers, including unregulated artificial intelligence systems.” The legislation appears to be a necessary reaction to emerging stories of significant harm done by AI “therapy” chatbots, and it is especially timely and pressing given survey data from earlier this year suggesting that ChatGPT may be the largest provider of mental health support in the United States. The bill prohibits AI systems from making therapeutic decisions, directly interacting with a client in any form of therapeutic communication, or generating therapeutic treatment plans without review by a licensed professional. It further specifies that AI systems may be used by a licensed professional only for administrative or supportive functions such as scheduling, billing, or preparing therapy notes. But there is a striking omission from the bill’s list of licensed professionals.

Except for a physician? I had to read that several times: physicians, notably including psychiatrists, are exempt from legislation that is ostensibly protecting patients from unlicensed and unsupervised AI therapy. The reason physicians were left off this list has little to do with AI therapy chatbots. Instead, it concerns ambient listening technologies and informed consent.
Please direct your attention to Section 15, subsection (b), of HB1806, “Permitted use of artificial intelligence.”

The law states that there must be a written informed consent process for the use of ambient listening technologies. For those unfamiliar with the term, ambient listening tools are software that records audio of a clinical encounter and employs generative AI to summarize the encounter, almost immediately, in note form. DAX Copilot, produced by the Microsoft subsidiary Nuance, is perhaps the best-known and most popular ambient listening platform in the United States. These tools are often embedded directly into the electronic health record, produce nearly instant documentation, and are being deployed broadly across U.S. health care. Under HB1806, this is a permissible use of generative AI because it is a supplementary, not therapeutic, service provided to the client. So, again: why would doctors be exempt from obtaining informed consent when the clinical interview is being recorded, transcribed, analyzed, and restated by an AI system? It appears that the state medical society objected to physicians being included in the legislation. This exemption exposes patients and doctors to potential harm and liability associated with ambient listening.
This excellent commentary in JAMA maps the ethical and legal challenges for physicians and health systems using ambient listening technology. The authors begin by highlighting that the generative AI underlying ambient listening can still produce plausible but incorrect information, commonly known as a “hallucination.” Monitoring for these errors requires vigilance from the physicians using these tools, which is a challenge because the technology’s appeal is precisely that it makes documentation frictionless. Automation bias, or blindly placing trust in AI, is a further risk: a physician who increasingly relies on these tools for productivity may miss an error produced by an ambient system. The authors underscore that the medical record is “invariably the single most critical and probative piece of evidence in a malpractice case.” Discrepancies between the ambient listening recording, the AI-generated transcription, and the final note signed by the physician create a “shadow record” of a single encounter. Errors, omissions, or changes among these multiple records may pose liability risk to physicians, who are generally held responsible for accurate documentation.
The sponsors and authors of Illinois HB1806 took decisive action to protect their communities from unlicensed and unsupervised AI “therapy,” but they missed the opportunity to draw a bright legal line around ambient listening and informed consent for doctors. Patients and doctors deserve to know who, or what, is listening to a clinic visit, and for what purpose. Informed consent requires that physicians have a detailed understanding of the risks, benefits, and alternatives of ambient listening so that we can meaningfully explain these new and arguably experimental technologies to our patients. It is disappointing that physicians in Illinois are exempt from this new legal standard. The exemption does not seem to serve patients’ or physicians’ interests, and the patients and physicians of Illinois deserve an explanation.
Davis Chambers is a psychiatry resident.



