Speech recognition software is an important part of my clinic workflow. I use the industry-leading application, which has saved me at least 1,000 hours of documentation time over the last decade. My typing is much slower than my speaking, and since my goal is to leave the office at a reasonable hour every day, using it is the obvious choice for me.
But there is always a trade-off when using fancy new technologies. Like poor Dr. Bean in the comic strip above, I frequently find transcription errors in my chart reviews. Here is one that I found this week:
“He is continuing to take gabapentin 903 times a day …”
Someone who actually took gabapentin 903 times a day would receive a total daily dose of 90,300 mg, assuming 100 mg tabs. That amount is enough to render you unconscious long before the 903rd dose. But the software doesn’t know that. All it knows is that the speaker said, “nine hundred three times a day.” The text is absurd if you read it literally, but if you know medicine, you can probably figure out what the clinician meant from the context.
A few years ago, I started collecting these transcription errors for my own amusement. This one might be my favorite:
“Thank you for allowing us to produce pain in the care of this patient.”
This was from a neurosurgery consult note after the resident who dictated it had played hot potato with my service about who would admit the patient. He actually said, “Thank you for allowing us to participate in the care of this patient,” but voice recognition stepped in as a truth serum translator.
Here are several more from my file:
“Dr. X was insulted and he requested to see the patient as an outpatient.”
“I spent several months with the patient discussing this plan.”
“It is my opinion that he currently lacks the cognitive skills necessary to participate injury duty, and I recommend that he be excused from this responsibility.”
“Patient does have some essential tremor trick-or-treating with primidone.”
“We talked for a long time about her rug gnosis”
“Her left patella just has been monitoring the vagal nerve stimulator …”
“He also complains of some ‘bumps’ on his head near the crown and at the Indian.”
“To my knowledge patient has had no thanks for your thoughts”
“He and his heart and her were agreeable with this plan and I did manifest answer their questions.”
“She has never lived in the cerebral country.”
“We talked about ways to redirect his a beer when he is agitated or confused.”
“It feels like his years need to pop.”
“She is having some behavioral side effects from this medication, so we will try her on pirate docs seen to see if these reduce.”
“She has had some episodes of leaving the water on a sink, and her husband’s cottages in time to up a flight from happening.”
“Everyone’s mother can get her to walk outside a little bit.”
“We discussed her options for management, including repeating diagnostic tests and adjusting vacation doses.”
(Note: Can I increase my vacation doses?)
My kids find nonsense like this hilarious. Most of these errors are pretty benign, but every once in a while I come across something genuinely embarrassing or offensive, like this:
“This is a X-year old illogical female who identifies as male …”
I actually said “biological,” not “illogical,” and I had no idea the error was in my note until I saw the patient for a follow-up visit. It is not my practice to disparage transgender people in my clinic notes or anywhere else. Thankfully, the patient accepted my apology when I explained what had happened, but can you imagine trying to explain this to a Twitter mob?
To think that such statements can be found in notes written by some of the most brilliant, competent, hard-working, and well-educated people in our society is sobering when it isn’t funny. Healthcare providers are literate, often incredibly articulate, with legendary attention to detail. So how do we manage to say such stupid things in our chart notes, usually without even noticing that we have done it?
A review by Poder et al. (2018) highlighted some of the trade-offs between human transcription services and speech recognition software. The major benefit of speech recognition is a dramatically shorter turnaround time for note writing, but it comes at a cost: a rate of major errors up to three times higher than with human transcription, and more clinician time spent proofreading transcripts in real time. I think that last point largely explains the error rate. When you place a greater burden on an already burned-out workforce, you can expect that burden to be borne poorly.
I once read an ED note that contained nine errors within ten run-on sentences and then closed with this disclaimer:
“This note was created using voice recognition software and may contain some technical errors, but every effort has been made to ensure its accuracy.”
I’m a little skeptical of the “every effort” claim, but I can totally sympathize.
My purpose here is not to say we shouldn’t use speech recognition software or to malign Nuance’s very impressive and incredibly useful application. I plan to continue using voice recognition daily, and my family will thank me for getting home at a sane hour every evening. But I hope we will all think more critically about our medical documentation tools, which are clinical tools like any other. Do the medicines we prescribe and the procedures we perform carry risk? Absolutely! But we know those risks and weigh them carefully when making clinical decisions. The same is true of my documentation tools. I understand the trade-offs involved, and I accept them.
Speech recognition is error-prone and a bit labor-intensive, but it quickly gets the job done. And it makes me laugh more often than the other options, which itself can be rather valuable.
The author is an anonymous physician and can be reached at Coalmine Hospital Comics.