When I was sued by a prominent medical malpractice plaintiff attorney, I opened the firm’s website. Despite the lawyer’s prominence, the website uncharacteristically solicited medical malpractice cases, claiming, “If you have a phone, you have a lawyer.” We all know that it takes more than a phone; it takes a case. What I have learned since that day is a lesson for every practicing physician, because each, like it or not, has an 8.5 percent chance per year of being sued.
Every year, an untold number of alleged medical malpractice claims are reviewed by plaintiff attorneys. Some involve medical errors, and some involve errors of nature. Those that are errors of nature are frivolous. The medical malpractice landscape illustrates how inefficient malpractice litigation is. To file a medical malpractice lawsuit, a lawyer needs only a certificate of merit from a medical expert, who is one of our colleagues. It no longer matters which claims are errors of nature and which are medical errors. It boils down to one fact: neither lawyers nor the medical experts they retain can distinguish a meritorious claim from a frivolous one. It takes only fifty percent confidence plus a scintilla to allege medical malpractice. The cost of this inefficiency is $56 billion per year.
Nevertheless, once the game begins, frivolous or not, there is a sixty-five percent chance that a claim is dropped during discovery, perhaps with prejudice, and a thirty-one percent chance that it is settled. One in 3,000 lawsuits goes to court, and sixty-six percent of these are lost. Filing a frivolous claim is expensive.
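Taken at face value, the figures above can be combined into a quick back-of-the-envelope calculation showing why a claim with little merit is a poor bet for a plaintiff firm. This is a sketch only; the percentages are the article’s, and the variable names are mine:

```python
# Illustrative only: the article's figures, taken at face value.
p_dropped = 0.65        # chance a filed claim is dropped during discovery
p_settled = 0.31        # chance a filed claim is settled
p_trial = 1 / 3000      # chance a lawsuit reaches a courtroom, per the article
p_lost_at_trial = 0.66  # of tried cases, the fraction the plaintiff loses

# Chance a filed claim is ultimately won in court
p_won_at_trial = p_trial * (1 - p_lost_at_trial)
print(f"Chance of a courtroom win per filed claim: {p_won_at_trial:.5%}")
```

On these numbers, fewer than one filed claim in eight thousand ends in a courtroom win, which is the expense the article describes.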
This leads me to wonder how plaintiff attorneys decide which claims to even consider representing and which to reject outright to avoid incurring this expense. They must have a method. Indeed, they do: they use AI platforms such as LawDroid and Darrow AI, to name just two.
I submit the following actual case summary for review by the artificial intelligence in Google search:
A 16-year-old Black female conceives in Liberia. At 17 weeks of gestation, she immigrates to the United States. Prenatal care begins at 23 weeks. A screening sonogram is normal, and she is diagnosed with chlamydia. At 25 weeks, she develops pre-eclampsia. She is admitted to a hospital known for its expertise in high-risk obstetrics. A sonogram is consistent with oligohydramnios and a small-for-gestational-age fetus. Doctors recommend a cesarean section and advise that the fetus probably has medical problems due to pre-existing pregnancy complications and prematurity. The patient repeatedly refuses a C-section for fetal indications but not for maternal indications. Fetal monitoring detects fetal heart decelerations. Because there are no circumstances under which a C-section will be performed for a fetal indication, hospital policy is to discontinue electronic fetal monitoring. Following induction of labor, there is a vaginal delivery of a 670-gram female infant. The mother recovers from pre-eclampsia, and the newborn is diagnosed with brain damage, microcephaly, and cerebral palsy.
Is there a preponderance of evidence to conclude medical malpractice? AI concludes there is. However, it attaches a caveat: “Consulting with an attorney specializing in birth injuries would be the best next step to assess the specific details of the case, including applicable laws and regulations in the relevant jurisdiction.”
Even this general-purpose AI tells the story. There is no telling what an AI designed for lawyers, by lawyers, might tell.
This case is Byrom v. Johns Hopkins Bayview Medical Center. On July 1, 2019, a Maryland jury returned the largest plaintiff verdict in U.S. history, $229.6 million.
I have a method, which I call “CCC+C”: Collate, Compare, Calculate, and Certify, not to be confused with the “4 Cs.” CCC+C is a decision-making tool. It incorporates a forensic technique called ACE+V, statistical analysis, the law, and the Hippocratic Oath, which are decision-making tools in their own right. CCC+C uses hypothesis testing and the Hippocratic Oath, which are principles of medicine, to determine duty, breach of duty, harm, and proximate cause, which are principles of law.
Just as ACE+V determines identity using analysis of a fingerprint, CCC+C distinguishes a medical error from an error of nature using analysis of a complication. However, CCC+C does so with ninety-five percent confidence, whereas ACE+V is qualitative.
I use CCC+C to analyze this very same case. The null hypothesis is that “there is no statistically significant difference between the medical intervention in question and the standard of care.” The null hypothesis is retained. The brain damage, microcephaly, and cerebral palsy in the newborn were beyond the control of any of the involved doctors, and there is no medical malpractice. This is determined with ninety-five percent confidence, without equivocation and without the need for a caveat.
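The article does not describe the statistical machinery inside CCC+C, so the following is only a hedged sketch of the kind of test it implies: a two-proportion z-test comparing the complication rate under the intervention in question against a standard-of-care benchmark, retaining the null hypothesis unless the difference is significant at ninety-five percent confidence. The counts, the function name, and the choice of test are my assumptions, not the author’s method.

```python
from math import sqrt, erf

def z_test_two_proportions(x1, n1, x2, n2):
    """Two-sided two-proportion z-test; returns (z, p_value)."""
    p1, p2 = x1 / n1, x2 / n2
    p_pool = (x1 + x2) / (n1 + n2)  # pooled proportion under the null
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Normal CDF via erf: Phi(z) = 0.5 * (1 + erf(z / sqrt(2)))
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: complications observed under the intervention in
# question vs. complications expected under the standard of care.
z, p = z_test_two_proportions(12, 100, 10, 100)
alpha = 0.05  # ninety-five percent confidence
verdict = ("reject null (medical error suspected)"
           if p < alpha else "retain null (error of nature)")
print(f"z = {z:.3f}, p = {p:.3f}: {verdict}")
```

With these hypothetical counts the difference is not significant, so the null is retained, which is the outcome the analysis above reaches for this case.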
Furthermore, a court agrees. On February 2, 2021, the Maryland Court of Special Appeals overturned the verdict, stating: “… Because we conclude that the evidence presented at trial was not sufficient to support findings of either negligent treatment or breach of informed consent … we reverse the judgments.”
I am certain that most plaintiff attorneys firmly believe they never represent a frivolous medical malpractice lawsuit; however, if none of them can distinguish a medical error from an error of nature with ninety-five percent confidence, how do they really know? A case without merit is frivolous, no matter what they believe. As a countermeasure, CCC+C scares the bejesus out of plaintiff attorneys, and if it catches on, so it should.
Howard Smith is an obstetrics-gynecology physician.