A colleague recently reacted to the idea of physician-built AI with a kind of weary cynicism: AI glitches. All AI glitches. I understood what he meant. Many physicians have already lived through one wave of technology that promised efficiency and delivered frustration. We were told electronic medical records would improve care, streamline communication, and modernize medicine. In some ways, they did. But they also inserted screens into the exam room, buried clinical thinking under clicks, and too often turned physicians into data-entry clerks inside systems designed more for billing and compliance than for bedside care. Today, physicians spend nearly half their day in the EHR, and primary care physicians spend another 2.7 hours of personal time outside scheduled patient care doing EHR work. So when doctors are skeptical about AI, I do not see ignorance. I see pattern recognition. But skepticism alone will not protect us. If anything, the lesson of the EMR era is this: When physicians do not shape technology early, we inherit technology shaped around someone else’s priorities. That is why this AI moment matters so much.
Medicine did not lose autonomy all at once. It lost it in phases. First to administrators. Then to insurers. Then to systems built around throughput, coding, prior authorization, compliance, and margin. At every step, physicians were told the changes were necessary, modern, and inevitable. At every step, the people making the decisions moved farther from the bedside, while the people absorbing the consequences stayed exactly where they had always been: in front of patients. That is how many physicians ended up in the position we occupy now, still responsible for outcomes, patient satisfaction, documentation, and liability, while having less and less authority over the systems shaping all of it. The numbers tell the story. In 2024, only 35.4 percent of physicians had an ownership stake in their practice, down from 53.2 percent in 2012. Another analysis found that 77.6 percent of physicians are now employed by hospitals, health systems, or other corporate entities, while corporate entities owned 58.5 percent of physician practices in 2024. This is not just a business shift. It is a transfer of power. As medicine became more corporatized, the doctor-patient relationship was increasingly filtered through nonclinical structures: productivity targets, quality dashboards, insurer barriers, staffing shortages, and revenue-cycle logic. Physicians remained the face of care, but often ceased to be the people defining how care was delivered. Patients still saw the doctor as the responsible party. But the doctor increasingly had to answer for systems the doctor did not design.
That has had consequences. The AMA reported that 45.2 percent of physicians had at least one symptom of burnout in 2023, and only 54.5 percent said they felt valued by their organization. A large JAMA Network Open study found that about one-third of academic physicians reported moderate or greater intent to leave their institution within two years, with burnout among the strongest associated factors. Another national analysis estimated that physician burnout costs the U.S. about $4.6 billion each year through turnover and reduced clinical hours alone. Burnout is often framed as an individual wellness problem. It is not. It is a systems signal. Physicians are not burning out because patient care is meaningless. They are burning out because the work surrounding patient care has become increasingly misaligned with the purpose of medicine. One of the clearest examples is prior authorization. In the AMA’s 2024 survey, 94 percent of physicians said prior authorization delays necessary care, 93 percent said it harms clinical outcomes, and 29 percent said it had led to a serious adverse event for a patient in their care. That means physicians are often held responsible for harms created by barriers they do not control.
This is why AI cannot be treated as just another IT upgrade. AI is arriving in a profession that is already strained, already fragmented, and already wary of outside forces claiming they know how to “fix” medicine. Ambient scribes are here. Generative AI is moving into chart summarization, patient communication, triage support, administrative workflows, and decision support. Agentic AI will go further. It will not just generate text. It will increasingly coordinate actions, surface recommendations, and shape how work moves through the system. The question is not whether AI will affect medicine. It already is. The real question is who will govern it. If physicians do not lead AI adoption, implementation, and oversight, then health systems, vendors, consultants, insurers, and technology companies will define its role for us. That would be a historic mistake. We have already seen what happens when physicians surrender too much control over the infrastructure of care. We should not repeat that error at the intelligence layer. To be clear, physician leadership does not mean blind enthusiasm. It means disciplined engagement.
Physicians should demand evidence, clinical relevance, transparency, liability protections, and clear lines of accountability. The encouraging part is that many already are. In the AMA’s 2025 physician AI sentiment report, use of at least one AI use case rose from 38 percent in 2023 to 66 percent in 2024. Sixty-eight percent of physicians said AI offers at least some advantage for patient care. But they also insisted on safeguards: 74 percent said standard malpractice coverage for AI is important, 67 percent said physician oversight of implementation is important, and 58 percent said protection from liability for AI-related errors is important. That is exactly the right posture. Not resistance for its own sake. Not hype for its own sake. Physician-led governance. This matters especially for cognitive and nonprocedural specialties. Internal medicine, family medicine, hospital medicine, geriatrics, psychiatry, endocrinology, infectious disease: these fields depend on synthesis, ambiguity management, communication, and judgment. They are vulnerable not because they lack value, but because outsiders often underestimate the complexity of what they do. AI will move aggressively into those domains. If physicians in those specialties do not help define the terms of adoption, others may mistake deep clinical reasoning for a workflow that can simply be automated. That would not only be wrong. It would be dangerous.
William Osler’s line still captures what medicine must not lose: “The great physician treats the patient who has the disease.” Francis Peabody said it even more plainly: “The secret of the care of the patient is in caring for the patient.” AI should protect that human core, not erode it. The U.S. is already facing a projected physician shortage of up to 86,000 physicians by 2036. AI is arriving not in a comfortable system, but in a stressed one. That means it will become infrastructure quickly. And infrastructure determines power. EMRs were the warning. AI is the test. Physicians can either help shape this next era of medicine, or once again wake up inside a system redesigned by everyone except the people who care for patients.
Augusta Uwah is an internal medicine physician.