Late at night, after basketball games or practices, with the physical exhaustion of the court still settling in, I often find myself staring at my laptop. I am watching how long it takes to complete something that has nothing to do with sports, or even actual patient care. It is documentation. Coding. Forms. Insurance language that seems designed less for clinical clarity than for mere survival within a bureaucratic system.
In Silicon Valley, where I am growing up, the default answer to this kind of systemic friction is always the same: Build an AI to fix it. Today, heavily funded startups are rapidly deploying ambient AI scribes to capture every word spoken at the bedside. Tech giants are pushing large language models to answer patient questions at any hour. The promise is seductive: better data, faster care, and the end of the documentation burnout I see on my screen.
But as someone watching this innovation culture up close, I have started asking a different question. Not whether these systems are accurate, but whether they are accountable. This distinction matters more than it might seem. Most current health care AI operates probabilistically. When the output is right, it is useful. When it is wrong, the standard disclaimers appear: “This is guidance, not diagnosis.” But in the real world, patients act on these outputs. Physicians incorporate them into their workflows. The line between a suggestion and a clinical decision blurs rapidly.
At that point, accuracy is no longer the central problem. Accountability is.
In Where It Hurts: Dispatches from the Emotional Frontlines of Medicine, a collection of candid essays from more than 60 doctors, nurses, and healers, clinicians describe the profound moral injury and frustration of their daily practice. Reflecting on their stories, I came to see this pain not as the product of bad intentions, but of a structural flaw: Today’s health care architectures often feel designed primarily for data extraction and efficiency rather than for human responsibility, never built to hold anyone accountable for the human context that gets lost in the drop-down menus.
Health care AI risks repeating this exact pattern, only faster.
The deeper problem is structural. When an AI system generates a clinical recommendation and that recommendation leads to harm, the question of responsibility becomes genuinely opaque. Was it a hallucination in the algorithm? A bias in the training data? The physician who implicitly trusted the summary? The hospital that bought the software? Current generative systems are simply not designed to make this traceable.
What we need is a fundamental shift in how we architect AI for medicine: a move from generation to constraint.
Instead of allowing systems to produce outputs directly from raw clinical inputs, we should require that outputs be grounded in a structured, verifiable representation of the underlying context. If an AI’s recommendation cannot be traced back to the actual patient narrative, it should not be released in its final form.
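To make that concrete, here is a minimal sketch in Python of what a constraint-first check could look like. Everything in it is hypothetical: the Claim and Recommendation types, the is_grounded function, and the sample record are illustrations of the principle, not the design of any real product.

```python
from dataclasses import dataclass

# A hypothetical sketch of "generation under constraint": every claim in an
# AI recommendation must cite a verbatim span of the source patient record,
# or the recommendation is blocked before it ever reaches a clinician.

@dataclass
class Claim:
    text: str          # the assertion the AI is making
    source_span: str   # verbatim excerpt from the record that supports it

@dataclass
class Recommendation:
    summary: str
    claims: list[Claim]

def is_grounded(rec: Recommendation, patient_record: str) -> bool:
    """Accept a recommendation only if every claim traces back to the record."""
    return all(claim.source_span in patient_record for claim in rec.claims)

record = "HPI: 58 y/o with chest pain x3 days, worse on exertion. No fever."
rec = Recommendation(
    summary="Evaluate for stable angina.",
    claims=[Claim(text="Exertional chest pain for three days",
                  source_span="chest pain x3 days, worse on exertion")],
)

if is_grounded(rec, record):
    print("Recommendation released, with full provenance attached.")
else:
    print("Recommendation blocked: unsupported claim.")  # accountability by design
```

The design choice worth noticing is where the gate sits: the check runs before the output is shown, so an unsupported claim blocks release entirely rather than appearing alongside a disclaimer.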
This is not a radical idea. It is exactly how we expect human clinicians to operate. We ask physicians to show their reasoning. We require documentation. We hold providers accountable when decisions cannot be traced back to evidence. We should demand the exact same standard from the computational systems we are pouring billions into right now.
Silicon Valley is already building the future of medical tools. But as founders and investors race to deploy them, the question is whether we are building them with accountability designed into the core architecture, or just slapping it on as a legal disclaimer at the end. The answer matters most to the patients whose lives will be shaped by what these systems say. In a region that prides itself on moving fast, the most important innovation we can make in health care right now is not a faster algorithm. It is a more trustworthy one.
Ian Hu is a high school student and writer. Pao Hsuan Huang is a licensed acupuncturist.