
Gradually, then suddenly: Dr. Robert Wachter on health care’s giant AI leap [PODCAST]

The Podcast by KevinMD
Podcast
April 22, 2026

Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

What if the biggest problem with electronic health records was not the technology itself, but that we expected it to transform medicine when it could only lay the foundation? Robert Wachter, professor and chair of the Department of Medicine at the University of California, San Francisco, joins the show to discuss his book, A Giant Leap: How AI Is Transforming Healthcare and What That Means for Our Future. He explains why AI is the first technology that replicates what doctors thought only they could do, from diagnosing complex cases to demonstrating empathy. You will hear how OpenEvidence dethroned UpToDate as the go-to clinical knowledge tool, why AI scribes went from experiment to expectation in just two years, and what the Waymo model of incremental trust teaches us about avoiding a catastrophic setback in medical AI. Wachter also explores the deskilling debate in medical education, why the doctor-patient relationship may not be as irreplaceable as physicians believe, and how primary care could look radically different within a decade. If you are trying to understand where AI in health care is headed and what it means for your career and your patients, this is the conversation to hear.

Partner with me on the KevinMD platform. With over three million monthly readers and half a million social media followers, I give you direct access to the doctors and patients who matter most. Whether you need a sponsored article, email campaign, video interview, or a spot right here on the podcast, I offer the trusted space your brand deserves to be heard. Let’s work together to tell your story.

PARTNER WITH KEVINMD → https://kevinmd.com/influencer

SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast

RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today it is my pleasure to welcome Robert Wachter, professor and chair of the Department of Medicine at the University of California San Francisco, and we are going to talk about his new book, A Giant Leap: How AI Is Transforming Health Care, and what that means for our future. Bob, welcome to the show.

Robert Wachter: Thank you, Kevin. Great to see you.

Kevin Pho: I was thinking back, and you graciously wrote the foreword to my book. This was thirteen years ago, back in 2013, about online reputation and social media. I was thinking about all the technological changes. Change is too small a word for these momentous changes in medicine over the years. It could be the EHR, social media, mobile devices, and, of course, now we have artificial intelligence. When you reflect and think back between all of these digital waves, what feels fundamentally different about AI?

Robert Wachter: The humanness of it. The fact that it can replicate to a pretty astounding degree what doctors typically thought only we could do. I wrote a book about the transformation of medicine going from paper to digital. It changed a lot, some for the better, some for the worse, but in some ways it did not really fundamentally challenge what I do as a doctor. It was an adjunct. It helped that I got rid of doctor’s handwriting and I could send an e-prescription to Walgreens, but being a doctor still felt the same.

To a large extent, and this is a bad thing, being a patient felt pretty much the same. I think this is the first technology that I have ever seen that just fundamentally challenges what it means to be a patient, a doctor, a nurse, or a health care leader. That is why I use Hemingway’s old line in the book. When a character went bankrupt, one of the others asked how a man goes bankrupt. He said two ways: gradually and then suddenly. Everything up to now feels gradual. This feels sudden.

Kevin Pho: I feel that electronic medical records, even though they did not change our definition of being physicians, did change the day-to-day. You mentioned the humanness of artificial intelligence. Go into more detail. What do you mean by that?

Robert Wachter: It is the first technology that I have seen that essentially feels like you are talking with a human. I remember the early days of generative AI, three years ago. I would be watching and thinking it can pass the boards. That is impressive. But in some ways, it is not more impressive than when I watched Watson beat the Jeopardy champions in 2011. It can pass tests, and it knows things. But what about complex cases? That is what we do in medicine.

Then it does really well on the case records of the Mass General Hospital, the most complex cases. But how about relationships, communication, and empathy? Then it was clear it can do that well too. Of course it has no empathy, but it can fake empathy like the best doctor. On every measure of what we think of as being unique about what physicians do and their relationship with patients, we began to see that it is acting more like a human.

Like most of the doctors I now know, I use OpenEvidence to get my medical information. Until two years ago, I used UpToDate or I used Google. Why is OpenEvidence better? I could go into UpToDate and ask about the right dose of apixaban for a patient with pulmonary embolism. But I could not say I have an 82-year-old patient who comes in with a PE who also had a history of GI bleed three years ago, has a creatinine of 1.7, and weighs 243 pounds. I can say that to a large language model, and it gives me an answer that is not that different than the answer I might get if I was talking to a colleague. That is magical and new. That is different than anything we have ever had before.

Kevin Pho: Are they not having some type of large language model interface at UpToDate to replicate what OpenEvidence is already doing?

Robert Wachter: They are, they have built it, and they needed to because their lunch was getting eaten. Until two years ago, if I had gone to my residents and asked what they use as a knowledge tool, every single one would have said UpToDate. If I did that today, every single one would say OpenEvidence. UpToDate was a little asleep at the switch and then woke up one day and said we better build that kind of interface. Now I think it is an open question whether UpToDate can win back its audience. I think that is an interesting food fight.

What OpenEvidence does is you can put in a very doctor prompt, and it is essentially searching the medical literature and guidelines from respected societies to give you an answer. What UpToDate theoretically can do is search a curated chapter written by the world’s expert on a topic. You could argue that might be better because that expert will know that a study is a good study and another study is not a good study. Does OpenEvidence know that? That is a little trickier. On the other hand, the economic model for UpToDate is going to be tricky because they have to have human experts for one thousand topics who have to keep chapters up to date. They have to pay them. I do not know how that is going to play out, but UpToDate has some work to do to reclaim the mantle from OpenEvidence.

I will tell you one quick story in case you are having any sympathy for UpToDate getting disrupted in the classic way. When hospital medicine became a field, I wrote the first textbook of hospital medicine with two of my co-authors at UCSF. We put it out the minute UpToDate became a thing. I gave a copy to the residents and put it in the resident library, a little bookshelf in the resident room. Six months later, here was a book written by the chair of the department, the associate chair of the department, and the residency director. I went into the office and pulled it out. It had been there for six months, and I opened it up. It was clear to me that the book had not been opened before because UpToDate had just come out. I had spent two years writing this thing. UpToDate did to textbooks essentially what OpenEvidence did to UpToDate. UpToDate is going to need to scramble to reclaim their position.

Kevin Pho: We are going to talk about this later in terms of some of the tools that physicians are using. It is a fascinating horse race. You have OpenEvidence, UpToDate, and of course, Doximity really fighting for supremacy when it comes to the knowledge aspect of large language models. Thinking back to when ChatGPT first came out, November 30, 2022. There are times when you have a monumental shift happen and you just know exactly what you are doing at that point. I can think of one episode in the past when Steve Jobs gave his iPhone keynote, and he just knew at that moment that it was going to change the world. Did you get that feeling? You mentioned earlier it was kind of like Watson. If you did not get that feeling when you first saw ChatGPT, when did you get the feeling it was going to change everything? Was there a singular event that changed your mind?

Robert Wachter: I did get that feeling the first time I used it. It was not perfect. I did not know what hallucination was in the beginning. Then you began to see that it would fabricate certain answers. Part of the reason I got so excited about it for health care was the day before, November 29, 2022, if I was Googling something, I was not sitting there saying Google stinks and I need a better search engine. Given that I am not Steve Jobs, I could not even imagine a better search engine than Google. But the first time I used GPT, I said this is better. I can put a prompt in plain English and get an answer in plain English. That was astounding to me.

On November 29, 2022, I do not know anybody who thought the health care system was perfect. The first time I used it, I recognized that part of the problem with electronic health records and our disappointment with them was our belief that just having this digital tool would transform health care. I came to realize it does not do it automatically, but it creates a foundation for it. All of our data are digital, but they are mostly in unstructured notes, doctors’ notes, nurses’ notes, etc. The questions we ask are not computable. They are the kind of complex questions I just asked before, with lots of words, jargon, and initials. The first time I used it, I knew this is the kind of tool that we need that can essentially deliver what we hoped the electronic health record would deliver and did not.

I have gone back and looked at my book The Digital Doctor a few times. Toward the end of this very grumpy book, there is chapter twenty-seven where I ended on a pretty optimistic note. The note was my mistake here was thinking that the EHR would be transformative. I came to believe that the EHR was foundational but not transformative. We needed our data in digital form, but then we needed a set of tools that would allow us to take all that data, make sense of it, get diagnoses, get treatment recommendations, and get rid of some of the bureaucratic paperwork. The EHR by itself did not do that and, in fact, created some new bits of friction.

Truly, on November 30, 2022, when I used GPT for the first time, I said this is it. It is not perfect, and it is going to take some evolution. One of the things I learned from the EHR is there will be lots of unanticipated consequences. As I used different and better tools, like the first time I used OpenEvidence, I realized this is even better than what I had seen before. The first time I put a paragraph of text into Claude and asked it to make this better while retaining my voice, or the first time I said to GPT to tell me about Robert Wachter in the style of Hemingway, Shakespeare, or The Godfather, those were all wow moments for me.

Kevin Pho: As you reflect back in your previous book, The Digital Doctor, there are many painful lessons that we have learned from that transition to electronic health records. If you were to pick one, what is the most important lesson that you have learned? What is the most important mistake that we cannot repeat as hospitals and medical institutions rush into the AI era?

Robert Wachter: That is a great question. The thing that I did not understand at all was the productivity paradox of IT. There is a tendency to put in some fancy new piece of technology and assume that it is going to be transformative. It is not just the tech. It really is the ecosystem, the way you govern it, and the culture. There is some change that needs to happen that goes beyond just putting in a piece of technology. This technology is easier in a way than the electronic health record in that you do not have to learn a lot about how to put in a prompt. The importance of the right prompt has gone down in the last couple of years.

The biggest lesson might be the lesson of unanticipated consequences. That is a paradox. What flows from that? You should anticipate unanticipated consequences. Recognizing that Vinod Khosla wrote a blurb for my book, the theme is that we have learned it is not the technology itself. It is the ecosystem, the culture, and the payment system. We have to pay a lot of attention to it.

The thing that may turn out to be the biggest challenge for physicians is why many of us dislike our electronic health record. Part of it is the tools were not very good, but it also became an enabler of corporate control. When I was scribbling on a piece of paper, nobody could look over my shoulder in real time, and nobody could make me do anything. Once it became electronic, everybody who cared about what you did as a doctor had the capacity to electronically look over your shoulder and electronically make you do things. What do people hate about their EHR? It is not really Epic or Cerner’s fault. A lot of it is having to fill out thirty-two checkboxes because quality is being measured or it is going to produce a better bill. You could build forcing functions to make the doctor do things that you could not before, and that made clinicians unhappy. It was the source of a lot of burnout.

The AI is going to be that on steroids. It is not only potentially going to make me fill out certain boxes. It is going to begin delivering decision support that is going to suggest, at first, and maybe something stronger than suggest, the diagnosis. It is going to suggest the right tests and the right treatments over time. There is going to be some real tension between physicians who are used to their autonomy and this technology. Underlying the technology will be some really important values. If you suggest that I use a new Alzheimer’s drug that might cost $50,000, slows the progression by three months, and has a 1 percent chance of a brain bleed, the decision about whether to suggest that has to have a lot of values embedded into it. Somebody is going to be deciding, and it might not be the doctor. It might be your health care organization, or it might be the insurer.

It is one thing to say the tool is going to scribe for you. Who is going to object to that if it does it well? It is going to summarize the patient’s 600-page medical record. Terrific. Those are things that are relatively unobjectionable. I am going to use OpenEvidence on my phone that is sitting in my pocket to suggest a diagnosis. Fine, that makes me a better doctor. But as these tools get institutionalized and increasingly impact and dominate decision-making, diagnosis, and treatment, there will be real tension between physicians who worry about what is underlying that recommendation, losing autonomy, the art of medicine, and ultimately their job. Those lessons flow a little bit from the experience with the electronic health record, but the electronic health record was baby steps compared to what we are going to see as decision support becomes more robust.

Kevin Pho: You said that the AI era is going to be electronic medical records on steroids. If you step back and take a wider look at it, we are in our infancy when it comes to medical institutions adopting AI tools. You are based in San Francisco near Silicon Valley. I am sure you have exposure to a lot of health tech startups. I interview a lot of health tech executives on the show, and it is AI everything. What are you seeing now when you talk to different hospitals as they consider AI rollouts? Are they taking a more measured approach like what you are describing, or do you feel like they are rushing into it to catch that bandwagon?

Robert Wachter: I think they are taking a pretty measured approach. These are conservative organizations. They worry about privacy, security, and hacking. They do not have the bandwidth to do one hundred implementations. Each one takes change management, organization, and buy-in. I worry the path of least resistance, particularly if you have Epic, is to just turn on the tools that Epic builds. Your roadmap may just be Epic’s roadmap. It is like the old saying that nobody ever lost their job going with IBM. In an Epic hospital, nobody ever lost their job waiting for an Epic tool.

In some ways, it resembles the old Microsoft debates. Is this good for one company to dominate a field like that? A lot of the tension comes over whether we stick with Epic as the dominant player here, or use tools built by giants like Google or Amazon, or the ubiquitous AI startups that are around us all the time. It is a tricky choice because the startups might be building a better tool because that is what they do for a living. But are they going to be in business in two years? You have to integrate the tool. It creates a playing field that does not feel exactly level. I worry about Epic dominating this game for everyone.

The speed with which we have all adopted AI scribes is pretty amazing. Two years ago, you wondered if it was going to cost a health care organization a couple million dollars. Are they going to do it, or are they going to insist on a return on investment? It has almost become an expectation of a practice. For the rest of them, they are looking at what the return on investment of these tools is. At UCSF, we have set up a robust governance process to vet these tools. A lot of organizations are scrambling to do that. Nobody has the time to deal with one hundred startups and listen to their pitches.

The thing that happened most quickly is the shift to OpenEvidence, which to me is mostly organic. In very few places have health care systems bought OpenEvidence because it is free and advertising-based. Clinicians have voted with their feet and said here is a tool that we think is better and we are just going to use it on our phones. That creates a little bit of hair on fire for some of the institutional officials who are worried about HIPAA, and therefore some motivation to say if everybody is going to use it, we better bring it into the system and govern it better. The fastest things I have seen are the scribes on the institutional side and OpenEvidence and other knowledge tools on the organic physician side. Those have been quick, but it does not feel like it is happening too quickly or that people are not thinking about the guardrails.

I worry more on the patient side. Patients are using GPT or Gemini for health-related queries constantly. The emerging evidence is that sometimes they get the wrong answer. Of course, there is no regulation for this at all. That is the part of this that is probably the scariest in terms of how quickly it has evolved and taken over.

Kevin Pho: Your point about Epic earlier is already happening. Epic is coming out with their own AI scribe, and they are using that Microsoft model to put some of the smaller AI scribe businesses potentially out of business. There is definitely a lot to think about there. From a physician standpoint, it is safe to say that the EHR rollout has gone wrong for physicians. There is a lot to complain about. If we were to do it over again, there are a lot of things that we would change. As we enter the AI era, from a practicing physician standpoint, what could make this moment go wrong?

Robert Wachter: The thing that probably dominates my thinking about what could go wrong will be a high-profile error where the AI kills somebody, setting things back five years and leading to a backlash against AI. People will say it is not ready for prime time in a high-stakes field like medicine. That is why I think it has been smart for us to begin with pretty small-bore problems. It is not the end of the world if AI gets something wrong in scribing a note. It will happen. The thing is not perfect, but it is better than the alternative. The same is true in chart summarization. It was important to try things early that built up a reservoir of trust. As we get to more high-stakes, higher benefit, but also higher risk entities like diagnosis and clinical decision support, at some point it is going to make a mistake. At some point, somebody is going to die. If you have not built up enough trust before that, it becomes a lawsuit and a cause célèbre.

I spent a lot of time in the book talking about Waymo and what it took to have a driverless car that I take about once a week and sometimes take a nap in the back. Everybody says AI is not ready for prime time in health care because it is too risky and we can kill somebody. Are you kidding me? Try making a left turn in rush hour onto a busy street in San Francisco with me sitting in the back of a car with no driver, and I trust it completely. That is amazing. You did not start with a driverless car. You started with cruise control, you started with automatic braking, and then you drove millions and millions of miles with a driver there just watching in case something was wrong before you said this thing is ready for prime time. In San Francisco, there was a second driverless car company, Cruise, that GM owned. It ran over a woman. It did not kill her, but almost killed her, and within a year it was out of business.

That is the thing that worries me the most, particularly in an environment where the incumbents, doctors and nurses, are a little bit worried about losing their jobs. If they felt the AI was not ready and was coming for their job, they would use the n of one case of a bad outcome to demonstrate we need doctors. There is going to be some of that in every industry. I argue in the book that the threat of job replacement for physicians and nurses is relatively low in the short term. We have seen that in radiology. Everybody thought radiologists were going to be out of business by now. At UCSF, we cannot hire radiologists fast enough, and they are all begging for more AI help. There may be a threat to them, but it is not tomorrow.

The more powerful incumbents like physicians and nurses are generally feeling like they need the help and are not so worried about job replacement. In a world where there is a lot of skepticism about AI, public attitudes about the AI companies are pretty sour. People are quite worried about this partly because of jobs, partly because of other risks, and partly because they just do not understand it. The level of trust is low, and it will not take very much for a bad error to become a national story. That could set the field back by five years.

Kevin Pho: We are talking to Robert Wachter. He is the professor and chair of the Department of Medicine at the University of California San Francisco. His new book is A Giant Leap: How AI Is Transforming Health Care. I want to transition now in terms of the clinical use of AI. You have mentioned OpenEvidence several times, and you still do rounds. How are medical students and residents today incorporating AI during rounds?

Robert Wachter: They are using it like I do, as a curbside consult. I am a hospitalist, so I am a generalist. What I tell a medical student about the good and bad parts of being a hospitalist is it is an analog of being a primary care doctor in a hospital. Any patient who comes onto my service, I can name five people in the building who know more about each of their problems than I do, and I know more about all of it than they do. That leads to some tension. One of the manifestations of the tension is that on rounds I will often have a question about the right management of a problem, pathophysiology, or treatment. It is not a big enough question where I need a procedure or a nephrology or infectious disease consult, but I would love to be smarter about it.

Three years ago, I hoped I would run into my favorite infectious disease doctor in the hallway or the cafeteria and do a curbside consult, where you ask to run a case by them and do a little snippet. Now I use OpenEvidence for that, and all of our residents and students do the same thing. That has become the standard way when you have a question that does not merit a full specialty consult, but you need specialty-level knowledge. It is scaled and hyper-convenient. Sometimes they are using other tools, including GPT, Gemini, or Claude, which are also quite good. OpenEvidence has become the default setting.

I think we have all gotten over the embarrassment of that. I remember in the old days when I was doing ambulatory care and I would see a patient and have a question that I needed to Google, I would say to the patient my beeper went off as a little white lie and go out in the hallway. Now we are pretty confident admitting that we can use some help, and this tool is available 24/7. In the medical education world, we are worried about deskilling. It is one thing for me to use it to ask a question. I know the questions to ask, and I can interpret an answer and know if it is a good answer or a dumb answer. A first-year or second-year medical student is not an expert. They are a novice, and they do not have some of the same capabilities.

The biggest struggle that we are having in the medical education world is whether this is a crutch. Is this going to prevent them from learning, or is this an adjunct to their learning? It gives them on-the-spot intelligence that they did not have access to before. I think it is net good, but we have to be careful about making sure that we do not bypass the developmental stages they need to go from novice to expert. That is what allows you to ask the right questions of a patient. It would be a mistake for us to say this thing is in your pocket and therefore you do not have to know things anymore.

I chaired a national conference on medical education and AI last year and tried to see if there is consensus on anything we can take off the curriculum. The only answer that everybody agreed on was the Krebs cycle. Everybody is pretty sure that can go away. Beyond that, there is a general consensus that it is premature to say medical students do not need to know things anymore or know diagnostic reasoning because this tool is smarter than they are. That may come, in which case we probably are out of a job. In the foreseeable future, it is still going to be important that they learn how to think, how to reason, and some of the foundational knowledge that goes into differential diagnosis.

Kevin Pho: Tell us the things about being a doctor that you feel AI cannot replace. You mentioned during teaching rounds that there are some things medical students need to maintain even during this AI era so that they cannot look everything up on OpenEvidence. What are those specific things that they need to keep under every circumstance, no matter how advanced AI gets?

Robert Wachter: I almost never say never anymore. These things are getting better so fast that things I would have thought we would never give over to AI, we have to treat as open questions at this point. Telling a patient they have a dread disease, cancer, kidney failure, or diabetes, I do not want AI to do that. People have to be really good at empathy. People have to be really good at bringing together teams and using members of teams for their appropriate uses and skills. Understanding enough clinical medicine that you know the right questions to ask and know how to interpret answers is going to remain exceptionally important. Having an AI decide whether you start on chemo or go to surgery are things that I cannot see happening for the foreseeable future.

I will hear people talk about this, and they divide the world into things that feel like they are AI-able, and then things that they almost say as a religion are profoundly human things. That is the conclusion I came to at the end of the book. I think we have to recognize that we are rooting for the humans because we are them. We work very hard to be doctors, and the idea that it could be replaced feels profoundly unsettling. Recognize that is a bias that needs to be tested, particularly with a younger generation of patients who may ask why they would go sit in a waiting room of a doctor’s office. This feels transactional in the same way my finances or travel is transactional. If it is cheaper, more convenient, and I am getting the answers I need, why would I go see a doctor? There is a generational shift here where the expectations of what you need to see a doctor for are going to change.

For now, things that are high-stakes, complex, and feel profoundly human really involve empathy. High-stakes decision-making and communication feel to me like they are fundamentally human. Every time I say that, I also recognize these tools are getting better and better. We may need to demonstrate that the value of seeing us is worth the inconvenience and the cost. Maybe we will, and maybe we will not. In the world of primary care, it feels like almost a sure thing that there is going to be a fair amount of what a primary care doctor does today that can be done effectively by AI at a lower cost and more conveniently for the patient. Yet, nobody wants a patient to learn that they have cancer from AI.

There is probably some level of complexity, in terms of the number of different diseases with all their interactions, that is beyond today’s AI, but probably not beyond AI forever. Then you end up with tough questions. What does the triage system look like if the patient is getting their cholesterol and hypertension managed by the bot, but can still get to Kevin when they really need to? Questions are going to be posed to primary care doctors. You look at your schedule today, and every visit is an incredibly complex patient: hypertension, statins, a difficult case. If all the easy ones are taken off your plate in the name of creating a job that is better for you, it may make your day even more hellish and the cognitive load almost impossible, because all you have is a centrifuged sample of tough patients. Lots of second-order questions flow from this. Four years ago, I would have said that a whole lot of what AI now does could never happen, and now it does.

Kevin Pho: There is a lot of consternation about the future of our jobs, especially in primary care. I go to a lot of physician-only Facebook groups, and there is a lot of debate on the role of AI in the future. There is a certain consensus that AI will not necessarily replace our jobs, but that you could have advanced practice providers with OpenEvidence or other AI support doing a lot of what primary care physicians do. If you contrast medicine with pretty much every other field, you are right: there is a human component in medicine. That doctor-patient relationship is sacred. I am not going to say never, but I would say it is going to be very hard for AI to replicate.

Robert Wachter: Even as you say sacred, which is an incredibly loaded word, I catch myself now, even though I feel it too, and say that is a man-made construct. It is partly there to make me feel good that what I do is so important. That is a question that will be answered empirically. I do not think anybody ever thought their relationship with their bank teller or their travel agent was sacred, and a whole lot of people are now choosing to do those things in different ways with digitally enabled tools. It could be completely technology-driven care, or a lesser-trained and less expensive human who maybe has the relationship part but does not have the knowledge that you and I have, and does not need it anymore because they have AI.

I would not confine that only to the cognitive parts of medicine. Do you need a $450,000-a-year gastroenterologist doing your colonoscopy when you have an AI-enabled colonoscope? That is a skill you could probably teach a high school student. I am sure I am going to make all the gastroenterologists unhappy, but that is life. It is not necessarily going to be our choice whether patients see this thing as sacred. It is going to be theirs.

Kevin Pho: What are some things physicians can do, I will use the term future-proof, to future-proof themselves as AI can do more and more? What kind of advice would you give to currently practicing physicians and early-career physicians?

Robert Wachter: I think future-proof is the wrong word. My wife, who is a journalist and wrote about tech for a long time, gave the commencement speech to the graduates of the computer science department at San Francisco State a couple of years ago. It was like speaking at a wake, because these kids had tried to future-proof their careers by getting a computer science degree. These were mostly first-generation college students who thought they were future-proofing their careers with this thing called computer science that they were taught was a golden ticket, and now they are worried they will not have a job. The future is the future. It is hard to know, but what should everybody be doing?

You have to use these tools and see how they make you better, more efficient, or more productive. Part of my writing the book was designed to communicate what they are good at, what they are bad at, and where this takes the future of health care. Many people found it useful that way, but there is no substitute for using them in your daily life, whether in your clinical life or your personal life, to help you plan your next trip or to help you write. You get a sense of what they can and cannot do. If you are in a leadership position in your health care organization, I do not see how you are going to be able to stay ahead of the curve without them, because your competitors are going to be using these tools. You have to figure out how to use them. The nice part is they are not that hard. This is not like saying you have to go off to a two-week course to learn prompt engineering.

Ethan Mollick, a professor at Wharton who writes about this, talks about the jagged frontier. These tools are scary good at certain things, and at certain things they are just not very good yet. The only way you are going to figure out how they can improve your life or your practice is to use them. There is some foundational knowledge you need to use them effectively, but it is not a massive investment in time. That is the only real advice I can give. I find this really interesting and profound, so I like keeping up with it. I have a Substack. Ethan Mollick writes about AI in life and work, but not so much about health care; his stuff is very helpful in staying at the cutting edge of where the tools are. Listening to the podcast Hard Fork that The New York Times produces is quite good for keeping track of the technology, but there is no substitute for trying it. Choose a few of them, as they are not very expensive, and see which one you like. I use OpenEvidence mostly for my clinical work. Each one has a different set of characteristics. I toggle between Gemini and GPT as a regular Google replacement. Claude is the best of the writers and editors, so if you are using it for that purpose, it is really quite good.

Kevin Pho: Earlier, you joked that the Krebs cycle could probably go in medical education. Are you seeing medical schools adapt to this AI era? If not, what kind of changes should they make going forward as more students are using AI tools?

Robert Wachter: They are talking about it in the same way we just did. It is happening, and they are all using these tools. Part of medical school faces the same question a high school or middle school teacher faces: how do you assess students? Medical school assessment is easier. It is not like you are sending the kid home to write an essay, which is the scourge of middle school teachers who know it will be written by AI. Medical schools are recognizing there is going to be a trade-off between how much knowledge we have to stuff into students’ heads versus other skills that they need to have and that we should emphasize more. Good schools are making a subtle shift that way.

Pulling all that knowledge out of the curriculum is not the right call at this point, but we need to make sure they are using the tools safely, spotting hallucinations, and knowing when to go to the primary source. They need to be better at the things we think of as fundamentally human, like empathy and communication, and at using AI in new ways to do those things. For example, how do I know how good my medical student is at interviewing a patient? Do I watch them talk to the patient for 30 minutes? Usually not. They come out and present the case to me. That is a pretty poor proxy for their communication. Now we could record that conversation, and the AI could give them feedback on how they did. Schools are beginning to build that sort of thing.

At UCSF, we have a feedback tool. At the end of a student presenting a case to me, I might have some feedback for the student. By the time I assess the student at the end of ten days, I have forgotten what I said. Now I dictate my feedback into my phone, and at the end, the tool aggregates all of it into overarching feedback on their entire performance. We are using AI for grading tests, reading their notes, and new kinds of simulation. These world models are able not just to listen to what everyone said to each other during a simulation, but to watch what happened in the room and give feedback about the entirety of the experience. Those are tools that forward-looking schools are working on.

The concept of precision education has been around the corner in the same way the concept of precision medicine has been around the corner for the last twenty years. Precision education asks why we are giving the same generic curriculum to every student. We should be assessing where each student is. If a student needs to learn more about diabetes but has heart failure nailed, we should shunt to them the patients they need to see. We could modulate their training time based on their achievement of competencies rather than requiring four years of medical school for everyone.

If health care is slow to change, education is just as slow. If you are building new things into the curriculum, you have to figure out what to take out of it. The anatomy and biochemistry professors are very wedded to what they do. There are political and historical challenges. Forward-thinking schools are recognizing that students need a slightly different set of skills than before and need to be very comfortable with these tools. A lot of health care systems, including mine, are thinking about this. We have a committee that decides whether we turn on AI scribes. Until recently, that committee did not have an educator on it. The decision to turn on scribes might be different for students than for practicing doctors. Maybe we do not want students to have their notes scribed in the beginning; we want them to write fifty notes and get good at it before we turn it on. We have to figure out how to bring an educational lens to what would otherwise be clinically or business-driven decisions.

Kevin Pho: It is almost analogous to the invention of the calculator. You have the calculator as a tool: how much do you still need to rely on basic arithmetic? There is an opportunity in medical education to refocus some of that time and those resources. I do not want to say a smaller knowledge base, but less of an emphasis on knowledge, because it can be easily looked up, and more of a focus on deepening relationships and how medical students interact with patients. I think it is that relationship that will persist as these AI models get more advanced.

Robert Wachter: Maybe we should repurpose some of the time we used to spend on the old things for the new things. I think that is right. There has also been the argument, which Zeke Emanuel made recently in The Times, that medical school should be three years. There are three-year medical schools that seem to be doing fine and have outcomes similar to four-year schools. The process is long and expensive. If there are things we can take off students’ plates, I do not think we should automatically fill that cup back up with other stuff. We may have to make the whole thing more efficient.

Kevin Pho: What about your colleagues, your fellow professors? Obviously, you are on one end of the spectrum because you wrote a book about AI. What are the perspectives among your fellow professors regarding the advancement of AI and how that intersects with medical education?

Robert Wachter: I think it is generally positive. That may be partly because I am the chair of the department, so they are supposed to think the way I think, although they mostly do not. We are in San Francisco, which leads to a slightly more optimistic view. When I go to other parts of the country, it is not quite the burning issue it is at my place. Most people are using AI knowledge tools and seeing what they can do. It is pretty unusual these days, even when I talk to people in places that are not very forward-thinking about technology, for them to say they just do not use this or do not believe in it. They are trying it in the rest of their lives.

We may be overlearning the example from radiology. Would you tell your kid to become a doctor? The story I tell in the book is from 2016, when Geoff Hinton, one of the godfathers of deep learning, gave a speech in Toronto and infamously said we should stop training any new radiologists because it was obvious that in five years we would not need them anymore. If we had listened to him, by 2021 we would have had no new radiologists. As it turns out, there is a shortage of radiologists now. That has given people the impression that job replacement in medicine is not going to happen. There probably is a threat in fields like radiology and pathology over a ten- or fifteen-year time horizon. As these tools get better, radiologists and pathologists will find new things to do. There still will be jobs, but the episode has given people the sense that they do not need to be that worried about their jobs.

That has been helpful, because the political blowback against that would be profound. If people are not that worried about it, they are more receptive to trying it in their lives and seeing whether it makes things better. The harder questions are institutional: where it lives, where it fits into the workflow, Epic versus not Epic, and those sorts of things. Most people are either neutral or mostly enthusiastic. If they are worried, they are worried in the same way I am: about patients using these tools themselves instead of seeing a doctor, getting themselves in real trouble, and setting things back. As these companies get a little more hubristic about pitching themselves as AI doctors, you are going to see people using these tools as substitutes for the health care system. Some of them are going to have a bad outcome, and that is what most of us worry about the most.

Kevin Pho: I know GPT and Claude are appealing directly to patients and encouraging them to upload their entire medical records to these tools without emphasizing things like privacy. From a patient perspective, for those listening to you now, what are some of the red flags? What do they need to be cautious about as they bypass the medical system and go straight to these tools to upload their medical records or simply ask for medical advice?

Robert Wachter: The decision to upload your medical record is one you should take thoughtfully. You have already uploaded your medical record: it is in Epic or Oracle, in an electronic record somewhere, so it already lives with a company. Those companies are governed by HIPAA, and they have strict privacy regulations. For the most part, that has been quite a safe thing to do. Uploading it to GPT, Claude, or Perplexity is an act where you do not have the same protections. Am I really worried about it? Not really. The companies have a pretty strong corporate interest in not leaking your data, so they are going to be pretty careful, but anything I did not want publicly exposed, I probably would not allow to be uploaded.

The bigger question is whether they are giving the right answers. I still think they are better than anything you had before. What are you comparing them against? If you are comparing against Google, I would rather put my data into Gemini or GPT and ask my question there than just do a Google search; you get something more useful back. Try doing it in two different models and see if they agree. I cannot prove that is better, but I feel more comfortable if I put a question to GPT and Gemini and they give me the same answer about what to do.

There are certain red flag symptoms. If you have bad chest pain or shortness of breath, if you are confused, or if part of your body is not working, I do not care what the computer says: you need to see a doctor now. Patients may not know that, but there are certain situations where we know these tools are not trustworthy enough. I think the tools are going to get better. I would be looking for tools that replicate what you and I would do more than what they currently do. If someone tells you they have a headache, you do not say that sounds like a migraine. You ask a whole bunch of questions before you say what you think they have and what they should do. The tools of the future have to act more like a good doctor than the current tools do.

Kevin Pho: We have been talking to Robert Wachter. He is professor and chair of the Department of Medicine at the University of California, San Francisco, and his book is A Giant Leap: How AI Is Transforming Health Care. I want to thank you for the time you have graciously spent with us today. What do we have to look forward to in the immediate future, and then maybe we can end with some of your take-home messages for the KevinMD audience?

Robert Wachter: Kevin, thanks for the opportunity. The next ten years are going to be pretty great for doctors, and I think they are going to be better for patients. Patients are going to have new tools that allow them to get the questions they ask all the time answered by technology in better, more convenient ways. For physicians, new AI will help with a lot of the sources of burnout: the paperwork, the prior authorizations, the documentation, and even answering a whole bunch of patients’ email questions. It will help us be better doctors, in part by scaling subspecialty knowledge. It is not going to be without speed bumps or real challenges, but it is going to make the lives of both patients and clinicians better. I came out of this much more optimistic than I thought I would be when I entered it. An important question is whether it will raise or lower the cost of health care. I do not know the answer; it could go in either direction. The cost of health care is one of the big problems in our system, so we have to work to see whether it can lower costs.

The take-home messages are that this is the biggest experiment in the history of medicine. We do not know exactly how this movie ends. We have to approach it with an open mind and try to be rigorous about studying it, without just accepting that it is necessarily better. Health care is not about my employment or yours; it is about how we create the best health and health care system at the lowest cost for our patients. We need to be open to the possibility that technology will do things we currently think have to be human. There is still going to be a role for humans for the foreseeable future, but it is going to be different than before, so we have to be open to change. The change is going to be net positive, at least in health care. When it comes to AI in the rest of our lives, I am quite worried, in the same way everybody else is, about all sorts of things. But in health care, because the system is so broken, there is almost only room to go up, and I think it is going to make things better.

Kevin Pho: Bob, once again, thank you so much for sharing your perspective and insight. Thanks for coming on the show.

Robert Wachter: My pleasure. Thanks for having me.

April 22, 2026

Tagged as: Health IT

< Previous Post
The continuum of fertility care: Why IVF is not the only option

ADVERTISEMENT

More by The Podcast by KevinMD

  • Why cervical cancer screening drops after menopause, and why that’s dangerous [PODCAST]

    The Podcast by KevinMD
  • I have cerebral palsy and I’m a doctor. Here’s what policy cuts mean for patients like me. [PODCAST]

    The Podcast by KevinMD
  • Clinicians are failing at value-based care because no one taught them the system [PODCAST]

    The Podcast by KevinMD

Related Posts

  • Why the health care industry must prioritize health equity

    George T. Mathew, MD, MBA
  • Bridging the rural surgical care gap with rotating health care teams

    Ankit Jain
  • What happened to real care in health care?

    Christopher H. Foster, PhD, MPA
  • To “fix” health care delivery, turn to a value-based health care system

    David Bernstein, MD, MBA
  • Health care’s hidden problem: hospital primary care losses

    Christopher Habig, MBA
  • Melting the iron triangle: Prioritizing health equity in dynamic, innovative health care landscapes

    Nina Cloven, MHA

More in Podcast

  • Why cervical cancer screening drops after menopause, and why that’s dangerous [PODCAST]

    The Podcast by KevinMD
  • I have cerebral palsy and I’m a doctor. Here’s what policy cuts mean for patients like me. [PODCAST]

    The Podcast by KevinMD
  • Clinicians are failing at value-based care because no one taught them the system [PODCAST]

    The Podcast by KevinMD
  • Why I would never compromise on withdrawing care until I saw it firsthand [PODCAST]

    The Podcast by KevinMD
  • Why your patient isn’t filling that prescription (and won’t tell you) [PODCAST]

    The Podcast by KevinMD
  • Silence isn’t neutrality: Why medical students can’t wait to find their voice [PODCAST]

    The Podcast by KevinMD
  • Most Popular

  • Past Week

    • When shared decision making gives way to medical paternalism

      DeAnna Pollock, MD | Physician
    • How xenotransplantation could finally solve organ shortages

      Rafael S. Garcia-Cortes, MD | Conditions
    • Clinicians are failing at value-based care because no one taught them the system [PODCAST]

      The Podcast by KevinMD | Podcast
    • How one doctor navigated orthopedic residency while pregnant

      Christen Russo, MD | Physician
    • National Nurses Week needs better nursing recognition

      Brian Sutter | Conditions
    • How imposter syndrome affects high-achieving professionals

      Ritu Goel, MD | Conditions
  • Past 6 Months

    • Why clinicians fail at writing expert reports

      Tracy Liberatore, Esq, PA | Conditions
    • Rethinking the role of family physicians vs. specialists

      Ronald L. Lindsay, MD | Physician
    • How hindsight bias distorts clinical medicine

      Olumuyiwa Bamgbade, MD | Physician
    • Health insurance incentives and alternatives to opioids for chronic pain

      Molly Candon, PhD and Daniel Clauw, MD | Conditions
    • Why Florida physician background checks are driving doctors away

      Tamzin A. Rosenwasser, MD | Physician
    • Why we need a new medical specialty to fix corporate medicine

      Allan Dobzyniak, MD | Physician
  • Recent Posts

    • Gradually, then suddenly: Dr. Robert Wachter on health care’s giant AI leap [PODCAST]

      The Podcast by KevinMD | Podcast
    • The continuum of fertility care: Why IVF is not the only option

      Scott Morin | Conditions
    • Physician autonomy is not separate from patient care

      Corinne Sundar Rao, MD | Physician
    • Why heart failure care requires spaced repetition for doctors

      Vimal George, MD | Conditions
    • 51 cases that reframe methylene blue serotonin syndrome

      Steven E. Warren, MD, DPA | Meds
    • Therapeutic alliance in psychiatry matters more than ever

      Timothy Lesaca, MD | Conditions

Subscribe to KevinMD and never miss a story!

Get free updates delivered free to your inbox.


Find jobs at
Careers by KevinMD.com

Search thousands of physician, PA, NP, and CRNA jobs now.

Learn more

Leave a Comment

Founded in 2004 by Kevin Pho, MD, KevinMD.com is the web’s leading platform where physicians, advanced practitioners, nurses, medical students, and patients share their insight and tell their stories.

Social

  • Like on Facebook
  • Follow on Twitter
  • Connect on Linkedin
  • Subscribe on Youtube
  • Instagram

ADVERTISEMENT

  • Most Popular

  • Past Week

    • When shared decision making gives way to medical paternalism

      DeAnna Pollock, MD | Physician
    • How xenotransplantation could finally solve organ shortages

      Rafael S. Garcia-Cortes, MD | Conditions
    • Clinicians are failing at value-based care because no one taught them the system [PODCAST]

      The Podcast by KevinMD | Podcast
    • How one doctor navigated orthopedic residency while pregnant

      Christen Russo, MD | Physician
    • National Nurses Week needs better nursing recognition

      Brian Sutter | Conditions
    • How imposter syndrome affects high-achieving professionals

      Ritu Goel, MD | Conditions
  • Past 6 Months

    • Why clinicians fail at writing expert reports

      Tracy Liberatore, Esq, PA | Conditions
    • Rethinking the role of family physicians vs. specialists

      Ronald L. Lindsay, MD | Physician
    • How hindsight bias distorts clinical medicine

      Olumuyiwa Bamgbade, MD | Physician
    • Health insurance incentives and alternatives to opioids for chronic pain

      Molly Candon, PhD and Daniel Clauw, MD | Conditions
    • Why Florida physician background checks are driving doctors away

      Tamzin A. Rosenwasser, MD | Physician
    • Why we need a new medical specialty to fix corporate medicine

      Allan Dobzyniak, MD | Physician
  • Recent Posts

    • Gradually, then suddenly: Dr. Robert Wachter on health care’s giant AI leap [PODCAST]

      The Podcast by KevinMD | Podcast
    • The continuum of fertility care: Why IVF is not the only option

      Scott Morin | Conditions
    • Physician autonomy is not separate from patient care

      Corinne Sundar Rao, MD | Physician
    • Why heart failure care requires spaced repetition for doctors

      Vimal George, MD | Conditions
    • 51 cases that reframe methylene blue serotonin syndrome

      Steven E. Warren, MD, DPA | Meds
    • Therapeutic alliance in psychiatry matters more than ever

      Timothy Lesaca, MD | Conditions

MedPage Today Professional

An Everyday Health Property Medpage Today

Copyright © 2026 KevinMD.com | Powered by Astra WordPress Theme

  • Terms of Use | Disclaimer
  • Privacy Policy
  • DMCA Policy
All Content © KevinMD, LLC
Site by Outthink Group

Leave a Comment

Comments are moderated before they are published. Please read the comment policy.

Loading Comments...