I remember the first time a resident came to me with a clinical question she had already answered, using ChatGPT. She was not asking for confirmation. She wanted to know if the answer was right. That moment changed how I think about artificial intelligence in residency education. I am a family medicine physician and associate program director. For years, I have been building what we call the Junior Preceptorship model, a structured way for senior residents to teach and mentor junior peers under faculty guidance. It grew out of a simple need: We did not have enough faculty to go around, especially in rural and underserved settings. Now I am watching artificial intelligence arrive at the doorstep of graduate medical education, and I am trying to figure out what to do with it.
Here is what struck me: A 2024 survey of over 4,500 students across 192 medical schools found that more than 75 percent had received no formal artificial intelligence training at all. Meanwhile, two in three practicing physicians were already using artificial intelligence in clinical work, a 78 percent jump from the year before. We are sending residents into a world we have not prepared them for.

Some of the practical applications are already here. Artificial intelligence scribes are cutting documentation time for residents who spend half their evenings buried in notes. An NEJM Catalyst study found that artificial intelligence-assisted documentation actually gave clinicians more face time with patients, which, if you work in family medicine, you know is the whole point. Large language models can generate tailored patient education in multiple languages, something I leaned on heavily when training residents in California’s Central Valley, where health literacy and language access are constant challenges. And artificial intelligence-powered case simulations can fill gaps when patient volume is unpredictable, a reality in any rural program.
But I want to slow down here, because the efficiency story is not the whole story. And if we only tell the efficiency story, we will miss what matters most. No artificial intelligence tool can teach a resident how to sit with a patient who just got a terrible diagnosis and simply be present with them. It cannot model how to navigate a family meeting where everyone is scared and no one agrees. And it certainly cannot replicate the moment when a senior resident corrects a junior peer with kindness, then later reflects on whether she handled it well, that slow, uncomfortable process of becoming a teacher. Professional identity does not get downloaded. It forms through the awkward, real work of being trusted with someone else’s learning.
We saw this in our own Junior Preceptorship work. When PGY-3 residents taught PGY-1s in continuity clinic, teaching confidence went up. Feedback quality improved. Satisfaction was high across the board. But the finding that stayed with me was qualitative. Residents told us: “I learned to lead by teaching.” That identity shift, from learner to educator, is what drives retention in rural practice. No algorithm can manufacture it. Human mentorship does. Where artificial intelligence actually helps is in making room for more of that human work.
Picture a PGY-3 preparing to precept a junior peer for the first time. Before the session, she pulls up an artificial intelligence tool to review evidence-based feedback strategies, something she might not have time to look up otherwise. Afterward, she dictates a brief reflection that gets organized into her teaching portfolio. Her program director can review summaries of teaching encounters to spot who needs more coaching. None of that replaces the teaching itself. But it clears space for it. That, to me, is the real opportunity: not replacing the relational core of residency education but clearing away the administrative clutter so it can actually happen.

A systematic review in BMC Medical Education found that peer teaching builds pedagogical skills, leadership, and professional identity in the residents who do the teaching. Near-peer instruction produces outcomes on par with faculty-led teaching. The framework already works. The question is whether we will use artificial intelligence to protect it or accidentally hollow it out.
I worry, though, about what happens when we get this wrong. A resident who lets artificial intelligence draft her feedback has skipped the hard part, the struggle to say something difficult in a way that actually lands. A program that swaps direct observation for artificial intelligence-generated assessments has quietly hollowed out the relationship that makes assessment formative in the first place. And if we let chatbot conversations stand in for the real faculty talks about burnout, doubt, and whether this career is worth it, we lose something that is especially hard to get back in rural medicine, where professional isolation is already a retention problem. The retention literature is clear: What keeps physicians in rural practice is their training experience, their professional community, and whether they felt like they belonged. Artificial intelligence can speed up workflows. It cannot build belonging. People do that for each other.
So what should family medicine programs do? I keep coming back to three things:
- First, teach residents to evaluate artificial intelligence critically, and I mean really critically, not just “here is how to prompt ChatGPT.” They need to spot hallucinated references, recognize when an artificial intelligence answer sounds confident but is wrong, and develop judgment about when the tool helps versus when it creates false certainty. This belongs in the core curriculum, not as an elective half the residents skip.
- Second, deploy artificial intelligence where it frees up teaching time. Documentation is the single biggest time thief in residency. If ambient scribes and smart templates can give a resident back even 30 minutes a day, that is 30 minutes she could spend precepting a junior peer, debriefing a hard case, or simply being present with a patient instead of charting.
- Third, and this is the one I care about most, protect the human infrastructure. Faculty development. Near-peer teaching programs. Structured mentorship for community physicians who want to teach but were never trained to. These are the things that actually produce the next generation of rural educators. Artificial intelligence can support that work. It cannot do it.
When that resident showed me her ChatGPT answer, I did not dismiss it. I did not panic, either. I said, “That is a reasonable start. Now tell me what you would do differently for THIS patient, in THIS community, given what you know about her life.” She paused, thought, and gave an answer the algorithm never could have. That exchange, the pause, the thinking, the answer rooted in knowing this patient in this place, is what I want to protect. Artificial intelligence will be part of family medicine education going forward. I am not fighting that. But the center of it has to stay human. Because the best teaching still happens between two people who are paying attention to each other.
Jyothi Ranga Patri is a family physician.