When I first began working with clinical AI tools, I felt the kind of excitement many young clinicians and researchers feel today. For the first time, it seemed possible to reduce cognitive overload, surface hidden patterns, and give clinicians more time to focus on what mattered most: patients. AI felt less like a threat and more like a long-awaited collaborator.
As a young clinician-scholar, I did what many of us do. I read widely, tested tools, and began writing about the implications of AI-assisted decision-making. I was not arguing against AI. I was arguing for something more specific and, I believed, more urgent: preserving clinical judgment in an era when machines are increasingly confident, fast, and persuasive.
The reality of academic resistance
Then reality intervened.
As my work entered formal academic review, I encountered resistance that surprised me. Not hostility, but skepticism. I was repeatedly asked to “prove” that erosion of diagnostic reasoning was already occurring, to justify why this concern deserved attention now rather than later. Some reviewers questioned whether such risks were even plausible. Others suggested that if AI improved outcomes, concerns about judgment were secondary.
What unsettled me was not rejection itself. Rejection is part of academic life. What unsettled me was the realization that the problem I was describing did not yet have a recognized name, framework, or home. Without long-term data or institutional authority, raising early warnings felt less like scholarship and more like speculation (at least in the eyes of the system).
For a time, this was deeply discouraging. It felt as though enthusiasm for AI had left little room for careful reflection, especially when that reflection came from someone early in their career. I began to wonder whether I had misunderstood my role entirely. Was I too early? Too cautious? Or simply in the wrong place?
The risk of unexamined collaboration
Eventually, I realized the issue was not whether AI should be used. That question has already been answered. The real question is how humans and AI learn to work together without diminishing what makes clinical expertise meaningful in the first place.
Clinical judgment is not a static skill. It is shaped through uncertainty, error, reflection, and responsibility. AI systems, by contrast, offer clarity without accountability. When their outputs are treated as authoritative rather than advisory, the risk is not that clinicians become obsolete, but that they become disengaged from the very reasoning processes that once defined their expertise.
This does not make AI dangerous. It makes unexamined collaboration dangerous.
Reframing the role of the clinician-scholar
What restored my sense of purpose was reframing my role, not as an opponent of AI, nor as its cheerleader, but as a translator between systems. Young clinicians and scholars occupy a unique position. We are fluent enough in technology to see its promise, yet close enough to clinical training to recognize what may be quietly lost along the way.
Hope, I have learned, does not come from blind optimism. It comes from mature collaboration. AI can support clinicians without replacing judgment, but only if we deliberately design training, workflows, and professional norms that keep humans cognitively engaged rather than deferential.
For others navigating similar frustrations, especially early in their careers, I offer this reassurance: Encountering resistance does not mean your concern is invalid. It may simply mean that you are standing at the edge of a conversation that has not yet fully begun.
AI will continue to advance. The harder work (ensuring that human judgment advances alongside it) belongs to all of us. And that work is still worth doing.
Gerald Kuo, a doctoral student in the Graduate Institute of Business Administration at Fu Jen Catholic University in Taiwan, specializes in health care management, long-term care systems, AI governance in clinical and social care settings, and elder care policy. He is affiliated with the Home Health Care Charity Association and maintains a professional presence on Facebook, where he shares updates on research and community work. Kuo helps operate a day-care center for older adults, working closely with families, nurses, and community physicians. His research and practical efforts focus on reducing administrative strain on clinicians, strengthening continuity and quality of elder care, and developing sustainable service models through data, technology, and cross-disciplinary collaboration. He is particularly interested in how emerging AI tools can support aging clinical workforces, enhance care delivery, and build greater trust between health systems and the public.