The moment was small, almost trivial. I handed my daughter her polished curriculum vitae (CV): clear, structured, and professionally phrased, with all the information distilled and reorganized from her resume. She looked at it for a second longer than expected, then asked a single question: “ChatGPT?” I said yes, and felt an unexpected flicker of shame. Not because I had done something wrong. Not because I had tried to deceive her (I had not). In fact, I have been explicit in my writing that I use artificial intelligence (AI) to assist me with stylistic refinement, clarity, and phrasing of selected passages. I have said so publicly, even in a note to readers in my most recent book on artificial intelligence.
And yet, in that moment of admission to my daughter, I felt the same hesitation others describe in similar situations: a pause, a slight tightening, a sense that I had crossed some invisible line. That feeling is becoming increasingly common among writers I have spoken with, a modern-day equivalent of “coming out,” often accompanied by guilt, then relief.
The new social contract of writing
We are living through a strange cultural transition. On one hand, AI is everywhere, drafting emails, autocorrecting text messages, summarizing meetings, and completing sentences before we finish typing. It is deeply embedded in the infrastructure of personal and professional life. On the other hand, writing, particularly good writing, has become suspect. Too polished? Must be AI. Too structured? AI again. Too articulate? That uncanny voice of the machine. Too scripted? Definitely AI. Too many favorite words? A dead giveaway.
We have entered a peculiar era in which excellence invites suspicion. So, writers adapt, not by improving their craft, but by disguising it. They remove em dashes. They avoid overused words. They deliberately roughen their prose to signal humanness. Authenticity, once measured by a declarative voice, is now measured by imperfection.
The quiet shame of artificial intelligence
Employing AI invites not only suspicion but also silence. People use AI every day to draft, refine, brainstorm, and organize. They meet deadlines that would otherwise be impossible. They produce work that is sharper, clearer, and more coherent. But when asked, “Did you use AI?” the answer comes with qualifiers:
- “Just a little.”
- “Only for editing.”
- “Mainly for research.”
- “It’s mostly my own work.”
The qualifier functions as an apology. This dynamic has been described as a kind of “quiet shame,” where writers lower their voices when discussing AI, not because they believe it is wrong, but because they sense that others might. What is striking is not the use of the tool, but the reluctance to acknowledge it openly.
What are we really afraid of?
In my opinion, the discomfort is not really about AI. It is about identity. For generations, writing has been tied to a particular mythology: the solitary thinker, wrestling with language, producing something uniquely their own. Difficulty was part of the credential. The struggle validated the outcome.
The use of AI is at odds with that narrative. If a machine can help structure an argument, refine a sentence, or suggest a more elegant phrasing, then what exactly are we claiming when we say we wrote this? This is the question that lingers beneath the surface. Not technological, but existential. Because AI removes the familiar markers of authorship: the emptiness of staring at a blank page, the idiosyncratic cadence of a sentence finding its shape, the struggle between thought and language that once felt like proof of ownership.
The critics and their valid concerns
It is true that AI reduces effort. That is, in part, the point. It is also true that overreliance may blunt certain cognitive skills. And there are legitimate concerns about originality, attribution, and intellectual rigor. But critique becomes distortion when it condenses everything into misuse, when using AI to refine a paragraph is equated with outsourcing thought, or when assistance is labeled as deception. That way of viewing AI misses a more important distinction: augmentation versus substitution. Using AI to think with is not the same as using AI to think for you.
Why turn to ChatGPT to create my daughter’s CV? Perhaps because it was the easier path. But I have done the exercise many times before, and there was little to be gained by repeating it. In this instance, the value lay not in generating the document from scratch, but in ensuring its clarity, structure, and completeness.
The middle ground most people occupy
The public discourse often splits into extremes:
- AI as an existential threat
- AI as an unqualified good
Most people, however, are not living at either extreme. They occupy a middle ground. They are curious but cautious. Productive but uneasy. Open-minded but unsettled. They are using AI not to replace their work, but to make it manageable with faster drafts, clearer narrative, and fewer hours lost to mechanical tasks. They are not trying to deceive anyone. They are trying to keep up.
In medicine, analytic speed is not a luxury. It is essential to navigating dense research articles, summarizing them, distilling key points, and determining their relevance before engaging more deeply in the topic.
A culture problem, not a technology problem
If people feel the need to hide their use of AI, the problem is not the tool. It is the culture surrounding it. In environments where output is valued but process is policed, people will inevitably resort to what some have called “shadow AI,” using the technology silently, without disclosure, without shared norms, and without collective learning.
This way of operating is not sustainable. It isolates individuals, prevents open discussion, and slows the development of thoughtful standards. What we need instead are explicit expectations and shared understanding, not judgment. Still, the shame may be partly justified, but not in the way we think.
There should be discomfort, because something important changes whenever we use AI. Writing is evolving. Authorship is being redefined. The relationship between effort and output is being compressed. But shame is a poor label. It stigmatizes more than it clarifies. It stifles conversation when what we need is precisely the opposite: openness and honesty about how we are working with AI.
A more honest framing of authorship
Perhaps the better question is not whether you used AI, but rather:
- How did you use it?
- What did you contribute that the machine could not?
- What makes your prose meaningfully yours?
- Where does your judgment, your experience, your voice still matter?
Because those contributions should still count. The final act of writing, the act of deciding what stays, what goes, what is true, what is worth saying, should remain uniquely human. Otherwise, we risk claiming authorship without accountability. The question is no longer whether we use AI but whether we are willing to be honest about how we use it and what we can legitimately claim as our own.
Arthur Lazarus is a former Doximity Fellow, a member of the editorial board of the American Association for Physician Leadership, and an adjunct professor of psychiatry at the Lewis Katz School of Medicine at Temple University in Philadelphia. He is the author of several books on narrative medicine and the fictional series Real Medicine, Unreal Stories. His most recent book is Artificial Intelligence in Medicine: Controversy and Commentary.