Most of us don’t notice moral development while it is happening. Moral character forms quietly, through repetition: through what we grow used to doing and what we slowly stop doing, without much forethought.
Artificial intelligence matters morally long before it becomes conscious or autonomous, if it ever does. Whether or not AI confronts us with dramatic ethical dilemmas, it changes our moral lives in a far subtler way: it reshapes our habits. Habits, more than principles, are where moral development actually lives.
The Humean perspective
This way of thinking about morality is not new. It can be traced to David Hume, an 18th-century Scottish philosopher who argued that human morality grows out of habit and feeling more than abstract reasoning. We like to imagine ourselves as moral reasoners, weighing principles and choosing the right path. In practice, however, we are largely creatures of custom. We respond before we reflect. We approve and disapprove instinctively. Over time, those reactions settle into patterns that feel natural, even inevitable.
For Hume, this was not a criticism of human nature. It was simply a description of how moral life works. Morality, in his view, is learned the way a language is learned. We acquire it by living among others, by seeing what draws praise or blame, by feeling sympathy and discomfort, and by being corrected and correcting ourselves. Moral character forms not through rule mastery but through experience.
Shifting the act of choosing
AI systems rarely tell us what to do outright. They recommend, rank, predict, and suggest options that are most likely to succeed, least likely to fail, or most consistent with past outcomes. These systems are often genuinely helpful. In many contexts, they outperform human judgment at pattern recognition and error avoidance, which is precisely why we listen to them.
Relying on a system to frame our choices subtly changes the act of choosing itself. Instead of beginning with the question, “What is the right thing to do here?” we begin with, “Do I have a reason not to follow this?” The difference may appear minor, but over time it becomes morally significant.
Hume would have recognized this shift immediately. Habits form not only through what we do, but through how we do it. When moral judgment is exercised primarily as an override rather than a starting point, it becomes less frequent. What is practiced less often gradually feels less natural. Responsibility does not disappear, but it becomes lighter, less personal, and easier to hand off.
The illusion of authorship
This shift is already visible in everyday language. People explain their decisions by pointing outward, to the system, the model, the score, or the metric. “That’s what the algorithm showed.” “That’s how it ranked.” “That’s what usually works.” These explanations are often accurate. They are not evasions, yet they subtly reposition the self. The speaker becomes less an author of action and more a relay through which decisions pass.
From a Humean perspective, this is neither a moral failure nor a lapse in integrity. It is a predictable response to an environment that rewards smoothness and efficiency while penalizing friction. When decisions arrive pre-justified, the emotional weight of choosing diminishes. The tension of uncertainty, the risk of being wrong, and the sense of personal stake all recede. These experiences are not incidental to moral life. They are the conditions under which moral character is formed.
The way children learn right and wrong makes this clear. They do not begin with principles. They begin with reactions. Embarrassment, pride, guilt, and relief teach them what responsibility feels like. Moral development occurs through involvement, often uncomfortable involvement, in the consequences of one’s actions. When that involvement is removed, something essential is lost.
Artificial intelligence does not eliminate morality. It reduces involvement.
Participation versus passivity
The central issue, then, is not whether AI systems can make ethical decisions. The deeper question is whether people continue to experience themselves as making decisions at all. A world can be orderly, efficient, and even fair while producing individuals who feel increasingly detached from what happens through them. Good behavior may increase even as moral ownership declines.
Hume helps clarify why this matters without turning it into a sermon. If moral life is built out of habits, then technologies that reshape habit inevitably reshape character. This happens not because such systems intend to do so, but because formation works that way. What we repeatedly delegate, we gradually stop inhabiting. What we stop inhabiting, we stop becoming.
The trade is not between right and wrong. It is between participation and passivity.
When moral decisions become easier, faster, and more optimized, they also become less formative. Wisdom does not develop in the absence of uncertainty. Responsibility does not develop without the felt weight of consequence. These qualities do not emerge automatically from well-designed systems. They emerge through practice.
The danger, then, is not that artificial intelligence will make us immoral. The danger is that it will make morality thinner. From the outside, everything may appear functional and well ordered. From the inside, something essential may be missing: the sense that this is my judgment, my responsibility, and my doing.
Hume would not urge us to abandon technology or retreat to an imagined moral past. He would remind us that human beings are shaped by what they repeatedly do without reflection. If that is true, then the moral question of artificial intelligence is at once simple and difficult.
Timothy Lesaca is a psychiatrist in private practice at New Directions Mental Health in Pittsburgh, Pennsylvania, with more than forty years of experience treating children, adolescents, and adults across outpatient, inpatient, and community mental health settings. He has published in peer-reviewed and professional venues including the Patient Experience Journal, Psychiatric Times, and the Allegheny County Medical Society Bulletin, with work addressing topics such as open-access scheduling, Landau-Kleffner syndrome, physician suicide, and the dynamics of contemporary medical practice. His recent writing examines issues of identity, ethical complexity, and patient–clinician relationships in modern health care. Additional information about his clinical practice and professional work is available on his website, timothylesacamd.com. Further publications and details may be found on his ResearchGate profile.




