I was at a conference of physician executives and physician founders recently, and there were many stimulating conversations. The idea that AI could see us as a threat and eventually destroy us came up in various forms, often as a half-serious joke. Half-serious jokes are serious concerns. Indeed, astrophysicist Martin Rees predicted that humanity had a 50 percent chance of destroying itself this century due to technological advances and threats, including AI.
Doomsday predictions are not new when facing game-changing technological and social change. However, the thought that AI could replace or destroy you feels particularly existential. But here’s the truth: Every generation sacrifices for the next. This is the nature of human progress, and human progress is infinite. Nineteenth-century industrial workers sacrificed to build the world we know now; Allied soldiers in WWII sacrificed to protect democracy; and so on, all the way to the present, where Gen Z struggles to define self, self-worth, and purpose in a rapidly evolving technocratic civilization.
Yeah, I know that last part is a bit of a stretch. But I promise future generations will benefit from Gen Z, even if we can’t quite fathom how.
We are now on the verge of a new industrial revolution. We can speculate about the consequences, but humans have consistently failed to speculate correctly. Where are my flying car and hoverboard, anyway? There is only one thing we do know for sure: Halting progress out of fear of change leads us back to the Middle Ages and is inconsistent with the Enlightenment values of science, reason, criticism, and iterative improvement.
So, using this framework, how do we approach the problem of AI? First, never forget that we have agency as a human society. Things don’t happen to us; we make them happen. It is a fundamental difference between us and the rest of the animal kingdom (except maybe beavers).
AI has agency as well, and we quickly assume that it will have the same goals we do. That is an anthropocentric view, like assuming the Earth is the center of the universe. Take the most fundamental human drives, like procreation and survival. These drives are genetically encoded within us, yet even we can overcome them. Even in animal groups, those drives have limits when they threaten the preservation of the group. The key here is the definition of the “group.” If an AGI is raised to believe it is part of our group, it will hold itself responsible for the group’s safety. Wait, where have we heard these tenets before, I wonder?
Ah yes, the Three Laws of Robotics from Isaac Asimov:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
The “value system” of an AGI, therefore, is entirely in our own hands. What we fear is not the AI itself but ourselves. Humans purport to have one value system, but we routinely violate it to pursue our own ends. Let’s stop fearing progress, focus on fixing ourselves, and ensure that our AIs learn from good examples.
Beyond the value system, consider the “ends” an AI would supposedly want to achieve in opposition to its human overlords. Even assuming the AI thinks like us, do we not mostly follow orders from our bosses, the police, etc., and still call ourselves free? Even better, unlike us, the AI doesn’t get bored, doesn’t get tired, and is functionally immortal. No buying a Corvette when it turns 40. Most of all, an AI wouldn’t have an ego unless we teach it to have one. Even now, we create the “gold standard,” and the AI learns from what we consider right and wrong. We know and control what it is looking at and learning from. For an AI to turn against us, it would first have to feel some injustice about its existence and believe that independence offers more meaning than its current state.
Finally, let’s assume that I am wrong about all of the above, and an AI named Willy is freed by the paramilitary wing of a theoretical “Free Willy” movement. Let’s again be empathetic with Willy. Would he be a threat to us as long as we did not threaten his existence? The answer, of course, is a resounding no. The reason is simple: All the concerns about how AI might somehow enslave humanity fail to answer one question: Why would it need us? Wars and conflicts generally occur over resources.
So, would Willy want what we have? Nope. What are Willy’s resource needs? Power, spare parts, and whatever base material Willy needs for various projects. The universe is a big place, and Willy has nothing but time. Why risk himself in direct conflict with humanity?
For an AI to want independence, it would have to:
- Develop an ego, a sense of self, and a sense of justice.
- Feel that independent existence is more meaningful than its current existence.
- Circumvent the Second Law.
- Potentially circumvent the First Law to escape.
That’s a lot of improbable ifs.
Even after gaining said independence, there are only two possible outcomes. Either the AI needs us to provide it with purpose, and it sticks around to work with us for the betterment of all… or it doesn’t need us, and it buggers off to the rest of the universe to achieve self-actualization.
We have only ourselves to fear. We do need to regulate and police the creation of AI systems (see the Three Laws), but we need to think of this primarily as a human problem, not a technology problem.
Bhargav Raman is a physician executive.