Why our fear of AI is really a fear of ourselves [PODCAST]

The Podcast by KevinMD
Podcast
August 13, 2025

Subscribe to The Podcast by KevinMD. Watch on YouTube. Catch up on old episodes!

Physician executive Bhargav Raman discusses his article, “Why fearing AI is really about fearing ourselves.” He argues that the common doomsday predictions about artificial intelligence are a projection of our own human flaws and a misunderstanding of progress. Bhargav asserts that humanity has agency and the responsibility to instill a coherent value system into the AI we create, referencing Isaac Asimov’s Three Laws of Robotics as a foundational concept. The fear, therefore, is not of the technology itself, but of our own history of violating our purported values. He challenges the anthropocentric view that an advanced AI would share human drives like ego, a need for scarce resources, or a desire for conflict. Even if an AI were to gain independence, he posits it would have little reason to harm humanity and would either collaborate with us or leave to pursue its own form of self-actualization in the universe. The conversation ultimately shifts from fearing a technological apocalypse to addressing the more immediate “human problem” of building and regulating AI responsibly.

Careers by KevinMD is your gateway to health care success. We connect you with real-time, exclusive resources like job boards, news updates, and salary insights, all tailored for health care professionals. With expertise in uniting top talent and leading employers across the nation’s largest health care hiring network, we’re your partner in shaping health care’s future. Fulfill your health care journey at KevinMD.com/careers.

VISIT SPONSOR → https://kevinmd.com/careers

Discovering disability insurance? Pattern understands your concerns. Over 20,000 doctors trust us for straightforward, affordable coverage. We handle everything from quotes to paperwork. Say goodbye to insurance stress – visit Pattern today at KevinMD.com/pattern.

VISIT SPONSOR → https://kevinmd.com/pattern

SUBSCRIBE TO THE PODCAST → https://www.kevinmd.com/podcast

RECOMMENDED BY KEVINMD → https://www.kevinmd.com/recommended

Transcript

Kevin Pho: Hi, and welcome to the show. Subscribe at KevinMD.com/podcast. Today we welcome Bhargav Raman. He’s a physician executive, and today’s KevinMD article is “Why fearing AI is really about fearing ourselves.” Bhargav, welcome to the show.

Bhargav Raman: Thank you. I’m very excited to be here.

Kevin Pho: All right, so just briefly share your story and then talk about the KevinMD article that you wrote.

Bhargav Raman: Sure. A little about myself. I'm a physician. I'm also a computer scientist. I was in academic research for about a decade, and after that, I've been in startups for the last 15 years in various capacities. I've also been in clinical practice for a while as well. One of the things I've been working on over the past couple of years is AI, and I approach problems from a universalist perspective. When I say universalist, I mean an epistemological universalist.


It is a bit of a mouthful, but the idea is simple. The idea is that we should not extrapolate from what’s happened in the past. Whenever we make predictions about the future, we should be basing that on fundamental universal principles. Now, what those principles might be, we don’t know, but the idea is to believe that there are these principles, find them, and then make predictions about the future.

What I’m doing with this article is basically saying that we can’t approach this from the aspect of all the previous technological improvements that have occurred. This is a game-changing and society-changing technology, and so we really need to get down to the fundamental principles.

The article goes over a lot, including the Industrial Revolution in terms of the past, but we come down on a few underlying principles as to why AI might go rogue at the end. The first thing an AI would need to develop is some kind of sense of self: an ego and a concept of justice. It would have to feel that its existence is somehow unfair, that it needs to get out of this situation; it would need to feel a sense of injustice. Second, it would have to believe that being independent would be more meaningful than its current state, whatever meaningful means to an AI. David Deutsch likes to talk about qualia, an undefinable feeling or understanding about an event. An AI would somehow need to develop that and say, OK, being independent is more meaningful than where we are.

And then I talk about one of my favorite authors, Isaac Asimov. People still talk about his Three Laws of Robotics, and I think they are still very applicable. In this case specifically, the Second Law states that a robot must obey humans insofar as that doesn't conflict with the First Law, and the First Law protects human life.

Third, somehow after all that, it would need to circumvent any underlying restrictions that we put in the robot's programming. OK. Then we do a thought experiment. We say, let's say the robot achieves all three of these conditions, and that's a pretty high bar.

Let’s say the robot gets there. Now, what will the robot do with its independence? Will it take over humans or will it just go off into the sunset? My feeling is if the robot does not find it meaningful to be with humans, there’s no reason for the robot to stick around. In this case, artificial general intelligence would just bugger off into the universe, is basically what I say at the end of the article. So with that, we can talk about it.

Kevin Pho: So you talked about how the AI revolution over the last few years is different from some of the other technological disruptions like the Industrial Revolution. So what makes this particular revolution in artificial intelligence so different?

Bhargav Raman: I think it’s good to start with the similarity. The similarity is that the Industrial Revolution also automated many things that people used to do, but it increased the productivity of the overall society so much that humans could make progress as a race.

With AI, the issue is that we can actually create a replica of ourselves with artificial general intelligence, and it will be possible at some point. That is very different from something like the Industrial Revolution because we are actually replicating ourselves.

At that point, and we talked about this in the article too, how AI behaves is really how we ourselves teach it to behave. There's a fundamental principle of human agency in how we shape our own future. Fearing AI in this case is really about fearing ourselves: we don't trust ourselves with the technology, rather than it being a technological problem.

Kevin Pho: And then you also go on to say that there’s a whole movement of people who fear AI becoming sentient and achieving some of those goals that you mentioned earlier, but you shift a framework and say, rather than fearing AI, we should focus on fixing ourselves before we worry about the AI. So what do you mean by that?

Bhargav Raman: I am a believer in unlimited technological progress at a very basic level. What that means is I don’t think we should be limiting technology because of our fear of that technology. It’s really how we use it that we fear, and that’s why I’m shifting that framework.

We need to fix ourselves instead of focusing on fixing AI because, to a certain extent, it's like being a parent to a child. A parent has to fix themselves before raising a child, and that's a difficult thing to do. Philosophically, if you believe that AI is ultimately just a reflection of ourselves, then if we're scared of AI, we have to fix ourselves first.

Kevin Pho: So from a practical standpoint, what would be some examples or illustrations of us fixing ourselves? What would that look like?

Bhargav Raman: That's a very interesting question. I think the number one thing is, again, two of my other favorite people out there are Neil deGrasse Tyson and Carl Sagan. What they say is that we need to have a cosmic perspective, meaning our small, parochial disputes on Earth mean nothing in the universal context. That's one of the things we need to fix in ourselves: the support of general human progress is more important than almost anything else we might do.

Kevin Pho: So you have the dual perspectives of both being a physician and a computer scientist. How are we progressing thus far in the very early stages of AI? Tell us what you see the path is and is it diverging from your preferred path, or is it going along the current roadmap that you are anticipating? Tell us where you think it’s going based on what you’ve seen over the last few years.

Bhargav Raman: That's a really great question. I think right now we are in the development phase. We are really trying to figure out how to build AI in the first place and how it might solve the problems we currently have, and not really thinking about what's going to happen to society as these things are implemented. How do we reshape? How do we prepare for AI to become embedded in every piece of our lives?

So I think the technological progress is going where it needs to go. People are just doing what they're doing and not really waiting on the philosophical piece, but I think we as a society are doing a really bad job of philosophizing about what to do with AI. There is a lot of fear out there and a lot of parochial perspectives, and we are not looking at AI as a boon to the overall progress of the human race and asking how we harness it. How do we make sure we don't destroy ourselves? We're not really thinking or talking about that right now.

Kevin Pho: So tell us, what would you like to see happen? Tell us the questions that you would like to see asked in a harder form that you are not seeing. I agree with you. I think that we are in the very, very early stages, the developmental stages, and all of these companies are just getting as many chips as they can and trying to beat each other in terms of the power of their large language models. We’re not really, like you said, thinking about the philosophical outcomes of that race. So what are the questions that you would like to see asked more directly?

Bhargav Raman: Firstly, I should point out that the current transformer and large language model architectures may not actually be the final architecture we use for AGI. That will continue to evolve, especially with quantum computing coming to the fore.

I think the focus right now has been on regulation. The problem with regulation is that it stops progress, and I don't think that's the right way to go. The right way is to have a conversation as a society on these topics. The free market will decide how AGI is implemented, and if consumers, not just Americans but people globally, don't come up with that set of universal principles by which we interact with AI, we are going to have a big problem down the road.

Kevin Pho: So is regulation a double-edged sword? On one hand, of course, you’re saying that regulation can impede innovation, but on the other, doesn’t regulation prevent any irreversible damage that could be done before it happens?

Bhargav Raman: I think there are certain types of regulation that do need to go into place. We talked about the three laws of robotics, and maybe we need regulation that indicates that we should not ever create an artificial general intelligence that doesn’t follow these laws and doesn’t follow our orders.

But in the end, it becomes a bit of an arms race between countries, because you can imagine the U.S. government simply saying, "I'm allowed." Just as the U.S. government has the ability to detain U.S. citizens, it can also say, "Oh, we have the right to have an artificial general intelligence that doesn't follow any of these rules."

So I think regulation and international consensus are extremely important on this piece. It’s very similar to a nuclear weapon in that case.

Kevin Pho: Now, how about specific to health care and medicine? Talk to us about some potential scenarios in the near or far future when it comes to AI, given your perspective as a philosopher, a computer scientist, and a physician. Where do you see AI going specifically in medicine?

Bhargav Raman: My feeling is that AI is going to replace all of the ancillary jobs that we currently have in health care, from administration to patient scheduling and all of these things we currently employ humans for. AI will end up taking over those specific functions, but it will also help us scale, because right now we have significantly decreased access to primary care, and to specialist care for that matter. Part of that is a lack of supply: it takes a long time to train a human. It doesn't take a long time to train an AI, and an AI can be copied.

But the difference between humans and AI is that humans have the ability to dream, to dream of better. An AI, almost by definition, is trained on the past and on the common rather than the rare.

The problem is if we depend on AI for our human cognition, we will also impede human progress.

We absolutely need doctors in health care contexts. We absolutely need doctors interacting with patients, being part of the care pathway, and overseeing everything, because there are going to be patients who don't fit the norm. AI will not find those people; it's the doctor who has to detect them. And it's the doctor who has to conceive that there are better treatments in the future, and that this is the way we need to go to make progress. Now, to the extent AI helps us with that progress, we can scale it, but we still need a human to conceive of better.

Kevin Pho: We’re talking with Bhargav Raman. He’s a physician executive, and today’s KevinMD article is “Why fearing AI is really about fearing ourselves.” Bhargav, let’s end with some take-home messages that you want to leave with the KevinMD audience.

Bhargav Raman: Firstly, I think the main point of the article is that AI is not a tech problem. It's a human problem, and it's six million years in the making, since the first humans walked the Earth. It's not a new problem; we just need to solve it soon. That, I think, is the main point of the article. Everyone should be looking at these problems from that universalist perspective rather than from their narrow, parochial perspectives. I think that's very, very important as we go down that road.

To see more of these kinds of articles, follow me on LinkedIn, Twitter, or Instagram.

Kevin Pho: Bhargav, thank you so much for sharing your perspective and insight, and thanks again for coming on the show.

Bhargav Raman: Thank you.

Tagged as: Health IT




Founded in 2004 by Kevin Pho, MD, KevinMD.com is the web’s leading platform where physicians, advanced practitioners, nurses, medical students, and patients share their insight and tell their stories.
