It is no secret that the hyperscalers (e.g., Google Cloud, Amazon AWS, etc.) and the frontier AI labs (e.g., OpenAI, Anthropic, DeepMind) are devoting extraordinary attention, time, and capital to the AI buildout. The big four (Amazon, Google, Microsoft, Meta) have earmarked over $600 billion to this in 2026 alone. This trend has been the central theme in the equity market for the last three years or so. During this time, Nvidia became the most valuable public company by selling the leading AI hardware, and private AI company valuations have grown dramatically. As the novelty of chatbots fades, the market focus is shifting toward the physical AI buildout (e.g., data centers, power grid, etc.) and the dissemination of autonomous systems (“agentic” AIs) capable of independent reasoning and multi-step execution.
One cannot get through the day without hearing about AI. What does all of this really mean for a practicing surgeon, and for our job security? The fearmongers have been saying for years that radiology will be eaten by AI, and yet radiologists are doing more work than ever. And what was that about Elon Musk saying medical school is “pointless” and not worth going into?
To answer these questions, I first need to give two definitions and outline one concept. Achieving artificial general intelligence (AGI) has been the primary goal of this technology arms race. AGI has a few definitions, but to oversimplify, an AGI model would be one that meets or exceeds human capability across all cognitive domains and can learn new skills better than a human despite no specific training in those domains. As a non-health care example, many readers will be familiar with Tesla’s Full Self-Driving (FSD), which is not AGI. It is capital-A, capital-I AI in the sense that it is a computational construct pre-trained to perform a human task as well as or better than a human, so it is a highly sophisticated artificial narrow intelligence (ANI), but it is not AGI. When your Tesla parks in the driveway, it will not follow you into your house and cook your dinner or be your dental hygienist. It will not learn to perform additional non-driving tasks without more training, because Tesla FSD was designed and trained to perform well on only one narrow task: driving.
If an AGI model emerges, then this model would (after being distilled and uploaded into a humanoid robot, more on that later) be able to get into your 1999 Ford Bronco and drive it better than Tesla FSD, without the extra video cameras and the rest of the FSD hardware. So, despite the decade-plus head start that Tesla FSD has, an AGI would leapfrog it overnight. It would also be a better long-haul trucker, programmer, chef, biostatistician, and paralegal. An AGI would outperform humans at essentially all other cognitive tasks and white-collar professions without any specific pre-training in those domains. But what about modern surgery? Does it not logically follow that an AGI would outperform a human surgeon at the cognitive tasks necessary for surgery? What about actual surgery?
The strategic divide in AI development
Here is the catch: There is a schism emerging among leading AI researchers and at the frontier labs. Some believe that there is no clear line of sight on a true AGI (more on this also below). This is being discussed quietly at these labs, because these same labs are on the receiving end of this massive CapEx spend that has been committed to the AGI buildout. Instead, some labs are pivoting, or at least hedging, with ANI solutions. Limited (narrow) agentic AI models have recently been released with approaches to replace programmers, do paralegal work and legal research, guide project management, replace client relationship managers, and replace many other horizontally integrated software solutions.
If you are in charge of AI strategy at Anthropic, OpenAI, or Google, then right now you have two choices:
- Start monetizing what you already have and release sophisticated ANIs to win a few technically narrow but wide-TAM (total addressable market) frontiers.
- Keep doubling down as long as capital is available on being the first to true AGI, allowing you to jump ahead of everyone else on all frontiers when you achieve that.
In the first scenario, you do not end up with AGI anytime soon, but the massive investment into the space yields many ANI agents, each narrowly trained to specialize in one narrow task and to become as good as a human at it. These agents will then be linked together and communicate so that, as a group, the ANI cluster eventually resembles something that looks like an AGI but is not true AGI. The second school of thought is that with a few additional programming breakthroughs and more scaling, we will develop a true AGI (more below, as promised), and that spending time and capital on developing ANIs is a waste because they will be obsolete as soon as the first AGI emerges.
The multi-agent ANI surgical future
If we follow the first (no near-term AGI, but many ANIs emerge) scenario through, we may have, for example, one ANI that is superior to a vascular surgeon at reading CT scans for aneurysms, automatically provides all centerline measurements and angles, and can immediately tell which company’s devices are on-IFU (i.e., the anatomy falls within a device’s FDA-approved instructions for use) and which are not. If no on-IFU options are available, it will tell us what non-IFU options are available, if any.
This ANI will be better at case planning than any human or group of humans. Another ANI may be able to perform a chart review directly and determine which additional preoperative workup or optimization is needed, with better outcomes than any anesthesiologist or cardiologist. A third ANI agent, sitting on Cook’s, Gore’s, or Medtronic’s servers, would complete the planning and device ordering for the case. It would communicate directly with the patient and the hospital scheduling team (which would already have been replaced by another ANI) to find an OR time. These ANIs might then liaise with a fourth ANI on the Philips C-arm in the hybrid room to register the patient, fuse the cone beam CT scan, and confirm patient positioning. Eventually, this cluster would communicate with an ANI-enabled robot that obtains vascular access, has actuators that can control and navigate wires and catheters, controls the fluoroscopy machine, deploys the closure devices, and holds pressure at the end of the case.
This sounds like science fiction, but there is a high probability that within the next two years we will have a complete solution for preoperative planning and the other preoperative cognitive tasks associated with aortic surgery, capable of rendering vascular surgery second opinions. Our scribes are already disappearing, and the human schedulers will quickly follow suit. Even if no additional progress is made in AI architecture or scaling, the economic incentives are too strong to ignore. For outside-the-OR surgical tasks, the tsunami is not over the proverbial horizon; it is in plain view, we are standing on the beach, and the water is starting to go out.
The robotic components and ANI interconnectedness are further out and have technical and (probably major) regulatory hurdles to overcome, but all the pieces either exist today or will exist soon. They just need to be hooked together. The time needed to build out to scale, plus other regulatory friction, means this will not be a complete reality for at least five years, probably more like 10, but it is also coming.
The AGI singularity in health care
The second school of thought (AGI will eat everything) leads to the conclusion that the business strategy of developing these multi-connected ANIs described above is a complete waste of time, money, and effort because we will see a true AGI in the next 12 to 24 months. If this occurs, it will be a singular event, and the full ramifications of this are hard to imagine. This technology would leapfrog the admittedly optimistic timeline above. An AGI loaded into currently available humanoid robots would outperform all humans at all cognitive tasks (including surgery, especially robotic and endovascular procedures).
An AGI-enabled humanoid robot (with currently available hardware; no additional engineering is necessary) would meet a patient in the emergency room. It would then, for example, diagnose a posterior knee dislocation with a tibial fracture and no pedal pulses. It would recognize this scenario, relocate the knee, and perform a vascular exam. Finding still no pedal pulses, it would transport the patient to the CT scanner, and then, after finding a popliteal arterial occlusion, to the OR. It would then shunt the popliteal artery, plate the tibial fracture, confirm stable relocation, and perform a bypass. Not only would it not be limited by human biology, such as the need to eat and sleep, it would also not be limited by modern surgical specialty boundaries. It would perform all the tasks of the trauma surgeon, the orthopedic surgeon, and the vascular surgeon, and it would not recognize the boundaries between what we think of as classically nursing tasks, medical tasks, or surgical tasks.
It may be that every patient at the hospital has an AGI assigned to their case on arrival, and that the AGI performs all tasks for that patient regardless of specialty or profession: from peripheral IV, to bedside echo, to sternotomy, to rehab referrals, to billing. These AGIs would communicate with one another, and (assuming society has not completely unraveled in the process of these advances) this would invariably lead to the greatest improvement in modern medicine as these AGIs rapidly advance in knowledge together.
An AGI-enabled robotic surgeon would not be limited by the quality of the ICU, because they would be the ICU provider. They would not be limited by triage time at the CT scanner, because they would perform the triage. There would be no delay for admission orders or labs, because they would place the orders, draw and interpret the labs, and complete the documentation. Nothing would be out of stock or unsterile, as they manage the stock room and sterile core.
The technical hurdles to AGI
In addition to the broader social and political issues, there are a few computational and technical reasons to be bearish on this AGI-enabled surgical future. I have come to believe that there are at least two technical jumps needed to realize AGI as outlined above. The first is related to the mathematics underlying modern AI architectures. Historically, the “learning” (i.e., the updating of model weights) has been done with a technique called gradient descent with backpropagation. This is a fancy way of saying that during training, for each layer in the network and each neuron in that layer, we compute the slope (the gradient) of the model’s error at that point and then move in the downhill direction toward less error (we descend the gradient, hence the name gradient descent).
The first well-known problem with this is that the gradient is local: you can travel downhill until you find a local minimum, but that minimum is not guaranteed to be the global minimum (i.e., you found the best nearby solution, but there may be better solutions out of view). In practical terms, an analogy might be that if you are performing a transfemoral endovascular SMA stenting procedure, an algorithm that uses “transfemoral endovascular” as a starting point may help you minimize error by suggesting you change to a stiffer wire or choose a larger stent, thereby optimizing the local solution, but gradient descent alone is not going to get you to stop, swing the arm out, and switch to brachial access when you start to struggle. Nor is it going to suggest that you convert to open; we would need some other AI architecture to get to that.
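The local-minimum problem is easy to see in a toy sketch (purely illustrative; the function, learning rate, and step count here are my own assumptions, not any lab's actual training code). Plain gradient descent on a one-dimensional "error surface" with two valleys settles into whichever valley it starts near, even when a deeper one exists a short distance away:

```python
def error(x):
    # Toy error surface with two valleys: a shallow local minimum
    # near x ~ 1.35 and a deeper global minimum near x ~ -1.47.
    return x**4 - 4 * x**2 + x

def gradient(x):
    # The derivative (slope) of the error surface at x.
    return 4 * x**3 - 8 * x + 1

def gradient_descent(x, learning_rate=0.01, steps=2000):
    # The core of "learning": repeatedly step downhill along the slope.
    for _ in range(steps):
        x -= learning_rate * gradient(x)
    return x

# Starting on the right side of the surface, descent settles into the
# shallow valley; starting on the left, it finds the deeper one.
stuck = gradient_descent(1.0)    # converges near 1.35
best = gradient_descent(-0.5)    # converges near -1.47
print(error(stuck) > error(best))  # the first run found a worse solution
```

Nothing in the update rule ever jumps across the hill to the other valley; as with the access-site analogy above, escaping a merely local solution requires something beyond pure descent (random restarts, momentum, or a different architecture altogether).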
There are other well-known reasons to believe that gradient descent with backpropagation will not get us to AGI that are beyond the technical scope of this article, but it is sufficient to say here that I am not the first to raise this problem, and many people are working on different optimization strategies. There are many smart people who believe we already have the core architectures necessary to realize AGI, but that we lack the computational scale: they think that if we build larger models, feed them more data, and run them for longer, we will get there (also beyond the scope of this article, but they may be right; time will tell). Others believe that the scaling laws do not pencil out and that current architectures will not get us there. This thinking goes back to the emerging schism over the best AI strategy moving forward.
The economic reality of the AI transition
All of this sounds hyperbolic at first glance. But just a month ago the leading narrative in public equity markets was: Is there an AI bubble? Are we overspending on the AI buildout? Can these AI stock prices keep going up? Look at what has happened since then and consider what we can learn from it: An anti-bubble has emerged in software. Anthropic (Claude) released two ANI models, one aimed at replacing paralegal and legal research work and one to help with coding. These had been widely telegraphed for two years and should have been priced in (we knew this was coming!), yet upon release, software companies sold off, and there has been a sudden realization that AI is about to eat software.
This has led to a complete re-rating of publicly available SaaS companies, as investors question future cash flow estimates. The $IGV is down about 30 percent, and P/E and P/S ratios are at multi-year lows. This represents a major sea change for public and private investors. From the early 2010s to 2026 investors favored asset-light high-margin software businesses, and they were willing to pay high multiples to own them. Now they realize the next ANI model might kill their niche SaaS product. This is especially true if the product is not horizontally integrated. Companies are rethinking their moats, pricing, and labor needs. Layoffs have started as a result.
Simultaneously, private equity and private credit firms are realizing that they have an issue. They have been investing heavily in private SaaS companies, often with leverage. Now a sector-wide re-pricing is occurring in probably their most important space, and they are illiquid. Major publicly traded PE firms (e.g., KKR, Blue Owl, Blackstone, etc.) have experienced 20 to 60 percent sell-offs. Blue Owl froze redemptions last month. They were able to sell off some assets at par value (indicating that at least the underwriting was solid) as investors re-evaluate their risks. Nevertheless, it appears that private PE firms will be saddled with these losses as SaaS valuations decline. For now, at least, all of this is isolated to these specific sectors, and there has been no contagion.
Somewhat related, Citrini Research released a well-framed report last week on the broader impacts of AI on society, the public markets, and the financial sector. They highlighted the SaaS sell-off and pointed out previously under-appreciated but obvious threats to financial technology companies. As a result, Visa and Mastercard sold off 4 to 7 percent. The point I am trying to make with a few cherry-picked recent examples: No one will be immune to AI, and this is not all priced in. Of course, none of this matters to you, a surgeon, if you do not own Blue Owl or Visa stock in your IRA, right? Well, I would argue that if sophisticated financial operators are reconsidering the future cash flows of financial services companies (the area in which they should be most expert) in response to AI, should we not consider our futures as well? How insulated from AI are we?
Is surgery a HALO career?
Josh Brown from CNBC and The Compound podcast coined the term “HALO stocks” last month to designate stocks that he believes should be relatively immune to re-rating from AI threats during this sea change. HALO stands for heavy assets, low obsolescence. He argues that companies that make an actual product or provide a real service, are asset-heavy, and are not easily undercut by AI should be more insulated from this new macro-AI trend (think: Coca-Cola, Waste Management, ExxonMobil). Indeed, these stocks have outperformed during this recent re-rating since his call.
Some careers will be HALO careers; some will not. Being an elk hunting guide will be a HALO career: an AGI would be a smarter, stronger, and more efficient elk hunter than any human, but though I am not an elk hunter myself, I suspect humans will always prefer a human guide to lead their elk hunt over an AGI-enabled robot. Computer programming, on the other hand, is clearly not a HALO career. AI can already outperform humans at most programming tasks. The leading narrative has been that computer programmers will either focus on the harder programming concepts and let the AI agent do the menial tasks, or re-train to lead the AI agents that write the actual code. Neither will work long term, as AI is already replacing all human tasks in coding.
Surgery should be a HALO career, should it not? We are highly trained, expensive labor. We take all the medicolegal risk and there is significant regulatory burden to replacing us, right? At a high level, in the non-AGI scenario with ANIs slowly doing more and more, they may slowly replace certain parts of the job, like they are currently doing for computer programmers. Our jobs will morph and, during this evolution, hopefully we will spend less time doing the tedious non-surgical and less important tasks such as scheduling and documentation. But one of two things will eventually happen, either AGI will be realized in which case we would be one of the first professions targeted, or there will be a tipping point in the multiple-interconnected-ANI scenario where we become supervisors of AI agents, then shortly thereafter become obsolete.
Some may disagree with the first premise above: Why would surgeons be targeted first? Because if the hospital can have one AGI-enabled robot replace the 24/7 on-call trauma surgeon, it can conservatively replace $1 million to $2 million of labor costs per robot, coverage that currently requires three to four human surgeons. Remember, AGIs do not take vacation, they do not take maternity leave, and they do not need to eat, sleep, or drive to the hospital. They also do not need to call in the vascular surgeon, the orthopedic surgeon, or the neurosurgeon. They can do their own anesthesia and manage their own complications. The ROI of a single robot replacing a single surgeon is massive; the ROI of replacing a nurse is far lower because of the salary discrepancy and the staffing ratios. So, counterintuitively, when the first AGI-enabled robots are ready for deployment, they will replace the surgeons and anesthesiologists before the nurses and the CRNAs.
Preparing for the AI transition in surgery
What can you do to insulate yourself? Well, in the ANI-enabled future, proceduralists will be more protected than non-proceduralists, as the regulatory hurdles will be harder to overcome here. Certain procedures will also be more insulated from ANI development than others. Procedures that are common, that are already connected to imaging or AI-enabled guidance, and that are lower risk will be the first to be phased out. Eventually, an emergency room provider will order a robotic cholecystectomy or a diagnostic angiogram, and an ANI cluster will perform it. An open Whipple procedure will be more insulated (this is a HALO procedure): training data for an open Whipple would be harder to come by, the infrastructure needed to care for a Whipple patient is expensive, and the economic incentives (low TAM) make this higher-hanging fruit.
So, if you can, focus on mastering procedures, especially rare, open, and more complex operations. These will be the last cases done by humans. Emergent cases (gunshot wound to the abdomen with hypotension) done without pre-operative CT imaging may also be well-insulated. Next, become an expert in AI. Download Claude to your desktop and spend a half day building an agent. Hedge your career AI risk by expanding your investment horizon to HALO stocks and AI-insulated companies, or those companies and sectors best poised to benefit from AI.
I do not mean to be too pessimistic on the AI-enabled future in surgery. Though the medium term is hard to predict, in the short term the future is bright. AI will make us more efficient, safer, and hopefully improve patient-reported outcomes as it slowly comes online. In the long term, however, the outcome is also clear: The last generation of human surgeons is being trained now.
David Stonko is a vascular surgery fellow.