I built an AI-powered system that sources and screens healthcare startups and generates structured investment memos on them in under ten minutes. It pulls data from public filings, clinical trial registries, patent databases, and competitive landscapes. It scores companies across multiple risk dimensions. It produces fifteen-section analyses that would take a human analyst days to assemble.
I use it constantly. And I would never let it make an investment decision.
I say this not as a physician who’s skeptical of technology. I’m a physician-scientist with over 100 peer-reviewed publications who transitioned from clinical practice to running a healthcare venture fund. I’ve evaluated hundreds of startups across digital health, biotech, devices, and therapeutics. AI has made the information-gathering phase of my work dramatically faster. But it has also made something else very clear: the most important part of healthcare investing, the part that separates a good bet from a costly mistake, is the part AI cannot do.
What AI does well
Credit where it’s due. AI is genuinely transformative for the mechanical layer of investment analysis.
It can scan thousands of companies in a fraction of the time it would take a human analyst. It can identify competitive landscapes, map patent clusters, flag regulatory filings, and surface clinical trial data with speed and breadth that no individual could match. It can generate structured reports that organize disparate information into a readable format. For a fund evaluating dozens of inbound deals per month, this compression is invaluable.
I built my system specifically for healthcare deal flow. It uses a multi-section structured prompt with healthcare-specific triggers: flagging things like regulatory pathway complexity, reimbursement risk, and clinical adoption barriers. It scores companies across dimensions including team strength, market timing, competitive moat, and exit potential. The output is consistent, comprehensive, and fast.
And it is not sufficient.
Where AI breaks down
The fundamental limitation of AI in healthcare investing is the same limitation it has in clinical medicine: It can process information, but it cannot exercise judgment rooted in lived experience.
Consider a medical device startup. My AI system would analyze the company’s FDA clearance status, patent portfolio, revenue traction, and investor syndicate. It would note the published clinical study and the institutional backing. It would likely score the company favorably: strong IP, regulatory milestone achieved, growing unit sales, reputable co-investors.
But it cannot call five surgeons who perform the relevant procedures daily and ask whether they would actually change their practice for this device. It cannot detect the subtle difference between a clinical problem that’s theoretically important and one that’s operationally painful enough to drive adoption. It cannot understand that the integrated OR ecosystem, where hospitals buy bundled equipment suites from a single vendor, creates a switching cost that no single-feature device can overcome.
I’ve passed on deals that my AI system scored highly. In every case, the reason was the same: The clinical adoption thesis didn’t hold up when tested against physicians who live inside the workflow. AI saw the data. Physicians saw the reality.
The three things AI cannot evaluate
After running my AI system alongside physician-led diligence for over a year, I’ve identified three specific areas where the technology consistently falls short.
First, AI cannot assess clinical workflow integration. Whether a product fits into how physicians, nurses, and administrators actually work, or whether it adds friction, is a judgment that requires having been inside those workflows. An AI system can tell you a product is technically superior. It cannot tell you that adopting it would add two extra steps to a procedure that clinicians have spent years optimizing for speed. That insight comes from lived clinical experience.
Second, AI cannot evaluate stakeholder willingness to change. Health care has a unique adoption challenge: The user, the buyer, and the payer are often three different people. A physician might want a product that the hospital’s value analysis committee won’t approve. A patient might benefit from an app that insurers won’t reimburse. AI can model market size. It cannot tell you whether the specific humans in the decision chain will actually say yes. Understanding that requires knowing how hospitals make purchasing decisions from the inside.
Third, AI cannot distinguish between evidence that supports clinical utility and evidence that supports clinical adoption. A published study showing a device performs well in a controlled setting is not the same as evidence that surgeons need it in daily practice. Physicians trained in critical appraisal (reading methodology, assessing endpoints, questioning generalizability) bring a layer of evidence evaluation that AI pattern-matching cannot replicate. I’ve seen AI systems treat a single-center case series with the same weight as a multi-center randomized trial. A physician-scientist would never make that mistake.
Why this matters beyond investing
The enthusiasm around AI in health care is warranted in many areas. AI is accelerating drug discovery, improving diagnostic imaging, and enabling precision medicine in ways that genuinely advance patient care. I’m not arguing against AI adoption. I’m arguing for clarity about where the human layer remains essential.
In clinical medicine, we’ve already learned this lesson. AI can read a chest X-ray faster than a radiologist. But the radiologist who integrates that reading with the patient’s history, physical exam, and clinical context is the one making the diagnosis. The AI is a tool. The physician is the decision-maker.
Healthcare investing works the same way. AI can generate a comprehensive report on a company in minutes. But the actual investment decision is made by the physician-scientist who reads that report and then calls five specialists, walks through the clinical workflow, and pressure-tests the adoption thesis against frontline reality.
The information layer is getting faster. The judgment layer is as scarce as it has ever been. And in health care, whether you’re treating a patient or evaluating a company, judgment is what matters most.
Harsha Moole is an internal medicine-trained physician-scientist with more than 100 peer-reviewed publications, including work featured in the New England Journal of Medicine. After years of clinical practice and gastroenterology outcomes research, he made an unconventional transition from the bedside to the boardroom by founding PhysicianEstate, a healthcare-focused venture capital firm.
Over the past seven years, Dr. Moole has made 22 early-stage healthcare investments across digital health, medical devices, biotech, and therapeutics. He has also built a network of more than 200 physicians from institutions such as Johns Hopkins and Stanford who help source opportunities and provide clinical diligence before capital is deployed. His core thesis is that physician-scientists with firsthand clinical experience are uniquely positioned to identify healthcare investments that generalist investors often miss.
His research background is reflected in his publication record on Google Scholar, and he shares professional updates on LinkedIn.