As artificial intelligence (AI) continues to gain traction in health care, its potential to improve patient outcomes, streamline workflows and reduce human error is becoming increasingly evident. Many health care leaders are embracing AI as a core part of their digital transformation strategies. In fact, a recent study in Health Affairs, which analyzed data from the American Hospital Association, found that sixty-five percent of U.S. hospitals used predictive AI models for purposes including predicting health trajectories or risks for inpatients, identifying high-risk outpatients and recommending treatments.
Yet despite AI’s increasing use in health care, mistrust in the technology remains a major barrier. A single error can cause clinicians and health system leaders to lose faith in an entire AI tool or model. This erosion of trust can have a ripple effect, stalling progress and preventing the full realization of AI’s benefits. Today’s health tech leaders must play a crucial role in reinforcing trust in AI by setting checks and balances and building an understanding of AI’s role in the future of care delivery.
AI bias and hallucinations drive mistrust
A cautious approach to implementing AI tools in health care is not only warranted but necessary. With so many companies today offering AI-enabled technology, it can be difficult to see through the noise and identify the tools that are ethically developed and will have a positive impact on care delivery. The AI regulatory landscape is also complex and rapidly changing, and maintaining compliance is especially challenging for global companies that must meet the regulations of multiple countries.
Then there is the risk of AI bias and inaccurate outputs. We know that bias embedded in AI models can unintentionally reinforce systemic inequities, and that AI tools can sometimes generate factually incorrect information, known as hallucinations. In health care settings, where biased or false information could impact patients’ lives, it is understandable that users are concerned about the accuracy of their AI tools’ outputs, and even occasional mistakes can significantly damage trust.
While acknowledging the current limitations of AI and taking a measured approach to its implementation in health care is wise, this does not mean hospitals and health systems should avoid AI entirely. AI models do not get better on their own; they improve rapidly when consistent use is paired with feedback, monitoring and refinement, so the errors identified today can inform a more accurate model tomorrow. As an industry, we need to approach AI with the mindset that new technology should not be dismissed after a single mistake, but rather evaluated as a tool that improves through use and feedback. Ultimately, neither AI models nor humans are infallible, but with patience and the right resources, AI models, like people, can learn from their mistakes and improve.
Building trust through leadership and oversight
Trust in AI cannot be built overnight. It requires consistent communication, proactive governance and a commitment to ethical AI development. Both health care technology companies and health systems must be proactive about monitoring the real-world performance of AI, addressing bias, setting and adhering to ethical guidelines for its development and deployment, and offering ongoing education and training on the technology’s benefits and limitations in health care. By implementing these checks and balances, health systems can harness the benefits of AI while minimizing risks and ensuring that it is used responsibly and effectively.
Recognizing that different organizations have varying levels of comfort with AI, health care technology companies should also offer tiered models of AI utilization, allowing users with a lower baseline of trust in AI to start slowly and implement tools incrementally. For example, health systems with lower initial comfort levels could begin by using AI to automate routine tasks, helping them to build confidence in the tools before moving to more sophisticated applications.
Health care technology leaders can also play a critical role in fostering trust in AI by communicating the “why” behind every AI tool. The clinicians and health care staff who use these tools will not be impressed by jargon: They want to understand how the tool is going to improve care delivery and impact outcomes such as patient safety and staff burnout. By reinforcing the crucial role AI will play in the future of care delivery, setting rigorous checks and balances and meeting users at their level of comfort with AI, health care technology leaders can build confidence in the technology as it continues to evolve and improve. Trust, once earned and maintained, will serve as the foundation for AI’s long-term success in improving care delivery.
Miles Barr is a health care executive.