When artificial intelligence developers gather to build tools that will reshape health care, one critical voice is often missing: public health.
Despite AI’s potential to improve outcomes and streamline operations, it is being developed with limited regard for public health priorities. The absence of this input is not just a technical oversight. It is an equity issue with far-reaching consequences.
From algorithmic bias to the exclusion of community-level data, equity shortcomings are embedded in many AI tools before they ever reach a hospital or health department. Public health agencies are rarely treated as strategic collaborators in AI development, so they are seldom at the table when design decisions are made. As a result, these tools often fail to address community-level needs or reflect the priorities of public health practice.
Public health is left out or opts out.
Public health leaders are increasingly aware that AI will shape the future of the field, but many are hesitant to engage due to concerns about HIPAA, data liability, and ethical risks. For already stretched departments, these concerns are not abstract. They stem from real risks and limited infrastructure to manage them.
As a result, public health gets boxed out of early design conversations. Instead of helping shape these tools, departments are left reacting to systems built without them. In some cases, staff are using AI informally or under the radar, often without guidance, training, or a full understanding of ethical and legal implications. This creates a dangerous disconnect. Equity is central to public health, yet AI tools are entering workflows without any assurance that they reflect that mission.
Current AI priorities overlook population health.
Much of today’s health care AI development focuses on billing, clinical workflows, and patient engagement. These are important goals, but they miss the broader context of structural inequities and social determinants of health.
Public health is often excluded from these conversations, not only from formal oversight but also because of gaps in infrastructure, staffing, and technical capacity. Many departments cannot spare the resources to engage, leaving professionals waiting for AI's benefits to trickle down. Those who do seek AI expertise often face recruitment and funding barriers. Meanwhile, the questions that matter most to public health go unasked:
- Where are the tools designed to detect overdose spikes using community data?
- Where are the models that factor in housing, food insecurity, or maternal health disparities?
These issues are central to public health practice, yet few AI systems are built with them in mind.
We know the gaps and the opportunities.
Public health leaders are used to working with limited resources. According to America’s Health Rankings, in 2022–2023, the national average for state public health funding was $124 per person. In Wisconsin, it was only $69, ranking 49th among states. This underinvestment contributes directly to the sector’s difficulty in adopting technologies like AI.
But the opportunities are clear. AI could improve disease surveillance by identifying patterns in emergency room visits, school absenteeism, and wastewater data. It could support misinformation monitoring and enable faster, more targeted dissemination of accurate, reliable information. It could even help agencies identify where outreach is falling short and improve how services are delivered.
These are not theoretical benefits. They are needed now. Advocating for a stronger public health role in AI development and policy is essential to ensure these tools reflect the needs of communities and the systems that serve them.
What true inclusion looks like
To engage effectively, public health professionals need a foundational understanding of how AI works, including its limitations and risks. Many tools are built on reused code that may not prioritize equity or transparency. As a result, biased systems can spread without the knowledge or consent of those using them.
Inclusion means more than a seat on an advisory board. It requires involving people with community-level insight at every phase, from product scoping to data governance. Public health agencies must have a defined role in these decisions, supported by safeguards that promote trust, transparency, and shared responsibility.
A call to developers, funders, and policy leaders
Funders and policymakers have a critical role to play. They can prioritize equity by embedding expectations for public health inclusion into grants, contracts, and innovation initiatives. Safeguards should be built into funding mechanisms to ensure AI tools reflect diverse community needs and do not worsen existing disparities.
If you are building AI tools for health, ask yourself whether your team understands population-level strategy, prevention infrastructure, or the ethics of community-based data use. If not, your system may be efficient, but it will not be just.
Public health leaders, practitioners, and communities must be actively involved in shaping how AI is built, governed, and deployed. Inclusion must happen at the front end, not as a retrofit.
Laura E. Scudiere is a public health executive.