The Bay Area has become the epicenter of artificial intelligence, home to renowned startups and labs racing to prove AI’s power to transform our daily lives.
Just three months ago, California released its own Frontier AI Policy Report, commissioned by Governor Gavin Newsom, which warned that without stronger oversight and transparency, AI models threaten equity and accountability, especially in health care. Nowhere is this danger more urgent than in dermatology.
A few months ago, I experienced this gap firsthand while searching online, trying to figure out what the strange rashes on my skin were and why they were worsening. As I frantically scrolled through image after image, I rarely found any that matched my Southeast Asian skin tone. I felt frustrated and discouraged to see that the resources I turned to left patients like me underrepresented.
But what I faced online was just a glimpse of a much bigger issue: Dermatology as a field does not equitably reflect all skin tones. This is especially dangerous in AI tools, which are becoming increasingly popular in health care.
AI is transforming how we detect, interpret, and treat skin conditions. Today’s apps can flag skin concerns in seconds. Machine learning tools can also assist dermatologists in confirming diagnoses for patients or identifying rare or more complex diseases. But here is the problem: Most AI models in dermatology are trained mostly on images of lighter skin. Because of this, AI tools often misread or completely miss conditions on darker skin.
A lot of people rely on these tools. They offer access to care for patients who otherwise struggle to see a specialist. In rural areas or among uninsured populations, mobile apps and teledermatology are often the only available option. I have personally used them too, especially when I could not get in to see a dermatologist for weeks. But with skin bias, they do not expand access to care as intended. They end up excluding the very people who need help the most. In the Bay Area, where communities of color make up the majority, the lack of representation in dermatology images has dangerous consequences for diagnosis and treatment. Even in clinics, about 12 percent of dermatologists already report using AI tools to assist with care. So when these tools do not work equitably, they may misdirect providers and put patients’ lives at serious risk.
For AI to truly support dermatologic care nationwide, patients and providers need a clearer understanding of how these tools are developed. That is why transparency matters. Companies should clearly share how their tools are trained and how they perform across different populations. This kind of openness should be expected and required before providers and patients start relying on these tools.
The bias we see in dermatology AI is a direct result of the data these tools are fed. That means fixing it starts with building better datasets. The Bay Area, with its world-class tech companies, is in a powerful position to lead in building dermatology image libraries that reflect not only the diversity of California but also the world. These collections should include skin of every tone, representing all races and ethnicities. Whenever possible, this data should be open and accessible to support better research and fairer tools.
California’s Frontier AI Policy Report called for this accountability. If the state is leading on frontier AI, it should lead on equitable medical AI, too, starting with the Bay Area.
For me, the lack of images that reflected my skin was frustrating and delayed my treatment. For countless others, it could mean a missed diagnosis, which can sometimes be life-threatening.
AI will almost certainly play a growing role in how we diagnose and manage skin conditions in health care. The question now is not whether these tools will be used, but whether they will be designed to serve everyone fairly. That requires conscious decisions around data, testing, and accountability. The Bay Area has the chance to set the standard.
Alex Siauw is a patient advocate.