As first reported by The Wall Street Journal late last month, the war against anti-vaccination propaganda now has a new battlefront. Pinterest, the social-media platform where users discover images and information, has begun blocking vaccine-related search terms on its site.
Anti-vaccine content contradicts evidence-based science and established research, the company told WSJ, while cautioning that the search ban is only temporary: a band-aid until the site can identify an appropriate long-term solution.
Other digital platforms like Facebook are exploring ways to block misinformation and cut off revenue streams for users who post anti-vaccine conspiracy theories. YouTube, for example, disabled ads on “anti-vaxx” channels so that those users cannot profit from advertising. But Pinterest, with its latest action, has taken by far the biggest step toward shutting down posts that contain anti-vaccine recommendations.
I applaud any online company committed to protecting visitors from false and dangerous health content. At the same time, I worry that a wholesale ban on vaccine-related searches blocks access to vital, accurate information along with the questionable content.
At play are issues involving free speech, access to reliable medical information, and “fake news,” for which there are no easy answers. Nevertheless, social networks like Twitter and Facebook, along with internet search companies like Google and Bing, need to take immediate action to protect the safety of their users.
Medical information online exists in a predominantly unregulated and uncensored universe. As a physician who has dealt firsthand with the consequences of misinformation, I believe improvements are possible and necessary. Here are three ways to heighten consumer protections and advance the credibility of online health information:
1. Alter the algorithms. Back in 2011, Google launched its “Panda” algorithm update and, in so doing, began raising the bar on the quality of its search returns. Panda was a necessary reaction to clever web marketers who had been gaming Google’s algorithms for years.
In its quest to improve the validity of content suggestions, the search giant revealed the kinds of questions that helped Panda discern good sources from bad. Questions like these served as guidance for anyone hoping to publish a high-ranking article: “Would you trust the information presented in this article?” and “Does this article have spelling, stylistic, or factual errors?” and “Is the article short, unsubstantial, or otherwise lacking in helpful specifics?”
And whether or not you noticed it, the update worked. Websites that used sneaky tricks like “keyword stuffing” suddenly stopped appearing in search results. What’s more, Google refined Panda over and over again, releasing multiple versions to stay ahead of spammers.
If Google can purge low-quality content from its search results (and, even more importantly, agree on what constitutes low-quality content), why not deploy an algorithmic update that assesses the quality of medical content and asks the kinds of questions that can weed out false or misleading health information?
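To illustrate what such an update might look like, here is a minimal sketch in Python. Every signal, weight, and threshold below is invented for this article (Google's actual ranking systems are proprietary and vastly more sophisticated); the point is only that Panda-style quality questions can, in principle, become a computable score:

```python
# Illustrative sketch only: the signals, weights, and thresholds below are
# invented for this article; real search-quality systems are proprietary.

from dataclasses import dataclass

@dataclass
class Page:
    url: str
    cites_peer_reviewed_sources: bool    # links to journals like NEJM or JAMA
    contradicts_medical_consensus: bool  # e.g., claims vaccines cause autism
    spelling_or_factual_errors: int      # rough proxy for editorial care
    word_count: int                      # the "short, unsubstantial" test

def quality_score(page: Page) -> float:
    """Score a page against Panda-style quality questions, from 0.0 to 1.0."""
    score = 0.5
    if page.cites_peer_reviewed_sources:
        score += 0.3                     # "Would you trust this article?"
    if page.contradicts_medical_consensus:
        score -= 0.5                     # the medical-misinformation test
    score -= min(page.spelling_or_factual_errors, 5) * 0.05
    if page.word_count < 300:
        score -= 0.1                     # thin, unsubstantial content
    return max(0.0, min(1.0, score))

def rank(pages: list[Page]) -> list[Page]:
    """Order results by quality, filtering out pages below a minimum floor."""
    return sorted(
        (p for p in pages if quality_score(p) >= 0.2),
        key=quality_score,
        reverse=True,
    )

if __name__ == "__main__":
    for p in rank([
        Page("cdc.gov/vaccines", True, False, 0, 1200),
        Page("miracle-cures.example", False, True, 7, 250),
    ]):
        print(f"{quality_score(p):.2f}  {p.url}")
```

A real system would learn such weights from human quality ratings rather than hard-code them; the hard part, as noted above, is agreeing on what counts as low quality.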
2. Default to credible sources. At a minimum, search engines like Google, Bing, and Yahoo! should accept guidance from the medical community on which sources provide the most valuable and credible information.
There is no shortage of reliable organizations that publish content on most health-related topics. They include the Centers for Disease Control and Prevention (CDC), the Institute for Healthcare Improvement (IHI), the Leapfrog Group, and the National Committee for Quality Assurance (NCQA). These independent agencies and organizations should be given priority and consistently placed ahead of for-profit companies in the search sequence.
Likewise, studies from established and peer-reviewed medical journals like the New England Journal of Medicine (NEJM) and the Journal of the American Medical Association (JAMA) should appear in rankings above less-reputable publications or sources providing unvetted content.
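To make the idea concrete, here is a simplified re-ranking pass in Python. The domain list and the two-tier sort are my own illustration, not anything the search companies have implemented:

```python
# Illustration only: a real implementation would rely on a vetted,
# medically maintained source list, not this hand-picked sample.

from urllib.parse import urlparse

# Hypothetical allowlist drawn from the organizations named above.
TRUSTED_DOMAINS = {
    "cdc.gov", "ihi.org", "leapfroggroup.org", "ncqa.org",
    "nejm.org", "jamanetwork.com",
}

def is_trusted(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    return any(host == d or host.endswith("." + d) for d in TRUSTED_DOMAINS)

def rerank(results: list[str]) -> list[str]:
    """Stable re-sort: trusted medical sources first, everything else after."""
    return sorted(results, key=lambda url: 0 if is_trusted(url) else 1)

print(rerank([
    "https://vaccine-truth.example/",
    "https://www.cdc.gov/vaccines/",
    "https://www.nejm.org/",
]))
```

Because the sort is stable, the original relevance order is preserved within each tier; vetted sources simply move to the front of the results.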
For example, let’s say you go to Google and type in “best cancer treatment hospitals” or “should I get my child vaccinated?” Right now, the page-one results include paid placements, credible sources, and several questionable websites that no doctor would share with a friend or loved one.
Of course, I understand Google’s revenue is dependent on “paid search” and the ad sales it generates. However, that unfettered approach needs to be limited in situations where the risk of harming consumers is great. We impose similar restrictions on dangerous products, so why not treat dangerous medical misinformation the same way?
All digital platforms with a search function can take a lesson from Pinterest and put the safety of their users first. The last thing Facebook or Google needs is another Congressional inquiry into its practices. Either these companies self-regulate now or take their chances on facing the wrath of frustrated lawmakers.
3. Add warnings to dangerous sites. Wikipedia currently lists 25 different “blocks” and dozens of different warnings to curb abuses and filter out content it deems inappropriate. There’s a “spam” block and an “advertising” block. There’s an “unsourced content” block and even a “sock puppetry” block, which prevents users from posting content under multiple accounts (aka “socking”) in order to deceive or mislead others.
These types of warnings help educate readers about suspicious content and should be implemented across the internet to warn visitors of dangerous health information online. These warnings could be developed by objective panels of experts (analogous to Wikipedia’s editors) and affixed to any website or platform that contradicts proven medical science.
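In software terms, the mechanism could be as simple as attaching labels to flagged content before it is served. In this sketch, the warning taxonomy and the flagged-site registry are hypothetical stand-ins for determinations that an expert panel would actually make:

```python
# Sketch only: the warning categories and flagged-domain registry below are
# hypothetical stand-ins for determinations an expert panel would make.

from enum import Enum

class HealthWarning(Enum):
    CONTRADICTS_CONSENSUS = "Contradicts established medical science"
    UNSOURCED_CLAIM = "Health claim lacks a credible source"
    UNPROVEN_TREATMENT = "Promotes an unproven or untested treatment"

# A real registry would be maintained by an independent expert panel,
# analogous to Wikipedia's volunteer editors and their block lists.
FLAGGED_SITES = {
    "vaccine-truth.example": [HealthWarning.CONTRADICTS_CONSENSUS],
    "miracle-cures.example": [HealthWarning.UNPROVEN_TREATMENT,
                              HealthWarning.UNSOURCED_CLAIM],
}

def banner_for(domain: str) -> str | None:
    """Return the warning banner to display above content from this domain,
    or None if the domain has not been flagged."""
    warnings = FLAGGED_SITES.get(domain)
    if not warnings:
        return None
    return "Health warning: " + "; ".join(w.value for w in warnings) + "."

print(banner_for("miracle-cures.example"))
```

The hard problem is not the code; it is governance: who maintains the registry, how sites can appeal, and how warnings stay current as the science evolves.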
Health care is unlike any other consumer industry. When most shoppers come across a Rolex watch selling for $19.99 online, they know it’s a scam. But when it comes to medical information, many patients and parents don’t have the scientific expertise to discern what’s real from what’s fake. Online, those who post misleading or deceptive information about vaccination risks, unproven cancer treatments, and untested therapies for genetic disorders are stoking fears, raising false hopes, and delaying necessary treatments.
If internet search and social media companies want to be good corporate citizens, they should take these powerful first steps toward making health care information safer and more accurate for users.
Freedom of speech is not absolute
Most social and digital platforms have adopted “hateful conduct” or “dangerous speech” policies that strictly prohibit harassment or threats of violence. And they do so without unduly sacrificing freedom of speech — the essential right that underpins our democracy and remains vital to public discourse.
The courts have also drawn limits on the First Amendment when exercising it puts others at risk (e.g., falsely shouting “fire” in a crowded theater). Those are exactly the kinds of restrictions that should be applied to medical pseudoscience.
Pinterest has taken an important step, albeit temporary. Going forward, I would recommend that the company, along with its peers, provide a more surgical solution — one that allows access to reliable information, but limits disproven, unscientific assertions. As the measles epidemic rages across 16 states, misinformation and pseudoscience present a clear and present danger.
These three recommendations aren’t panaceas, but they can serve as vital first steps toward preventing further harm.
Robert Pearl is a physician and CEO, Permanente Medical Groups. He is the author of Mistreated: Why We Think We’re Getting Good Health Care–And Why We’re Usually Wrong and can be reached on Twitter @RobertPearlMD. This article originally appeared in Forbes.