Just yesterday I was searching for a local surgeon, and on one website he had 2 out of 5 stars. Hardly anyone was recommending him. Yet prominently featured on another site, he had 4.7 out of 5 stars. Both sites had a good number of reviews. What’s going on? Is one site cherry-picking the reviews? Is someone falsifying reviews on one of the sites? Which reviews can I trust? As I continue to research online physician ratings, one thing is very clear: they can be confusing for the end user.
With the recent push for increased transparency in medical care and the patient experience, online ratings websites are exploding. Today roughly 80 percent of OB/GYNs and 60 percent of surgeons have more than 5 ratings across multiple sites. Compare that to just 4 years ago, when a study indicated that only 1 in 6 physicians had any ratings online.
Are sites cherry-picking reviews?
They certainly can, though most state they do not. Yelp recently and successfully defended itself against a claim by four small businesses that alleged extortion, saying Yelp asked them to buy advertising in order to “engineer” the positioning of positive reviews higher on the page. Though Yelp denies any wrongdoing, the court dismissed the claim and, in essence, said that Yelp can do what it wants with the reviews. It’s unlikely but not far-fetched to see something similar occur in the physician ratings space in the future. Sites like Vitals.com have stated that they will not post inappropriate (e.g., racist) comments, and Healthgrades.com has taken the tack of not posting comments at all. Despite physicians regularly requesting the removal of negative reviews, once someone writes a review it almost always gets posted.
Why the difference between sites then?
Something far less insidious than selectively keeping or excluding ratings explains most of the differences seen across sites. It comes down to one simple question: “Was the rating actively solicited or not?” Unsolicited reviews and complaints are those where a survey is not requested, but a patient has such a powerful experience that they feel the need to share it with the staff, the hospital, or sometimes even the world (online). Unsolicited reviews, in general, represent the extremes.
Solicited reviews, on the other hand, are generated when an office or provider asks the patient to fill out a survey. This can be via a phone, email, or mail request. It may even be an iPad in the doctor’s office. Results of solicited reviews are, in general, much more positive. This is because a large portion of those asked will fill out a review or survey even if they do not have an extreme opinion, and in general people are nice and give good ratings. Of course, nothing is ever simple. The lines between solicited and unsolicited get blurred when an office posts signs asking patients to fill out reviews on one site or to “Tell us about your experience.”
Axes to grind and pretend patients?
Certainly there are examples of suspected manipulation on one site or another by the physician or an angry patient, but they are few and far between. Manipulation can account for only a small percentage of the disparities between sites. Most sites have at least rudimentary, and often very advanced, methods of detecting and correcting manipulation. Anecdotally, in cases where I have suspected manipulation by an angry patient or physician, the manipulation has typically been isolated to one site. Indeed, one way to help identify manipulation is to compare the unsolicited ratings on one site to those on another.
So which are better, solicited or unsolicited reviews?
Both are helpful. One common misperception is that unsolicited reviews are mostly negative. Surprisingly, a study of online unsolicited ratings found that an overwhelming 88 percent were positive. Both unsolicited and solicited reviews and ratings provided to the hospital or practice have been shown to be reliable predictors of malpractice risk. This is largely because lawsuits are commonly associated with physician communication issues rather than clinical outcomes. Solicited reviews require a bit more work to cancel out the positive skew inherent in the methodology, but once that is done, they too are predictive of malpractice risk.
In an as-yet-unpublished paper, I have examined over 65,000 online physician ratings and found similarly strong correlations between ratings and malpractice risk. For example, if a surgeon receives poor ratings and falls into the lowest 10 percent of surgeons based on a normalized online ratings score across multiple sites, they likely have roughly four times the claims risk of a surgeon in the top 10 percent. When describing a “high-quality provider,” patients rank elements of the doctor-patient relationship even higher than receiving the correct diagnosis and treatment. Though far from perfect, online physician ratings are an indicator of the provider’s ability to interact well with patients.
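For readers curious what a “normalized online ratings score across multiple sites” might look like in practice, here is a minimal sketch. It is not the methodology from the paper described above; the site names, ratings, and per-site z-score approach are illustrative assumptions only, meant to show why normalizing within each site before comparing physicians matters when sites rate very differently.

```python
# Hypothetical sketch: combine ratings from multiple sites into one normalized score.
# NOT the actual study methodology; sites, physicians, and numbers are made up.
from statistics import mean, stdev

# Raw star ratings, keyed by site and then by physician.
raw = {
    "site_a": {"dr_smith": [2, 1, 3, 2], "dr_jones": [5, 4, 5]},
    "site_b": {"dr_smith": [4, 5, 5, 4], "dr_jones": [5, 5, 4, 5]},
}

def normalized_scores(raw):
    """Average each physician's ratings per site, convert those averages to
    z-scores within each site (so sites with different rating cultures are
    comparable), then average the per-site z-scores for each physician."""
    per_site_z = {}
    for site, docs in raw.items():
        site_means = {doc: mean(r) for doc, r in docs.items()}
        mu = mean(site_means.values())
        sigma = stdev(site_means.values()) or 1.0  # avoid dividing by zero
        per_site_z[site] = {doc: (m - mu) / sigma for doc, m in site_means.items()}

    combined = {}
    for site_scores in per_site_z.values():
        for doc, z in site_scores.items():
            combined.setdefault(doc, []).append(z)
    return {doc: mean(zs) for doc, zs in combined.items()}

print(normalized_scores(raw))  # e.g., dr_smith well below dr_jones after normalization
```

With a score like this in hand, physicians of the same specialty could be ranked into deciles, which is the kind of bottom-10-percent versus top-10-percent comparison referenced above.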
How does this all play out?
One thing is for sure: prospective patients know little to nothing about the difference between solicited and unsolicited reviews. With several of the recent partnership announcements between physician ratings sites, survey vendors, and electronic medical records companies, many sites are beginning to mix solicited with unsolicited reviews, making it virtually impossible to objectively compare providers of the same procedure or specialty, even on the same review website. Increasingly, I believe online ratings sites will differentiate themselves on the quality and consistency of their review methodology. If they don’t, they risk creating mistrust in the ratings system and a poor patient experience altogether.
Brant Avondet is founder, Searchlight Enterprises.