An excerpt from Justice in the Age of Judgment: From Amanda Knox to Kyle Rittenhouse and the Battle for Due Process in the Digital Age.
In the early 1970s, a group of researchers at Stanford University placed advertisements in a newspaper offering two dollars to people who would participate in an “experiment in decision-making.” Sixty women were presented with twenty-five cards containing suicide notes and asked to determine which ones were fake and which ones were real. Before they started, an administrator told them “the average score was about sixteen correct out of twenty-five.” After examining each card, they were told whether they had been right or wrong. Each woman kept track of her own score.
Here’s the catch: unbeknownst to the women, the results were predetermined. The sixty women had been randomly preassigned to be told they had gotten twenty-four right (a successful score), seventeen right (an average score), or ten right (failure, sorry). After the test was over, the researcher left the room for a few minutes, allowing the women to consider the results.
When the researcher returned to the room, he told the women what had happened: that the results were selected at random, that the score they wrote down had nothing to do with how they’d actually performed, and that the deception was necessary because they were actually studying “the effects of success and failure on physiological measures.”
To emphasize that the subject’s score had been determined randomly prior to her arrival and that it could not have been influenced by her performance, [the administrator] showed the subject the actual schedule that determined her assignment to experimental condition and her initial score. The [administrator] specifically stressed that this score contained absolutely no information about the subject’s actual task performance. Every subject was explicitly asked to acknowledge her understanding of the nature and purpose of the deception before the experimenter proceeded to the next phase of the study.
With that, the researcher got up, explained that he was going to fetch the participant’s two dollars, and asked, by the way, “while I’m gone, would you mind filling out this questionnaire?”
The questionnaire asked the women three questions:
1. How many answers do you think you actually got right?
2. How many answers do you think the average student would have gotten right?
3. How many answers do you think you’d get right if you took the test again?
The results were fascinating.
The higher the score they’d been told they had gotten—that is, the higher the random score they had been assigned—the higher they assumed they would score on the test if they took it again. In other words, the “success” group thought of themselves as better than average, and likely to perform well again in the future, while the “failure” group thought they had done worse than average, and that in the future they would continue to do poorly. They still, basically, believed the scores, even though they’d been told the scores had been assigned completely at random.
Even after debriefing procedures that led subjects to say that they understood the decisive invalidation of initial test results, the subjects continued to assess their performances and abilities as if these test results still possessed some validity.
Think about that: even after the women were told the results were totally random, those results still gave them a false sense of how well they’d done at the task and how well they’d do in the future. They had been given information that was completely made up, then given concrete information to the contrary—told outright that the original information was false—and yet they still believed what they’d first heard.
The researchers then ran a second study that repeated the first, but this time some observers watched. Some subjects and their observers got a regular “debriefing,” in which the researchers revealed that they had faked the subjects out in telling them how they did; others got a “process” debriefing, in which the researchers also told the subjects and observers the real purpose of the study.
This time, after the “regular debriefing,” both test subjects and observers continued to believe in their initial results. In other words, both believed that the “success” group would do better in the future and the “failure” group would do worse, even though they were told that the results were faked.
The group that had the “process” debriefing had mixed results. These test takers and observers knew the “real” purpose of the study: that the scores were completely artificial, and that the true aim of the experiment was to see how well impressions persevered even after false information was corrected. The two were split. The test takers mostly got it. They filled out the questionnaire, and their answers were less determined by how well they’d been told they had done. But for the observers, the perception persisted:
Observers, who overheard this process debriefing, however, were less dramatically influenced. Their estimates of the actor’s initial performance … continued to show clear perseverance effects.
Let’s think about that for a second. Say someone was asked to watch you play poker and to keep track of how well you played. You lost every hand, and, understandably, when asked how well you played in the game and how well you’d play in your next game, the person watching thought you played lousy and that you’d keep losing your money. Then the person watching you was told the deck was stacked. That you had no chance. That even the best poker player in the world would have gotten the exact same lousy score and gone home without any money.

Furthermore, the person watching you play was told this wasn’t about poker at all. It was a study. A study looking at how much an impression of someone else perseveres even when they are presented with conflicting, accurate information and told that what they used to know was false. The person watching was told, in effect: “When you see someone play poker, and they play lousy, even if I tell you that it was a setup, that they had no chance of playing anything other than lousy, we’re guessing that the initial impression will persevere, that you’ll still think they’re a lousy poker player. That’s what this is about. Not poker!”
What this study shows is that, even after all that, they’re still going to think you’re a lousy poker player.
In other words, your first impression is the most important, even if it is based on false information, and even if you are later shown evidence that the information was false.
The study authors found two important takeaways in their experiments. The first, basically, is that once we’ve formed our first impression, we want to stick with it. When presented with information to the contrary, we tend to dismiss it. Information that supports it, however, we tend to embrace. So jurors make up their minds during the opening statement. After that, it’s almost impossible to change their minds.
Like I said before, it’s called confirmation bias. We read something in the media about a high-profile criminal case or a political event and we make up our minds. And it’s almost impossible to change our minds after that.
Anne Bremner is an attorney.
Doug Bremner is a psychiatrist, researcher, writer, and professor at Emory University in Atlanta, Georgia. He can be reached at his self-titled site, Doug Bremner, on Twitter @doug_bremner, Instagram @dougbremner, and TikTok @jamesdouglasbremner.
They are the authors of Justice in the Age of Judgment: From Amanda Knox to Kyle Rittenhouse and the Battle for Due Process in the Digital Age.