You're reading this article. Should it change your opinion of the piece's quality to know that its author was white, black, or of another race? No, if you're evaluating it on its merits.
Yet a new study by the consultancy firm Nextions shows that reviewers of a legal brief did just that.
In an experimental context, when reviewers were told the author of a legal brief was black, they consistently rated identical pieces lower in quality and flagged more spelling, grammar, factual, and analytical errors. It's evidence that, even if the days of overt bigotry and explicit discrimination are mostly past, the United States still struggles with a deep problem of implicit racism.
Arin N. Reeves, the president of Nextions and the author of the study, argues that the implicit racism occurred because reviewers took the racial information she provided as a cue for how they should judge the work. When the author was supposedly white, reviewers excused errors as products of haste or inexperience. They commented that the author "has potential" and was "generally a good writer but needs to work on" some skills. When the author was supposedly black, those same errors became evidence of incompetence. One reviewer said he "can't believe he [the author] went to NYU," and others said he "needs lots of work" and was "average at best."
The evidence for implicit racism
Nextions recruited five lawyers at five different law firms to co-write a research memo about trade secrets at Internet start-ups. Then they inserted several errors into the memo: errors in spelling, grammar, legal terminology, fact, and analysis. Finally, they created two different headers for the memo. Both identified the author as "Thomas Meyer," a third-year associate with a degree from the New York University School of Law, a top-ranked school. Yet one version of the header identified Thomas Meyer as "Caucasian"; the other identified him as "African American."
They sent these otherwise-identical memos to 60 different partners at law firms who had agreed to review the pieces and identify errors. The reviewers, though, weren't told that the experiment was about race — they were just asked for their opinion on the quality of the memo.
They judged the memo far more harshly when they were told the author was black than when they were told the author was white. On a scale of one to five, reviewers gave the black Thomas Meyer a 3.2 and the white Thomas Meyer a 4.1.
More troublingly, not only was the overall score lower, but reviewers seemed to find more errors when they believed the author was black. Out of seven spelling and grammar errors in the text, they found 2.9 of them on average in white Thomas Meyer's memo and 5.8 of them on average in black Thomas Meyer's memo. Out of six technical writing errors, reviewers found an average of 4.1 in the white version and 4.9 in the black version. Out of the five factual errors, they found an average of 3.2 in the white version and 3.9 in the black version.
There was no apparent effect from the race of the reviewer, and the sample of reviewers included a mix of races.
Possible grounds for doubt
There are two serious shortcomings in the Nextions study, so it should be taken as suggestive, but not conclusive, evidence of the power of implicit racism. First, the sample size was small; second, and more seriously, the study design makes it impossible to control for how harsh each individual reviewer is.
Only 60 reviewers participated, and only 53 actually sent back their reviews. That's not enough for strong conclusions.
More problematic is that each reviewer looked at only one memo. Some reviewers are simply harsher than others, and a better study would control for those differences. The Nextions report assumes that all reviewers are equivalent, but with such a small sample size that is a risky assumption. If the reviewers who happened to receive the black Thomas Meyer's memo were just harsher overall, the study would wrongly attribute the score gap to Thomas Meyer's race rather than to the reviewers' harshness. The way to solve this problem would be to have each reviewer evaluate more than one memo.
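The confound described above is easy to see in a toy simulation. The sketch below is entirely hypothetical — the baseline score, the spread of reviewer harshness, and the group sizes are invented for illustration, not taken from the Nextions data. Each simulated reviewer has a personal harshness level, sees only one memo, and there is no race effect built in at all; even so, random assignment of harsh reviewers to one group produces apparent gaps between the two versions.

```python
import random
from statistics import mean

random.seed(1)

def simulate_once(n_reviewers=53):
    """One hypothetical between-subjects study with NO race effect.

    Every memo has the same true quality; each reviewer's score varies
    only because of that reviewer's personal harshness (assumed spread).
    """
    white_scores, black_scores = [], []
    for _ in range(n_reviewers):
        harshness = random.gauss(0, 0.6)  # assumed reviewer-level variation
        score = 3.7 - harshness           # hypothetical 1-to-5 rating
        # Each reviewer is randomly handed one of the two memo versions.
        if random.random() < 0.5:
            white_scores.append(score)
        else:
            black_scores.append(score)
    return mean(white_scores) - mean(black_scores)

# Repeat the study many times: the white-minus-black gap wobbles around
# zero, sometimes by several tenths of a point, purely from the chance
# assignment of harsh reviewers to one group or the other.
gaps = [simulate_once() for _ in range(1000)]
print(f"largest chance gap across 1000 simulated studies: "
      f"{max(abs(g) for g in gaps):.2f}")
```

A within-subjects design — each reviewer rating both versions — would let the analysis subtract out each reviewer's personal harshness, which is exactly the control the real study lacked.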