
Robots are better than humans at predicting Supreme Court decisions

Two robots debate how the Pennhurst doctrine will factor into the Supreme Court's King v. Burwell decision.
(Shutterstock)

The Supreme Court always issues its most important, impactful decisions near the end of June. Last year, the Court ruled on June 30 that closely held companies could opt out of Obamacare's birth control mandate. The year before that, the justices issued a major ruling in favor of marriage equality on June 26.

This tends to make the early summer a legal silly season, with observers predicting how the court will or won't rule on the biggest cases. This year is no exception: you can find dozens of predictions about whether the Supreme Court will uphold Obamacare's insurance subsidies, for example, or what the justices will say in a new same-sex marriage case.

Here's a word to the wise: don't bother reading any of it. Research shows that legal experts have a terrible track record of predicting Supreme Court rulings.

Robots get Supreme Court cases right most of the time


"Beep boop boop" — this robot's prediction on the King v. Burwell ruling, probably.

Shutterstock

In a 2004 Columbia Law Review article, researchers looked at how 86 former Supreme Court clerks, attorneys, and other legal experts' predictions for rulings in the 2002 term stacked up against the actual decisions. They also tested the experts against a statistical model that predicts the outcome using a few basic facts, like the subject matter of the case and which circuit court sent it up to the Supreme Court.
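A model like the one in the study is essentially a flowchart: a short series of questions about observable case facts, no legal reasoning required. Here's a minimal sketch of that idea; every question, branch, and feature name below is invented for illustration, and the study's actual decision trees differ.

```python
# A toy flowchart-style classifier over a few basic case facts.
# All branches and feature names here are hypothetical.

def predict_outcome(case):
    """Predict 'affirm' or 'reverse' from a handful of observable facts."""
    if case["lower_court"] == "9th Circuit":       # which court sent the case up
        return "reverse"
    if case["issue_area"] == "economic activity":  # subject matter of the case
        return "reverse" if case["lower_ruling_liberal"] else "affirm"
    return "affirm"

sample_case = {
    "lower_court": "4th Circuit",
    "issue_area": "economic activity",
    "lower_ruling_liberal": True,
}
print(predict_outcome(sample_case))  # reverse
```

The notable design point is what the model doesn't need: no reading of briefs, no doctrine, just a few categorical facts that are known before oral argument.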

The statistical model got the outcome right in 75 percent of cases, and legal experts predicted the right answer in just 59 percent.

That means the experts did only slightly better than a coin toss at predicting how the Supreme Court would rule.

However, humans are better than robots at predicting how individual justices will rule

A separate study using the same statistical model went a bit more granular, looking at predictions of individual justices' votes. And it found, to the authors' surprise, that experts did better at predicting individual justices' votes — but the computer still beat them on predicting the actual decision.

What gives? It turns out the statistical model sometimes tripped up on votes that humans found easy to predict. All the legal experts could predict with decent certainty how Ruth Bader Ginsburg or Stephen Breyer, for example, might rule.

But the computer did better at predicting how the justices we think of as swing votes, like Anthony Kennedy, would rule.

"Critically, the model did significantly better than the experts at predicting the votes of Justices O’Connor, Kennedy, and Rehnquist [the three moderate justices in 2002]," the authors write. "This fact, coupled with the importance of those three justices in the ideological makeup of the current Supreme Court, explains much of the statistical model’s success."

Building an even better Supreme Court robot

The statistical model built for the two studies above covered only one set of justices: the nine who sat, unchanged, from 1994 to 2005. More recently, a trio of researchers has built another model that predicts the outcomes of Supreme Court cases from 1953 through the present, and it gets 70 percent of them right. My colleague Dylan Matthews wrote about it:

The model itself is exceptionally complicated. It uses a total of about 95 variables with very precise weights ("to four or five decimal places," Blackman says), and each justice's vote is predicted by creating about 4,000 randomized decision trees. Each step in the tree asks a question about the case — is it about employment law?; what is the lower court the case was appealed from? — and then funnels the answers into conclusions about the justice's ultimate vote.
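The method described reads like a random forest: many randomized decision trees, each trained on a resampled slice of the data, with every tree's answer funneled into a majority-vote prediction. Here's a stdlib-only toy sketch of that idea. All features, cases, and labels are invented for illustration, and these "trees" are deliberately shallow; the real model works from roughly 95 variables per case.

```python
import random
from collections import Counter

# Hypothetical training data: a case's features mapped to the outcome.
TRAIN = [
    ({"issue": "employment", "circuit": "9th"}, "reverse"),
    ({"issue": "employment", "circuit": "4th"}, "affirm"),
    ({"issue": "speech",     "circuit": "9th"}, "reverse"),
    ({"issue": "speech",     "circuit": "4th"}, "affirm"),
]

def train_tree(sample, rng):
    """One (very shallow) randomized tree: pick a feature at random, then
    map each observed value of that feature to the sample's majority label."""
    feature = rng.choice(["issue", "circuit"])
    by_value = {}
    for case, label in sample:
        by_value.setdefault(case[feature], []).append(label)
    mapping = {value: Counter(labels).most_common(1)[0][0]
               for value, labels in by_value.items()}
    default = Counter(label for _, label in sample).most_common(1)[0][0]
    return lambda case: mapping.get(case[feature], default)

def train_forest(data, n_trees=1000, seed=0):
    """Bootstrap-resample the data and train one randomized tree per resample."""
    rng = random.Random(seed)
    return [train_tree([rng.choice(data) for _ in data], rng)
            for _ in range(n_trees)]

def predict(forest, case):
    """Funnel every tree's answer into a single majority-vote prediction."""
    votes = Counter(tree(case) for tree in forest)
    return votes.most_common(1)[0][0]

forest = train_forest(TRAIN)
print(predict(forest, {"issue": "employment", "circuit": "9th"}))
```

The per-tree randomness (bootstrap resampling plus a random choice of feature) is the point of the design: individual trees are weak and often wrong, but their errors are uncorrelated, so the majority vote across thousands of them is more robust than any single tree.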

You can read more about that here.