Last week, a federal court in North Carolina overturned that state’s congressional district map as an extreme partisan gerrymander. This is a big deal: Never before has a federal court overturned a congressional districting plan on grounds of partisanship (as opposed to, say, racial discrimination). The decision shows how courts can identify a gerrymander using methods that are much more precise than eyeballing funny-looking districts.
Far from being gobbledygook, as Chief Justice John Roberts memorably put it, the simplest statistical methods are more than a century old, invented for real-world needs like beer quality control. And if brewers can harness the power of statistical reasoning, surely judges and reformers can too.
By allowing politicians to, in effect, choose their voters, gerrymandering trumps other efforts to reform the voting system. Even if all adults were registered to vote, for instance, those voters could still be arranged so that most of their votes had little power. And partisan gerrymanders are more extreme and more frequent than at any point in the past 40 years.
The only remedy for North Carolina’s gerrymander was court action, because state law neither allows the governor to veto the legislature’s districting plan nor provides for an independent initiative process. The panel that considered the case was unanimous and “bipartisan,” being composed of three judges appointed by Presidents Jimmy Carter, George W. Bush, and Barack Obama.
The case was quickly appealed to the Supreme Court, where its fate will likely depend on two high-profile cases the justices are already considering: one concerning a Republican gerrymander in Wisconsin (Gill v. Whitford), the other a Democratic gerrymander in Maryland (Benisek v. Lamone). Reformers have been striving to get the issue back before the high court since a tantalizingly inconclusive decision in 2004, with Justice Anthony Kennedy the swing vote then, as now.
If the Court moves to curb gerrymandering, it may leave it to lower courts to settle on specific standards. That’s where the North Carolina decision is especially important. The 200-page opinion is, to a redistricting nerd, an enjoyable tour of some of the ways one might identify a partisan gerrymander.
However, not everyone loves the evidence. After his “gobbledygook” comment during the Whitford oral arguments, Chief Justice Roberts went on to say that if courts struck down maps based on tools created by social scientists, “the intelligent man on the street is going to say that’s a bunch of baloney.” (This echoes Justice Kennedy’s concern in 2004, when he agreed with four liberals that there could be objective ways of proving that partisan gerrymandering exists, but declined to embrace any of the tests put forward in that case.) Roberts also wondered if the Court would be accused of partisanship if it became the arbiter of when redistricting efforts had gone too far.
The Court’s concerns can be addressed easily. Some of the most promising statistical measures of gerrymandering can be understood by a high schooler or even a grade school student. The best tests were invented more than 100 years ago, and can easily be done by a judge — or passed off to a clerk. As for what an intelligent person would say? She’d likely say the Supreme Court had used an objective standard.
There are many ways to measure the partisanship of gerrymandering. This multiplicity is immensely useful. In jurisprudence as well as home repair, it can be handy to have more than one tool, so long as they all work toward the same end. With that in mind, what follows is an exploration of some of the best tools, including some used as evidence in the North Carolina case. Crucially, many of these tools do not require an expert witness at all.
Did redistricters act with partisan intent?
The judges’ first test was whether redistricters had discriminatory partisan intent. Determining intent relies on questions that are familiar to judges: Who was in charge of the process? Did they intend to draw partisan maps? Was the other party frozen out of the process? Although the judges in the North Carolina case disagreed about how much partisanship could permissibly enter the map-drawing process, they all agreed that North Carolina Republicans had gone too far by shutting out Democrats and explicitly attempting to build the most partisan plan possible.
Did the redistricters gain an asymmetric advantage?
Next, a judge might want to know whether a district map treats the two major parties differently. There are various constitutional grounds for requiring equal treatment, including the 14th Amendment’s guarantee of equal protection. But the overarching issue is basic fairness: Did both parties have the same chance to translate votes into electoral victories? Or was one party unreasonably favored?
One way to detect unfairness is by comparing the outcome with a neutral situation in which parties are treated equally. This reasoning led to a pioneering idea of partisan asymmetry in elections, first developed by Harvard’s Gary King, Purdue’s Robert X. Browning, and others in the 1980s.
“Asymmetry” refers to situations in which identical performances by the two parties lead to very different results. Say, when one party gets 52 percent of the statewide vote in legislative elections, it wins a significant majority of the seats, but when the other party wins 52 percent of the vote, it wins only a minority of seats. However, the Supreme Court has explicitly, and correctly, concluded that a one-off outcome like the 52 percent example cannot be used to prove a gerrymander, because such an outcome could occur by chance.
Consequently, it’s useful to think about how sure we are that a particular outcome has deviated from the norm for reasons besides chance. That was the problem faced by experimental beer brewer William Sealy Gosset, who worked on a decidedly nonpolitical subject: detecting when hops or barley quality was worse, or better, than a prescribed average. His contribution to this question was critical for the booming business of Guinness, Gosset’s technologically minded employer.
Gosset published his method in 1908 under the pseudonym “Student” to keep his work from rival brewers. His “t-test” is now learned by every statistics student — and has become one of the most widely used statistical tests in science and engineering.
Identifying an off batch of hops — or detecting the “packing” of districts, the overconcentration of one party’s voters into a few places — is easy to do. Student’s test is the basis for a very simple measure of asymmetry, the “lopsided-wins test,” which checks whether Democratic representatives won, on average, by much larger margins than Republican representatives. If the difference in average margins is large enough, and there are enough districts — these statistical tests grow stronger with more data points — then the pattern is highly unlikely to have arisen from neutral districting principles.
In North Carolina in 2016, the three Democratic winners took an average of 68.5 percent of the vote, while the 10 Republican winners took an average of only 60.3 percent. According to the lopsided-wins test, such a pattern would arise by chance only about once in 300 cases. Importantly, the number of seats Democrats won is beside the point: Had they won four or five seats with similar averages, the lopsided wins would still suggest that Democrats were denied an equal opportunity to elect representatives of their choice — but without suggesting a quota of seats.
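For readers who want to see the mechanics, here is a minimal sketch of the lopsided-wins test in Python, using SciPy’s implementation of Student’s t-test. The winner vote shares below are hypothetical, chosen only to illustrate the calculation; they are not actual North Carolina returns.

```python
# Lopsided-wins test: compare the average victory margins of the two parties
# using Student's two-sample t-test. All vote shares here are hypothetical.
from scipy import stats

# Winning party's share of the two-party vote in each district it won (hypothetical).
dem_winner_shares = [0.70, 0.68, 0.67]
rep_winner_shares = [0.58, 0.60, 0.62, 0.59, 0.61,
                     0.60, 0.63, 0.57, 0.61, 0.62]

# Are Democratic wins systematically more lopsided than Republican wins?
t_stat, p_value = stats.ttest_ind(dem_winner_shares, rep_winner_shares,
                                  equal_var=False)

print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
# A very small p-value means the gap in average winning margins is unlikely
# to have arisen by chance from neutral districting.
```

The test needs nothing more than district-level returns, which is part of its appeal as courtroom evidence.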
An even older way to measure unequal opportunity is a test for “consistent advantage,” originally developed by Gosset’s mathematical mentor Karl Pearson in 1895. To carry out this test, compare each party’s average statewide vote share with its vote share in the median district — the district that falls in the middle when the districts are ranked by that party’s vote share.
When both parties are treated similarly, this difference is close to zero. If the “average-median difference” is large — with the median district tilted strongly toward one party — it means that one party gained a consistent advantage at the district level. Call it the Lake Wobegon test: The redistricting party has ensured that a majority of its districts perform above the statewide average.
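The consistent-advantage test is even simpler to compute. Here is a minimal sketch with hypothetical district-level vote shares; the whole test reduces to comparing a mean with a median.

```python
# Consistent-advantage (average-median) test on hypothetical district results.
import statistics

# Democratic share of the two-party vote in each district (hypothetical).
dem_share_by_district = [0.35, 0.38, 0.41, 0.43, 0.44,
                         0.45, 0.46, 0.47, 0.72, 0.74, 0.76]

mean_share = statistics.mean(dem_share_by_district)
median_share = statistics.median(dem_share_by_district)

print(f"mean = {mean_share:.3f}, median = {median_share:.3f}, "
      f"average-median difference = {mean_share - median_share:+.3f}")
# Here the mean (about 0.51) exceeds the median (0.45): Democratic votes are
# packed into a few overwhelming districts while the median district tilts
# Republican -- a consistent advantage for the party that drew the map.
```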
(The lopsided-wins test and the consistent-advantage test are useful for detecting the kind of gerrymandering done in states that are closely divided politically, as is the case in Wisconsin, Pennsylvania, or North Carolina. Strongly partisan states such as Maryland require examination of individual districts, as the Benisek plaintiffs did, or more sophisticated statistical tests.)
Of the many ways of measuring asymmetry, one has taken center stage this year: the efficiency gap. Invented in 2014 by Eric McGhee of the Public Policy Institute of California, it measures asymmetry using a formula (given here) that compares the votes cast for each party with the seats won as a consequence.
The efficiency gap measures the portion of votes each party has “wasted.” For example, in a district where party A defeats party B by a 60-40 margin, party A wasted 10 percent of the votes cast, since they were in excess of the bare 50 percent plus one vote needed to win. All of party B’s 40 percent were wasted.
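As a rough illustration of the bookkeeping, here is the same wasted-votes arithmetic in code, applied to a handful of hypothetical districts.

```python
# Efficiency gap from wasted votes, using hypothetical district vote counts.

districts = [  # (votes for party A, votes for party B)
    (60_000, 40_000),
    (55_000, 45_000),
    (35_000, 65_000),
    (58_000, 42_000),
]

wasted_a = wasted_b = 0
for votes_a, votes_b in districts:
    needed = (votes_a + votes_b) // 2 + 1      # bare majority needed to win
    if votes_a > votes_b:
        wasted_a += votes_a - needed           # winner wastes only surplus votes
        wasted_b += votes_b                    # loser wastes every vote
    else:
        wasted_b += votes_b - needed
        wasted_a += votes_a

total_votes = sum(a + b for a, b in districts)
efficiency_gap = (wasted_a - wasted_b) / total_votes
# Positive: party A wasted more votes, so the map favors party B; negative: the reverse.
print(f"efficiency gap = {efficiency_gap:+.1%}")
```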
This definition sounds abstruse, but there is a simpler way to think about it. The efficiency gap is zero when one party wins 50 percent of the statewide vote and 50 percent of the seats — but it is also zero for other election outcomes. Assuming equal turnout across districts, the gap is zero whenever the winning party’s seat share exceeds 50 percent by twice as much as its vote share does; for example, it is zero when 75 percent of the statewide vote elects 100 percent of the seats. This graph shows all the outcomes that are associated with an efficiency gap of zero.
[Graph: the combinations of statewide vote share and seat share that yield an efficiency gap of zero]
Any other outcome leads to a nonzero efficiency gap. The efficiency gap is a useful measure, but by itself it may not be the holy grail of redistricting litigation, as it is sometimes characterized. It comes perilously close to mandating a “correct” allocation of seats for any given vote split, a standard that, as we have noted, the Supreme Court has rejected as too rigid.
And while a partisan gerrymander will usually have a large efficiency gap, the efficiency gap can also unintuitively take on large values when there are very few districts. (In the single-district example above, the efficiency gap of 30 percent between parties A and B is bell-ringingly big). So the efficiency gap does not tell the whole story.
Judges are also interested in durability: whether a gerrymander is likely to last under a variety of political conditions. For example, the original gerrymander of 1812 — which produced a vaguely salamander-shaped district in Massachusetts to favor Gov. Elbridge Gerry’s Democratic-Republican Party — was not durable at all. It produced a large majority for that party in the state Senate that year. However, in the very next election, following the unpopular War of 1812, the outcome was almost exactly reversed when the opposing Federalist Party won a wave election.
[Image: the original 1812 “Gerry-mander” cartoon of the salamander-shaped Massachusetts district]
In contrast, modern gerrymanders protect the party in power even when the votes aren’t there. North Carolina’s GOP redistricters constructed a congressional delegation that held steady at nine or 10 Republicans out of 13 seats for three elections in a row, even though the Republican share of the statewide vote varied from 49 percent to 56 percent. A more reasonable outcome would have been seven or eight Republicans.
On the other side of the aisle, the post-2010 Maryland delegation has consistently included seven Democrats out of eight seats, when six would have been more reasonable, given that the statewide vote split was 63 to 37 in favor of Democrats. Judges can look at the recent history of elections to get a sense of the durability of a gerrymander. Social scientists can also help by showing how a hypothetical swing in public opinion would play out, given the existing maps.
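A simple version of that exercise is a uniform-swing check: shift every district’s vote share by the same amount and count how many seats change hands. The sketch below uses hypothetical district shares rather than actual North Carolina results.

```python
# Uniform-swing check of durability, with hypothetical Republican vote shares
# in 13 districts.
rep_share_by_district = [0.56, 0.57, 0.58, 0.55, 0.59, 0.60,
                         0.56, 0.57, 0.58, 0.30, 0.32, 0.28, 0.54]

for swing in (-0.04, -0.02, 0.0, 0.02, 0.04):  # statewide shift toward or away from the GOP
    seats = sum(1 for share in rep_share_by_district if share + swing > 0.5)
    print(f"swing {swing:+.0%}: Republicans win {seats} of {len(rep_share_by_district)} seats")
# A durable gerrymander shows roughly the same seat count across the whole
# range of plausible swings.
```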
Could the district lines have arisen from nonpartisan redistricting principles?
Taking a step back from statistics, the final prong of the test used in North Carolina concerned whether a skewed electoral result could have arisen from nonpartisan map-drawing processes. For example, did political geography — the places voters have chosen to live, and the way city and county lines are drawn — create an underlying bias favoring Republicans? This has been a common Republican legal argument: The patterns we see have to do with geography, not gerrymandering.
To sort out the question, the court relied on expert witnesses who drew thousands of alternative maps and concluded that North Carolina’s geography carries no such inherent bias. There were many ways to draw maps following all the redistricting rules that did not lead to unfairness, they showed. (There is even a way to explore millions of outcomes without drawing a single map — but that’s a different story.)
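To give a flavor of the ensemble idea, here is a toy version on a one-dimensional strip of precincts: sample many contiguous districting plans, tally the seats each produces, and see whether the enacted plan’s outcome is an outlier. The precinct vote shares are invented, and real analyses use far richer map-drawing algorithms on real geography.

```python
# Toy ensemble analysis: random contiguous 4-district plans on a strip of
# 20 hypothetical precincts.
import random
from collections import Counter

random.seed(0)

# Party A's share of the vote in each precinct (hypothetical, equal populations).
precincts = [0.7, 0.7, 0.8, 0.6, 0.4, 0.4, 0.3, 0.5, 0.4, 0.6,
             0.3, 0.4, 0.5, 0.6, 0.4, 0.7, 0.3, 0.4, 0.5, 0.6]

def seats_for_plan(cuts):
    """Seats won by party A under a plan defined by sorted interior cut points."""
    bounds = [0] + list(cuts) + [len(precincts)]
    return sum(
        1
        for lo, hi in zip(bounds, bounds[1:])
        if sum(precincts[lo:hi]) / (hi - lo) > 0.5
    )

# Sample 10,000 plans, each given by three distinct cut points.
outcomes = Counter()
for _ in range(10_000):
    cuts = sorted(random.sample(range(1, len(precincts)), 3))
    outcomes[seats_for_plan(cuts)] += 1

for seats in sorted(outcomes):
    print(f"party A wins {seats} seat(s) in {outcomes[seats] / 10_000:.1%} of sampled plans")
# If the enacted plan's seat count is rare in this distribution, political
# geography alone is an unlikely explanation.
```

In the real cases, expert witnesses generate thousands of full state maps that follow the legal criteria and ask the same question of the enacted plan.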
Just like the symmetry tests, scrutinizing map shapes as a diagnostic for gerrymandering — a longstanding practice — has several pitfalls. First, even a pretty shape can hide ill intent. When Democratic voters are clustered in towns and cities, it is easy to pack them together or split communities apart while still drawing relatively neat boundaries. For example, the following two schemes have the same population pattern yet lead to quite different outcomes.
[Image: two toy redistricting plans with tidy boundaries that produce different outcomes from the same population pattern]
The North Carolina district map doesn’t look so contrived either — until you overlay a population density map on it:
[Image: North Carolina’s congressional district map overlaid with a population density map]
Here, it’s clear that some communities, like Raleigh and Hillsborough, were packed together in the Fourth District to concentrate Democratic voters, while other communities, like Greenville, Asheville, and Greensboro, were cracked between several districts to dilute their voting power.
In addition, the Voting Rights Act sometimes requires that individual districts be drawn in visually odd ways to make sure a minority group has a shot at representation. In other words, strange shapes can accompany a perfectly legal intent.
In the end, positive rulings from the Supreme Court won’t put partisan redistricters out of business completely. Lasting and fairer reform will require processes that lead to balanced outcomes without ever involving a judge. A particularly promising approach is the use of independent bodies such as California’s redistricting commission.
Such reforms might reduce or eliminate the necessity for lawsuits in the first place. But for now, suits like the one out of North Carolina can put guardrails on the process, preventing the most extreme abuses. And statistical tools, far from being “gobbledygook,” should play an important role in those suits.
Further reading:
Several of these tests are described in this Stanford Law Review article. You can explore the tests interactively at the Princeton Gerrymandering Project, and their application to Wisconsin and Maryland is described in this Election Law Journal article.
The concept of partisan symmetry is reviewed here by Bernard Grofman and Gary King. The efficiency gap has been applied to many states, and is described here and further discussed here. The fundamental concept of fairness as a bedrock principle of democracy is described in this legal brief by Heather Gerken, Jonathan Katz, Gary King, Larry Sabato, and Sam Wang.
Sam Wang is a professor of neuroscience at Princeton University. Brian Remlinger is a statistical research specialist at the Princeton Gerrymandering Project.
The Big Idea is Vox’s home for smart discussion of the most important issues and ideas in politics, science, and culture — typically by outside contributors. If you have an idea for a piece, pitch us at thebigidea@vox.com.