
What Alan Krueger taught the world about the minimum wage, education, and inequality

The late economist’s work totally changed the debate on each subject.

Alan Krueger and then-President Barack Obama in 2011, upon the former’s nomination to chair the Council of Economic Advisers.
Win McNamee/Getty Images
Dylan Matthews is a senior correspondent and head writer for Vox's Future Perfect section and has worked at Vox since 2014. He is particularly interested in global health and pandemic prevention, anti-poverty efforts, economic policy and theory, and conflicts about the right way to do philanthropy.

Princeton economist Alan Krueger died by suicide this past weekend at age 58, the university announced on Monday.

Casual political observers probably know Krueger best for his four years of service in the Obama administration, first as assistant secretary of the Treasury for economic policy and then as chair of the Council of Economic Advisers — effectively the White House’s top economist.

But to those in his field, Krueger was known for bringing a new level of rigor to economics research and puncturing a number of long-held pieties. His work with Berkeley’s David Card suggested that simplistic econ 101 predictions that minimum wage increases necessarily cause job losses don’t always hold true. His work on education with co-authors like MIT’s Joshua Angrist and Mathematica’s Stacy Dale brought new attention to the ways in which simplistic statistical analyses can lead to wrong conclusions, and popularized new methods for doing that analysis better.

And his work and speeches in the Obama White House helped make economic inequality a mainstream issue of debate in the early 2010s, influencing American political life for years to come.

Here are four big things that Krueger taught the world.

Minimum wages don’t always cause job loss

In introductory economics courses, students are typically taught that setting price floors — on milk, oil, or, perhaps most importantly, labor — causes supply to exceed demand.

In the case of labor, what that means is that if there’s a minimum wage, employers’ demand for workers falls (because they cost more), and the supply of workers increases (because they’re promised more money), meaning there’s unemployment, with all the costs and suffering that entails.

This conclusion was largely based on abstract theory, but it held sway for decades. This chart, from economist Lars Christensen, lays out this econ 101-type analysis as it applies to the minimum wage:

Minimum wage, Econ 101 analysis
A very simplistic model of what setting a minimum wage does.
Lars Christensen

Setting a minimum wage increases wages (Wmin > Weq), sure, but it also increases the supply of labor (N1 > Neq) while decreasing demand (N2 < Neq), leading to a bunch of unemployment (N1 - N2).
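That econ 101 prediction can be sketched numerically. The linear supply and demand functions below are purely illustrative (the coefficients are invented, not calibrated to any real labor market), but they reproduce the surplus the chart describes:

```python
# Toy linear labor market, with made-up coefficients for illustration.
def labor_demand(w):
    return 100 - 4 * w   # workers firms want to hire at wage w

def labor_supply(w):
    return 20 + 4 * w    # workers willing to work at wage w

# Equilibrium: 100 - 4w = 20 + 4w  ->  w_eq = 10, with 60 workers employed.
w_eq = 10
n_eq = labor_demand(w_eq)             # 60 (Neq)

# Impose a binding minimum wage above equilibrium.
w_min = 12
n_demanded = labor_demand(w_min)      # 52: employment falls (N2)
n_supplied = labor_supply(w_min)      # 68: more people seek work (N1)
unemployed = n_supplied - n_demanded  # 16: the predicted surplus (N1 - N2)
print(w_eq, n_eq, n_demanded, n_supplied, unemployed)
```

Card and Krueger's contribution was to ask whether this tidy prediction actually shows up in the data.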

As a consequence, for years many economists assumed, almost without questioning, that minimum wages destroyed jobs. They might be worthwhile, sure, but you have to weigh the harm they do to the demand for labor against their benefits for workers who remain employed.

In a paper first published by the National Bureau of Economic Research in 1993, Krueger and his co-author Card exploded that conventional wisdom. They sought to evaluate the effects of an increase in New Jersey’s minimum wage, from $4.25 to $5.05 an hour, that took effect on April 1, 1992. (At 2019 prices, that’s equivalent to a hike from $7.70 to $9.15.)

Card and Krueger surveyed more than 400 fast-food restaurants in New Jersey and eastern Pennsylvania to see if employment growth was slower in New Jersey following the minimum wage increase. They found no evidence that it was. “Despite the increase in wages, full-time-equivalent employment increased in New Jersey relative to Pennsylvania,” they concluded. That increase wasn’t statistically significant, but they certainly found no reason to think that the minimum wage was hurting job growth in New Jersey relative to Pennsylvania.
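The logic of their comparison is what economists now call difference-in-differences: the change in New Jersey employment, net of the change in neighboring Pennsylvania, which absorbs shocks common to both states. A minimal sketch, with hypothetical per-store employment averages (not the paper's actual survey figures):

```python
# Difference-in-differences sketch of the NJ/PA comparison.
# These average-employment numbers are invented for illustration;
# see Card and Krueger's paper for the real survey data.
nj_before, nj_after = 20.4, 21.0   # avg employment per NJ store
pa_before, pa_after = 23.3, 21.2   # avg employment per PA store

nj_change = nj_after - nj_before   # change in the "treated" state
pa_change = pa_after - pa_before   # change in the "control" state

# Netting out the PA change removes region-wide shocks (recession,
# fast-food demand trends) that hit both states alike.
did = nj_change - pa_change
print(round(did, 2))
```

A positive estimate here would mean NJ employment grew relative to PA despite the wage hike, which is the pattern Card and Krueger reported.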

Card and Krueger’s was not the first paper to estimate the empirical effects of the minimum wage. But its compelling methodology, and the fact that it came from two highly respected professors at Princeton, forced orthodox economists to take the conclusion seriously. New York Times reporter Binyamin Appelbaum has laid out some of the vitriolic reaction to the paper in a Twitter thread.

Card and Krueger expanded their results into a well-regarded book, Myth and Measurement, and then largely left the debate. “I’ve subsequently stayed away from the minimum wage literature for a number of reasons,” Card said in an interview years later. “First, it cost me a lot of friends. People that I had known for many years, for instance, some of the ones I met at my first job at the University of Chicago, became very angry or disappointed. They thought that in publishing our work we were being traitors to the cause of economics as a whole.”

But the effects of their research have remained. Both critics and supporters of minimum wage increases have gotten a lot less theoretical and a lot more empirical. Some empirical economists (most notably UC Irvine’s David Neumark) still think minimum wages cause job loss; others (most notably UMass Amherst’s Arindrajit Dube) argue that the employment effects of most hikes in the wage are minimal, and swamped by the reduction in poverty that the increased wage itself generates. But all agree that this is an empirical question best answered through careful consideration of what actually happens.

The most recent meta-analysis I’ve seen, reviewing studies published since 2000 (many of which wouldn’t have been written without Card and Krueger), concludes that the effect of most minimum wage increases on employment is small and potentially nonexistent, and that research in the wake of Card and Krueger has substantially lowered the average estimate of minimum wages’ effect on jobs.

There are better ways to figure out what causes what

Alan Krueger — along with Card and many other prominent microeconomists of their generation — was part of what their colleagues Joshua Angrist and Jörn-Steffen Pischke have termed the “credibility revolution” in economics.

They quote in a 2010 paper their older colleague Edward Leamer as stating in 1983, “Hardly anyone takes data analysis seriously. Or perhaps more accurately, hardly anyone takes anyone else’s data analysis seriously.” Data analysis was so subjective, so easily pliable to one’s own pre-chosen conclusions, as to feel almost useless.

Then a new generation of economists took it upon themselves to change that status quo, by adopting research designs better able to determine causation (not just correlation), and by focusing heavily on actual experiments and quasi-experiments where it’s clearer what factor is causing what.

“Instrumental variables” were one of the more popular tools for sorting out causation to emerge from the credibility revolution, and one of their early test uses was in a paper by Krueger and Angrist on the effects of compulsory schooling (laws requiring people to be in K-12 education until they’re 16 or 17) in the United States, and thus, indirectly on the effects of going to school for longer.

The simplest way to see if compulsory schooling increases adult earnings would be to compare earnings before and after compulsory schooling laws took effect in the United States. But lots of other factors besides compulsory schooling affect school attendance and earnings over those periods. Maybe the economy went into recession, or funding for schools went up, or a lot more schools opened.

All those factors would affect earnings and school attendance, and could muddle an analysis trying to sort out what compulsory schooling, specifically, does for students. These other factors are called “omitted variables” and can really distort this kind of analysis. You can control for some of them directly, but usually there are omitted variables that you can’t measure or don’t even think of that still matter.

One way around this is to find an “instrumental variable”: a variable that predicts the things whose effects you want to know (compulsory schooling) but that should not separately affect the ultimate effects you’re measuring (earnings).

Angrist and Krueger used as an instrumental variable the time of year students are born. Whether you’re born in March or November shouldn’t, on its own, affect how much money you make or how many years of school you get. But because compulsory schooling laws typically mandate attendance up to a certain age (like 16), would-be dropouts with birthdates later in the school year end up completing more schooling before they can legally leave than those with birthdates earlier in the school year.

So birthdates allowed Angrist and Krueger to estimate the effects of compulsory schooling (or just going to school longer) without worrying as much about omitted variables. Unless there’s some other way that being born in the first quarter of the school year should lower the earnings of high school dropouts, beyond them being less educated, this method should give a sound estimate of the effects of compulsory schooling itself, and of the effect of additional months of school on earnings.
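The mechanics can be seen in a small simulation. Everything below is invented (the variable names, coefficients, and data are not Angrist and Krueger's), but it shows why the instrument recovers the true effect when a naive regression does not:

```python
import numpy as np

# Simulated sketch of the instrumental-variables logic.
# All names and numbers are invented for illustration.
rng = np.random.default_rng(0)
n = 100_000

ability = rng.normal(size=n)         # unobserved confounder
late_birth = rng.integers(0, 2, n)   # instrument: born late in school year
# Schooling depends on the instrument AND on ability (the omitted variable).
schooling = 10 + 0.5 * late_birth + ability + rng.normal(size=n)
# True causal effect of a year of schooling on (log) earnings: 0.10.
earnings = 1.0 + 0.10 * schooling + 0.5 * ability + rng.normal(size=n)

# Naive regression slope is biased upward: ability raises both variables.
ols = np.cov(schooling, earnings)[0, 1] / np.var(schooling)

# Wald/IV estimator: the instrument's effect on earnings divided by its
# effect on schooling. Ability drops out because birth timing is
# independent of it.
iv = (earnings[late_birth == 1].mean() - earnings[late_birth == 0].mean()) / \
     (schooling[late_birth == 1].mean() - schooling[late_birth == 0].mean())

print(round(ols, 2), round(iv, 2))  # iv lands near the true 0.10
```

The naive estimate here is contaminated by ability; the IV estimate is not, because birth timing moves schooling without moving ability.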

Their conclusion was that school really did boost earnings for students forced to stay in school longer. Their work has been critiqued since, and instrumental variables estimates can mislead if the instrumental variable chosen is badly selected, but the influence of this methodology is hard to overstate.

Selective colleges help minority students, but not white students

Krueger’s work, even when theoretically groundbreaking, tended to be incredibly practical. One great example, especially in light of the recent scandal over bribery in elite college admissions, is his work with Stacy Dale at Mathematica on whether selective colleges actually increase earnings: Does going to Harvard really cause you to earn more than going to UMass Boston?

Here, again, using proper methods is important. Krueger and Dale found that if you controlled for standard variables like GPAs, SAT scores, and so on, students who went to selective colleges earned substantially more. That might mislead a casual observer into thinking that the selective colleges caused them to earn more, even more than their natural smarts and work ethic would’ve earned them.

But Krueger and Dale, in two papers released by the NBER in 1999 and 2011, found that selective college attendance actually has no effect if you compare students who were admitted to selective colleges and attended to ones who were admitted and did not attend. These two groups are not identical; presumably there is something different about students who, say, were admitted to Harvard but chose to go to UMass Boston compared to students who were admitted to Harvard and chose to attend. But they’re much more similar groups than Harvard students and UMass Boston students in general. That makes comparing them a better, if still imperfect, way of estimating what going to Harvard does to earnings.

In their second paper, Dale and Krueger used a better data set and looked at longer time horizons, and found some interesting nuances. “Our results suggest that students from disadvantaged family backgrounds (in terms of educational attainment) experience a higher return to attending a selective college than those from more advantaged family backgrounds,” they concluded. Black and Hispanic students also tend to benefit economically from going to selective schools. But white kids whose parents went to college and/or grad school don’t benefit much, economically.

That’s a good reason for rich white parents like Felicity Huffman and Lori Loughlin to stop bribing elite colleges to get their kids in; their kids will be fine in material terms regardless. But it’s also a good reason for elite schools to admit a lot more poor, black, and Latino kids, who are likelier to benefit than rich white kids.

Equality and equality of opportunity are intimately connected

As CEA chair for Obama in early 2012, in the wake of Occupy protests and Obama’s “Buffett Rule” proposal to fight inequality, Krueger coined a term that helped solidify the idea that income inequality is a serious harm: the Great Gatsby curve:

The Great Gatsby Curve in original form
The original Great Gatsby Curve, plotting income inequality against equality of opportunity.
Alan Krueger/Council of Economic Advisers

The curve shows the relationship between income inequality (measured using the Gini coefficient, a standard metric) and inequality of opportunity (measured by the correlation between a parent’s earnings and their child’s — the idea being that in a world with equal opportunity, the correlation would be small). And it shows that the two move together: countries with higher income inequality tend to have a tighter link between parents’ and children’s earnings, meaning less equality of opportunity. That suggests that rising income inequality in the US is likely to be accompanied by a decline in equality of opportunity.
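The Gini coefficient on the curve's horizontal axis has a simple closed form. A minimal implementation, using the standard sorted-index formula (the income vectors are toy examples, not real country data):

```python
import numpy as np

# Minimal Gini coefficient: 0 = perfect equality, approaching 1 as one
# person holds everything. Toy inputs, not real income distributions.
def gini(incomes):
    x = np.sort(np.asarray(incomes, dtype=float))
    n = len(x)
    i = np.arange(1, n + 1)  # ranks of the sorted incomes
    # Standard formula: G = 2 * sum(i * x_i) / (n * sum(x)) - (n + 1) / n
    return 2 * np.sum(i * x) / (n * np.sum(x)) - (n + 1) / n

print(gini([1, 1, 1, 1]))            # equal incomes -> 0.0
print(gini([0, 0, 0, 4]))            # one person has everything -> 0.75
```

Plotting each country's Gini against its parent-child earnings correlation is all it takes to reproduce the curve's axes.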

“The persistence in the advantages and disadvantages of income passed from parents to the children is predicted to rise by about a quarter for the next generation as a result of the rise in inequality that the U.S. has seen in the last 25 years,” Krueger concluded in his speech. “It is hard to look at these figures and not be concerned that rising inequality is jeopardizing our tradition of equality of opportunity.”

The idea of a connection was not novel to Krueger; Miles Corak, an economist at CUNY, did the research that Krueger based his chart on, and tells some of the backstory here. And naturally, the idea of a correlation has been fiercely contested by conservatives (with others like Corak firing back).

But Krueger popularized the relationship and helped make income inequality a more mainstream concern among lawmakers by suggesting that it could be a proxy for the erosion of a value every politician in Washington claims to hold dear: equality of opportunity.
