The feds tried to rate colleges in 1911. It was a disaster.

College students at Texas Christian University (rated as a third-class school) in 1910. (AmyMay52)

Somewhere in the US Education Department, statistical experts and policymakers are at work on a highly controversial idea: a federal system to rate colleges based on their quality, much as Consumer Reports rates refrigerators.

Many colleges hate this idea, and it turns out the uproar is nothing new. The forerunner of the modern Education Department tried a similar idea in 1911. At the time, colleges opposed the federal quality ratings so bitterly that two American presidents eventually intervened to halt their publication.

Quality ratings spent 102 years as a third rail of higher education policy. Then, last August, President Obama revived the proposal for a federal rating system. He quickly found out the idea hasn't become any less controversial in the past century.

In 1911, some colleges wanted the federal government to get involved in measuring quality

College students at Canada's Victoria University in 1910. (City of Toronto Archives via Wikimedia Commons)

At the turn of the 20th century, college was for the elite. Less than 3 percent of the US population had a bachelor's degree in 1910; just 14 percent had even finished high school.

Still, the number of colleges in the US had nearly doubled in the previous 50 years. And the new Association of American Universities was confronting a pressing question, according to David Webster, who wrote in 1984 about the early federal rating system in the History of Education Quarterly: When students applied to graduate school, how could universities judge the quality of the undergraduate education those applicants had received?

So the association did something that would be unthinkable in higher education today. It asked the federal government to step in.

The US Bureau of Education's top higher education official, Kendric Babcock, a former college president, agreed to tackle the question. Babcock thought he could create a rating system for more than 600 colleges that could judge "exactly the worth of the degrees granted by the widely varying institutions in the United States" and be accepted both in the US and abroad as an indicator of quality.

This turned out to be wildly optimistic.

How the rating system worked

The first page of Babcock's ratings. (Library of Congress; h/t Robert Kelchen)

Babcock looked at the transcripts of thousands of students and talked to many academic administrators to make his judgments. He eventually rated just over half of the nation's colleges and universities, splitting them into four classifications based on how well-prepared students were for graduate school.

This was a rating system, not a ranking system. Babcock created broad categories to indicate quality, but he didn't delve within those categories to determine whether Harvard should rank a rung above Yale. His list of top-tier colleges is varied — Harvard, Princeton, Williams and Oberlin are on it, as are flagship state universities in California, Michigan, Wisconsin, Kansas and Colorado — and most of the colleges on it are still very well-respected today.

Babcock estimated students from the top colleges were so well-prepared that they would only need one more year of study to earn a graduate degree after finishing their bachelor's. Students from the other three classifications would require more time to finish graduate degrees. (The full Babcock report, with all ratings, is here.)

The bottom group represented colleges Babcock thought were so terrible that their graduates ended up two years behind peers from other places. Some of those bottom-tier colleges don't exist anymore; others, such as Kansas State and Virginia Tech, have survived and thrived.

Then, Webster writes, in 1912, newspapers got their hands on a draft copy of the ratings and made the results public.

Colleges were not happy with the results

How some colleges that are still well-known today came out in the early ratings.

Colleges in the top tier, perhaps unsurprisingly, didn't complain. But even in 1912, American higher education was in the grip of the Lake Wobegon effect: every college thought it was above average, and none was happy to find out that the government disagreed. An early 20th-century US education official put it this way, according to Webster: "The bureau learned that there are no second- and third- and fourth-class colleges; that it was an outrage and an infamy so to designate institutions whose sons had reflected honor on the state and nation."

College presidents argued that their own institutions were wrongly categorized, that Babcock hadn't even visited campus, and that the whole effort was misguided. Public universities argued that looking only at grad school preparation was myopic, since their mission was to serve the needs of the state.

The uproar made it all the way to President William Howard Taft, who issued an executive order banning the distribution of the ratings developed by his own federal agency. The next president, Woodrow Wilson, who had spent much of his life in academia, let Taft's order stand despite pressure from the Association of American Universities to rescind it.

The episode spooked the federal government. In 1914, Wilson's Bureau of Education established a committee to consider trying a rating system again. Its conclusion, according to Webster: it was "at this time not desirable."

This would remain the case for 99 years, until August 2013, when President Obama said he was directing Education Secretary Arne Duncan "to lead an effort to develop a new rating system for America's colleges before the 2015 college year."

Déjà vu all over again

Obama calls for a federal college rating system at the State University of New York at Buffalo last August. (Jewel Samad/AFP via Getty Images)

Duncan, unlike the optimistic Babcock, has stressed that he's approaching the task of rating colleges with "a huge sense of humility." And unlike Babcock's ratings, which looked only at students' preparation for graduate school, the new federal classifications will consider multiple variables. Among other things, they're likely to include the percentage of low-income students a college accepts and the likelihood that students can pay off their debt after graduation.

The stakes for colleges are higher now, too. While the worst that universities risked from the early 20th-century ratings was bad publicity, Obama wants federal financial aid to depend on how colleges do in the new federal quality rating system. Such a linkage would penalize colleges that fare poorly in the ratings and reward those that do well.

Proposals for the system itself, though, sound a lot like Babcock's 1911 plan. The Center for American Progress proposed a five-tier system. The names of the tiers — platinum, gold, silver, bronze and lead — are different from Babcock's first, second, third and fourth classes, but the basic idea is more than a century old.

The reaction to the Education Department's effort is familiar, too. College presidents argue that the ratings effort itself is misguided, that graduates' incomes are the wrong yardstick for the value of higher education, or that no rating system can capture the different missions of community colleges and elite private universities.

Even the defense of the ratings proposal is an echo of a century earlier. The only reason to oppose transparency, some proponents argue, is that it reveals facts colleges would rather conceal. "The theory was rife then," Webster quotes an anonymous critic of the early 20th-century colleges as saying, "that it is bad for business if the buyer be told exactly what he is getting for his money." The top US education official at the time of the ratings debacle later said the ratings were unpopular because they were "too nearly correct."

Webster's history suggests that Babcock's failed attempt to rate colleges had long, lingering effects. Colleges and universities decided that inviting the government in had been a huge mistake, and that only academics should be allowed to judge educational quality. That's the position they still hold today.

For now, the release of a discussion draft of the ratings system has been postponed until later this fall. The question is whether Duncan's rating system will meet a different fate than Babcock's. Will another public relations debacle make a federal quality rating system the third rail of higher education policy for another century?

(Hat tip to Seton Hall University assistant professor and rankings expert Robert Kelchen, who gave the Babcock report new life on Twitter and pointed me to Webster's 1984 journal article.)