Many low-income and first-generation college students don’t graduate from college. One thing that seems to change that? High-quality advising. When a school can offer advisers who work directly with students to address barriers to getting an education, students perform better and are more likely to graduate.
So how can we make advising work better? In 2013, the Gates Foundation started a new initiative called iPASS (Integrated Planning and Advising for Student Success) to see if technology could make a difference. iPASS funds school efforts to use technology to help advisers identify at-risk students, follow up with them regularly, connect them to tutoring, and otherwise get them what they need to succeed and graduate.
But there’s a problem.
According to a new analysis of the effects of the iPASS program by the Community College Research Center at Columbia University and social policy research group MDRC, it’s not doing anything. “The enhancements have so far had no discernible positive effects on students’ academic performance,” the report concludes. Even worse, at one school the program backfired and made students likelier to drop out.
That’s bad news.
But I come here not to bury iPASS but to praise it — or rather its creators. Like the program itself, the study was funded by the Gates Foundation. That’s significant; historically, charities haven’t always been great about funding careful third-party evaluations of their new programs. When the programs fail, they don’t always notice.
By funding such a study, the Gates Foundation was able to realize that iPASS is not yet working. And now, colleges are thinking about how to make it better.
Education research is really hard. Lots of well-meaning programs backfire, do nothing, or are worse than just giving people money. The latest research is a reminder of how hard this is, but also an example of how to do it right.
Can we make advisers better able to do their jobs?
Advisers at colleges play a critical role, but their job is very challenging. They’re often responsible for hundreds or even more than a thousand students, and they don’t always have the tools to see at a glance who might be in need of advice, support, or resources.
Could you solve some of these problems with technology? Some schools have hoped so. They’ve tested ideas like algorithmic tools that flag students at risk and let advisers know that those students could use additional support. Technology tools can also help advisers connect students with resources faster, and help schedule regular student-adviser meetings.
iPASS is intended to help schools build better technology-supported advising programs. Lots of schools have tried it: Twenty-six higher-ed institutions received grants of up to $225,000 from the Gates Foundation and the Leona M. and Harry B. Helmsley Charitable Trust as part of iPASS. The study by the CCRC and MDRC looked at three of them: California State University Fresno; Montgomery County Community College; and the University of North Carolina Charlotte. All three schools were trying to expand their advising programs.
The researchers evaluated the programs using a randomized controlled trial design. Some students were randomly assigned to the new enhanced advising program; the rest had access to the school’s standard advising services. Because assignment was random, any difference in outcomes between the two groups could be attributed to the advising program itself.
At all three schools, the results were discouraging. At Cal State Fresno, the enhanced program meant that more students had contact with an adviser. But there were no statistically significant differences in any student outcomes. Students didn’t take more classes. They were not less likely to drop out, and they were not less likely to fail a class. There were no statistically significant differences in any of those measures at UNC Charlotte, either.
At Montgomery County Community College, students in the enhanced program actually did worse. The researchers identified a likely culprit: a rule in the program that required students to meet with advisers before registering for classes. The idea was that if more students met with their advisers, they’d have more access to resources intended to help them succeed in school.
The program succeeded at getting more students to meet with advisers, but overall, students didn’t fare as well. Researchers speculate that some students who couldn’t make a meeting work just dropped out. Overall, students in the iPASS group enrolled in fewer credits than students in a control group with no requirements to meet with an adviser. The difference was statistically significant — the only statistically significant effect on academic performance found at any of these schools.
Getting results in education is really hard
Education is a huge focus for researchers, governments, and grantmakers alike. And for good reason. Figuring out how to give every kid a good education is a matter of fairness and justice, and it’s also sensible economic policy. When kids graduate from high school and college, they earn more and pay more in taxes. They are less likely to go to jail or abuse substances. Their own kids are more likely to finish school. As a result, it seems like effective spending in education can pay for itself in a way that spending in most policy areas can’t.
But it remains deeply unclear how to improve education outcomes. The same pattern repeats itself over and over. Researchers identify a promising pilot program. The program shows huge gains for kids. Governments or philanthropists try to make the program happen for more people, expanding it to new schools or new cities.
And the results typically disappoint. Sometimes, none of the benefits from the pilot program can be found in large-scale implementations. Sometimes, the benefits show up, but then “fade out” — present for a couple years, gone from long-term outcomes.
Some interventions may even backfire, as at Montgomery County Community College. Many schools have tried “early alert” systems that warn students who appear to be at risk of failing out. The idea is that with an early warning, the students can turn things around. But at some schools, they’ve had a large negative impact: Students are disheartened by the messages and conclude they’re not capable of success in school.
Why do interventions often go so badly?
It’s always hard to transfer a good idea from one context to another. A good strategy for public health in one country might be a terrible idea elsewhere, or an advocacy tactic that’s highly effective in one election cycle might go nowhere the next.
In this respect, education isn’t unusual, but scaling in education seems particularly challenging. And we simply know more about how to deliver vaccines than we do about how to deliver functioning schools, so our efforts to fund schools are likelier to run aground on barriers we don’t fully understand.
Why education might not be the best area for an altruist looking to make a difference
Because of all these problems — and because of the track record of embarrassing high-profile failures — my colleague Dylan Matthews has called for philanthropists to stop funding education interventions. His argument? It’s not that education isn’t important. It’s that it’s not tractable — we still don’t know what works. And it’s not neglected — there’s tons of public and charitable money floating around in the sector, meaning that diverting your money to less popular issues might do more good in the world.
And, he argues, even if eventually the Gates Foundation finds something, it’s likely to work less well than just giving people money: “Brookings Institution’s Russ Whitehurst estimates that cash programs, like the earned income tax credit, do considerably more to boost student test scores than even education interventions generally known to be somewhat effective, like reducing class size or investing in pre-K.”
Maybe instead of going to institutions, the grants should’ve gone straight to at-risk students. A thousand dollars can go a long way to keep a struggling adult in class (by letting them pay for babysitting, take an unpaid day off work, hire a tutor, or catch up on bills).
I agree with Dylan: I’d love it if foundations focused their efforts where there’s more cause for optimism. But I want to give credit where credit is due. Its early failures have made the Gates Foundation more cautious and more responsible with education interventions. The way it handled iPASS — funding careful research into the program’s effects and publishing the results openly — is exemplary.
By publicly admitting that they’ve seen no results so far, they’re demonstrating that they’re willing to keep learning from their mistakes — and willing to make them public so others can learn too.
We don’t know what works in education yet. But honest, thoughtful research like this is the only way we might ever eventually find out.