What does it mean to do the right thing? And how can we do it more consistently?
The effective altruism movement is an attempt to answer these questions, and hopefully make it easier for everyone to do more good in the world. Instead of doing charity in a way that simply makes people feel good, effective altruists rely on rigorous, evidence-based analysis to decide how much to give, where to donate, and which careers are most ethical. (Dylan Matthews explained the history and philosophy behind the movement for Vox.)
William MacAskill is one of the founders of effective altruism and the author of Doing Good Better, probably the best introduction to EA. He is also a professor of philosophy at Oxford University and the president of the effective altruist group 80,000 Hours, which explores how people can do more good through their careers.
Full disclosure: I support the effective altruism movement and consider it a force for positive change in the world. But the movement is not without its critics, and I’ve heard some worthwhile criticisms of EA that deserve to be addressed.
For example, political theorists like Jennifer Rubenstein question whether EA’s focus on measurable outcomes creates a bias against social or political movements, which often aim at eliminating the structural origins of inequality and whose effects aren’t readily quantifiable. And Matthews has suggested that effective altruists might be spending too much time worrying about things like artificial intelligence and not enough time worrying about things like global poverty.
I spoke to MacAskill about some of these criticisms and asked him to lay out why he thinks the effective altruism model remains the best way to do good in the world. A lightly edited transcript follows.
Can you sum up the philosophy of effective altruism for people who aren’t familiar with it?
Sure. Effective altruism is about trying to use your time and money as well as possible to help other people. The core idea is very simple: Imagine you could save five people from drowning or you could save one person from drowning. It’s a tragic situation and you can’t save both. Well, it’s pretty commonsensical that in such a situation, you ought to save the five rather than the one, because there are five times as many interests at stake.
It turns out that the drowning-person thought experiment is not really a thought experiment; it’s exactly the situation we’re in now. If you give to one charity, for example, you can save one or two lives, but if you give the same amount of money to another charity, you might save tens of lives.
There’s this super-difficult question, which is, “Of all those things we could do, what are the best ways of doing good? What are the causes that actually can help the most people?” We use the best data we have to figure out what those causes are and then take action and make as much progress on them as possible.
So the highest moral aim, from your point of view, is doing the most good for the most people?
That’s exactly right. We look at how many people are affected positively or negatively by an action, and by how much, and then we add all that up and determine the right course of action.
How do you prioritize concerns or determine what’s worth caring about? I think most people will understand the general ambition of reducing suffering or doing the most good for the most people, but ultimately, we have to fall back on some kind of value hierarchy, right?
In terms of moral philosophy, you’ve got to interrogate your own views. Perhaps you might start off caring more about people in your own country rather than in other countries. But there are various ways of challenging that. For example, you might ask yourself, “What if I didn’t know who I was going to be in the world or in a particular society? How would I want the resources to be distributed? What sort of decisions would I make about how society should be organized?” That’s a good way of thinking about what is fair and ethical.
Similarly, just look at the history of moral progress. What are the major moral wrongs that people committed in the past? Why did they commit them? What you find is that most moral atrocities happened because we excluded people from our moral circle. We treated them as though they weren’t worthy of moral consideration.
Women were excluded, people of color were excluded, people of other countries were excluded, and so on. What we’re trying to do is include more creatures in our moral circle — including animals. Deciding the right values to be aiming at is always a tricky question, I admit that. But it’s not entirely subjective. We can use our ability to reason to get to better or worse answers on these questions.
And how do you quantify moral progress?
It’s an important question. One thing we can do is look at how people’s moral views are improving, and then secondly, track how much people are acting on those views. For instance, we know that people’s attitudes toward nonhuman animals — in the West, at least — have improved. People appreciate that nonhuman animals deserve our moral consideration too.
At the same time, we’re inflicting more and more suffering on nonhuman animals in factory farms every year. That’s a case where moral views are improving but the actions we’re taking on the basis of those views as a society are in fact getting worse. We can measure that.
So the way I really want to chart how we’re progressing as a civilization is to look at how many creatures we grant moral rights to, and then measure how well off they are and if their lives are improving over time. I think for humans, at least, that trend is pretty clearly positive. For nonhuman animals, however, it’s quite equivocal.
There are critics who say that EA overlooks something central to human nature and giving, namely our desire to feel emotionally connected to the people and causes we donate to. I know you want to disentangle the moral from the emotional, but do you think the overly rational, evidence-based approach you’ve adopted discounts this basic truth of human nature?
Yes and no. It depends whether you think of this truth of human nature as something that’s perhaps morally relevant or something that’s just an obstacle that we need to overcome. Taking them in turn, I think it is very hard to see that this could be a morally relevant consideration, at least as anything more than a tie-breaker.
Imagine if you’re a doctor working in a war-torn country or something. Should the fact that you like treating certain diseases rather than certain other diseases be morally relevant? I don’t think so. Or if you’re a government policymaker, perhaps you like people of one sort of background more than you like people of another background. Should that be relevant? Again, no. I think when we take this attitude of acting philanthropically, it’s like we’re taking this impersonal point of view. We should have an attitude of treating all potential beneficiaries equally.
There is already a lot of charitable marketing that tugs on your heartstrings. Effective altruism takes a different approach. We simply want to do the most good for the most people, and we use the evidence to help us do that. We don’t want to trick or manipulate people. We simply say, “Here are the facts and here’s how you can do a ton of good at almost no cost to yourself.”
It turns out that there’s a significant minority of people who really like that, and when they’re thinking about making much larger commitments, like donating 10 percent of their income, this sort of reasoning is very powerful.
As someone from the political world, I wonder if EA’s emphasis on causes with measurable outcomes like public health intervention or income creates a bias against social and political movements, which are equally important (because they’re often at the root of health or economic problems) but don’t fit as neatly into your evidence-based scheme.
Let’s distinguish between whether a cause is difficult to quantify and whether it’s political. Although it’s true that many political movements and political changes have outcomes that are difficult to measure, I don’t think the two distinctions line up quite so neatly.
Many effective altruists, myself included, have over time become convinced that the vast majority of lives we can affect are in the future. If that’s right, it’s plausible that the most important moral imperative is to make sure that the very long-run future goes well. The stakes are enormous: trillions and trillions of lives over hundreds of thousands of years.
But when it comes to influencing the long-run future, we can’t run randomized controlled trials. In general, we can’t have the same sort of robust evidence base that we’re used to in global health interventions, for example. So it’s not the case that effective altruists focus exclusively on things that are easy to measure.
When it comes to politics, there’s no reason in principle why effective altruists shouldn’t get involved. But we’re less likely to enter longstanding political debates, such as what the tax rate should be, where a few more voices are unlikely to make much difference. Instead, we’re more likely to look for comparatively neglected policy areas — for example, what to do with new technologies such as artificial intelligence and synthetic biology.
Another example is voting system reform. I’ll give a shoutout to an organization you covered a few weeks ago, the Center for Election Science. They promote the use of approval voting, a voting system in which you simply vote for as many candidates as you like. I think this leads to better political decisions being made, because the elected candidates better represent the will of the people. Again, this is an extremely neglected area.
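The tallying rule MacAskill describes is simple enough to sketch in a few lines: each voter approves any number of candidates, and whoever collects the most approvals wins. The candidate names and ballots below are hypothetical, purely for illustration:

```python
from collections import Counter

# Hypothetical ballots: each voter approves as many candidates as they like.
ballots = [
    {"Alice", "Bob"},
    {"Bob"},
    {"Alice", "Carol"},
    {"Bob", "Carol"},
]

# A candidate's score is the number of ballots that approve of them.
tally = Counter(candidate for ballot in ballots for candidate in ballot)

winner, approvals = tally.most_common(1)[0]
print(f"{winner} wins with {approvals} approvals")  # Bob wins with 3 approvals
```

Unlike plurality voting, a voter here never has to abandon a favored long-shot candidate to support a viable one, which is part of why the system is argued to better reflect the will of the electorate.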
The related objection here is that EA’s emphasis on improving the welfare of individuals leads to a kind of indifference to broader structural problems like inequality or oppression or how power generally works in the world. The implication is almost that we shouldn’t be working to alter the very things that perpetuate human misery. And I have to say, this is part of the circularity of the EA model that does give me a little pause.
I think this objection is off the mark, because what I’d need to see is an argument that opposing inequality in some particular way is actually going to be the best thing to do. Suppose you’re trying to combat inequality by increasing the US tax rate. Again, this is such well-trodden ground that it’s hard to see how your actions could have a really transformative effect.
That said, I’ll mention two promising examples in this area. Firstly, developments in AI could potentially lead to massive labor displacement and concentration of power in the hands of a very small number of people. Very little work has been done on understanding and responding to these possibilities.
Secondly, there has been quite a lot of interest in the effective altruism community in international labor and mobility. For those who are particularly concerned about inequality and oppression, I think the fact that people are forcibly kept out of richer countries where they’d be able to be more productive should loom large. So maybe that’s where I’d recommend people start looking.
What’s your advice to people who are torn between choosing a philanthropic career (in the nonprofit world, for example) or a high-paying profession that itself isn’t great for the world (and may actually contribute to human suffering) but will allow them to give more to charity?
The first question to ask yourself is: How bad is the high-paying career? Would you be violating people’s rights in the course of doing it? If so, don’t do it. Violating rights is generally regarded as immoral, even if it’s for the greater good.
But suppose it’s just a career that pays well without causing harm. Then the question is: What’s your comparative advantage? Thinking about this in the context of the effective altruism community, ask yourself: What percentage of effective altruists should be earning to give? What percentage should be working for nonprofits?
Relative to those numbers, how good would you be at earning to give compared to nonprofit work? Then earn to give if you think you are unusually well suited to it: perhaps among the 15 percent best suited, out of the wider community of aspiring effective altruists.
In general, when you’re making decisions as part of a wider community and not just for yourself, considering where your comparative advantage lies becomes very important.
If people decide that they want to follow the effective altruism model, how can they get started?
There are a few different entry points. For an overview, people can go to the website and read about global health and development, positively influencing the long-run future, animal welfare, and community-building.
If they want more detailed information about donating, they can go to GiveWell, a nonprofit organization that researches and finds top-rated charities.
If they want advice on career choices, they should go to 80,000Hours.org. There’s a huge amount of in-depth advice there on how to use your career to do good. And if people just want to learn as much as they can about EA ideas in an easy-to-digest format, 80,000 Hours has a podcast, The 80,000 Hours Podcast, which is a great guide for people interested in informed giving.