The season finale of HBO’s The Last of Us — based on the video game of the same name — thrust a longstanding philosophical question into the cultural spotlight: Is it ever ethical to kill one person for the well-being of many others?
If you haven’t seen the show or played the game, the premise is that cordyceps, a real genus of fungus, has evolved the ability to inhabit humans, turning them into mushroom-zombies that bite. Twenty years of apocalyptic chaos ensues.
The series follows a gruff man named Joel (Pedro Pascal) and a young girl named Ellie (Bella Ramsey), the only person known to have displayed immunity to the fungus. The duo travels to find a division of the renegade group known as the Fireflies, who are planning to engineer a vaccine using Ellie. What the two don’t know is that the surgery to engineer the vaccine will kill her.
Ellie is given no opportunity to provide consent, and the surgery has questionable-at-best odds of actually delivering a vaccine. Upon finding out, Joel saves Ellie from the surgery, killing plenty of Fireflies in the process, while also putting an end to the best, and perhaps only, shot at saving humanity through a vaccine.
The finale presents a bioethics question: When the entire species is at stake, should our decision-making logic change? So I spoke with Arthur Caplan, head of the division of medical ethics at the NYU Grossman School of Medicine and a professor of bioethics.
Conspicuously absent from The Last of Us was an Institutional Review Board (IRB), the group charged with reviewing and monitoring biomedical research involving human subjects in line with FDA regulations. We discussed whether IRBs today are flexible enough to handle decision-making in an apocalypse, what the relevant considerations would be, and if higher scales and stakes ever justify otherwise non-permissible actions.
This interview has been edited for length and clarity.
Let’s say there was a fungal apocalypse, and an IRB had to decide whether to allow an experimental surgery that would kill the subject, but offer a chance to save millions. How would they approach that question?
So there are two ways to approach that question. One is to reason from what the IRB looks like today. If somebody comes to you and says there’s a terrible disease, we want to do an experiment, we think we could get something that could save many, but we have to kill you — that would be the end of the discussion. Fatal experiments would not clear any standard IRB or research ethics committee in the world today, even with the promise of big returns.
But in an apocalyptic scenario like in the show, where people have been dying for 20 years and someone proposes the experiment, I think you might get further. We almost got to this with Covid when the idea came up of doing challenge studies, deliberately infecting people with Covid [to help speed up vaccine research], not having any way to save them if they got very sick. And I defended the experiment.
Some said that you cannot do that, it’s unethical. Others said, well, look, imagine the girl in the show [Ellie] says she wants to help save the world and be an altruist. If you are truly volunteering, if you choose knowingly and understand the risk (that’s crucial), and if you’re pretty certain of the science, because the odds of the experiment’s success will drive some of the answer, then it might be permissible. My own view is yes: in an apocalypse with the possibility of a real breakthrough, if the person volunteered and truly said, “I want to help, I’m going to be an altruist,” I think I could approve that.
In the show, Ellie wasn’t given the option to provide consent, but let’s say she did, and she was an adult. There’s still a lot of uncertainty around whether the surgery will work, whether it will actually produce a vaccine, or whether there might be other options. So even when someone gives consent, can the presence of uncertainties still make the experiment unethical?
Yes, the IRB’s job is to interpret the chances of the science working; consent is not sufficient. Some of the early pioneers of artificial hearts did consent and said, “I’ll take my chances, I’m gonna die anyway,” but the IRB had to step in and challenge whether the scientific protocol was sound, whether the background information they had pointed in the direction that they were likely to get an answer. The IRB’s job is to ensure that consent is there, but also to make sure that the science is sound.
Say we find ourselves somewhere in between the Covid pandemic and Last of Us on the apocalypse scale. Do you imagine current IRB processes are flexible enough to adapt to those sorts of situations? Is the IRB apocalypse ready?
IRBs can be flexible; let me shift to something analogous. Sometimes people are out hiking and they eat a poisonous mushroom. They show up at the ER, unconscious. There’s no approved antidote, no one knows what to do, and there’s no time to convene the IRB. Well, we have carved out a space where you could try an experimental antidote without the consent of the person. We have an emergency research waiver that says, facing certain death from this poisoning, most people would reasonably consent to the experimental agent.
You’re supposed to get consent after the fact, if they survive. You’re supposed to do what you can to warn people in advance, but the flexibility is there for research under emergency circumstances, so it’s not hypothetical. So yes, faced with a 20-year plague that was killing everybody, if you truly had an altruistic and consenting volunteer, I think an IRB could go along with it.
In philosophy’s “trolley problem,” you have to decide if saving five people justifies killing one. In the show, the scale of the decision is much larger. Killing the one could save the entire remaining human race. From a bioethics point of view, does the scale of the sacrifice factor into the decision-making?
That actually has a name in ethics; it’s called “do the numbers count.” My answer is yes, it does morally make a difference.
This also comes up when you start thinking about world-destroying kinds of problems, like the debate we had over torture. Many people just said torture is off the table. But there were people who wrote memos saying, well, if there really is no other way, and if you knew that a guy had planted a nuclear weapon, and the clock is ticking, you might go to torture to get an answer. I’m not for torture, but you can spin a scenario or two where I might say yes: if we know for sure a bomb is going to blow up a whole city and all we’ve got is this guy with two minutes on the clock, then I guess I would try to torture an answer out of him, because the numbers count.
The peak of the Covid-19 pandemic wasn’t apocalyptic, but it did stress-test our institutions and force us into difficult decisions. I’m curious how you think our institutions performed. Are you optimistic they’re well set up to handle future scenarios, from pandemic to apocalyptic, or were cracks revealed?
I was involved in things like trying to make a ventilator policy when we didn’t have enough, and I’ve been involved for a long time in rules about who gets organs for transplants, and I think that the institutions broke down at both the state level and the national level. But they held up, weirdly enough, at smaller scales, like within hospitals or local places. We all knew what we were going to do at NYU and who was going to get on the ventilator, who was going to come off. We talked about it and there was agreement on that. But if you had asked the Trump administration when things first started, no, they weren’t giving guidance. Even in the state of New York, or in Connecticut, you didn’t have guidance.
So to some extent, the people who set policy at larger scales didn’t do a very good job. But Covid was moving really fast, and we were arguing over who gets a mask, who gets protective gear, who gets a ventilator — that was real-time decision-making.
In the TV show, they might have had time to set up a national commission to debate whether to let the girl volunteer for the surgery. But if there were a critical period and you had to decide within a month or so, I don’t think you’d get national guidance. You’d probably be left with a local institution, where the context will matter.