Brian Tomasik is a consultant at the Foundational Research Institute, which explores possible avenues for reducing suffering in humans and other sentient beings, now and in the future. He maintains the website Essays on Reducing Suffering, where he writes on issues in ethics, biology, philosophy of mind, and other fields relevant to the question of how best to reduce the amount of suffering in the world.
Recently, he has become interested in the question of what moral standing, if any, we should give to non-player characters in video games (NPCs). He argues that, while NPCs do not have anywhere near the mental complexity of animals, the difference is one of degree rather than kind, and we should care at least a tiny amount about their suffering, especially as they grow more complex. We spoke on Gchat this past Tuesday. An edited and condensed transcript follows.
Dylan Matthews: What exactly is your view about the moral standing of non-player characters (NPCs)? Is it moral to kill them?
Brian Tomasik: That depends on their degree of sophistication, and whether they're built in a way such that killing them would correspond to something harmful.
Very simple game algorithms would matter to an almost infinitesimal degree, and they may not have responses that we would consider aversive. A Goomba in Super Mario Bros. that just walks back and forth along the ground is arguably as simple as a physical object bouncing back and forth. It doesn't seem to have pronounced goals that are being thwarted by its nonexistence, nor does it have machinery to try to avoid death or feel bad about death.
In contrast, a slightly more complex character that plans moves to avoid being shot or injured by the player of the game has at least the bare outlines of goals and attempts to avoid injury. This case might be marginally ethically relevant. The moral significance would increase further if, for example, the character had penalties applied to its health or wellbeing level as a result of injury, as is the case in some RPGs [role-playing games] or reinforcement-learned game characters.
Present-day video games mostly use extremely simple algorithms that resemble goal-directed and welfare-relevant behavior in very crude ways. They resemble complex, sentient animals in much the way that two dots and a smile resemble a detailed picture of a face. Hence, it seems plausible to give any single game character extremely small weight compared with vastly more complex forms of purposeful, welfare-relevant behavior in larger organisms like animals.
Video games are not the only places where we can see planning, adaptive responses, welfare monitoring, and so on. These are common features of many programs that computers run on a daily basis, such as load balancers, query optimizers, machine-translation systems, and so on. The case of video games is interesting because our moral intuitions can engage with the situation more easily: we can see how the characters are behaving on our screens.
As far as whether it's moral to kill video-game characters: In extremely simple cases like Goombas, I probably wouldn't worry too much about it. For more intelligent characters (such as a boss that requires multiple hits to be killed, moves to avoid being hit with a sword, and so on), the ethical significance may rise to an extremely weak level. On any given occasion it's not a big deal, but aggregated over tens of millions of people killing thousands of these characters on a regular basis during game play, it does begin to add up to something nontrivial. That said, I don't think violence toward video-game characters is currently among the world's most pressing ethical problems.
Dylan: So when you think about a program in this category, whether it's a bot in Quake or a query optimizer, "suffering" means what exactly? Do the programs suffer when something happens that produces negative feedback for them? And how do we know which feedback is negative?
Brian: In animals, at least one important part of suffering seems to be conscious awareness of damaging stimuli (such as realizing that your hand is on a hot surface) or the expectation of such in the future (like when you learn you're going to have to go through a painful medical operation). It can also come from more "internal" signals like the pain of losing a loved one.
A few video-game characters are trained using an artificial-intelligence technique called reinforcement learning, which has striking parallels to classical and operant conditioning in animals, both in terms of its function and its algorithmic implementation. In this setting, agents receive rewards and punishments from the environment based on carrying out certain actions (e.g., moving away from another player) in certain states (e.g., when you're close to a dangerous player). The agent can learn what actions to take depending on the context in order to increase rewards and decrease punishments. In this case, suffering seems more clear: Presumably it corresponds to the agent receiving and processing its reward/punishment value from the environment and then using that for learning to change its actions.
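The kind of reward-and-punishment loop described here can be sketched in a few lines. The following is my own toy construction, not code from any actual game: five positions on a line, where position 0 is "dangerous." The agent is punished for being near the danger, rewarded only at the far end, and uses tabular Q-learning to adapt its actions to that feedback.

```python
import random

ACTIONS = ("left", "right")

def step(pos, action):
    """Move along a 5-cell line; the environment returns a reward signal."""
    new_pos = max(0, pos - 1) if action == "left" else min(4, pos + 1)
    if new_pos <= 1:
        reward = -1.0          # "punishment": too close to the dangerous end
    elif new_pos == 4:
        reward = +1.0          # "reward": safely at the far end
    else:
        reward = 0.0
    return new_pos, reward

# Tabular Q-learning: learn a value for every (state, action) pair.
q = {(s, a): 0.0 for s in range(5) for a in ACTIONS}
alpha, gamma = 0.5, 0.9        # learning rate and discount factor

random.seed(0)
pos = 2
for _ in range(5000):
    action = random.choice(ACTIONS)            # explore randomly while learning
    new_pos, reward = step(pos, action)
    best_next = max(q[(new_pos, a)] for a in ACTIONS)
    # Nudge the estimate toward the observed reward plus discounted future value.
    q[(pos, action)] += alpha * (reward + gamma * best_next - q[(pos, action)])
    pos = new_pos

# The learned greedy policy flees the dangerous end from every position.
policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(5)}
print(policy)
```

After training, the greedy policy chooses "right" (away from danger) in every state, which is the sense in which the agent has "learned what actions to take depending on the context."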
How to determine whether this feedback is positive or negative is a tricky issue. In reinforcement learning, the rewards are numbers that can be either positive or negative, so a natural assumption is that positive numbers are (in an extremely crude sense) "pleasurable" and negative numbers are "painful." But this distinction isn't necessarily accurate, because there are cases where we can shift the set of possible rewards/punishments the agent receives up or down (i.e., add or subtract a constant from all of them) without changing the agent's behavior. So we may need to think more about exactly what separates reward from punishment in these agents, perhaps by appealing to behavioral seeking vs. avoiding tendencies, or to other characteristics.
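The shift-invariance point can be demonstrated directly. In the toy two-state world below (my own construction), adding a constant to every reward raises all the computed values by the same amount, so the agent's choices are unchanged even though every "reward" could now be a positive number:

```python
GAMMA = 0.9
# transitions[state][action] = (next_state, reward); state 0 is "painful" to stay in.
transitions = {
    0: {"stay": (0, -1.0), "move": (1, +1.0)},
    1: {"stay": (1, +1.0), "move": (0, -1.0)},
}

def value_iteration(shift=0.0, sweeps=200):
    """Compute state values and the greedy policy, with `shift` added to every reward."""
    v = {0: 0.0, 1: 0.0}
    for _ in range(sweeps):
        v = {s: max(r + shift + GAMMA * v[s2]
                    for (s2, r) in transitions[s].values())
             for s in transitions}
    policy = {s: max(transitions[s],
                     key=lambda a: transitions[s][a][1] + shift
                                   + GAMMA * v[transitions[s][a][0]])
              for s in transitions}
    return v, policy

v0, p0 = value_iteration(shift=0.0)
v5, p5 = value_iteration(shift=5.0)
print(p0 == p5)                     # the two policies are identical
print(round(v5[0] - v0[0], 6))      # every value rises by 5 / (1 - 0.9) = 50
```

Since every (state, action) value rises by the same constant, the ranking of actions, and hence the behavior, is untouched, which is why the sign of the reward numbers alone can't settle what counts as "pleasure" or "pain."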
Reinforcement learning is mainly used in an academic setting right now, and most video-game AIs employ simpler techniques to control behavior, including "if-then" reaction rules, or possibly planning algorithms to determine the best path for reaching some destination. The agent can still act in its environment, just like a reinforcement-learning agent; it just doesn't necessarily adapt its behavior to new information.
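The two simpler techniques mentioned here, hand-written "if-then" reaction rules and a planning algorithm for reaching a destination, might look like the following sketch (all names and numbers are illustrative, not from any real game engine):

```python
from collections import deque

def react(health, player_distance):
    """Fixed if-then rules: no learning, just a lookup of situations."""
    if health < 20:
        return "flee"
    if player_distance < 3:
        return "attack"
    return "patrol"

def plan_path(grid, start, goal):
    """Breadth-first search over a grid; '#' cells are walls.

    Returns a shortest list of (row, col) cells from start to goal,
    or None if the destination is unreachable.
    """
    rows, cols = len(grid), len(grid[0])
    frontier = deque([start])
    came_from = {start: None}
    while frontier:
        cur = frontier.popleft()
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and grid[nr][nc] != "#" and (nr, nc) not in came_from):
                came_from[(nr, nc)] = cur
                frontier.append((nr, nc))
    return None

grid = [".#.",
        ".#.",
        "..."]
path = plan_path(grid, (0, 0), (0, 2))
print(react(50, 2))   # "attack"
print(path)           # route around the wall in the middle column
```

The planner acts purposefully in its environment, finding the best path around the wall, but unlike the reinforcement learner it never adapts its rules to new feedback.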
It seems to me that agents can still matter even if they don't learn. One argument for this is that humans sometimes experience degradation in their ability to do reinforcement learning, such as in Parkinson's disease, and yet they can still act and have morally relevant goal-directed behavior. We can still think of simple game characters as having preferences for the world to be one way over another, and violating these preferences might be morally problematic to an extremely tiny degree. While it's true that humans are also fundamentally just goal-directed physical processes, the difference in complexity seems to matter. We care much more about sophisticated and algorithmically powerful systems than about simple ones. But the difference is one of degree, not kind, so the simple ones seem to matter a tiny amount.
Dylan: How important is the sheer number of NPCs out there? You also have written about suffering in insects, which are much simpler than vertebrate animals, but way, way more numerous in a way that suggests we should accord their interests more weight.
NPCs are even simpler than insects, but should we be concerned if they get too numerous?
Brian: Yes, numbers are important. If we care about these processes as individuals, it seems our concern should scale linearly with the numbers of them, because any given agent shouldn't matter less just because others like it also exist.
Fortunately the number of NPCs around today is not astronomical, because there are only so many video-game players, who kill or injure only so many NPCs per hour, apart from possibly some additional runs for game development and testing. It may be that there are other mundane computational processes that add up to being more serious in total due to their prevalence -- being run at high speeds on large computing clusters. On the other hand, some video games are more violent than almost anything else, so they may be concerning in that respect, especially if we place any weight on the visual rendering and animal-like appearances of the characters.
Insects are far more numerous than NPCs right now, and they're also far more sophisticated. Many insects have at least 100,000 neurons and exhibit not only reactive and goal-directed behavior like NPCs but also reinforcement learning, selective attention, memory, sleep-like states, cognitive generalization, social behavior, and so on. There are an estimated 10^19 insects on Earth, compared with around 10^10 humans or around 10^11 to 10^12 birds. Even if you count just the raw number of neurons, insects outweigh humans by a few orders of magnitude. While humans may matter a lot more for instrumental reasons related to the trajectory of the far future, in terms of pure morally relevant amount of sentience, insects may dominate on Earth at the moment.
Unfortunately, this has pessimistic implications for the net balance of happiness and suffering in the wild. Many insects live just a few weeks, and they give birth to hundreds or thousands of offspring, most of which die shortly after being born. Life even for the survivors may also involve hunger, disease, and death by predation, lack of water, or something else.
In the long run, we may become relatively more concerned about digital sentience, including video-game characters, reinforcement-learning agents, and other computational processes. As computing power increases, both on Earth and perhaps eventually in other parts of the galaxy, humans may run massive numbers of computations at high speed, some of which may embody morally relevant processes. Video-game NPCs are one example of this. In addition, as these NPCs become more life-like, intelligent, and affectively sophisticated, the moral weight of any given individual will increase. It's possible that video games in 50 years will routinely contain characters as sentient as a minnow or salamander is today.
Dylan: You're a utilitarian, but that philosophy comes in a lot of different flavors. In particular, some utilitarians care mostly about increasing pleasure and decreasing pain ("hedonic" utilitarians) and others care about making sure as many people as possible get what they want ("preference" utilitarians). Which camp do you mostly identify with?
Put another way: what do you think matters, fundamentally?
Brian: I have a lot of sympathy for the utilitarian approach, though I'm less dogmatic about it than I used to be and see value in other moral approaches as well. A similar comment applies to my brand of utilitarianism: I'm sort of in the middle on some of these debates rather than taking a firm stance one way or the other.
This is especially the case for hedonistic vs. preference utilitarianism. I think both hedonic experiences and preferences matter somewhat. Hedonic experience is intuitively what makes life worth living (or not), and people often have intuitions that an "unfeeling" robot or mechanical system doesn't matter, even if it appears to work toward certain goals and avoid having them thwarted. On the other hand, we sometimes have preferences that contradict experienced pleasure, such as when a hungry mother gives all her food to her child or when people want to act altruistically in the real world rather than hooking up to an "experience machine" where they could bliss out in ignorance of the suffering of others.
Thinking about this question from the level of neural subsystems can help clarify our intuitions. Our brains are motivated not by a single, hedonistic system but by many competing components.
For instance, we have hard-wired reflex behavior. We have actions that have been trained by what's called "model-free" reinforcement learning, where our brain learns to react to certain conditions with habitual responses that may not depend on higher-level understanding of the situation. We have "model-based" reinforcement learning, which computes the value of actions using knowledge of how the world is likely to change in response to what we do. And even the reinforcement signals that we feel may have many components, including physical pleasure/pain, expectation of long-term reward, altruism toward kin and trading partners, and so on. We use heuristics about appropriate social behavior. And much more.
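The model-free vs. model-based contrast can be made concrete with a deliberately crude sketch (my own construction, not a model of any real brain). Both systems pick the same action at first, but when the outcome is devalued, only the model-based system updates immediately; the cached habit persists until it is relearned:

```python
world_model = {            # model-based: knowledge of what each action leads to
    "press_lever": ("food", +10.0),
    "wander":      ("nothing", 0.0),
}

cached_values = {          # model-free: habit strengths from past reinforcement
    "press_lever": 8.7,
    "wander":      0.3,
}

def model_free_choice():
    """Habitual: pick whichever action has the highest cached value."""
    return max(cached_values, key=cached_values.get)

def model_based_choice():
    """Deliberative: simulate each action's outcome and compare rewards."""
    return max(world_model, key=lambda a: world_model[a][1])

print(model_free_choice(), model_based_choice())   # both pick "press_lever"

# Devalue the outcome (say, the food now makes the animal ill).
world_model["press_lever"] = ("food", -5.0)
print(model_based_choice())   # switches to "wander" immediately
print(model_free_choice())    # habit still says "press_lever"
```

This divergence after devaluation is one way the two systems can be told apart behaviorally, and it illustrates why a brain built from such competing components has no single obvious "preference."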
How to construct a single "happiness" or "preference" evaluation from this complex system is not obvious. It would be akin to asking "How happy is General Motors?" or "What does General Motors prefer?" There are many pieces of the system you could look at when answering that question.
Dylan: You argue that it's possible insects account for most of the morally relevant feelings and preferences held or experienced by any entities anywhere in the universe. You've written a lot about how to deal with this, but it'd be great if you could discuss the main options, as you see them. Do we mercy-kill most or all insects? How do we deal with the larger ecosystem ramifications?
Brian: It's plausible that insects on land, and zooplankton in the oceans, constitute a large portion of the sentience on Earth in the present. Of course, it's good to avoid being overconfident in statements like this, because we may change our views, either to down-weight insects/zooplankton relative to larger animals, or to upweight organisms or other processes that are even smaller than insects/zooplankton.
In the long run it may be that most of the sentience in the galaxy will be digital, because computers run much faster than biological brains and can exist in greater numbers, since they don't require such sensitive planetary conditions to support biology. Future civilizations may employ immense numbers of reinforcement-learning robots, goal-directed optimization processes, and other computational systems where we can see traces of things that remind us of sentience. Finding ways to develop more humane approaches to computation will be an important task for our descendants, especially if they colonize space.
Regarding insects in the short run, it's true that our options are somewhat constrained by external considerations. Most people don't want to destabilize the human future by disrupting ecosystems drastically, and indeed, instability may also cause harm to future beings if it hinders international cooperation, wise governance, and humane values. However, there are still steps we can take to reduce the suffering we cause to insects.
As one example, insects are raised and cooked for food in many parts of the world, such as Mexico and Thailand. In many cases, these insects are fried or roasted alive. And entomophagy may become more popular in Western countries as well. I think raising insects is a bad idea because they have such high infant-mortality rates that their cultivation inherently leads to lots of unavoidable suffering. But if insects are raised for food, there should at least be welfare standards for their living conditions and especially for their slaughter. Some entomophagy companies in the US claim that freezing their insects to kill them is humane, but it's disputed whether freezing is actually painless for insects. More research and attention are needed here.
Another area where humans affect even more insects is pest control on crops. It's not clear that insecticide use per se causes net harm -- because the insects killed would have died naturally, and it may be that insecticides keep insect populations much lower than they would otherwise be. But if we do use insecticides, we should favor those that kill more quickly and less painfully. "Natural" pest-control methods like Bt sprays or introduction of predator insects may be among the most unpleasant, because evolution has presumably designed insects to find death by bacteria or predators painful. Investigating which insect-control methods are most humane, and developing new ones, could be a way to help literally trillions of insects through a person's career.
Dylan: You obviously care tremendously about the suffering animals experience, but you're very suspicious of wilderness advocacy. Why? What's the problem with shoring up natural wildlife habitats?
Brian: It's commonly assumed that if you care about animals, you should aim to defend their habitats so that more animals can live happy lives in the wild. The problem is that the lives of many wild animals are not as rosy as we might like to believe. Animals face cold, hunger, disease, parasites, fear of predators, and of course, predation itself. Imagine being swallowed alive by a snake or having your abdomen torn out by a crocodile.
Mature members of most animal species live at most a few years, and only a fraction of all animals born even make it to maturity. Small animals typically follow what's called an "r-selective" breeding strategy, in which they give birth to hundreds or thousands of offspring, most of which die shortly after birth. And even if you give small animals like insects much less moral significance than big animals due to their limited complexity, the cumulative mass of invertebrate neural tissue still dominates that of bigger animals.
As in the case of insects, the solution is not to eliminate all habitats, because this would cause instability in human affairs and potentially set back efforts to create a humane future. However, we should at least take the suffering of animals in nature into account when doing cost-benefit calculations.
Dylan: At some point all of this comes down to when you think certain beings can or cannot have wants, needs, and desires, or feel pain and pleasure, and whatnot, and a lot of people just have the intuition that, while the mental states of complex animals are less intricate than, but somewhat analogous to, the mental states of human beings, NPCs and insects just don't have mental states at all. It may be "like" something to be a bat, but it isn't like anything to be a mosquito or a Quake bot. Why do you think that's wrong?
Brian: We have a powerful intuition that it's a factual question whether there's something it's like to be a human, a bat, or an NPC. Are insects conscious or not? This is usually regarded as a matter for science to settle in an objective manner. And while science does play an important role, I claim that it's fundamentally a moral question — one for our hearts to decide.
On my view, there is only physics, and "consciousness" must be an emergent phenomenon within physics — a larger, more complex process built from smaller components. Life, planets, skyscrapers, and nation-states are also emergent phenomena. Thus, for something to be conscious means that it has mental traits, exhibits behavior, and uses cognitive algorithms that we decide qualify it as being "conscious."
We tend to think that certain mental processes — like attention, sensory integration, self-reflection, imagination, and so on — when put together constitute our conscious mind. Hence, when we see these kinds of processes in other places, it's reasonable to call those other processes "conscious" as well, to a greater or lesser degree. This is actually what scientists do when they assess the sentience of animals; they just aren't always explicit about the fact that these questions are judgment calls based on what criteria we feel ought to be relevant to what degree.
Now, you might object and maintain that your being conscious is not a matter of opinion, but is something you can be certain of based on introspection. I share this intuition, and it's very powerful. But think about what's going on in your brain when you have that thought. What neurons are firing in what pattern, and what algorithms are they playing out? Perhaps there are inputs from your perception combining with a mental representation of "yourself," which work together with cognitive centers to form the thought that "I'm conscious!", which is then verbalized by the muscles in your mouth, and so on. Whatever the specific steps of the process are, those steps constitute an instance of consciousness in your mind. And presumably you're also conscious when you have perceptions even if you don't explicitly think about that fact. In order to extend the range of things we call conscious beyond references to your personal introspection, we generalize what sorts of cognitive algorithms are at play here.
We could define consciousness narrowly, to only encompass very human-like brains. But this seems parochial, especially if similar behavioral and intellectual functionality can be achieved by somewhat different algorithms or cognitive architectures. If we define consciousness to be too specific, we may also rule out humans with certain cognitive disabilities, even if most of us would still call them conscious. If we think about the broader, fundamental principles that lead us to care about a mind — e.g., having a sense of what's better or worse for it, having goals and plans, broadcasting information throughout its brain about events that take place, being able to act successfully in the world, and so on — we can see rudimentary traces of these traits in many places, including insects and, to a much smaller degree, computational processes like some NPCs in video games.
We might think there should be a threshold of cognitive complexity below which computational agents stop mattering, but it's not clear where to set such a threshold. Each step in increasing an artificial agent's mental abilities by itself looks rather trivial; it generally means just adding some extra lines of code. It's when all of these seemingly trivial steps are added together that the agent traces out patterned behavior that looks more meaningful to us. If we insist on self-reflection as a clear cutoff for moral concern, we run into the problem of specifying where "self-reflection" begins and ends. Even a trivial computational agent may monitor its own state variables, assess its performance, generate statements about itself, store and retrieve memory logs of its past experiences, and so on. Without a clear dividing line here, I think a computational agent with these trivial abilities ought to be called marginally self-aware, and an agent with more powerful and advanced self-reflection ought to be called more self-aware.
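To see how trivial such "marginally self-aware" machinery can be, here is a sketch of an agent with exactly the abilities just listed: it monitors its own state variables, assesses its performance, keeps memory logs of past experiences, and generates statements about itself. Everything here is illustrative; the class and its fields are my own invention.

```python
class TinyAgent:
    """A deliberately trivial agent with rudimentary self-monitoring."""

    def __init__(self):
        self.energy = 100          # an internal state variable it can inspect
        self.memory = []           # log of past "experiences"
        self.successes = 0
        self.attempts = 0

    def act(self, task_difficulty):
        """Attempt a task, spend energy, and record the outcome."""
        self.attempts += 1
        succeeded = self.energy >= task_difficulty
        self.energy -= task_difficulty
        if succeeded:
            self.successes += 1
        self.memory.append({"difficulty": task_difficulty,
                            "succeeded": succeeded})
        return succeeded

    def performance(self):
        """Self-assessment: fraction of attempted tasks that succeeded."""
        return self.successes / self.attempts if self.attempts else 0.0

    def describe_self(self):
        """A 'statement about itself' generated from its own records."""
        return (f"I have attempted {self.attempts} tasks, "
                f"succeeded at {self.successes}, "
                f"and my energy is now {self.energy}.")

agent = TinyAgent()
agent.act(30)   # easy task: succeeds
agent.act(90)   # too hard for its remaining energy: fails
print(agent.describe_self())
print(agent.performance())   # 0.5
```

Each of these methods is a few lines of unremarkable code, yet together they satisfy, in a minimal way, criteria like "monitors its own state" and "generates statements about itself," which is exactly why a sharp cutoff for self-reflection is hard to specify.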
Unless we adopt a parochial view, the traits of an organism that we consider to constitute its sentience and welfare come in degrees. Hence, really simple agents can be said to have really small but nonzero amounts of sentience.