Every few months, there's another story about the vexing issue of ethics in self-driving cars. Wired had a great one back in 2016.
The stories all pose the same basic questions: What sort of ethical choices will autonomous vehicles make, when forced to make them? More importantly, who decides? Is it up to the engineers who program the software? Does the public get a vote? Or will we let the cars themselves figure it out?
I'm a fan of that last option: Let the cars figure it out.
That probably strikes you, at first blush, as creepy and slightly dystopian. I shall attempt to convince you otherwise.
Let me start by airing a pet peeve of mine, left over from my days in philosophy. It might seem like a diversion, but it's all going to come back around to self-driving cars in the end, I promise.
Remember the trolley problem? It’s dumb.
Almost every story on self-driving cars and ethics frames itself around the trolley problem, a familiar thought experiment in ethics.
There are many, many variations, but the nut of it is something like this: There’s a trolley car hurtling toward a split in the tracks. It’s heading in the direction of five people tied to the tracks. You could switch it to the other track, where there’s only one person tied down. Do you sacrifice the one to save the five?
[Image: variants of the trolley problem]
How you answer this question is supposed to tell you something about whether you’re a utilitarian, a Kantian, or some other moral species. It’s also supposed to raise the kind of tricky problem that will face the makers and regulators of autonomous vehicles. Do you program a car to sacrifice one person to save five? What if that one person is the car’s own passenger? And so on.
A normal human being, confronted with the trolley problem, will ask questions about context. How much time is there to make a decision — enough time to free anyone from the tracks? Is there any other way to stop the trolley? Is there a rail employee around who might have a better sense of what to do? Are we certain of saving five people, or of killing one, if we make one choice or the other?
Philosophers have deliberately stripped out all those details, allegedly to focus on the core issue. These simplified thought experiments are meant to serve as what philosopher Daniel Dennett calls "intuition pumps," isolating your basic moral instincts. Details about context would only clutter the picture.
But I think this approach ends up being misleading. Context, and our sensitivity to it, is not some incidental feature of moral decision-making; it is essential to moral decision-making, at least as it occurs in the real world. Our instincts and intuitions evolved in rich contexts; that’s what they're meant to deal with. Stripped of that, they're like a fish’s fins out of water, just flopping around.
The problem with the trolley problem and similar thought experiments is that they focus so narrowly on a single, tightly circumscribed decision that they end up writing the character of the moral agent out of the picture. All the work that good moral decision-makers do in the real world is obviated by the constrained framing of the problem.
(No, really, this gets back around to self-driving cars eventually.)
Three characteristics of good moral decision-makers in the real world
Being a good person is a very different thing than thinking about moral philosophy. It involves not just believing certain precepts, or assenting to certain values, but possessing certain skills and cultivating certain character traits — skills and character traits that become invisible when morality is reduced to thought experiments about individual decisions.
1) Wisdom
Before we even contemplate which way to go on a moral decision, we have to figure out what the decision is.
In the trolley problem, by design, everything but a single decision has been removed, so it’s very clear what question has to be answered: one person or five?
But in the real world, it’s rarely so clear what the proper frame is. Just to take a random made-up example, imagine that a co-worker going through financial hard times asks you to co-sign for a risky loan. What’s the right thing to do? It depends on how you see the situation.
Should you help a friend in need? Seems like the right answer to that is yes. Should you cross inappropriate personal boundaries with a professional colleague? Seems like the prudent answer to that is no.
Which applies: the values and protocols of friendship or the values and protocols of professionalism? Before you make a first-order choice about whether to co-sign the loan, you have to (consciously or not) make a second-order choice about the right way to frame the decision. Very often, these second-order choices determine our answers to the first-order questions.
Think of all the proverbs, sayings, and rules of thumb floating around our culture and the way they contradict one another. "He who hesitates is lost," except "look before you leap." "A penny saved is a penny earned," except "penny wise, pound foolish." "Seek and ye shall find," except "curiosity killed the cat." Here’s a list of 37 of them.
Stripped bare in a philosophical thought experiment, none of these hold up. They don’t reduce to first principles; they are not universally applicable. But they hang around culture anyway, because they’re decent, rough-and-ready heuristics, tools in the toolbox, useful if you apply them in the right way at the right time.
Knowing which tool to use, and when, is a key moral skill, though we don’t have a great vocabulary with which to discuss it. "Wisdom" comes closest. Whatever it’s called, it very much involves a keen sense of context, an ability to discern the varied personal, social, and practical crosscurrents in any situation.
2) Perceptiveness
We understand a given problem or dilemma better the more we know about the empirical circumstances surrounding it.
Say you’re standing there, watching the trolley car approach, pondering whether to throw the switch and divert it (and kill someone). Then you notice, peeking out from underneath a nearby pile of junk, an old, discarded flagpole, and realize you could put it on the track to slow or stop the trolley car entirely before it kills anyone. Your perceptiveness has reframed the decision at hand; you’re now answering a different moral question, weighing different options.
In philosophy class, that kind of thing is ruled out. The trolley problem contains no such details to notice. The situation is transparent; we know exactly what the choices are and what the consequences of our decisions will be.
In real life, however, we rarely know those things with any certainty or finality. There’s always more to notice, more to learn and absorb. With new information, there’s always the possibility of new choices.
Perceptiveness is a problem for us, particularly in charged situations; we are prone to projecting our preconceptions and biases, placing crude but familiar frames over situations before we fully understand them. Someone who stays open and receptive will have a better sense of the realities involved in a given situation and will almost certainly make better moral decisions.
3) Self-possession
It is one thing to know the right thing to do in a situation and another thing entirely to have the will and discipline to do it, especially in the heat of the moment, with all the temptations toward pettiness, selfishness, indifference, inattention, and shortsightedness that face us fallen mortals.
These three characteristics — wisdom, perceptiveness, and self-possession — are irrelevant to the trolley problem, by design. We’re all stuck in the same decision matrix, no matter our skills and characters.
But in real life, good moral decision-makers are not those who swear allegiance to the proper set of first principles, whatever those may be. In practice, almost everyone carries around a lot of inconsistent intuitions and heuristics, different sets for different sorts of situations.
Good, moral people are adept at discerning what kind of situation faces them, what moral question is being answered, what their options are, and what the consequences of their actions will be. And they have the will to do what they see as right.
Yet thought experiments like the trolley problem simply bracket all those skills and characteristics, leaving behind a kind of desiccated fossil of a decision, which tells us little about how to be good in the world we live in.
Self-driving cars promise to be good moral decision-makers
What does all this have to do with self-driving cars? I’m glad you asked.
Concerns over the ethics of autonomous vehicles are often framed around first principles: Which moral algorithms should be programmed in, and who should decide on them? But that assumes first principles are what make for good moral agents.
I’ve argued that they mostly aren't; moral behavior, in the real world, has much less to do with first principles than with wisdom, perceptiveness, and self-possession.
Will cars have those?
Well, wisdom comes from experience, from seeing a wide variety of situations and how different kinds of decisions affect those situations. Each autonomous vehicle will gather experiences and, crucially, share them with all the others. (This is assuming that autonomous vehicles will be web-connected, which I think is a safe assumption, given that, e.g., Teslas already are.)
Just as Google became smart by discerning patterns in billions of searches, so will Google's autonomous vehicle software get smart by learning from experience. From Wired:
[M]any companies and researchers are moving towards autonomous vehicles that will make decisions using deep neural networks and other forms of machine learning. These cars will learn—to identify objects, recognize situations, and respond—by analyzing vast amounts of data, including what other cars have experienced in the past.
Autonomous vehicles will very quickly have seen more, and experienced more, than any human driver possibly could. And unlike most human drivers, they will deliberately comb through that experience for lessons and apply them consistently.
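To make that pooling of experience concrete, here's a minimal sketch in Python (using numpy and scikit-learn). Everything in it is invented for illustration — the feature names, the labels, the toy "fleet" of simulated vehicles — and it is not anyone's actual system. The point is only the structure: each car contributes its logged scenarios to a shared pool, and one model is trained on all of it, so every car benefits from situations most cars have never individually encountered.

```python
# Illustrative sketch only: a toy "fleet learning" setup, not any automaker's system.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

def vehicle_experience(n_scenarios: int):
    """Simulate one vehicle's logged scenarios.

    Hypothetical features: closing speed (m/s), distance to obstacle (m),
    visibility (0-1). Hypothetical label: 1 if hard braking was needed.
    """
    X = np.column_stack([
        rng.uniform(0, 30, n_scenarios),    # closing speed
        rng.uniform(1, 100, n_scenarios),   # distance to obstacle
        rng.uniform(0, 1, n_scenarios),     # visibility
    ])
    # Toy ground truth: fast, close, hard-to-see situations call for braking.
    risk = X[:, 0] / 30 + (1 - X[:, 1] / 100) + (1 - X[:, 2])
    y = (risk > 1.6).astype(int)
    return X, y

# Pool experience across the whole simulated fleet, not just one car.
fleet = [vehicle_experience(500) for _ in range(20)]
X = np.vstack([features for features, _ in fleet])
y = np.concatenate([labels for _, labels in fleet])

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
model.fit(X_train, y_train)
print(f"Held-out accuracy on pooled fleet data: {model.score(X_test, y_test):.2f}")
```

The only step that matters for the argument is the pooling: the training set grows with the fleet, which is why the software can "see" more than any individual driver ever will.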
How about perceptiveness? Autonomous vehicles will be covered in sensors and will be in constant communication with other vehicles. Unlike human drivers, they will have 360-degree awareness at all times. And they will never fail to heed anything, forget anything, or become distracted by anything. Sounds pretty perceptive.
Self-possession? Autonomous vehicles are robots; they have only the goals we give them. They will act on the imperative to protect human life without hesitation, without thought of self-preservation or financial cost, without any of the temptations and failures of will that haunt human beings.
In short, autonomous vehicles are likely to be far better moral decision-makers than human beings, not because they’re programmed with some true and correct set of moral principles, but because they’re highly sensitive, focused, and consistent.
Rather than dictating the ethical choices of self-driving cars with an overly prescriptive set of rules, we’re better off with fuzzy heuristics (e.g., "don’t get people hurt") and neural networks. The cars will eventually be capable of judgments based on a sensitivity to empirical detail and an assessment of probabilities that no software engineer could anticipate in advance or replicate even after the fact.
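As a back-of-the-envelope sketch of that contrast (every name and number below is hypothetical, including the estimate_harm_probability stub standing in for a learned model), the "fuzzy heuristic over learned estimates" approach looks less like a table of scripted trade-offs and more like this:

```python
# Illustrative only: a toy decision rule showing how a blunt heuristic
# ("don't get people hurt") can sit on top of a learned risk estimate,
# instead of a hand-written table of ethical rules.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    features: dict  # whatever the perception stack reports for this option

def estimate_harm_probability(action: Action) -> float:
    """Hypothetical stand-in for a learned model's output in [0, 1]."""
    return action.features.get("predicted_harm", 0.0)

def choose_action(candidate_actions: list[Action]) -> Action:
    # The "ethics" here is not a list of scripted trade-offs; it is one fuzzy
    # heuristic applied to the model's estimates: pick the option least likely
    # to hurt anyone.
    return min(candidate_actions, key=estimate_harm_probability)

if __name__ == "__main__":
    options = [
        Action("brake_hard", {"predicted_harm": 0.08}),
        Action("swerve_left", {"predicted_harm": 0.23}),
        Action("maintain_speed", {"predicted_harm": 0.61}),
    ]
    print("chosen:", choose_action(options).name)  # -> brake_hard
```

The heuristic stays simple and stable; all the situational sensitivity lives in the learned estimates it consumes.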
The way self-driving cars will be made more ethical, in practice, is the same way they’ll be made safer, more efficient, and more effective: by improving their sensors, communications, and learning.
A smart enough car, with a capacious enough body of experience to draw on, will eventually solve the trolley problem in the only way it can be solved: by avoiding those kinds of situations in the first place.