

Autonomous Cars and Their Ethical Conundrum

Stanford's Revs program is moving the ethics of self-driving technology beyond a mere academic discussion.


A version of this essay was originally published at Tech.pinions, a website dedicated to informed opinions, insight and perspective on the tech industry.


I am rooting for autonomous cars. AARP has said I am now a senior citizen; I have the card they sent me in the mail to prove it. While I admit I am getting up there in age, I am concerned that someday the DMV will yank my license for age or health reasons. When that time comes, I want a self-driving car in my driveway to make sure I still have the freedom to go anywhere I want, as I have since I was 16 years old. Unlike others who are hesitant about driverless cars, I would embrace them wholeheartedly, as I would much rather sit back and read, work on my laptop or peruse an iPad than deal with traffic and navigation. Even today, if all I had to do were tell my car where to take me, get in, and enjoy the ride, I would be one happy guy.

Now, I know we are years away from having autonomous cars on the road, with the right kind of government regulations in place to make them possible. But the technology is getting close enough to create these types of vehicles and, in theory, they could be ready for the streets within the next three to five years. I suspect I have at least another 15 to 20 years before the DMV pulls my license, so, as long as they are ready by then, I can live with that.

But as I have been thinking about the various obstacles and roadblocks that must be cleared before everyone can embrace autonomous vehicles, one particular issue tied to their success concerns me: ethics.

The recent Code/Mobile conference featured a great panel on self-driving cars. At the end of the session, I posed this question to the speakers on the panel:

Let’s say that I am in a self-driving car. It has full control, and the brakes go out. We are about to enter an intersection where a school bus has almost finished turning left, a kid on his bike is in the crosswalk just in front of the car, and an elderly woman is about to enter the crosswalk on the right. How does this car deal with this conundrum? Does it think, “If I swerve to the left, I take out a school bus with 30 kids on it. If I go straight, I take out a kid on a bike. If I swerve right, I hit the little old lady”? Is it thinking, “The bus has many lives on it, and the kid on the bike is young and has a long life ahead, but the elderly woman has lived a long life, so I will take her out,” as the least onerous solution?

I realize that this question is over the top, but one can imagine many kinds of ethical dilemmas a self-driving car will encounter. Understanding how the engineers, philosophers and ethicists design the final algorithms that sit at the heart of these autonomous vehicles will be critical to their success.
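To make the question concrete, here is a minimal sketch of how such a choice could be naively encoded as harm minimization over a handful of candidate maneuvers. To be clear, the scenario, the scores and every name in this Python snippet are my own invention for illustration, not anything an actual carmaker has built:

```python
# A hypothetical, deliberately oversimplified "least onerous outcome" chooser.
# The maneuvers and harm scores are invented for illustration; real
# autonomous-vehicle software does not reduce ethics to a lookup table.

# Candidate maneuvers mapped to an estimated-harm score (higher = worse).
CANDIDATES = {
    "swerve_left":  {"outcome": "hit school bus (30 children)", "harm": 30.0},
    "go_straight":  {"outcome": "hit child on bicycle",         "harm": 5.0},
    "swerve_right": {"outcome": "hit elderly pedestrian",       "harm": 3.0},
}

def least_onerous(candidates: dict) -> str:
    """Pick the maneuver with the lowest estimated harm."""
    return min(candidates, key=lambda m: candidates[m]["harm"])

choice = least_onerous(CANDIDATES)
print(choice, "->", CANDIDATES[choice]["outcome"])
```

The unsettling part is not the code, which is trivial, but the numbers: someone has to decide how each outcome scores, and that decision is an ethical judgment, not an engineering one.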

Doug Newcomb, my colleague over at PCMag, wrote a good piece on this ethical question. He said:

The day after getting a ride in Google’s self-driving car in Mountain View, California, I attended an event at Mercedes-Benz’s North American R&D facility in nearby Sunnyvale. Among several topics covered throughout the day, Stanford professor and head of the university’s Revs program Chris Gerdes gave a presentation that delved into the subject of ethics and autonomous cars.

Gerdes revealed that Revs has been collaborating with Stanford’s philosophy department on ethical issues involving autonomous vehicles, while the university has also started running a series of tests to determine what kind of decisions a robotic car may make in critical situations.

As part of his presentation, Gerdes made a case for why we need philosophers to help study these issues. He pointed out that ethical issues with self-driving cars are a moving target and “have no limits,” although it’s up to engineers to “bound the problem.”

To do this and move the ethics of self-driving technology beyond a mere academic discussion, Revs is running experiments with Stanford’s x1 test vehicle by placing obstacles in the road. He noted that setting different priorities within the vehicle’s software has led to “very different behaviors.”
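Gerdes’s point about priorities maps naturally onto the idea of a weighted cost function. The sketch below, again hypothetical Python rather than anything from Stanford’s x1 program, shows how the same obstacle scenario can yield very different chosen behaviors purely by reweighting the objectives:

```python
# Hypothetical illustration of Gerdes's observation: the same planner, given
# different priority weights, chooses very different behaviors. All names,
# paths and costs here are invented for illustration.

# Each candidate path has per-objective costs (lower is better).
PATHS = {
    "brake_in_lane":      {"occupant_risk": 8.0, "pedestrian_risk": 1.0, "traffic_law": 0.0},
    "swerve_to_shoulder": {"occupant_risk": 2.0, "pedestrian_risk": 6.0, "traffic_law": 3.0},
}

def plan(weights: dict) -> str:
    """Choose the path minimizing the weighted sum of objective costs."""
    def total(path):
        return sum(weights[k] * PATHS[path][k] for k in weights)
    return min(PATHS, key=total)

# Prioritize the occupant: the car swerves, shifting risk to pedestrians.
print(plan({"occupant_risk": 5.0, "pedestrian_risk": 1.0, "traffic_law": 1.0}))
# Prioritize pedestrians: the car stays in its lane and brakes hard.
print(plan({"occupant_risk": 1.0, "pedestrian_risk": 5.0, "traffic_law": 1.0}))
```

In both runs the planner, the paths and the costs are identical; only the weights change, and so does the behavior, which is exactly why Gerdes argues the weighting itself deserves philosophical scrutiny.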

I am encouraged by the work being done at Stanford’s Revs program, and I know similar work is under way at many universities and inside all of the autonomous carmakers. Solving this ethics problem needs to be at the top of their lists, right alongside the fundamental software tied to the cameras, CPUs and sensors that control the car’s functions. While I doubt they could ever program a car’s ethical response to every situation it might encounter, these folks will have to go the extra mile on this issue if the public is ever to truly embrace driverless cars and make them the future of personal transportation.


Tim Bajarin is the president of Creative Strategies Inc. He is recognized as one of the leading industry consultants, analysts and futurists covering the field of personal computers and consumer technology. Bajarin has been with Creative Strategies since 1981, and has served as a consultant to most of the leading hardware and software vendors in the industry including IBM, Apple, Xerox, Compaq, Dell, AT&T, Microsoft, Polaroid, Lotus, Epson, Toshiba and numerous others. Reach him @Bajarin.

This article originally appeared on Recode.net.
