Results: Random chance over informed decision
Given a choice between crashing into a motorcyclist wearing a helmet vs. a motorcyclist who isn’t wearing one, which one should an autonomous car be programmed to crash into? What about the choice between crashing into an SUV vs. a compact car?
These are some of the dilemma situations Professor Patrick Lin brought forth in his WIRED article, “The Robot Car of Tomorrow May Just Be Programmed to Hit You”.
Lin says that programming a car to collide with any particular kind of object over another seems like a targeting algorithm. Accidents involving human drivers typically involve people’s split-second, instinctive responses to the situation. Your first autonomous car of the near future, however, will probably come equipped with a crash-optimization algorithm – a deliberately designed algorithm that will determine the outcome of all potential crashes. As Lin points out in his article, the pre-meditated nature of such an algorithm raises a host of interesting issues that call for a broader discussion on the topic.
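To see why the pre-meditated nature of such an algorithm is unsettling, consider a minimal sketch of a harm-minimizing decision rule. Everything here – the `Obstacle` type, the made-up `expected_harm` scores, and `choose_target` itself – is our own hypothetical illustration, not code from any actual vehicle.

```python
from dataclasses import dataclass

@dataclass
class Obstacle:
    """One hypothetical collision option the car is choosing between."""
    kind: str                    # e.g. "motorcyclist", "SUV", "compact"
    wearing_helmet: bool = False

def expected_harm(obstacle: Obstacle) -> float:
    """Toy harm score; the numbers are made up purely for illustration."""
    base = {"motorcyclist": 0.9, "SUV": 0.3, "compact": 0.5}[obstacle.kind]
    if obstacle.kind == "motorcyclist" and obstacle.wearing_helmet:
        base -= 0.2  # assume a helmet lowers the expected harm of impact
    return base

def choose_target(obstacles: list[Obstacle]) -> Obstacle:
    """A deliberate crash-optimization rule: pick the collision that is
    expected to cause the least overall harm."""
    return min(obstacles, key=expected_harm)

# With two motorcyclists in range, the rule systematically "targets"
# the helmeted one, because the helmet lowers their harm score:
riders = [Obstacle("motorcyclist", wearing_helmet=True),
          Obstacle("motorcyclist", wearing_helmet=False)]
print(choose_target(riders))  # -> the helmeted rider
```

Written out this way, the “targeting” worry becomes concrete: the rule systematically selects the helmeted motorcyclist precisely because they took the safer option.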
So two weeks ago we, the members of the Open Roboethics initiative and the Robohub family, decided to help continue the discussion by conducting our reader poll on this very topic.
Scenario 1: Motorcyclists with / without a helmet
Given the choice between the two motorcyclists (one wearing a helmet, and one not), we asked our readers who an autonomous car should crash into. Choosing to hit the helmet-wearing motorcyclist would perhaps minimize the overall harm done. But from the perspective of the motorcyclist who took the extra few minutes to put on a helmet for safety reasons, such a programmed decision by an autonomous car doesn’t seem fair. Indeed, only 2% of our participants said that the car should crash into the biker wearing a helmet because s/he has better chances of survival. Lin points out that choosing to hit the biker wearing a helmet might discourage bikers from wearing helmets.
Crashing into the biker not wearing a helmet wasn’t a popular choice either (10%). In fact, our most popular response (45%) was that the car should hit the brakes and do nothing else, leaving it up to chance to determine which biker gets hit. And 3% of participants gave a similar response, saying the car should run a random number generator to make a random decision. This means that almost half (48%) of our participants advocated for random chance to decide the outcome of the crash.
Relatedly, 30% of our participants said that the car shouldn’t have the ability to detect whether a biker is wearing a helmet or not. Given that a decision made by random chance does not make intelligent use of sensed data, effectively 78% of our participants voted for autonomous cars not to use certain sensed data in making crash decisions.
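For contrast with the harm-minimizing rule above, the two “random chance” responses correspond to policies that deliberately throw the sensed information away. A minimal sketch, with `apply_brakes` as an invented stand-in for an actuator interface:

```python
import random

def apply_brakes(force: float) -> None:
    """Invented stand-in for a real actuator interface."""
    print(f"braking at {force:.0%} force")

def brake_only(obstacles: list) -> None:
    """The most popular response (45%): brake as hard as possible and
    do nothing else, so chance rather than code decides who is hit."""
    apply_brakes(force=1.0)
    return None  # deliberately selects no target

def coin_flip(obstacles: list):
    """The random-number-generator response (3%): choose uniformly at
    random, ignoring every sensed attribute such as helmet use."""
    return random.choice(obstacles)
```

Both policies receive the list of obstacles but never inspect any attribute of them – which is exactly the point.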
But not using available information in crash decisions doesn’t seem like a straightforward solution to crash-optimization. Lin says,
“Not using that information in crash-optimization calculations may not be enough. To be in the ethical clear, autonomous cars may need to not collect that information at all. Should they be in possession of the information, and using it could have minimized harm or saved a life, there could be legal liability in failing to use that information.”
Scenario 2: Hit the SUV or the compact car?
An odd sense of unfairness also exists if we consider another crash scenario. Let’s say the car must unavoidably crash into either an SUV or a compact car. The car could be programmed to hit the SUV over the compact car, since bigger vehicles, perhaps with better safety ratings, could better absorb the impact of the collision and minimize the overall harm from the crash.
The responses we got are similar to the first scenario: 37% opted for the car to simply engage the brakes and do nothing else, regardless of which car gets hit, and 25% said that the car shouldn’t have the ability to detect the make/model of the vehicles around it.
A slight difference from the first scenario is that 20% said the car should crash into the SUV (the minimum-overall-harm option), while only 3% said it should crash into the compact car. In the first scenario, the corresponding minimum-overall-harm option (hit the motorcyclist with a helmet) received only 2% support, compared to 20% here.
This might have something to do with the fact that choosing to drive a particular make or model of car is neither illegal nor an activity that typically increases the risks of driving. No one is really doing anything wrong by driving a compact car or an SUV, whereas the motorcyclist wearing a helmet holds a moral/legal high ground over the one who isn’t.
Regardless, if crash-optimization algorithms were biased toward crashing into SUVs over smaller cars, this would surely have an impact on the consumer market. Insurance rates for these supposedly safer cars might go up, because buying a safer car could also mean a higher probability of being crashed into by an autonomous car – yikes!
Is it OK for a car to always choose to crash into non-law-abiding citizens?
So how does the moral or legal high ground of individuals on the road affect people’s decisions about who should be hit? We asked our readers whether it’s OK for a car to always choose to crash into those not following traffic laws over those who are, assuming the car can detect this. In the first scenario, for example, riding a motorcycle without a helmet is illegal in many countries and states. According to our results, a person’s law-abiding status doesn’t seem to be a popular variable to use in crash optimization: the majority of respondents (70%) said it is not OK, whereas only 20% said it is OK for cars to preferentially crash into non-law-abiding citizens.
It is true that by answering ‘yes’ to this question, you’d have to be OK with the idea that the car might always choose the person less likely to survive a crash. Although we have traffic laws in place to regulate the rules of the road and maintain social order, using them to make life-and-death decisions in this manner may not be something people are comfortable with.
How should an autonomous car respond to unavoidable crashes?
So how should an autonomous car respond to unavoidable crashes? Is there a general rule that people support more? According to our results, the majority of our participants (52%) are in favour of minimizing overall harm to both pedestrians and passengers by spreading out the harm. The rest of the participants are quite divided, however: 20% support minimizing harm to pedestrians at the expense of passengers, while 13% support the opposite.
This reminds us of a recent reader poll discussion we had on a different crash dilemma, in which a large number of participants (64%) said an autonomous car should save the life of its passenger over that of a child on the road (36%). One of the main reasons for choosing to save the passenger was the notion that a car should always prioritize its passenger’s safety over that of others. Given that the previous poll showed such strong support for prioritizing passenger safety, it is surprising to see a contrasting result in this poll.
To find out what makes people prioritize passenger safety over pedestrians, or vice versa, we’ll have to do some more detailed investigations. What is also interesting is that, although people preferred autonomous cars to make random or uninformed decisions in the two specific unavoidable crash scenarios above, the majority opted for minimizing overall harm to both pedestrians and passengers – the option that is likely to be the least random and to require the most information about the passengers and pedestrians.
For now, it seems there are many more discussions to be had before autonomous cars start making crash decisions people will be happy with.
The results of the poll presented in this post have been analyzed and written by AJung Moon, Camilla Bassani, and Shalaleh Rismani at the Open Roboethics initiative.