Whom to sacrifice in unavoidable accidents? This is a question many philosophers, lawmakers, ethics professors and developers of autonomous vehicles have asked themselves. Many more will have to ask it, and we will likely never get a definitive answer. The dilemma in autonomous vehicles seems very similar to a famous thought experiment called the trolley problem, first stated more than 50 years ago: the driver of a runaway tram can only steer from one narrow track onto another; five men are working on one track and one man on the other; anyone on the track he enters is bound to be killed. However, some parts of this analogy do not hold up under closer examination. For example, there is the question whether innocent bystanders are more important than passengers who willingly entered the vehicle. In a situation where a group of pedestrians could be saved by swerving and driving into a wall, killing the passengers, should the car do so?
This report aims to examine this and similar questions from different viewpoints. The first chapter provides some background. In the following subsections, the different aspects of the moral dilemma are examined. Under the header ‘Trolley Problem and Moral Dilemma’, comparisons with the aforementioned trolley problem are discussed, the breakdown of this analogy is examined, and some additional moral questions are posed. The following subsection, simply titled ‘Laws and Ethics’, looks at the current legal situation in Switzerland and Germany; afterwards, the findings of the German Ethics Code are summarised and discussed. The last subsection, titled ‘Hypocritical Humans and Adoption of New Technologies’, discusses the problem of slow adoption of self-driving cars. Finally, under the header ‘Conclusions’, the conclusions to be drawn from all this information are brought together, along with some recommendations for further action.
Autonomous vehicles are on the rise. Some of them have driven thousands of miles with very low accident rates. In the future, self-driving cars might eliminate as much as 90% of accidents. But not all accidents can be avoided, especially while there are still manually driven cars on the road. Additionally, the most vulnerable road users – namely pedestrians and cyclists – will exist even in an era after manual cars. Therefore, decisions need to be made about what to do in those moments where harm cannot be avoided. As depicted in Figure 1, there is a multitude of possible, although rare, situations where the vehicle must choose what to do. Exemplary situations include whether to kill a group of jaywalkers or an innocent passer-by, as in situation A, or whether to kill a single person or the passengers of the car, as in situation B.
Nyholm & Smids illustrate, that an autonomous vehicle cannot make decisions like these on its own. Even if driven by artificial intelligence, there are clear guidelines that need to be programmed, long before the first real life situation can ever happen (2016). These guidelines cannot be developed by a single person. They need to be based on an ethics code and obey the law. While these laws do not yet exist globally, several regulations have been passed in different countries, mainly focussing on road safety.A third aspect to be examined is the question whether people would even buy autonomous vehicles, that might sacrifice passengers to save pedestrians as depicted in Figure 1 Situation C. Discouraging potential buyers might lead to slower adaptation rates of this new technology, which again leads to more deaths.
Trolley Problem and Moral Dilemma
The trolley problem is often stated as depicted in Figure 2, where a bystander sees an unmanned trolley racing towards five people who are stuck on the track. The observer has access to a switch that redirects the trolley to a side track, where only one person is stuck. The most common response to this situation is to activate the switch, effectively killing one person to save five. The situation gets much more complicated when talking about autonomous vehicles, as there are many more factors to consider. One thing that clearly differentiates the trolley problem from its real-world counterpart is the certainty of death.
While many pedestrians might not survive being run over by a car, death is no certainty. Furthermore, the chances of survival depend heavily on the health condition of the pedestrian, their age and many more factors, at which the vehicle's AI can only guess. The pedestrian might even be able to jump out of the way, reducing the damage to zero. On the other end of the argument are the passengers. Their chances of survival in a car-pedestrian crash are much higher, but not guaranteed. However, when driving head-on into a wall, their chances of survival drop significantly, while still staying non-zero. This argument alone makes clear that the real-world application is not a black-and-white question, but rather a mix of greys.
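A minimal sketch can make this ‘mix of greys’ concrete: instead of comparing certain deaths, a decision procedure would compare expected harm under uncertain survival probabilities. All numbers, names and the simple fatality model below are hypothetical assumptions for illustration, not part of any real vehicle's software.

```python
# Illustrative sketch only: compares maneuvers by expected fatalities,
# using made-up survival probabilities. All values are hypothetical.

def expected_fatalities(people_at_risk):
    """Sum of (1 - survival probability) over everyone a maneuver endangers."""
    return sum(1.0 - p_survival for p_survival in people_at_risk)

# Maneuver 1: continue straight, endangering one pedestrian who may
# still jump out of the way (uncertain survival chance).
straight = [0.4]        # pedestrian's assumed survival probability

# Maneuver 2: swerve into a wall, endangering two passengers whose
# survival chances are reduced but non-zero.
swerve = [0.7, 0.7]     # passengers' assumed survival probabilities

for name, risk in [("straight", straight), ("swerve", swerve)]:
    print(f"{name}: expected fatalities = {expected_fatalities(risk):.2f}")
```

Even in this toy model, neither option yields a certain death count; the comparison is between probabilistic outcomes, which is exactly where the clean trolley-problem intuition stops applying.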
The problem becomes even more complicated when you need to choose between different victims. Consider a situation where a child suddenly jumps in front of the self-driving car. The only alternative to running over the careless child is to swerve and hit an innocent elderly person on the sidewalk. Whose life is worth more? Similar situations can be imagined with criminals, homeless persons, people with terminal diseases or pregnant women. Assuming a child's life is more important than a criminal's, does this hold up when comparing one child with five criminals, or 100? These and much more general questions must be answered when programming autonomous vehicles. However, this lies outside the scope of this report and is best left to philosophers, ethics experts and especially to lawmakers.
As of 2020, there are no laws in Switzerland specifically tackling the topic of autonomous driving. In Germany, there are laws concerning assisted driving, but completely autonomous driving is still disallowed. If such laws are ever created, lawmakers will hardly be able to completely ignore the German Code of Ethics, which, while not legally binding, gives some important guidelines concerning the topics of this report. It clearly states that: Automated and connected technology should prevent accidents wherever practically possible; the protection of human life enjoys top priority.
Luetge, who was part of the committee that wrote this ethics code, declares these statements to be very simple. In his view, they are nevertheless often overlooked when talking about ethics in self-driving cars. Much of the literature on these subjects only talks about unavoidable accidents, while forgetting how much better autonomous vehicles are at preventing accidents in the first place. He also states that, even though it is not always the perfect solution, damage to property and animals is always preferable to harm to people.
However, the Code of Ethics also gives guidelines for dilemmatic decisions, which are discussed in guidelines 8 and 9. It states that: Genuine dilemmatic decisions, such as a decision between one human life and another, depend on the actual specific situation, incorporating “unpredictable” behavior by the parties affected. They can thus not be clearly standardized, nor can they be programmed such that they are ethically unquestionable. In other words, even an ethics commission founded specifically for the purpose of answering questions like these could not give a definitive answer. The Code therefore suggests having an independent agency solely for the purpose of dealing with these matters.
However, it also states that: any distinction based on personal features (age, gender, physical or mental constitution) is strictly prohibited. Those parties involved in the generation of mobility risks must not sacrifice non-involved parties. Luetge explains that, while this guideline prohibits the autonomous vehicle from selecting targets based on personal characteristics, it still allows injuries to be minimised by selecting the victims most likely to survive. How to implement this is left open to further investigation. The statement not to sacrifice non-involved parties was added to prevent computer code that would always save the passengers.
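One conceivable way to respect this guideline at the code level is to strip prohibited personal features before any decision logic runs, while keeping physically relevant inputs. The feature names and structure below are assumptions made purely for illustration; the Ethics Code itself prescribes no implementation.

```python
# Hypothetical sketch: enforce the "no personal features" rule by
# removing prohibited attributes before maneuvers are ranked.

PROHIBITED_FEATURES = {"age", "gender", "physical_constitution",
                       "mental_constitution"}

def sanitize(observation: dict) -> dict:
    """Strip personal characteristics so no maneuver can be ranked
    on *who* the endangered people are."""
    return {key: value for key, value in observation.items()
            if key not in PROHIBITED_FEATURES}

# Survival-relevant physics (impact speed, protective structures) may
# still feed an injury-minimisation objective.
obs = {"impact_speed_kmh": 42, "age": 67, "airbag_available": True}
print(sanitize(obs))  # {'impact_speed_kmh': 42, 'airbag_available': True}
```

The open question Luetge points to remains: survival probability correlates with features such as age, so cleanly separating “who someone is” from “how likely they are to survive” may be harder than this sketch suggests.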
The decision to sacrifice passengers when this saves lives might, however, not always be the best solution. The reason for this lies in the hypocritical tendencies of human nature. Studies conducted on the topic show contradictory results. Most participants agree that the sacrifice of the passenger(s) in order to save pedestrian(s) is morally right. The agreement rate rises with the number of lives saved, but drops when there are co-passengers to consider, especially if the co-passenger is a family member. However, the number of people who think this kind of sacrifice should be legally enforced is much lower. Lower still is the willingness to purchase or ride in a car in which such a sacrificing algorithm is implemented.
This opinion poses a problem for lawmakers and manufacturers of autonomous vehicles. The adoption of autonomous driving technologies would save many lives. If, however, nobody buys these new cars, accidents continue to happen. The dilemma is whether it is worth programming self-driving cars in a way that is generally viewed as morally inferior in order to save even more lives. This corresponds to one of the key findings of Bonnefon, Shariff and Rahwan: such regulation could substantially delay the adoption of AVs [autonomous vehicles], which means that the lives saved by making AVs utilitarian [= always saving as many lives as possible] may be outnumbered by the deaths caused by delaying the adoption of AVs altogether.
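A back-of-the-envelope sketch can make this trade-off concrete. All numbers below are purely illustrative assumptions, not estimates from the study:

```python
# Toy model of the adoption trade-off with entirely made-up numbers.

ANNUAL_ROAD_DEATHS = 1_000_000   # assumed baseline deaths per year worldwide
AV_REDUCTION = 0.90              # assumed share of accidents AVs prevent

def total_deaths(years: int, adoption_delay: int) -> float:
    """Total deaths over a horizon if full AV adoption only happens
    after a delay of `adoption_delay` years."""
    before = min(years, adoption_delay) * ANNUAL_ROAD_DEATHS
    after = max(0, years - adoption_delay) * ANNUAL_ROAD_DEATHS * (1 - AV_REDUCTION)
    return before + after

# A utilitarian mandate delays full adoption by five extra years here:
print(total_deaths(years=10, adoption_delay=1))  # fast adoption:    1,900,000
print(total_deaths(years=10, adoption_delay=6))  # delayed adoption: 6,400,000
```

Under these assumed figures, the deaths caused by five extra years of delay dwarf any plausible difference between a utilitarian and a passenger-protecting algorithm in rare dilemma situations, which is precisely the point Bonnefon, Shariff and Rahwan make.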