Proceeding Paper

The Misconception of Ethical Dilemmas in Self-Driving Cars †

by Tobias Holstein 1,2

1 School of Innovation, Design and Engineering, Mälardalen University, 72123 Västerås, Sweden
2 Department of Computer Science, University of Applied Sciences, 64295 Darmstadt, Germany
† Presented at the IS4SI 2017 Summit DIGITALISATION FOR A SUSTAINABLE SOCIETY, Gothenburg, Sweden, 12–16 June 2017.
Proceedings 2017, 1(3), 174; https://doi.org/10.3390/IS4SI-2017-04026
Published: 9 June 2017

Abstract

Self-driving cars are a transdisciplinary topic, widely discussed both in public and in science. However, ethical dilemmas such as the trolley problem seem to dominate those discussions and consequently obfuscate much bigger ethical challenges in the development and operation of self-driving cars. We propose a systematic approach: creating a conceptual ethical model that connects components, systems and stakeholders in order to pinpoint ethical challenges for self-driving cars. This will help to move away from stagnating discussions of abstract thought experiments and toward addressing and solving actual ethical challenges.

1. Introduction

Self-driving cars, also called fully autonomous or driverless cars, are in focus in many domains, such as engineering, computer science, human-computer interaction and ethics. From an engineering and scientific perspective, technical problems are challenging, but they are solved one step at a time. When it comes to ethics, however, many discussions run into a dead end. A constructed ethical dilemma has, by definition, no solution: whatever you do, the result will be bad.
The trolley problem, an ethical thought experiment [1], is a commonly used example of an unsolvable ethical dilemma: a self-driving car travels along a street at high speed. A group of people suddenly appears in front of the car, which is moving too fast to stop before reaching the group. If the car does not react, the whole group will be killed. The car could, however, evade the group by swerving onto the pedestrian way, killing a previously uninvolved pedestrian (Option A). Replacing the pedestrian with a concrete wall, which would instead kill the passenger of the self-driving car, gives another option (Option B). Varying the personas of the people in the group, the single pedestrian or the passenger alters the experiment. Personas allow an emotional perspective to be included [2], e.g., stating that the single pedestrian is a child, a relative, very old, very sick, or a brutal dictator who has killed thousands of people.
Even though the scenarios are similar, the responses of humans, when asked how they would decide, differ [3]. The problem is that the question has a limited number of possible answers, all of which are ethically questionable and perceived as bad or wrong. A typical approach is therefore to analyze the scenarios through ethical theories, such as utilitarianism, other forms of consequentialism, or deontological ethics. Utilitarianism, for example, would aim to minimize casualties, even if that means killing the passenger, by following the principle that the moral action is the one that maximizes utility (or, in this case, minimizes the damage). Depending on the doctrine, different arguments can be used to prove or disprove the decision.
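The utilitarian rule mentioned above can be made concrete with a toy illustration (ours, not from the paper): pick the option with the fewest casualties, regardless of who is harmed. Note that Options A and B tie under this rule, which hints at why a pure casualty count does not resolve the dilemma. All names and numbers here are hypothetical.

```python
# Toy utilitarian calculus over the trolley-style options described above.
# The casualty counts mirror the scenario: five in the group, one pedestrian
# (Option A), one passenger (Option B).
options = {
    "do nothing":           {"casualties": 5},  # the whole group is killed
    "swerve onto sidewalk": {"casualties": 1},  # Option A: the pedestrian
    "swerve into wall":     {"casualties": 1},  # Option B: the passenger
}

def utilitarian_choice(options):
    """Return the option with the fewest casualties (ties broken arbitrarily)."""
    return min(options, key=lambda name: options[name]["casualties"])

best = utilitarian_choice(options)
```

Note that `min` silently breaks the tie between the two swerve options by iteration order; the calculus itself offers no ethical grounds to prefer one over the other.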
Applying ethical doctrines to analyze a given dilemma and its possible answers can only be done by humans. How would self-driving cars solve such dilemmas? A number of publications suggest implementing moral principles in the algorithms of self-driving cars [3,4,5,6]. We find that this does not solve the problem; it merely ensures that the solution is calculated from a given set of rules or other mechanisms, moving the problem to engineering, where it is implemented.

2. Ethical vs. Engineering Problems and Decision Making

It is worth noting that the engineering problem is substantially different from the hypothetical ethical dilemma. While an ethical dilemma is an idealized, constructed state that has no good solution, an engineering problem is by construction one in which better and worse solutions can be distinguished. The decision-making process that has to be implemented in a self-driving car can be summarized as follows. It starts with awareness of the environment: detecting obstacles, such as groups of humans, animals or buildings, and establishing the current context/situation of the car using external systems (GPS, maps, street signs, etc.) or locally available information (speed, direction, etc.). Various sensors have to be used to collect all required information. Gaining detailed information about obstacles is a necessary step before a decision that maximizes utility/minimizes damage can be made. A computer program then calculates possible solutions and chooses the one with the optimal outcome. The self-driving car executes the calculated action, and the process repeats itself.
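The sense-decide-act cycle summarized above can be sketched in a few lines. This is a hypothetical illustration, not any real vehicle stack: perception is stubbed with toy data, and the outcome model is deliberately crude, which is exactly where the engineering (and ethical) difficulty lies.

```python
# Minimal sketch of the decision-making process described above:
# sense the environment, enumerate candidate actions, estimate each
# action's outcome, and execute the least damaging one.

def sense(world):
    """Stub for sensor fusion: return the car's view of the situation."""
    return {"speed": world["speed"], "obstacles": world["obstacles"]}

def candidate_actions(perception):
    """Enumerate feasible maneuvers for this situation."""
    return ["brake", "continue", "swerve"]

def estimated_damage(action, perception):
    """Toy outcome model; lower is better. A real system would need
    detailed information about each obstacle before scoring."""
    if action == "brake":
        return 0.1 * perception["speed"]           # braking risk grows with speed
    if action == "swerve":
        return 5.0 if perception["obstacles"] else 1.0
    return 10.0 * len(perception["obstacles"])     # continuing into obstacles

def decide(perception):
    """Choose the action with the minimal estimated damage."""
    return min(candidate_actions(perception),
               key=lambda a: estimated_damage(a, perception))

world = {"speed": 20.0, "obstacles": ["group of pedestrians"]}
action = decide(sense(world))   # in a real car, this loop repeats continuously
```

Unlike the ethical dilemma, this formulation always yields an answer; the open questions are how the damage estimates are obtained and who decides what counts as damage.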

3. Identifying Ethical Challenges

This process can be used to identify concrete ethical challenges within the decision making by considering the current state of the art of technology and its development. In a concrete car, both the parts of this complex system and the way in which it is created have a critical impact on the decision making. This includes, for instance, the quality of sensors, code and testing. We also see ethical challenges in design decisions, such as whether a certain technology is chosen because of its lower price, even though the quality of information available for decision making would be substantially higher if more expensive technology (such as better sensors) were used.
Since building and engineering self-driving vehicles involves various stakeholders, such as software/hardware engineers, sales people, management, etc., we can also pose the following questions: does the self-driving car have a morality of its own, or is it the morality of its creators? And who is to blame for the decisions of a self-driving car?

4. Approach

Prototypes of self-driving cars are already participating in public traffic among human-driven cars [7]. It is therefore important to investigate how self-driving cars are actually built, how ethical challenges are addressed in their design, production and use, and how certain decisions are justified. Discussing this before self-driving cars are officially introduced to the market allows us to take part in setting and defining ethical ground rules. McBride states that “Issues concerning safety, ethical decision making and the setting of boundaries cannot be addressed without transparency” [8]. We see transparency as only one factor: it is a necessary starting point for further investigation and discussion.
In order to give a more detailed perspective on the complex decision-making process, we propose creating a conceptual ethical model that connects the different components, systems and stakeholders. It will show interdependencies and allow ethical challenges to be pinpointed.
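One possible representation of such a conceptual ethical model (our illustration, not the authors' formal model) is a graph whose nodes are components, systems and stakeholders, and whose labeled edges record the interdependencies where ethical challenges can be pinpointed. All node names and edge labels below are invented examples.

```python
# Hypothetical sketch: the conceptual ethical model as a labeled graph.
model = {
    "nodes": {
        "lidar sensor":      "component",
        "decision software": "system",
        "software engineer": "stakeholder",
        "management":        "stakeholder",
    },
    "edges": [
        # (from, to, interdependency that may carry an ethical challenge)
        ("software engineer", "decision software", "implements decision rules"),
        ("management", "lidar sensor", "chose cheaper sensor model"),
        ("lidar sensor", "decision software", "limits quality of input data"),
    ],
}

def challenges_involving(model, node):
    """List the interdependencies that touch a given node."""
    return [(a, b, why) for a, b, why in model["edges"] if node in (a, b)]
```

Querying the graph for a single node (e.g., the sensor) surfaces both the design decision behind it and its downstream effect on the decision software, which is the kind of pinpointing the model is meant to support.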
Focusing on the important ethical challenges that should currently be addressed and solved is a necessary step before the ethical aspects of self-driving cars can be meaningfully discussed from the point of view of societal and individual stakeholders as well as designers and producers. It is important to focus not on abstract thought experiments but on the concrete conditions that influence the behavior of self-driving cars, their safety, and our expectations of them.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Foot, P. The Problem of Abortion and the Doctrine of Double Effect. Oxf. Rev. 1967, 5, 5–15.
  2. Bleske-Rechek, A.; Nelson, L.A.; Baker, J.P.; Remiker, M.W.; Brandt, S.J. Evolution and the trolley problem: People save five over one unless the one is young, genetically related, or a romantic partner. J. Soc. Evol. Cult. Psychol. 2010, 4, 115–127.
  3. Bonnefon, J.-F.; Shariff, A.; Rahwan, I. The social dilemma of autonomous vehicles. Science 2016, 352, 1573–1576.
  4. Goodall, N.J. Can you program ethics into a self-driving car? IEEE Spectr. 2016, 53, 28–58.
  5. Dennis, L.; Fisher, M.; Slavkovik, M.; Webster, M. Formal verification of ethical choices in autonomous systems. Rob. Auton. Syst. 2016, 77, 1–14.
  6. Dennis, L.; Fisher, M.; Slavkovik, M.; Webster, M. Ethical choice in unforeseen circumstances. In Lecture Notes in Computer Science; Springer: Berlin/Heidelberg, Germany, 2014; Volume 8069, pp. 433–445.
  7. Persson, M.; Elfström, S. Volvo Car Group’s First Self-Driving Autopilot Cars Test on Public Roads around Gothenburg. Volvo Car Group Press Release. 2014. Available online: https://www.media.volvocars.com/global/en-gb/media/pressreleases/145619/volvo-car-groups-first-self-driving-autopilot-cars-test-on-public-roads-around-gothenburg (accessed on 6 October 2016).
  8. McBride, N. The Ethics of Driverless Cars. SIGCAS Comput. Soc. 2016, 45, 179–184.