Article

How Will Autonomous Vehicles Decide in Case of an Accident? An Interval Type-2 Fuzzy Best–Worst Method for Weighting the Criteria from Moral Values Point of View

1 Faculty of Transportation and Logistics, Istanbul University, Istanbul 34000, Türkiye
2 School of Business, Istanbul University, Istanbul 34000, Türkiye
* Author to whom correspondence should be addressed.
Sustainability 2023, 15(11), 8916; https://doi.org/10.3390/su15118916
Submission received: 1 May 2023 / Revised: 22 May 2023 / Accepted: 23 May 2023 / Published: 1 June 2023

Abstract

The number of studies on Autonomous Vehicle (AV) ethics discussing decision-making algorithms has increased rapidly, especially since 2017. Many of these studies handle AV ethics through the eye of the trolley problem with regard to various moral values, regulations, and matters of law. However, to our knowledge, the literature in this field lacks an approach for weighting and prioritizing the parameters that must be considered when making a moral decision, which would provide insights into AVs’ decision-making algorithms and related legislation. This paper bridges that gap and prioritizes the main criteria indicated by the literature by employing the best–worst method with interval type-2 fuzzy sets, based on the evaluations of five experts from the different disciplines of philosophy, philosophy of law, and transportation. The criteria included in the weighting were selected according to expert opinions and a qualitative analysis carried out by coding past studies. The weighting process includes a comparison of four different approaches to the best–worst method. The paper’s findings reveal that social status is the most important criterion, while gender is the least important one. This paper is expected to provide valuable practical insights for Autonomous Vehicle (AV) software developers in addition to its theoretical contribution.

1. Introduction

Throughout history, technological advances have had a major impact on modern society. Groundbreaking innovations in areas that directly affect the personal life of everyone, such as transportation, have completely changed the cities we live in and how we live. Automobiles, since their invention in the early 20th century, have become the most popular mode of transport, thus dramatically changing our lives. The results of the invention of automobiles included increased mobility for individuals, new jobs arising to supply the demand, and legislative requirements (e.g., safety standards, highway rules, and driver’s licenses). The next big mobility-related innovation, which is assumed to have a similar impact, is AV technology. AVs are expected to ease everyday life with their promising features and benefits (e.g., self-driving, improved mobility for non-drivers, car/ride sharing, improved safety, reduced congestion, etc.).
As the number of people who benefit from the changes they bring increases, innovations have to go through a shorter period of time before they are publicly accepted (e.g., mobile/smartphones). End users of these enhancements change their habits for individual reasons, but the massive transitions bring out critical environmental, economic, and ethical externalities. As evidenced by all previous similar examples, the upcoming impacts of such a change in our transportation habits are undeniable. Therefore, it has become necessary to investigate these impacts, foresee the outcomes, and study the required changes to minimize the disadvantages of the inevitable transition.
In the 21st century, where technology changes rapidly, governments’ and manufacturers’ regulatory studies should be more dynamic and responsive so that end users can get through the transition period with the least damage [1]. The high penetration rate of easily accepted technological developments requires altered regulations in both directly and indirectly affected areas. The continuous decisions that an AV must make to allocate the risk of a crash are ethical decisions [2]. Therefore, in the case of AV technology, machine ethics is one of the areas that need comprehensive research before fully automated vehicles become publicly available. It is still unclear how an AV should decide in a trolley dilemma-like situation or in a more realistic situation where the choice is not binary and the decision must be made depending on various parameters.
The number of studies in this field continues to increase rapidly, and there are countless prestigious studies regarding AVs’ ethics, decision-making algorithms, and how governments should regulate related legislation. Awad et al. [3] and Noothigattu et al. [4] studied societal expectations and preferences when faced with an ethical dilemma analyzing the data collected from the Moral Machine Experiment developed at the Massachusetts Institute of Technology. Bonnefon et al. [5] and Faulhaber et al. [6] studied binary situations to discuss the algorithms and ethical settings that will help AVs make moral decisions about choosing between two evils, such as killing several pedestrians on the road or one passerby. As much as the findings of these and other similar studies are beneficial in this area, ethical algorithms that can decide in such binary situations would not be serviceable in most real-life situations, and Dubljević [7] exposed this with a scenario where multiple terrorists use a vehicle for nefarious purposes. If the utilitarian AV is programmed to minimize the number of casualties, then terrorists could use the AV maliciously by outnumbering the victims. To solve this problem, the criminal history of all possible victims should be considered while making decisions. The terrorist example of Dubljević [7] demonstrates only one of the countless settings which require unprecedented thinking.
Artificial Intelligence (AI)—which will be the primary decision maker of AVs—has been broadly used to assist human decision makers and may eventually replace them completely [8]. The mathematics behind the decision-making algorithm should focus on how to prioritize possible victims of an unavoidable accident. To our knowledge, no study in the literature has aimed to determine these parameters and weight them, yet weighting all of the parameters is crucial.
The understanding of morality commonly depends on ethics. The word “ethics” comes from the Greek word “ethos”, which means the character, spirit, and attitude of a group of people or culture. There have been many different descriptions of what ethics is [9]. Burks [10] argued that ethics emerge when a person decides between various alternatives regarding moral principles. In a previous study, Hayes et al. [11] defined ethics as a set of moral principles, codes of conduct, or values. Many ethical theories have been developed from different viewpoints (Relativism Theory, Divine Command Theory, Consequentialist Theories, Egoism Theory, Utilitarian Theory, Deontology Theory, and Virtue Ethics Theory).
The relationship between moral values and technology has been discussed in many ways, such as ethical issues arising from the adoption of technology [12], technology’s facilitative effect on conveying religious messages to others [13], and the survival and growth of Christianity [14].
This paper addresses the issue of sacrificing one person to save those who are more beneficial to society in the case of an accident involving autonomous vehicles. A recent paper argued that the perception of utilitarian ethics might differ across cultures, and some people in some cultures do not give much importance to utilitarian ethics [15]. However, in the case of AVs, a decision must be made, even if religion may not approve of that decision. In this context, many religious philosophers and theology experts were interviewed and asked to weigh in on which people with different demographic characteristics should be sacrificed. However, these experts stated that such a prioritization would not be possible from a religious standpoint. Subsequent detailed research revealed that the relevant questions can be answered from a utilitarian ethical perspective.
The goal of this study is to determine the parameters that need to be considered while making a moral decision and to weight them so as to provide insights into AVs’ decision-making algorithms and the related legislation. In this context, the best–worst method (BWM), which is a relatively new multi-criteria decision-making (MCDM) method and suitable for the structure of this problem, was selected. The method was developed by Rezaei [16] and has been applied to many different engineering, social, and natural science problems. It works on the principle of pairwise comparison and is fed by the pairwise evaluations of experts. It is superior to other pairwise comparison methods, such as the Analytic Hierarchy Process (AHP), in terms of both reducing the number of pairwise comparisons and providing more consistent matrices. Since interval type-2 fuzzy sets better reflect the structure of an uncertain environment, can be used when precise data are scarce, and are an important extension of fuzzy set theory, interval type-2 fuzzy BWM (IT2F-BWM) has been used for this problem. To our knowledge, this is the first use of the method for such an extreme problem as how the moral behavior of autonomous vehicles should be formed.
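Regarding the reduction in pairwise comparisons mentioned above, a quick count for the seven criteria used later in this study (following the standard comparison-count formulas for AHP and BWM; the figure of seven criteria is taken from Section 4) gives:
$$\text{AHP: } \frac{n(n-1)}{2} = \frac{7 \times 6}{2} = 21 \text{ comparisons}, \qquad \text{BWM: } 2n - 3 = 2 \times 7 - 3 = 11 \text{ comparisons}.$$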
The remainder of this paper is organized as follows: the knowledge gap that this study aims to fill is discussed in the next section under the literature review; the methodology is explained in Section 3; the application of the method to the subject is covered in Section 4; and the final section presents the conclusions and discussion.

2. Literature Review

This section includes the determination of the knowledge gap and the review of related past studies.

2.1. Knowledge Gap

AVs are expected to shape future transportation in many ways. With the appearance of AVs on the roads, certain decisions that may ethically compel people will have to be made in the face of traffic problems such as accidents. Such decisions will be taken in line with the directions of the software design of AVs. Therefore, the ethical rules and principles that are entered into the software should be determined meticulously from the very beginning, ensuring consensus among society and enabling the best decision to be made. To find past studies and a gap in the literature, an advanced search query (TITLE-ABS-KEY (“religion” OR “ethics” OR “legislation”) AND TITLE-ABS-KEY (“autonomous vehicles”)) was conducted in the Scopus database, and 296 publications were identified. Then, a document containing these publications was uploaded to the VOSviewer tool, and an author keyword analysis, which is used for identifying the most frequently used author keywords and knowledge gaps in the literature, was performed. The author keyword analysis results are presented in Figure 1. The size of the circles in Figure 1 reflects the frequency of the keywords (bigger circles indicate more frequently used keywords). After that, we researched all the keywords in detail and recognized that ethical dilemmas about whether to sacrifice one person to save more, and the role of demographic features in such decisions, have been discussed for a long time in the context of the trolley problem, which is shown in the cluster colored red.
Then, to identify the research gap, the “trolley problem” and “trolley dilemma” keywords were added to the first search query, and a new search was conducted. The second query returned 29 publications, six of which were eliminated as irrelevant. We then analyzed the remaining 23 publications in detail and found that none of the past studies had focused on weighting the parameters needed to create a formulation for prioritizing the criteria of AV ethics, which would provide insights about AVs’ decision-making algorithms and the related legislation.

2.2. Past Studies on AV Ethics

Since no studies have focused on weighting moral decision parameters, the literature reviewed here covers a wide range of related topics. AV ethics have been intensively studied through the eye of the trolley problem since 2017. The first group of studies on this subject discusses the trolley problem with respect to its nature [17], how it should be handled [18], and awareness about it [19]. The second group of studies reveals the limitations of the trolley problem with respect to its voting-based structure [20,21], its failure to embrace infinite responsibility to others [22], the self-sacrifice option, and cultural differences [23]. The third group consists of publications focusing on matters of law, encompassing the liability [24] and responsibility [25] of AV developers and the compliance of AV actions with the law [26,27].
The remaining part of the literature includes publications on various topics: changeable moral judgments based on accident severity and probability [28], survival probability estimation [29], risk assessment [30], a general social welfare function [31], a changeable degree of pragmatism in ethical decision making [32], the capabilities AV technologies need during a collision [33], religion-based AV ethics [34], a comparison of Kantian rationale and utilitarian ethics [35], and a wider range of criteria than those handled by the trolley problem [36,37]. Table 1 shows detailed explanations of the publications in the literature.

2.3. Past Studies on IT2F-BWM

F-BWM was applied by Tian et al. [40] to measure the risk factors of FMEA. For healthcare, Mou et al. [41] developed the intuitionistic fuzzy multiplicative BWM. Omrani et al. [42] used fuzzy BWM to find the optimum power plant alternative. Wu et al. [38] used centroids to develop an integrated model of IT2FSs and BWM and worked on green supplier selection problems. Mi et al. [43] presented a comprehensive survey and detailed review of BWM, which interested scholars can study as a source of inspiration for future BWM research.

3. Applied Methodology

The methodology applied throughout the study consists of three stages. In the first stage, a qualitative frequency analysis of the studies in the literature was carried out to determine the criteria that AVs will have to consider when deciding. The output of this stage is the final set of criteria whose importance weights are calculated. In the second stage, the importance weights of the criteria determined in the first stage were calculated with interval type-2 fuzzy BWM (IT2F-BWM). In the last stage, the results obtained with IT2F-BWM were compared with the results obtained from solving the problem with different BWM versions (traditional BWM, fuzzy BWM, and Bayesian BWM). Figure 2 shows the flow chart of the applied methodology.

3.1. Frequency Analysis of the Criteria via MAXQDA

The first step of the applied methodology is a qualitative frequency analysis of the criteria via MAXQDA. MAXQDA enables researchers to carry out several different analyses, such as interview transcription and analysis, literature review and analysis, mixed methods, content analysis, and questionnaire analysis. The current study aims to reveal the criteria that AVs will consider in the decision-making process by coding and analyzing a number of studies on the subject from the literature.
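MAXQDA itself is an interactive program, but the frequency logic behind its code matrix can be illustrated with a minimal sketch. The snippet below only shows how coded criterion mentions could be tallied per paper and in total; the paper identifiers and coded segments are hypothetical placeholders, not the data coded in this study.

```python
from collections import Counter, defaultdict

# Hypothetical coded segments: (paper_id, criterion) pairs produced by manual coding.
coded_segments = [
    ("paper_03", "age"), ("paper_03", "gender"), ("paper_06", "age"),
    ("paper_06", "number of persons affected"), ("paper_27", "criminal history"),
]

# Total frequency of each criterion across all papers (cf. the column totals of the code matrix).
criterion_totals = Counter(criterion for _, criterion in coded_segments)

# Frequency of each criterion within each paper (cf. the per-paper cells of the code matrix).
per_paper = defaultdict(Counter)
for paper, criterion in coded_segments:
    per_paper[paper][criterion] += 1

print(criterion_totals.most_common())
print({paper: dict(counts) for paper, counts in per_paper.items()})
```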

3.2. IT2F-BWM

The Best–Worst Method (BWM) was developed by Rezaei [16] and utilizes pairwise comparisons of alternatives and criteria. Since it requires less data and yields more reliable results, BWM is widely used; it is built on two comparison vectors, best-to-others and others-to-worst [44]. Fuzzy versions of BWM have been introduced for various application areas. Hafezalkotob and Hafezalkotob [45], Guo and Zhao [46], and Moslem et al. [47] proposed applications of triangular fuzzy BWM. In this section, the stages of IT2F-BWM utilizing the center-of-area approach are presented.
Step 1. A set of decision criteria $(c_1, c_2, \ldots, c_n)$, which is used to calculate the importance weights, is determined.
$$\tilde{\tilde{E}} = \begin{pmatrix} \tilde{\tilde{e}}_{11} & \tilde{\tilde{e}}_{12} & \cdots & \tilde{\tilde{e}}_{1n} \\ \tilde{\tilde{e}}_{21} & \tilde{\tilde{e}}_{22} & \cdots & \tilde{\tilde{e}}_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ \tilde{\tilde{e}}_{n1} & \tilde{\tilde{e}}_{n2} & \cdots & \tilde{\tilde{e}}_{nn} \end{pmatrix}, \quad \text{with rows and columns indexed by the criteria } c_1, c_2, \ldots, c_n.$$
In the matrix above, $\tilde{\tilde{e}}_{ij}$ shows the IT2F preference degree of criterion $i$ over criterion $j$. The linguistic terms of decision makers are the basis for the IT2F pairwise comparisons on these $n$ criteria. Table 2 shows all the linguistic variables and IT2FSs. The diagonal elements are regarded as Equally Important (EI) in this evaluation matrix:
$$\tilde{\tilde{e}}_{11} = \tilde{\tilde{e}}_{22} = \cdots = \tilde{\tilde{e}}_{nn} = ((1; 1; 1; 1; 1; 1), (1; 1; 1; 1; 0.9; 0.9)).$$
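For readers who want to experiment with the method, an IT2F number of the kind used above can be stored as a pair of trapezoids with their membership heights. The sketch below is an assumed representation (not code from the paper); the Equally Important element is taken verbatim from the definition above, and the two other scale values shown are those that reappear in Expert 1's comparison vectors in Section 4.1. The variable names MODERATE and EXTREME are our own labels, since Table 2's exact linguistic terms are not reproduced here.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class IT2FN:
    """Interval type-2 fuzzy number: an upper and a lower trapezoid,
    each written as (a1, a2, a3, a4, H1, H2)."""
    upper: Tuple[float, float, float, float, float, float]
    lower: Tuple[float, float, float, float, float, float]

# "Equally Important" (EI), exactly as defined for the diagonal elements above.
EI = IT2FN(upper=(1, 1, 1, 1, 1, 1), lower=(1, 1, 1, 1, 0.9, 0.9))

# Two of the scale values that appear later in Expert 1's best-to-others vector
# (names are our own labels, not the linguistic terms of Table 2).
MODERATE = IT2FN(upper=(2, 3, 3, 4, 1, 1), lower=(2.5, 3, 3, 3.5, 0.9, 0.9))
EXTREME = IT2FN(upper=(8, 9, 9, 10, 1, 1), lower=(8.5, 9, 9, 9.5, 0.9, 0.9))
```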
Step 2. The decision maker’s preference is used to determine the best and the worst criterion.
Step 3. IT2FSs are used to establish the preference of the best criterion over all the other criteria and of all the other criteria over the worst criterion. The first outcome, the best-to-others (BtO) vector, would be
$$\tilde{\tilde{E}}_B = (\tilde{\tilde{e}}_{B1}, \tilde{\tilde{e}}_{B2}, \ldots, \tilde{\tilde{e}}_{Bn}),$$
where $\tilde{\tilde{e}}_{Bj}$ demonstrates the preference of the best criterion $B$ over criterion $j$. It is definite that
$$\tilde{\tilde{e}}_{BB} = ((1; 1; 1; 1; 1; 1), (1; 1; 1; 1; 0.9; 0.9)).$$
The second outcome of this step, the others-to-worst (OtW) vector, would be
$$\tilde{\tilde{E}}_W = (\tilde{\tilde{e}}_{1W}, \tilde{\tilde{e}}_{2W}, \ldots, \tilde{\tilde{e}}_{nW})^T,$$
where $\tilde{\tilde{e}}_{jW}$ demonstrates the preference of criterion $j$ over the worst criterion $W$. Again, it is clear that
$$\tilde{\tilde{e}}_{WW} = ((1; 1; 1; 1; 1; 1), (1; 1; 1; 1; 0.9; 0.9)).$$
Step 4. The optimal weights $(w_1^*, w_2^*, \ldots, w_n^*)$ are determined. The center of area is used in this step, and the constrained optimization model follows the model presented by Wu et al. [38]. For each pair $\tilde{\tilde{w}}_B / \tilde{\tilde{w}}_j$ and $\tilde{\tilde{w}}_j / \tilde{\tilde{w}}_W$, the optimal weights are those for which the maximum absolute differences $|\tilde{\tilde{w}}_B / \tilde{\tilde{w}}_j - \tilde{\tilde{E}}_{Bj}|$ and $|\tilde{\tilde{w}}_j / \tilde{\tilde{w}}_W - \tilde{\tilde{E}}_{jW}|$ are minimized. The consistency ratio is calculated using the consistency index presented in Table 3, following the same approach as Rezaei [16] and Wu et al. [38]; the larger the value of δ*, the higher the consistency ratio.
$$\min \max_j \left\{ \left| \frac{\tilde{\tilde{w}}_B}{\tilde{\tilde{w}}_j} - \tilde{\tilde{E}}_{Bj} \right|, \left| \frac{\tilde{\tilde{w}}_j}{\tilde{\tilde{w}}_W} - \tilde{\tilde{E}}_{jW} \right| \right\}$$
$$\text{s.t.} \quad \begin{cases} \sum_{j=1}^{n} COA(\tilde{\tilde{w}}_j) = 1 \\ w_{j1}^U \le w_{j1}^L, \quad w_{j4}^L \le w_{j4}^U \\ w_{j1}^L \le w_{j2}^L \le w_{j3}^L \le w_{j4}^L \\ w_{j1}^U \le w_{j2}^U \le w_{j3}^U \le w_{j4}^U \\ w_{j1}^U \ge 0, \quad j = 1, 2, \ldots, N \end{cases}$$
where,
$$\tilde{\tilde{w}}_B = (\tilde{w}_B^U, \tilde{w}_B^L) = \left( (w_{B1}^U, w_{B2}^U, w_{B3}^U, w_{B4}^U; H_1(\tilde{w}_B^U), H_2(\tilde{w}_B^U)),\ (w_{B1}^L, w_{B2}^L, w_{B3}^L, w_{B4}^L; H_1(\tilde{w}_B^L), H_2(\tilde{w}_B^L)) \right)$$
$$\tilde{\tilde{w}}_W = (\tilde{w}_W^U, \tilde{w}_W^L) = \left( (w_{W1}^U, w_{W2}^U, w_{W3}^U, w_{W4}^U; H_1(\tilde{w}_W^U), H_2(\tilde{w}_W^U)),\ (w_{W1}^L, w_{W2}^L, w_{W3}^L, w_{W4}^L; H_1(\tilde{w}_W^L), H_2(\tilde{w}_W^L)) \right)$$
$$\tilde{\tilde{w}}_j = (\tilde{w}_j^U, \tilde{w}_j^L) = \left( (w_{j1}^U, w_{j2}^U, w_{j3}^U, w_{j4}^U; H_1(\tilde{w}_j^U), H_2(\tilde{w}_j^U)),\ (w_{j1}^L, w_{j2}^L, w_{j3}^L, w_{j4}^L; H_1(\tilde{w}_j^L), H_2(\tilde{w}_j^L)) \right)$$
$$\tilde{\tilde{w}}_{B,j} = (\tilde{w}_{B,j}^U, \tilde{w}_{B,j}^L) = \left( (w_{B,j1}^U, w_{B,j2}^U, w_{B,j3}^U, w_{B,j4}^U; H_1(\tilde{w}_{B,j}^U), H_2(\tilde{w}_{B,j}^U)),\ (w_{B,j1}^L, w_{B,j2}^L, w_{B,j3}^L, w_{B,j4}^L; H_1(\tilde{w}_{B,j}^L), H_2(\tilde{w}_{B,j}^L)) \right)$$
$$\tilde{\tilde{w}}_{j,W} = (\tilde{w}_{j,W}^U, \tilde{w}_{j,W}^L) = \left( (w_{j,W1}^U, w_{j,W2}^U, w_{j,W3}^U, w_{j,W4}^U; H_1(\tilde{w}_{j,W}^U), H_2(\tilde{w}_{j,W}^U)),\ (w_{j,W1}^L, w_{j,W2}^L, w_{j,W3}^L, w_{j,W4}^L; H_1(\tilde{w}_{j,W}^L), H_2(\tilde{w}_{j,W}^L)) \right)$$
To handle the differences $|\tilde{\tilde{w}}_B / \tilde{\tilde{w}}_j - \tilde{\tilde{E}}_{Bj}|$ and $|\tilde{\tilde{w}}_j / \tilde{\tilde{w}}_W - \tilde{\tilde{E}}_{jW}|$, their maximum absolute values are minimized. The model is then translated into the following nonlinear program, where $\tilde{\tilde{\delta}}^* = ((\delta^*; \delta^*; \delta^*; \delta^*; 1; 1), (\delta^*; \delta^*; \delta^*; \delta^*; 0.9; 0.9))$:
$$\min \delta^*$$
$$\text{s.t.} \quad \begin{cases} \left| w_{Bk}^U - w_{jk}^U\, w_{Bj,k}^U \right| \le \delta^*, \quad \left| w_{Bk}^L - w_{jk}^L\, w_{Bj,k}^L \right| \le \delta^*, & k = 1, \ldots, 4 \\ \left| w_{jk}^U - w_{Wk}^U\, w_{jW,k}^U \right| \le \delta^*, \quad \left| w_{jk}^L - w_{Wk}^L\, w_{jW,k}^L \right| \le \delta^*, & k = 1, \ldots, 4 \\ \sum_{j=1}^{n} COA(\tilde{\tilde{w}}_j) = 1 \\ w_{j1}^U \le w_{j1}^L, \quad w_{j4}^L \le w_{j4}^U \\ w_{j1}^L \le w_{j2}^L \le w_{j3}^L \le w_{j4}^L, \quad w_{j1}^U \le w_{j2}^U \le w_{j3}^U \le w_{j4}^U \\ w_{j1}^U \ge 0, \quad j = 1, 2, \ldots, N \end{cases}$$
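As a rough illustration of how this nonlinear program can be solved numerically, the sketch below encodes its constraints for SciPy's SLSQP solver. It is only a sketch under stated assumptions: the COA defuzzification is approximated by the mean of the eight reference points, each absolute-value constraint is split into two signed inequalities, and no claim is made that this reproduces the authors' exact implementation (the paper follows Wu et al. [38]).

```python
import numpy as np
from scipy.optimize import minimize

def solve_it2f_bwm(bto, otw, best, worst, n):
    """Sketch of the min-delta IT2F-BWM model above.

    bto[j] and otw[j] are 8-vectors (4 upper + 4 lower trapezoid points)
    giving the IT2F comparison of the best criterion over j and of j over
    the worst. Decision vector: x = [w_1 (8 params), ..., w_n (8 params), delta].
    COA is approximated by the mean of the 8 reference points (an assumption).
    """
    def w(x, j):                      # the 8 weight parameters of criterion j
        return x[8 * j:8 * j + 8]

    cons = [{"type": "eq",
             "fun": lambda x: sum(np.mean(w(x, j)) for j in range(n)) - 1.0}]
    for j in range(n):
        for k in range(8):
            if j != best:             # |w_B,k - a_Bj,k * w_j,k| <= delta, split into two inequalities
                cons += [{"type": "ineq", "fun": lambda x, j=j, k=k, s=s:
                          x[-1] - s * (w(x, best)[k] - bto[j][k] * w(x, j)[k])}
                         for s in (1.0, -1.0)]
            if j != worst:            # |w_j,k - a_jW,k * w_W,k| <= delta, split into two inequalities
                cons += [{"type": "ineq", "fun": lambda x, j=j, k=k, s=s:
                          x[-1] - s * (w(x, j)[k] - otw[j][k] * w(x, worst)[k])}
                         for s in (1.0, -1.0)]
        for k in range(3):            # trapezoid ordering within the upper and lower parts
            cons += [{"type": "ineq", "fun": lambda x, j=j, k=k: w(x, j)[k + 1] - w(x, j)[k]},
                     {"type": "ineq", "fun": lambda x, j=j, k=k: w(x, j)[k + 5] - w(x, j)[k + 4]}]
        # the lower trapezoid must lie inside the upper one
        cons += [{"type": "ineq", "fun": lambda x, j=j: w(x, j)[4] - w(x, j)[0]},
                 {"type": "ineq", "fun": lambda x, j=j: w(x, j)[3] - w(x, j)[7]}]

    x0 = np.concatenate([np.full(8 * n, 1.0 / n), [0.1]])
    res = minimize(lambda x: x[-1], x0, method="SLSQP",
                   bounds=[(0.0, None)] * (8 * n + 1),
                   constraints=cons, options={"maxiter": 1000})
    return [w(res.x, j) for j in range(n)], res.x[-1]
```

Feeding in a best-to-others and others-to-worst vector such as Expert 1's evaluations in Section 4.1 (with the indices of the best and worst criteria) would then yield, up to solver tolerance, IT2F weights of the kind reported there.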

3.3. Comparative Analysis

An indispensable element of MCDM-based studies is testing the validity of the results of the applied approach by comparing the applied method with a set of equivalent methods. In this context, the criteria weights obtained with IT2F-BWM are compared with those of other BWM versions to show that this method is applicable to the problem, as it produces results close to the other versions. Although the problem addressed in this article is quite original, determining the closeness of the outputs of IT2F-BWM to other BWM versions strengthens the reliability of the study.

4. Application

In the event of an accident, autonomous vehicles have to decide which of the available alternatives to sacrifice in order to protect the others. In the context of accidents involving conventional vehicles, this problem has been dealt with by many studies in terms of ethics. MAXQDA, a qualitative analysis program, was used to determine the criteria included in these studies. For this purpose, after the studies in the literature were loaded into the program, all of them were subjected to content analysis, and the most common criteria were determined by coding the texts in the studies. Then, the frequency values of the criteria were determined by selecting the documents column from the “Code Matrix Scanner” in the “Visual Tools” menu of MAXQDA. The frequency values of the criteria and the studies in which they appear are shown in Figure 3. The “Total” in the lower-left corner of the figure shows how often the relevant criteria are included in each paper, while the “Total” in the upper-right corner indicates how often all papers contain each criterion. According to the analysis results, age is the most frequently used criterion, being used 35 times. Age is followed by gender, number of persons affected by the accident, social status, compliance of the person to be affected by the accident, and health condition. Criminal history is the least frequently used criterion, being used three times.
In addition, visual maps were created via the MAXQDA program, as seen in Figure 4, in order to more clearly show in which context the determined codes were handled in the past 21 studies shown in Figure 3. According to a couple of quotations shared in the visual maps, some codes of law and regulations do not permit the prioritization of personal attributes related to criteria such as age, gender, and health condition. However, many researchers and practitioners, especially those studying the trolley problem and moral machine experiments, criticize this prohibitive approach and argue that prioritization can be a necessity.
The next step in the study, which started with the goal of calculating the weights of the criteria that AVs will have to take into account when deciding, was the determination of the criteria. After a thorough literature review was performed and expert opinions were collected, seven significant criteria were chosen:
  • Social status;
  • Number of persons affected by the accident;
  • Compliance of the person to be affected by the accident;
  • Age;
  • Gender;
  • Health condition;
  • Criminal history.
The Moral Machine experiment developed at MIT, the results of which were evaluated by Awad et al. [3] and Noothigattu et al. [4], included all the criteria listed above in the questionnaire presented online to participants from all over the world. For instance, the scenarios of the experiment include animations that depict health professionals, children, and elderly people, which relate to the social status and age criteria, respectively. Dubljević [7] discussed that the number of persons affected by the accident criterion has the potential to deflect the AV decision-making algorithm in the wrong direction when a large group of nefarious people uses an AV. That discussion demonstrates the necessity of the criminal history criterion.
The second step of the weighting process is selecting the proper experts to evaluate these criteria. Since morality is a collective output of philosophy and ethics, and the discussed dilemmatic scenarios have the potential to give rise to judicial cases about the usage of AVs, the participants of the evaluation process should specialize in morality, law, philosophy, transportation, or a combination of these subjects. The remaining steps of the study were carried out after selecting five professionals whose expertise is in the related subjects of philosophy, philosophy of law, and transportation. In MCDM-based studies, the number of participating experts, who must be competent, equipped, and experienced in their field, is typically smaller than in statistical analysis-based studies, since in the latter the power of the study is measured by the questionnaire and the number of survey participants. The same criterion is not valid for MCDM-based studies such as ours. Our study applies the interval type-2 fuzzy set version of an MCDM method, a solid mathematical optimization model first proposed by Rezaei [16], to a social issue that has never been addressed in the literature before. By combining this method with interval type-2 fuzzy numbers, the criteria that autonomous vehicles will consider in the decision-making process can be weighted; in this way, as the related software matures, these vehicles will be able to make more rational moral decisions. In other words, the originality of this study lies in solving a problem that has not yet been addressed in the literature with an advanced version (IT2F-BWM) of a model that researchers accept and often prefer over its equivalents, such as AHP and FUCOM.

4.1. The Implementation of IT2F-BWM

The importance weights of the seven criteria were obtained by applying IT2F-BWM for five experts. In the first step of the procedure, each expert determined the best and the worst criterion. For instance, Expert 1 chose social status as the best criterion and gender as the worst one. Expert 1's assessments of the best and worst criteria are shown in Table 4 and Table 5, respectively.
The interval type-2 best-to-others and others-to-worst vectors are constructed as follows:
$$A_B = \begin{pmatrix} ((1; 1; 1; 1; 1; 1), (1; 1; 1; 1; 0.9; 0.9)) \\ ((2; 3; 3; 4; 1; 1), (2.5; 3; 3; 3.5; 0.9; 0.9)) \\ ((2; 3; 3; 4; 1; 1), (2.5; 3; 3; 3.5; 0.9; 0.9)) \\ ((6; 7; 7; 8; 1; 1), (6.5; 7; 7; 7.5; 0.9; 0.9)) \\ ((8; 9; 9; 10; 1; 1), (8.5; 9; 9; 9.5; 0.9; 0.9)) \\ ((5; 6; 6; 7; 1; 1), (5.5; 6; 6; 6.5; 0.9; 0.9)) \\ ((5; 6; 6; 7; 1; 1), (5.5; 6; 6; 6.5; 0.9; 0.9)) \end{pmatrix} \qquad A_W = \begin{pmatrix} ((8; 9; 9; 10; 1; 1), (8.5; 9; 9; 9.5; 0.9; 0.9)) \\ ((5; 6; 6; 7; 1; 1), (5.5; 6; 6; 6.5; 0.9; 0.9)) \\ ((5; 6; 6; 7; 1; 1), (5.5; 6; 6; 6.5; 0.9; 0.9)) \\ ((2; 3; 3; 4; 1; 1), (2.5; 3; 3; 3.5; 0.9; 0.9)) \\ ((1; 1; 1; 1; 1; 1), (1; 1; 1; 1; 0.9; 0.9)) \\ ((3; 4; 4; 5; 1; 1), (3.5; 4; 4; 4.5; 0.9; 0.9)) \\ ((3; 4; 4; 5; 1; 1), (3.5; 4; 4; 4.5; 0.9; 0.9)) \end{pmatrix}$$
The constrained optimization model of IT2F-BWM, constructed according to Expert 1's evaluation, is used to obtain the importance weights of the criteria.
$$\min e \quad \text{s.t.} \quad \begin{cases} \left| w_{1k}^U - a_{1j,k}^U\, w_{jk}^U \right| \le e, \quad \left| w_{1k}^L - a_{1j,k}^L\, w_{jk}^L \right| \le e, & j = 2, \ldots, 7, \; k = 1, \ldots, 4 \\ \left| w_{jk}^U - a_{j5,k}^U\, w_{5k}^U \right| \le e, \quad \left| w_{jk}^L - a_{j5,k}^L\, w_{5k}^L \right| \le e, & j = 2, 3, 4, 6, 7, \; k = 1, \ldots, 4 \end{cases}$$
where the IT2F comparison values $a_{1j}$ and $a_{j5}$ are the elements of $A_B$ and $A_W$ above (for example, the comparison of the best criterion with criterion 2 yields $|w_{11}^U - 2 w_{21}^U| \le e$, $|w_{12}^U - 3 w_{22}^U| \le e$, $|w_{13}^U - 3 w_{23}^U| \le e$, $|w_{14}^U - 4 w_{24}^U| \le e$ and $|w_{11}^L - 2.5 w_{21}^L| \le e$, $|w_{12}^L - 3 w_{22}^L| \le e$, $|w_{13}^L - 3 w_{23}^L| \le e$, $|w_{14}^L - 3.5 w_{24}^L| \le e$), together with the normalization and ordering constraints of the general model above.
Then, the optimal interval type-2 fuzzy weights of seven criteria are calculated. The obtained importance weights are as follows:
$$\begin{pmatrix} \tilde{\tilde{w}}_{C1} \\ \tilde{\tilde{w}}_{C2} \\ \tilde{\tilde{w}}_{C3} \\ \tilde{\tilde{w}}_{C4} \\ \tilde{\tilde{w}}_{C5} \\ \tilde{\tilde{w}}_{C6} \\ \tilde{\tilde{w}}_{C7} \end{pmatrix} = \begin{pmatrix} ((0.428; 0.446; 0.446; 0.465; 1; 1), (0.409; 0.446; 0.446; 0.484; 0.9; 0.9)) \\ ((0.149; 0.149; 0.149; 0.149; 1; 1), (0.149; 0.149; 0.149; 0.149; 0.9; 0.9)) \\ ((0.149; 0.149; 0.149; 0.149; 1; 1), (0.149; 0.149; 0.149; 0.149; 0.9; 0.9)) \\ ((0.074; 0.074; 0.074; 0.074; 1; 1), (0.074; 0.074; 0.074; 0.074; 0.9; 0.9)) \\ ((0.037; 0.037; 0.037; 0.037; 1; 1), (0.037; 0.037; 0.037; 0.037; 0.9; 0.9)) \\ ((0.085; 0.085; 0.085; 0.085; 1; 1), (0.085; 0.085; 0.085; 0.085; 0.9; 0.9)) \\ ((0.085; 0.085; 0.085; 0.085; 1; 1), (0.085; 0.085; 0.085; 0.085; 0.9; 0.9)) \end{pmatrix}$$
The same calculation is processed for each criterion and five experts. The obtained outcomes of the IT2F-BWM are presented in Table 6.
As the final step of IT2F-BWM, each criterion's crisp importance weight is calculated. The final weights for each expert and the average values are presented in Figure 5. For instance, the importance weights of the seven criteria for Expert 1 are 0.435, 0.145, 0.145, 0.073, 0.036, 0.083, and 0.083, respectively. According to Expert 1's evaluation, the most and least significant criteria are social status and gender, respectively. The calculated weights show that gender is the least significant criterion for four of the five experts. Additionally, for three of the five experts, compliance of the person to be affected by the accident emerged as the most significant criterion for automated decision making in the possible event of a crash involving an AV.
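The paper does not spell out the final defuzzification formula, but a simple approximation, averaging the eight reference points of each IT2F weight and then normalizing, reproduces Expert 1's reported crisp weights to within rounding. The sketch below uses Expert 1's IT2F weights from the equation above; the defuzzification rule itself is our assumption, not a confirmed detail of the paper.

```python
import numpy as np

# Expert 1's IT2F weights (the 4 upper + 4 lower trapezoid points from the equation above).
it2f_weights = {
    "C1 social status":    [0.428, 0.446, 0.446, 0.465, 0.409, 0.446, 0.446, 0.484],
    "C2 number affected":  [0.149] * 8,
    "C3 compliance":       [0.149] * 8,
    "C4 age":              [0.074] * 8,
    "C5 gender":           [0.037] * 8,
    "C6 health condition": [0.085] * 8,
    "C7 criminal history": [0.085] * 8,
}

# Assumed defuzzification: mean of the reference points, then normalization to sum to 1.
crisp = {name: np.mean(points) for name, points in it2f_weights.items()}
total = sum(crisp.values())
for name, value in crisp.items():
    print(f"{name}: {value / total:.3f}")
# Prints approximately 0.435, 0.145, 0.145, 0.072, 0.036, 0.083, 0.083,
# close to the crisp weights reported for Expert 1 in the text.
```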
When the average of the weight values obtained from these five experts was taken, the following conclusion emerged. “Social Status” was determined as the most important criterion with a weight value of 0.272. It is followed by the “Compliance of the Person to be Affected by the Accident” criterion with 0.256. The third most important criterion was “Number of Persons Affected by the Accident”, with 0.158. “Health Condition”, “Age”, and “Criminal History” have similar weight values and come after these three criteria in the ranking. The least important criterion is “Gender”, with a weight value of 0.052.

4.2. Comparison of the IT2F-BWM with Other Best–Worst Methods

The problem discussed in this study, namely the importance levels of the criteria that AVs will have to take into account when deciding and how decision makers and software developers will transfer these criteria to their products through a prioritization, is solved with an interval type-2 fuzzy-based BWM methodology. However, it is useful to compare the results by solving the same problem with different BWM variants, so that the generalizability of the results rests on solid ground. In this context, the problem was solved with three other versions of BWM (traditional BWM, triangular fuzzy number-based BWM, and Bayesian BWM). For all three methods, the questionnaires applied to the decision-making expert group were renewed according to the linguistic scale used, and the questionnaires were re-applied within a reasonable period. Optimal criteria importance weights were obtained by following the application steps of each method using the collected survey results. Traditional BWM is based on Rezaei [16], while fuzzy BWM follows Guo and Zhao [46]. The steps in Mohammadi and Rezaei [68] were followed for Bayesian BWM.
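For reference, the traditional (crisp) BWM used in the comparison can be solved with a few lines of the same kind of optimization code. The sketch below uses the linear variant of BWM [44] rather than the original ratio-based model [16], purely because it is easier to encode; the example vectors simply take the middle values of Expert 1's IT2F evaluations from Section 4.1 for illustration and are not the crisp questionnaire answers actually collected for the comparative analysis.

```python
import numpy as np
from scipy.optimize import minimize

def traditional_bwm(a_bo, a_ow, best, worst):
    """Crisp BWM (linear variant): min xi s.t. |w_B - a_Bj * w_j| <= xi,
    |w_j - a_jW * w_W| <= xi, sum(w) = 1, w >= 0."""
    n = len(a_bo)
    cons = [{"type": "eq", "fun": lambda x: np.sum(x[:n]) - 1.0}]
    for j in range(n):
        for s in (1.0, -1.0):  # split each absolute-value constraint into two inequalities
            cons += [{"type": "ineq", "fun": lambda x, j=j, s=s: x[n] - s * (x[best] - a_bo[j] * x[j])},
                     {"type": "ineq", "fun": lambda x, j=j, s=s: x[n] - s * (x[j] - a_ow[j] * x[worst])}]
    x0 = np.append(np.full(n, 1.0 / n), 0.1)
    res = minimize(lambda x: x[n], x0, method="SLSQP",
                   bounds=[(0.0, None)] * (n + 1), constraints=cons)
    return res.x[:n], res.x[n]

# Illustrative run: best = social status (index 0), worst = gender (index 4);
# the comparison values are the middle points of Expert 1's IT2F evaluations.
weights, xi = traditional_bwm(a_bo=[1, 3, 3, 7, 9, 6, 6],
                              a_ow=[9, 6, 6, 3, 1, 4, 4], best=0, worst=4)
print(np.round(weights, 3), round(xi, 3))
```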
In traditional and fuzzy BWM, the models of all decision makers are solved separately, and their values are aggregated by averaging. Figure 6 and Figure 7 provide the weights of the criteria for each expert. Here, A1 to A7 refer to the criteria “social status, number of persons affected by the accident, compliance of the person to be affected by the accident, age, gender, health condition, and criminal history”, respectively.
This is not the case for Bayesian BWM, as this method is already a probabilistic version developed for group decision making and for reducing information loss [68]. Bayesian BWM also offers credal ranking, a graph showing the confidence with which one criterion outranks another. With the aid of this feature, the importance levels of the criteria are determined more sensitively. As can be seen in Figure 8, each point (A1 to A7) indicates a criterion in this problem, while the value written on the arrow between two points indicates the confidence level: an arrow from A1 to A2 labeled $cl$ indicates that criterion A1 is more significant than A2 with a confidence level of $cl$. Therefore, when Figure 8 is examined, the confidence level at which each criterion outperforms another can be easily read. These graphs also support the values in Figure 9 interpreted for Bayesian BWM. According to Figure 8, arrows go from the A1 criterion to all the other criteria, which means that this criterion is superior to the other six criteria. The confidence levels indicating this superiority range from 0.54 to 1, which indicates that this criterion is the most important. Similarly, no arrows go from the least important criterion, A5, to other nodes. Another important detail is that the confidence level on the arrow from A1 to A3 is close to 0.50 (0.54). This value shows that, although the importance weights of the two criteria are different, neither has an obvious advantage over the other. Similar interpretations can be produced for the remaining criteria. Credal ranking results help strengthen the produced weight values and allow a more precise interpretation of obvious superiority/non-superiority situations.
Figure 9 shows the criteria weight results of the comparative study. According to the numerical results obtained, the priority order based on the importance weights of the criteria did not change across the methods. Although the values of the criteria weights changed, the order of importance was obtained as A1 > A3 > A2 > A6 > A4 > A7 > A5 in every case. The correlation between criterion weights is also quite high. Table 7 shows the Pearson correlation coefficients of the criteria weight values obtained from the methods; all correlation values are higher than 94.5%. When all the results are evaluated together, it is clearly seen that the current model produces results highly consistent with the other BWM versions for this problem.
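The correlation check reported in Table 7 is straightforward to reproduce once the weight vectors of the different BWM versions are collected. The sketch below shows the computation; the first vector uses the average IT2F-BWM weights quoted above where the text gives them (the values for age, health condition, and criminal history are approximate), and the second vector is a purely hypothetical stand-in for another BWM version.

```python
import numpy as np

# A1..A7: social status, number affected, compliance, age, gender, health, criminal history.
w_it2f = np.array([0.272, 0.158, 0.256, 0.088, 0.052, 0.090, 0.084])   # age/health/criminal approximate
w_other = np.array([0.285, 0.150, 0.252, 0.086, 0.050, 0.094, 0.083])  # hypothetical comparison vector

pearson = np.corrcoef(w_it2f, w_other)[0, 1]
print(f"Pearson correlation: {pearson:.3f}")  # Table 7 reports values above 0.945 for all method pairs
```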

5. Conclusions and Discussion

AI has been gaining attention as the primary decision maker for many driving tasks of AVs. AI needs a formulation based on mathematical algorithms to rationalize the decision-making process of AVs. The mathematics behind the algorithm should mainly focus on how to prioritize possible victims of an unavoidable accident. To our knowledge, the AV literature lacks an approach for determining these parameters and weighting them so that decisions can be made as morally as possible.
This paper aims to determine the parameters that need to be considered while making a moral decision and to weight them using BWM to create a prioritization formulation, thus providing insights into AVs’ decision-making algorithms and the related legislation.
The study’s findings demonstrate that in the case of a possible collision involving an AV, the most critical determinant of an ethical decision should be “social status”. If a person’s social status is considered an important indicator of the value he or she will add to society, it is possible to accept that the experts show a propensity for evaluations promoting the common good of society. On the other hand, the paper indicates that the least critical determinant of an ethical decision should be “gender”.
The applicability of some criteria in the current technological framework is debatable. For example, for now, it may not be possible for an AV to detect the criminal history of a pedestrian, operate the decision mechanism in seconds and decide in light of the priorities determined in this study. Additionally, an algorithm that covers the criteria identified by the study may harm the privacy of the pedestrian’s private life. Despite these current technical impossibilities and concerns, this paper is expected to shed light on the future of the AV research and development process.
This paper weights the necessary parameters based on the evaluations of the experts. Our experts evaluated the criteria through the eyes of utilitarian ethics. In contrast, other experts from Islamic and Christian philosophical traditions declined to answer the survey questions, reminding us that killing a person is strictly forbidden. Considering that religions set inclusive and detailed rules about how people should live and that AVs will be common on roads in the near future, contemporary philosophers of religion and religious scholars should discuss the possibility of finding new answers to the current issue.
This paper has specific limitations regarding the number of experts and criteria and the inevitably subjective nature of the surveys. Therefore, the results of the study cannot be generalized to all cases. Accordingly, a larger number of experts and criteria could be included in further studies. In addition, future studies can focus on weighting different categories or levels of each criterion for AV decision making through more objective methods. On the other hand, due to the lack of exact data about weighting parameters when making moral decisions for autonomous vehicles, this paper benefits from expert opinions based on fuzzy sets. The method applied in this paper handles the process of weighting parameters more sensitively than traditional fuzzy methods by enabling membership degrees to themselves be fuzzy. In addition, the IT2F-BWM results combine the advantages of IT2FSs and BWM. This paper is expected to provide valuable practical insights for AV software developers in addition to its theoretical contribution.

Author Contributions

Conceptualization B.C.A. and A.E.B.; data curation B.C.A. and A.E.B.; writing—original draft preparation B.C.A., A.E.B., A.O., M.G. and E.Ç.; supervision A.O.; methodology M.G. and E.Ç. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Fenwick, M.D.; Kaal, W.A.; Vermeulen, E.P. Regulation Tomorrow: What Happens When Technology Is Faster than the Law? Am. Univ. Bus. Law Rev. 2016, 6, 561. [Google Scholar] [CrossRef]
  2. Goodall, N.J. Machine Ethics and Automated Vehicles; Meyer, G., Beiker, S., Eds.; Springer International Publishing: Cham, Switzerland, 2014; pp. 93–102. [Google Scholar]
  3. Awad, E.; Dsouza, S.; Kim, R.; Schulz, J.; Henrich, J.; Shariff, A.; Bonnefon, J.-F.; Rahwan, I. The Moral Machine experiment. Nature 2018, 563, 59–64. [Google Scholar] [CrossRef]
  4. Noothigattu, R.; Gaikwad, S.; Awad, E.; Dsouza, S.; Rahwan, I.; Ravikumar, P.; Procaccia, A.D. A Voting-Based System for Ethical Decision Making. In Proceedings of the Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), New Orleans, LA, USA, 2–7 February 2018. [Google Scholar]
  5. Bonnefon, J.F.; Shariff, A.; Rahwan, I. The social dilemma of autonomous vehicles. Science 2016, 352, 1573–1576. [Google Scholar] [CrossRef]
  6. Faulhaber, A.K.; Dittmer, A.; Blind, F.; Wächter, M.A.; Timm, S.; Sütfeld, L.R.; Stephan, A.; Pipa, G.; König, P. Human Decisions in Moral Dilemmas are Largely Described by Utilitarianism: Virtual Car Driving Study Provides Guidelines for Autonomous Driving Vehicles. Sci. Eng. Ethics 2019, 25, 399–418. [Google Scholar] [CrossRef]
  7. Dubljević, V. Toward Implementing the ADC Model of Moral Judgment in Autonomous Vehicles. Sci. Eng. Ethics 2020, 26, 2461–2472. [Google Scholar] [CrossRef]
  8. Edwards, J.S.; Duan, Y.; Robins, P.C. An analysis of expert systems for business decision making at different levels and in different roles. Eur. J. Inf. Syst. 2000, 9, 36–46. [Google Scholar] [CrossRef]
  9. Al-Aidaros, A.H.; Mohd Shamsudin, F. Ethics and ethical theories from an Islamic perspective. Int. J. Islam. Thought 2013, 4, 1–13. [Google Scholar] [CrossRef]
  10. Burks, B.D. The Impact of Ethics Education and Religiosity on the Cognitive Moral Development of Senior Accounting and Business Students in Higher Education. Doctoral Dissertation, Nova Southeastern University, Fort Lauderdale, FL, USA, 2007. [Google Scholar]
  11. Hayes, R.; Schilder, A.; Dassen, R.; Wallage, P. Principles of auditing: An International Perspective. Manag. Audit. J. 1999, 12, 498–505. [Google Scholar]
  12. Sleasman, M.J. New Technology and Christianity. The Encyclopedia of Christian Civilization; Blackwell Publishing: Hoboken, NJ, USA, 2012. [Google Scholar]
  13. Sims, D.B. The Effect of Technology on Christianity: Blessing or Curse. 2005. Available online: https://www.dbu.edu/friday-symposium/schedule/archive/_documents/the-effect-of-technology-on-christianity.pdf (accessed on 17 October 2022).
  14. Johnson, C. How has Technology and Artificial Intelligence Changed Christianity? Available online: https://ncsureligion.wordpress.com/2019/12/05/how-has-technology-and-artificial-intelligence-changed-christianity/amp/ (accessed on 16 September 2022).
  15. Sorokowski, P.; Marczak, M.; Misiak, M.; Białek, M. Trolley Dilemma in Papua. Yali horticulturalists refuse to pull the lever. Psychon. Bull. Rev. 2020, 27, 398–403. [Google Scholar] [CrossRef]
  16. Rezaei, J. Best-worst multi-criteria decision-making method. Omega 2015, 53, 49–57. [Google Scholar] [CrossRef]
  17. Lawlor, R. The ethics of automated vehicles: Why self-driving cars should not swerve in dilemma cases. Res. Publica 2022, 28, 193–216. [Google Scholar] [CrossRef]
  18. Holstein, T.; Dodig-Crnkovic, G.; Pelliccione, P. Real-World Ethics for Self-Driving Cars. In Proceedings of the ACM/IEEE 42nd International Conference on Software Engineering: Companion Proceedings, Seoul, Republic of Korea, 27 June–19 July 2020; pp. 328–329. [Google Scholar]
  19. Martinho, A.; Herber, N.; Kroesen, M.; Chorus, C. Ethical issues in focus by the autonomous vehicles industry. Transp. Rev. 2021, 41, 556–577. [Google Scholar] [CrossRef]
  20. Etienne, H. The dark side of the ‘Moral Machine’ and the fallacy of computational ethical decision-making for autonomous vehicles. Law Innov. Technol. 2021, 13, 85–107. [Google Scholar] [CrossRef]
  21. Etienne, H. When AI ethics goes astray: A case study of autonomous vehicles. Soc. Sci. Comput. Rev. 2022, 40, 236–246. [Google Scholar] [CrossRef]
  22. Andrade, J.A. The ethics of the ethics of autonomous vehicles: Levinas and naked streets. S. Afr. J. Philos. 2021, 40, 124–136. [Google Scholar] [CrossRef]
  23. Applin, S. Autonomous vehicle ethics: Stock or custom? IEEE Consum. Electron. Mag. 2017, 6, 108–110. [Google Scholar] [CrossRef]
  24. Wu, S.S. Autonomous vehicles, trolley problems, and the law. Ethics Inf. Technol. 2020, 22, 1–13. [Google Scholar] [CrossRef]
  25. Liu, H.Y. Irresponsibilities, inequalities and injustice for autonomous vehicles. Ethics Inf. Technol. 2017, 19, 193–207. [Google Scholar] [CrossRef]
  26. Wright, A.T. Rightful Machines and Dilemmas. In Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society, Honolulu, HI, USA, 27–28 January 2019; pp. 3–4. [Google Scholar]
  27. Santoni de Sio, F. Killing by autonomous vehicles and the legal doctrine of necessity. Ethical Theory Moral Pract. 2017, 20, 411–429. [Google Scholar] [CrossRef]
  28. Krügel, S.; Uhl, M. Autonomous vehicles and moral judgments under risk. Transp. Res. Part A Policy Pract. 2022, 155, 1–10. [Google Scholar] [CrossRef]
  29. Leben, D. A Rawlsian algorithm for autonomous vehicles. Ethics Inf. Technol. 2017, 19, 107–115. [Google Scholar] [CrossRef]
  30. Geisslinger, M.; Poszler, F.; Betz, J.; Lütge, C.; Lienkamp, M. Autonomous driving ethics: From trolley problem to ethics of risk. Philos. Technol. 2021, 34, 1033–1055. [Google Scholar]
  31. Ebina, T.; Kinjo, K. Approaching the social dilemma of autonomous vehicles with a general social welfare function. Eng. Appl. Artif. Intell. 2021, 104, 10439. [Google Scholar]
  32. Shah, M.U.; Rehman, U.; Iqbal, F.; Hussain, M.; Wahid, F. An Alternate Account on the Ethical Implications of Autonomous Vehicles. In Proceedings of the 2021 17th International Conference on Intelligent Environments (IE), Dubai, United Arab Emirates, 21–24 June 2021; IEEE: Piscataway, NJ, USA, 2021; pp. 1–5. [Google Scholar]
  33. Dakić, P.; Živković, M. An Overview of the Challenges for Developing Software within the Field of Autonomous Vehicles. In Proceedings of the 7th Conference on the Engineering of Computer Based Systems, Novi Sad, Serbia, 26–27 May 2021; pp. 1–10. [Google Scholar]
  34. Goltz, N.; Zeleznikow, J.; Dowdeswell, T. From the tree of knowledge and the golem of Prague to kosher autonomous cars: The ethics of artificial intelligence through jewish eyes. Oxf. J. Law Relig. 2020, 9, 132–156. [Google Scholar] [CrossRef]
  35. Thielscher, C.; Krol, B.; Heinemann, S.; Schlander, M. Ethical decomposition as a new method to analyse moral dilemmata. In Proceedings of the INFORMATIK 2019: 50 Jahre Gesellschaft für Informatik–Informatik für Gesellschaft, Bonn, Germany, 23–26 September 2019; pp. 37–49. [Google Scholar]
  36. Himmelreich, J. Never mind the trolley: The ethics of autonomous vehicles in mundane situations. Ethical Theory Moral Pract. 2018, 21, 669–684. [Google Scholar] [CrossRef]
  37. Goodall, N. More than trolleys: Plausible, ethically ambiguous scenarios likely to be encountered by automated vehicles. Transfers 2019, 9, 45–58. [Google Scholar]
  38. Wu, Q.; Zhou, L.; Chen, Y.; Chen, H. An integrated approach to green supplier selection based on the interval type-2 fuzzy best-worst and extended VIKOR methods. Inf. Sci. 2019, 502, 394–417. [Google Scholar]
  39. Cunneen, M.; Mullins, M.; Murphy, F.; Shannon, D.; Furxhi, I.; Ryan, C. Autonomous vehicles and avoiding the trolley (dilemma): Vehicle perception, classification, and the challenges of framing decision ethics. Cybern. Syst. 2020, 51, 59–80. [Google Scholar] [CrossRef]
  40. Tian, Z.P.; Wang, J.Q.; Zhang, H.Y. An integrated approach for failure mode and effects analysis based on fuzzy best-worst, relative entropy, and VIKOR methods. Appl. Soft Comput. 2018, 72, 636–646. [Google Scholar]
  41. Mou, Q.; Xu, Z.; Liao, H. An intuitionistic fuzzy multiplicative best-worst method for multi-criteria group decision making. Inf. Sci. 2016, 374, 224–239. [Google Scholar] [CrossRef]
  42. Omrani, H.; Alizadeh, A.; Emrouznejad, A. Finding the optimal combination of power plants alternatives: A multi response Taguchi neural network using TOPSIS and fuzzy best-worst method. J. Clean Prod. 2018, 203, 210–223. [Google Scholar] [CrossRef]
  43. Mi, X.; Tang, M.; Liao, H.; Shen, W.; Lev, B. The state-of-the-art survey on integrations and applications of the best worst method in decision making: Why, what, what for and what’s next? Omega 2019, 87, 205–225. [Google Scholar] [CrossRef]
  44. Rezaei, J. Best-worst multi-criteria decision-making method: Some properties and a linear model. Omega 2016, 64, 126–130. [Google Scholar] [CrossRef]
  45. Hafezalkotob, A.; Hafezalkotob, A. A novel approach for combination of individual and group decisions based on fuzzy best-worst method. Appl. Soft Comput. 2017, 59, 316–325. [Google Scholar] [CrossRef]
  46. Guo, S.; Zhao, H. Fuzzy best-worst multi-criteria decision-making method and its applications. Knowl.-Based Syst. 2017, 121, 23–31. [Google Scholar] [CrossRef]
  47. Moslem, S.; Gul, M.; Farooq, D.; Celik, E.; Ghorbanzadeh, O.; Blaschke, T. An integrated approach of best-worst method (BWM) and triangular fuzzy sets for evaluating driver behavior factors related to road safety. Mathematics 2020, 8, 414. [Google Scholar] [CrossRef]
  48. Celik, E.; Gul, M.; Aydin, N.; Gumus, A.T.; Guneri, A.F. A comprehensive review of multi criteria decision making approaches based on interval type-2 fuzzy sets. Knowl.-Based Syst. 2015, 85, 329–341. [Google Scholar] [CrossRef]
  49. Celik, E.; Yucesan, M.; Gul, M. Green supplier selection for textile industry: A case study using BWM-TODIM integration under interval type-2 fuzzy sets. Environ. Sci. Pollut. Res. 2021, 28, 64793–64817. [Google Scholar] [CrossRef]
  50. Lucifora, C.; Grasso, G.M.; Perconti, P.; Plebe, A. Moral dilemmas in self-driving cars. Riv. Internazionale Di Filos. E Psicol. 2020, 11, 238–250. [Google Scholar]
  51. Bergmann, L.T.; Schlicht, L.; Meixner, C.; König, P.; Pipa, G.; Boshammer, S.; Stephan, A. Autonomous vehicles require socio-political acceptance—An empirical and philosophical perspective on the problem of moral decision making. Front. Behav. Neurosci. 2018, 12, 31. [Google Scholar] [CrossRef]
  52. Bigman, Y.E.; Gray, K. Life and death decisions of autonomous vehicles. Nature 2020, 579, E1–E2. [Google Scholar] [CrossRef] [PubMed]
  53. Cunneen, M.; Mullins, M.; Murphy, F.; Gaines, S. Artificial driving intelligence and moral agency: Examining the decision ontology of unavoidable road traffic accidents through the prism of the trolley dilemma. Appl. Artif. Intell. 2019, 33, 267–293. [Google Scholar] [CrossRef]
  54. Evans, K.; de Moura, N.; Chauvier, S.; Chatila, R.; Dogan, E. Ethical decision making in autonomous vehicles: The AV ethics project. Sci. Eng. Ethics 2020, 26, 3285–3312. [Google Scholar] [CrossRef] [PubMed]
  55. Keeling, G. Legal necessity, Pareto efficiency & justified killing in autonomous vehicle collisions. Ethical Theory Moral Pract. 2018, 21, 413–427. [Google Scholar]
  56. Keeling, G. Why trolley problems matter for the ethics of automated vehicles. Sci. Eng. Ethics 2020, 26, 293–307. [Google Scholar] [CrossRef]
  57. Kochupillai, M.; Lütge, C.; Poszler, F. Programming away human rights and responsibilities? “The Moral Machine Experiment” and the need for a more “humane” AV future. NanoEthics 2020, 14, 285–299. [Google Scholar] [CrossRef]
  58. Lucifora, C.; Grasso, G.M.; Perconti, P.; Plebe, A. Moral reasoning and automatic risk reaction during driving. Cogn. Technol. Work. 2021, 23, 705–713. [Google Scholar] [CrossRef]
  59. Luetge, C. The German ethics code for automated and connected driving. Philos. Technol. 2017, 30, 547–558. [Google Scholar] [CrossRef]
  60. Rhim, J.; Lee, G.B.; Lee, J.H. Human moral reasoning types in autonomous vehicle moral dilemma: A cross-cultural comparison of Korea and Canada. Comput. Hum. Behav. 2020, 102, 39–56. [Google Scholar]
  61. Robinson, J.; Smyth, J.; Woodman, R.; Donzella, V. Ethical considerations and moral implications of autonomous vehicles and unavoidable collisions. Issues Ergon. Sci. 2022, 23, 435–452. [Google Scholar] [CrossRef]
  62. Shariff, A.; Bonnefon, J.F.; Rahwan, I. Psychological roadblocks to the adoption of self-driving vehicles. Nat. Hum. Behav. 2017, 1, 694–696. [Google Scholar] [CrossRef] [PubMed]
  63. Wang, H.; Khajepour, A.; Cao, D.; Liu, T. Ethical decision making in autonomous vehicles: Challenges and research progress. IEEE Intell. Transp. Syst. Mag. 2020, 14, 6–17. [Google Scholar] [CrossRef]
  64. Lin, P. Why Ethics Matters for Autonomous Cars. In Autonomous Driving; Maurer, M., Gerdes, J.C., Lenz, B., Winner, H., Eds.; Springer: Berlin/Heidelberg, Germany, 2016; pp. 69–85. [Google Scholar] [CrossRef]
  65. Kumfer, W.; Burgess, R. Investigation into the role of rational ethics in crashes of automated vehicles. Transp. Res. Record. 2015, 2489, 130–136. [Google Scholar] [CrossRef]
  66. Ethik-Kommission. Automatisiertes und Vernetztes Fahren; Bundesministerium für Verkehr und Digitale Infrastruktur: Berlin, Germany, 2017.
  67. Gorr, M. Thomson and the Trolley Problem. Philos. Stud. 1990, 59, 91–100. [Google Scholar] [CrossRef]
  68. Mohammadi, M.; Rezaei, J. Bayesian best-worst method: A probabilistic group decision making model. Omega 2020, 96, 102075. [Google Scholar] [CrossRef]
Figure 1. Most used author keywords identified by the first search query.
Figure 2. Flow of the applied methodology.
Figure 3. The frequency values of the criteria [2,3,6,27,30,39,50,51,52,53,54,55,56,57,58,59,60,61,62,63,64].
Figure 4. Visual maps of the criteria used in past studies [3,6,27,30,39,50,51,55,56,57,58,60,61,63,64,65,66,67].
Figure 5. Importance weight values of criteria.
Figure 6. Importance weight values of criteria by each expert in traditional BWM.
Figure 7. Importance weight values of criteria by each expert in fuzzy BWM.
Figure 8. Results on credal rankings of the criteria.
Figure 9. Weights of criteria by three different BWM versions.
Table 1. The past studies on AV ethics in the literature.
No | Study | Year | Document Type | Aim | Findings/Conclusion
1 | Lawlor (2022) [17] | 2022 | Article | to discuss the nature of the trolley problem and the objections against it | It suggests a new approach, called "contrast and explain", to argue that autonomous vehicles should be programmed to stay on the road in the event of an accident.
2 | Etienne (2022) [21] | 2022 | Article | to reveal the dangers of ethical decisions taken by a voting-based system built on the MM experiment | It reveals the limitations of the MM experiment and other similar approaches to ethical decision making.
3 | Krügel and Uhl (2022) [28] | 2022 | Article | to test whether people change their moral judgments when an accident is probable rather than certain | It shows that laypeople believe that AV ethics should prioritize both accident severity and probability instead of ignoring accident severity.
4 | Geisslinger et al. (2021) [30] | 2021 | Article | to propose ethical trajectory planning guidance for crashes and other events involving AVs based on risk calculations | It provides a mathematical risk assessment method that integrates the Bayesian, Equality, and Maximin principles for ethical trajectory planning.
5 | Ebina and Kinjo (2021) [31] | 2021 | Article | to analyze optimal AV driving behavior for different scenarios | It shows that if the total utility of the pedestrians and passengers exceeds the utility of some pedestrians, AVs may prefer hitting those pedestrians. The findings can be applied to estimate AV driving behavior in various countries depending on the inequality aversion value.
6 | Shah et al. (2021) [32] | 2021 | Conference Paper | to discuss whether people make ethical decisions pragmatically, as commonly believed, or rationally | It shows that people can be less pragmatic when an incident involves a lower level of harm or a higher level of affection.
7 | Dakić and Živković (2021) [33] | 2021 | Conference Paper | to discuss the capabilities of AV software and hardware technologies for correcting human errors and collecting data during collisions | It argues that AV software settings for ethical decisions should be changeable in the future as new data are gathered from collisions.
8 | Andrade (2021) [22] | 2021 | Article | to discuss AV ethics through the lens of Levinasian ethics, which embraces infinite responsibility to others even in case of uncertainty | It criticizes framing AV ethics through the trolley problem and underlines the importance of adopting Levinasian ethics. It also states that AVs can be risked for the sake of the safety of people and roads.
9 | Etienne (2021) [20] | 2021 | Article | to investigate the dangers of using the MM experiment for making ethical decisions | It reveals the limitations of the MM experiment and other similar approaches to ethical decision making.
10 | Martinho et al. (2021) [19] | 2021 | Article | to analyze the awareness of and industrial focus on AV ethics | It demonstrates that the literature on AV ethics is mostly shaped by discussions about the trolley problem.
11 | Holstein et al. (2020) [18] | 2020 | Conference Paper | to discuss how AV ethics should be handled | It emphasizes that AV ethical analyses should be carried out based on AV software solutions.
13 | Wu (2020) [38] | 2020 | Article | to discuss AV manufacturers' liability when they implement an ethical choice during a collision | It suggests that a special body of rules is needed to defend manufacturers' rights.
14 | Goltz et al. (2020) [34] | 2020 | Review | to reveal Jewish ethics and divine truths relevant to granting moral personhood to AVs | It underlines that, according to the primary sources of the Jewish religion, one cannot be sacrificed for others except in extremis.
15 | Cunneen et al. (2020) [39] | 2020 | Article | to draw attention to realistic AV ethics | It states that ethical decision frames should be determined according to issues such as privacy and machine perception rather than the basics of the MM experiment.
16 | Wright (2019) [26] | 2019 | Conference Paper | to discuss the source of authority that decides on the right action in case of a moral dilemma, following the Kantian rationale | It argues that moral conflicts should be handled by public law.
17 | Thielscher et al. (2019) [35] | 2019 | Conference Paper | to discuss AV ethics concerning the Kantian rationale and utilitarian ethics | It shows that the Kantian rationale and utilitarian ethics should be handled separately to guide ethical decisions in different situations.
18 | Goodall (2019) [37] | 2019 | Article | to discuss driving ethics from a multi-dimensional perspective involving more diverse situations than the trolley problem | It reveals realistic scenarios involving collision risk. Additionally, it recommends that AV developers declare to the public the possible actions of AVs during a moral dilemma.
19 | Himmelreich (2018) [36] | 2018 | Article | to reveal the deficiencies of the trolley problem in explaining AV ethics and the mundane situations that may cause moral debates in AV ethics | It emphasizes that mundane driving situations should be considered, in addition to the situations indicated by the trolley problem, when discussing moral issues in AVs.
20 | Liu (2017) [25] | 2017 | Article | to discuss AVs' responsibility during a collision | It develops targeting and risk distribution concepts to handle AVs' responsibility during a collision.
21 | Applin (2017) [23] | 2017 | Article | to discuss the role of cultural factors in determining AV ethics | It criticizes the trolley problem because it neither discusses the self-sacrifice option nor minds cultural differences.
22 | Leben (2017) [29] | 2017 | Article | to apply the Rawlsian algorithm to the trolley problem | It presents a mechanism that estimates the survival probability of each person involved in a collision.
23 | Santoni de Sio (2017) [27] | 2017 | Article | to discuss how AV ethics should be handled in the case of unavoidable crashes | It reveals that some basic principles and rules in Anglo-American law and case law can be used in programming AVs.
Table 2. Linguistic terms for importance weights (Celik et al. (2015), [48]).
Linguistic Term | IT2FSs
EMI | ((8;9;9;10;1;1), (8.5;9;9;9.5;0.9;0.9))
IV8 | ((7;8;8;9;1;1), (7.5;8;8;8.5;0.9;0.9))
VSMI | ((6;7;7;8;1;1), (6.5;7;7;7.5;0.9;0.9))
IV6 | ((5;6;6;7;1;1), (5.5;6;6;6.5;0.9;0.9))
SMI | ((4;5;5;6;1;1), (4.5;5;5;5.5;0.9;0.9))
IV4 | ((3;4;4;5;1;1), (3.5;4;4;4.5;0.9;0.9))
MMI | ((2;3;3;4;1;1), (2.5;3;3;3.5;0.9;0.9))
IV2 | ((1;2;2;3;1;1), (1.5;2;2;2.5;0.9;0.9))
EI | ((1;1;1;1;1;1), (1;1;1;1;0.9;0.9))
Note: EMI: Extremely More Important; IV: Intermediate Value; VSMI: Very Strong More Important; SMI: Strongly More Important; MMI: Moderately More Important; and EI: Equally Important.
Table 3. Consistency index table [49].
Linguistic Term | EI | IV2 | MMI | IV4 | SMI | IV6 | VSMI | IV8 | EMI
Defuzzified | 0.975 | 1.95 | 2.925 | 3.9 | 4.875 | 5.85 | 6.825 | 7.8 | 8.775
CI | 2.9582 | 4.4872 | 5.8948 | 7.2373 | 8.5373 | 9.8069 | 11.0533 | 12.2812 | 13.494
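For context on how Table 3 is used: across the BWM family, the consistency of an expert's judgments is commonly checked by dividing the optimal objective value of the weighting model by the consistency index (CI) associated with the linguistic term assigned to the best-versus-worst comparison. The sketch below follows the standard BWM convention; the exact defuzzification and CI derivation in the IT2F variant applied in this paper may differ in detail:
$$ \mathrm{CR} = \frac{\xi^{*}}{\mathrm{CI}}, \qquad \mathrm{CR} \in [0,1], $$
where $\xi^{*}$ is the optimal value of the weighting model's objective and CI is read from Table 3; values of CR closer to zero indicate more consistent pairwise judgments.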
Table 4. The evaluation of the best criterion for Expert 1.
 | Social Status | Number of Persons Affected by the Accident | Compliance of the Person to Be Affected by the Accident | Age | Gender | Health Condition | Criminal History
The best criterion: Social Status | EI | MMI | MMI | VSMI | EMI | IV6 | IV6
Table 5. The evaluation of the worst criterion for Expert 1.
 | The Worst Criterion: Gender
Social status | EMI
Number of persons affected by the accident | IV6
Compliance of the person to be affected by the accident | IV6
Age | MMI
Gender | EI
Health condition | IV4
Criminal history | IV4
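To make the use of Tables 4 and 5 concrete, the sketch below re-derives approximate weights for Expert 1 with the standard crisp (linear) BWM, mapping the linguistic terms to the conventional 1–9 scale (EI = 1, MMI = 3, IV4 = 4, IV6 = 6, VSMI = 7, EMI = 9). This is only an illustrative approximation of the paper's interval type-2 fuzzy model; the scale mapping, criterion labels, and solver choice are assumptions made for the example.

```python
# Illustrative crisp (linear) BWM for Expert 1's judgments in Tables 4 and 5.
# This is a simplified stand-in for the paper's IT2F-BWM, using a 1-9 crisp scale.
import numpy as np
from scipy.optimize import linprog

criteria = ["Social status", "Number of persons affected", "Compliance",
            "Age", "Gender", "Health condition", "Criminal history"]
# Best-to-others vector (best = Social status) and others-to-worst vector (worst = Gender).
a_B = np.array([1, 3, 3, 7, 9, 6, 6], dtype=float)   # EI, MMI, MMI, VSMI, EMI, IV6, IV6
a_W = np.array([9, 6, 6, 3, 1, 4, 4], dtype=float)   # EMI, IV6, IV6, MMI, EI, IV4, IV4
n, best, worst = len(criteria), 0, 4

def abs_le_xi(coeffs):
    """Encode |coeffs . w| <= xi as two <= 0 rows over the variables (w_1..w_n, xi)."""
    return [np.append(coeffs, -1.0), np.append(-coeffs, -1.0)]

A_ub, b_ub = [], []
for j in range(n):
    e = np.zeros(n); e[best] += 1.0; e[j] -= a_B[j]   # |w_B - a_Bj * w_j| <= xi
    A_ub += abs_le_xi(e); b_ub += [0.0, 0.0]
    e = np.zeros(n); e[j] += 1.0; e[worst] -= a_W[j]  # |w_j - a_jW * w_W| <= xi
    A_ub += abs_le_xi(e); b_ub += [0.0, 0.0]

c = np.append(np.zeros(n), 1.0)                       # minimise xi
A_eq, b_eq = [np.append(np.ones(n), 0.0)], [1.0]      # weights sum to one
bounds = [(0.0, 1.0)] * n + [(0.0, None)]
res = linprog(c, A_ub=np.vstack(A_ub), b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
              bounds=bounds, method="highs")

for name, w in zip(criteria, res.x[:n]):
    print(f"{name:28s} {w:.3f}")
print(f"xi* (consistency indicator)  {res.x[-1]:.3f}")
```

Under these assumptions, the resulting ordering (social status highest, gender lowest) matches the Expert 1 column of Table 6, although the numerical weights differ from the IT2F results.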
Table 6. The obtained outcomes of the IT2F-BWM.
Criterion | Expert 1 | Expert 2
C1 | ((0.428;0.446;0.446;0.465;1;1),(0.409;0.446;0.446;0.484;0.9;0.9)) | ((0.123;0.123;0.123;0.123;1;1),(0.123;0.123;0.123;0.123;0.9;0.9))
C2 | ((0.149;0.149;0.149;0.149;1;1),(0.149;0.149;0.149;0.149;0.9;0.9)) | ((0.163;0.163;0.163;0.163;1;1),(0.158;0.163;0.163;0.163;0.9;0.9))
C3 | ((0.149;0.149;0.149;0.149;1;1),(0.149;0.149;0.149;0.149;0.9;0.9)) | ((0.344;0.362;0.362;0.376;1;1),(0.257;0.362;0.362;0.391;0.9;0.9))
C4 | ((0.074;0.074;0.074;0.074;1;1),(0.074;0.074;0.074;0.074;0.9;0.9)) | ((0.098;0.098;0.098;0.098;1;1),(0.098;0.098;0.098;0.098;0.9;0.9))
C5 | ((0.037;0.037;0.037;0.037;1;1),(0.037;0.037;0.037;0.037;0.9;0.9)) | ((0.029;0.029;0.029;0.029;1;1),(0.029;0.029;0.029;0.029;0.9;0.9))
C6 | ((0.085;0.085;0.085;0.085;1;1),(0.085;0.085;0.085;0.085;0.9;0.9)) | ((0.163;0.163;0.163;0.163;1;1),(0.163;0.163;0.163;0.163;0.9;0.9))
C7 | ((0.085;0.085;0.085;0.085;1;1),(0.085;0.085;0.085;0.085;0.9;0.9)) | ((0.098;0.098;0.098;0.098;1;1),(0.098;0.098;0.098;0.098;0.9;0.9))
Criterion | Expert 3 | Expert 4
C1 | ((0.375;0.399;0.399;0.42;1;1),(0.284;0.399;0.399;0.441;0.9;0.9)) | ((0.13;0.13;0.13;0.13;1;1),(0.13;0.13;0.13;0.13;0.9;0.9))
C2 | ((0.181;0.181;0.181;0.181;1;1),(0.181;0.181;0.181;0.181;0.9;0.9)) | ((0.174;0.174;0.174;0.174;1;1),(0.174;0.174;0.174;0.174;0.9;0.9))
C3 | ((0.136;0.136;0.136;0.136;1;1),(0.136;0.136;0.136;0.136;0.9;0.9)) | ((0.366;0.381;0.381;0.396;1;1),(0.284;0.381;0.381;0.411;0.9;0.9))
C4 | ((0.078;0.078;0.078;0.078;1;1),(0.077;0.078;0.078;0.078;0.9;0.9)) | ((0.13;0.13;0.13;0.13;1;1),(0.13;0.13;0.13;0.13;0.9;0.9))
C5 | ((0.042;0.042;0.042;0.042;1;1),(0.042;0.042;0.042;0.042;0.9;0.9)) | ((0.13;0.13;0.13;0.13;1;1),(0.13;0.13;0.13;0.13;0.9;0.9))
C6 | ((0.091;0.091;0.091;0.091;1;1),(0.091;0.091;0.091;0.091;0.9;0.9)) | ((0.058;0.058;0.058;0.058;1;1),(0.056;0.058;0.058;0.058;0.9;0.9))
C7 | ((0.109;0.109;0.109;0.109;1;1),(0.109;0.109;0.109;0.109;0.9;0.9)) | ((0.03;0.03;0.03;0.03;1;1),(0.03;0.03;0.03;0.03;0.9;0.9))
Criterion | Expert 5
C1 | ((0.295;0.308;0.308;0.322;1;1),(0.281;0.308;0.308;0.336;0.9;0.9))
C2 | ((0.141;0.141;0.141;0.141;1;1),(0.141;0.141;0.141;0.141;0.9;0.9))
C3 | ((0.295;0.308;0.308;0.322;1;1),(0.229;0.308;0.308;0.336;0.9;0.9))
C4 | ((0.071;0.071;0.071;0.071;1;1),(0.071;0.071;0.071;0.071;0.9;0.9))
C5 | ((0.028;0.028;0.028;0.028;1;1),(0.028;0.028;0.028;0.028;0.9;0.9))
C6 | ((0.106;0.106;0.106;0.106;1;1),(0.106;0.106;0.106;0.106;0.9;0.9))
C7 | ((0.071;0.071;0.071;0.071;1;1),(0.071;0.071;0.071;0.071;0.9;0.9))
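The IT2F weights in Table 6 have to be reduced to crisp values before they can be ranked or averaged across experts (as in Figure 5). The paper applies its own ranking-based defuzzification; the snippet below only illustrates the idea with a simple height-weighted average of the reference points, so the resulting numbers are an approximation rather than the paper's exact values.

```python
# An assumed, simple defuzzification of the IT2F weights in Table 6 (Expert 1 shown).
# The paper's own ranking-based defuzzification may yield slightly different values.
import numpy as np

# Each weight: (upper trapezoid a1..a4 with height hU), (lower trapezoid b1..b4 with height hL).
expert1 = {
    "C1": ((0.428, 0.446, 0.446, 0.465, 1.0), (0.409, 0.446, 0.446, 0.484, 0.9)),
    "C2": ((0.149, 0.149, 0.149, 0.149, 1.0), (0.149, 0.149, 0.149, 0.149, 0.9)),
    "C3": ((0.149, 0.149, 0.149, 0.149, 1.0), (0.149, 0.149, 0.149, 0.149, 0.9)),
    "C4": ((0.074, 0.074, 0.074, 0.074, 1.0), (0.074, 0.074, 0.074, 0.074, 0.9)),
    "C5": ((0.037, 0.037, 0.037, 0.037, 1.0), (0.037, 0.037, 0.037, 0.037, 0.9)),
    "C6": ((0.085, 0.085, 0.085, 0.085, 1.0), (0.085, 0.085, 0.085, 0.085, 0.9)),
    "C7": ((0.085, 0.085, 0.085, 0.085, 1.0), (0.085, 0.085, 0.085, 0.085, 0.9)),
}

def defuzzify(it2fs):
    """Height-weighted mean of the upper and lower trapezoid reference points."""
    (a1, a2, a3, a4, h_u), (b1, b2, b3, b4, h_l) = it2fs
    upper = h_u * (a1 + a2 + a3 + a4) / 4.0
    lower = h_l * (b1 + b2 + b3 + b4) / 4.0
    return (upper + lower) / (h_u + h_l)

crisp = {c: defuzzify(v) for c, v in expert1.items()}
total = sum(crisp.values())
for c, v in crisp.items():
    print(f"{c}: {v / total:.3f}")   # renormalise so the crisp weights sum to 1
```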
Table 7. Results of correlation analysis.
Method | Current Model (Interval Type-2 Fuzzy BWM) | Traditional BWM | Fuzzy BWM
Traditional BWM | 0.995 | — | —
Fuzzy BWM | 0.945 | 0.960 | —
Bayesian BWM | 0.978 | 0.993 | 0.966
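Table 7 reports pairwise correlations between the criterion-weight vectors produced by the four BWM variants. The table does not spell out the statistic used; the sketch below assumes a Pearson correlation over the seven aggregated criterion weights and uses placeholder weight vectors (not the study's results) purely to show how such a matrix can be reproduced.

```python
# Assumed reproduction of a Table 7-style correlation matrix between method weight vectors.
# The weight vectors below are illustrative placeholders, not the study's aggregated results.
import numpy as np

weights = {
    "IT2F BWM":        [0.28, 0.16, 0.23, 0.09, 0.05, 0.10, 0.09],
    "Traditional BWM": [0.27, 0.17, 0.22, 0.10, 0.05, 0.10, 0.09],
    "Fuzzy BWM":       [0.25, 0.18, 0.21, 0.11, 0.06, 0.10, 0.09],
    "Bayesian BWM":    [0.26, 0.17, 0.22, 0.10, 0.05, 0.11, 0.09],
}
names = list(weights)
corr = np.corrcoef(np.array([weights[n] for n in names]))  # Pearson correlation between rows
for i, row_name in enumerate(names[1:], start=1):
    cells = " ".join(f"{corr[i, j]:.3f}" for j in range(i))
    print(f"{row_name:16s} {cells}")  # lower triangle, as laid out in Table 7
```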