A precondition for considering ethical issues during AIS design is that the relevant ethical issues can be identified. For this purpose, we propose a systematic framework that can be used in different phases of design: at the beginning, ethical goals are defined and interpreted as design requirements; when the design reaches a more detailed level, the framework can be applied again; and the final design can be assessed with its help as well. Essentially, the framework can be applied to every design decision if necessary.
The systematics of the analysis framework are based on the idea that the system under design is thoroughly discussed in terms of the identified ethical values. We argue that this should be carried out at the very beginning of design to guide the design towards inherently ethical solutions: ethically acceptable products and services are accepted by their users, which adds both business and societal value. Bringing in the ethical perspective very early in the product lifecycle is important, because it makes it possible to arrive at technical solutions and services that support sustainability and are good for society. To embed ethical values into the design and to consider ethical issues during the design process, designers need systematic support for doing so. As a solution, we propose bringing ethics into the practices of human-technology interaction design. This can be done with the help of usage scenarios—stories or descriptions of usage situations in selected usage contexts—in the early phases of concept design. With the help of scenarios, it is possible to operationalize “good” in the design concepts from the point of view of actors, actions and the goals of actions, and thus to systematically assess the ethical value of the design outcomes.
As the design of AIS is not only a multi-technological effort, but also involves social, psychological, economic, political, and legal aspects, and is likely to have profound impacts across all dimensions of society, this deliberation requires a multidisciplinary approach and the involvement of various experts and stakeholders [33] (e.g., in the case of autonomous ships, experts in autonomous technology, shipping companies, passenger representatives, and ethics experts). This iterative process should be carried out using co-design methods, involving users and stakeholders broadly, and includes three steps: (1) Identification of ethical values affected by AIS; (2) identification of context-relevant ethical values; and (3) analysis and understanding of ethical issues within the context. These steps are discussed further in the following sections.
3.1. Identification of Ethical Principles and Values Affected by AIS
Ethical principles and values can serve as an introductory compass when seeking ways to understand ethics in design. They are universal moral rules that hold across cultures, time, and the single acts of people. Principlism is an approach to ethical decision-making that focuses on common-ground moral principles that can be used as rules of thumb in ethical thinking [31
]. Principlism can be derived from and is consistent with a multitude of ethical, theological, and social approaches towards moral decision-making. It introduces the four principles of beneficence, nonmaleficence, autonomy, and justice, which can be seen to stem already from, e.g., Confucius’s ren (compassion or loving others) [35] and Aristotle’s conception of the good life [36
]. These principles form the basis of the ethical education of, e.g., most physicians. They are usually conceived of as mid-level norms, standing between high-level moral theories, such as utilitarianism and deontology, and particular moral judgments [37].
]. The principle of “beneficence
” includes all forms of action intended to benefit or promote the good of other persons [38
]. The principle of “nonmaleficence
” prohibits causing harm to other persons [38
]. “Justice”, when identified with morality, is something that we owe to each other, and at the level of individual ethics, it is contrasted with charity on the one hand, and mercy on the other [39
], and can be seen as the first virtue of social institutions [40
The principle of “autonomy” was introduced by, e.g., Kant and Mill [41], and refers to the right of an individual to make decisions concerning her own life.
However, the four principles, and principlism as such, may not carry us far enough in the discussion of technology ethics, as in technology design the four principles may often run into conflict. One reason for this is that dealing with technology ethics is always contextual, and the impact of technology mostly concerns not only the direct usage situation, but also many different stakeholders who may have conflicting interests [37].
As the context of technology is always situated in a cultural and ecological environment (see e.g., [43]), it is obvious that the values for technology design and assessment should reflect the ethical values and norms of the given community, as well as ecological aspirations. Values are culturally predominant perceptions of the central goals of a good life, a good society and a good world for individuals, society and humankind. They are objectives directing a person’s life, and they guide decision-making [44
]. Besides individual and (multi)cultural values, there are also critical universal values that transcend culture and national borders, such as the fundamental values laid down in the Universal Declaration of Human Rights (UN) [47
], EU Treaties (EU) [48
] and in the EU Charter of Fundamental Rights (2000) [49].
Friedman et al. (2003; 2006) [22
] introduce the following values from the point of view of technology design: Human welfare; ownership and property; freedom from bias; universal usability; accountability; courtesy; identity; calmness; and environmental sustainability. In addition, informed consent is seen as a necessity in the adoption of technology [23
]. It refers to garnering people’s agreement, encompassing criteria of disclosure and comprehension (for “informed”) and voluntariness, competence, and agreement (for “consent”). People have the right to consent to technological intervention (adoption and usage of technology).
3.2. Identification of Context-Specific Ethical Values
Like design issues, issues of context-specific ethical values involve differences in perspectives and in power [50
]. An ethical issue arises when there is a dilemma between two simultaneous values (two ethical values, or an ethical and a practical value, e.g., safety and efficiency). This is why technology ethics calls for a broader view, in which the agents, the goal, and the context of the technology usage are discussed and deliberated, in order to analyze, argue and report the ethical dilemma and its solution. This ethical case deliberation should be carried out in collaboration with relevant stakeholders, designers and ethics experts [51
]. This helps in understanding which ethical principles and values should define the boundaries of the technology. In this way, it is also possible to formulate additional design principles for the context of the technology.
Regarding the ethics of AI, many public, private and civil organizations and expert groups have introduced visions for designing ethical technology and ethical AI. For this study, we carried out i) a literature review and ii) a discussion workshop, as part of the Effective Autonomous Systems research project at VTT Technical Research Centre of Finland Ltd. The participants’ scientific backgrounds include engineering sciences and AI, cognitive science, psychology and the social sciences. They are experts in autonomous technologies, design thinking, ethics, responsible research and innovation, risk assessment, and the societal impacts of technology. In the workshop, the outcomes of the above-mentioned expert groups were systematically examined and elaborated with respect to different contexts of autonomous systems.
In the following, we briefly review the results of the literature review in terms of the ethical principles and values introduced by expert groups with respect to AI.
The Ethically Aligned Design (EAD) global initiative was launched by the IEEE in 2016 and 2017 [4] under the title “A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems”, to unite collective input in the fields of autonomous and intelligent systems (A/IS), ethics, philosophy and policy. In addition, some approaches to designing for ethics and to ethics assessment have been published (e.g., [4]).
The IEEE Global Initiative for Ethical Considerations in Artificial Intelligence and Autonomous Systems
(2016, p. 15) [4] has articulated the following high-level ethical concerns applying to AI/AS:
Embody the highest ideals of human rights.
Prioritize the maximum benefit to humanity and the natural environment.
Mitigate risks and negative impacts as AI/AS evolve as socio-technical systems.
The Global Initiative (2016, pp. 5–6; 2017, p. 34) proposes a three-pronged approach for designers to embed values into AIS:
Identify the norms and values of a specific community affected by AIS.
Implement the norms and values of that community within AIS.
Evaluate the alignment and compatibility of those norms and values between the humans and AIS within that community.
The Asilomar Conference (2017) [54] hosted by the Future of Life Institute (a volunteer-run research and outreach organization that works to mitigate existential risks facing humanity, particularly existential risk from advanced AI), with more than 100 thought leaders and researchers in economics, law, ethics, and philosophy, was a forerunner in addressing and formulating principles of beneficial AI to guide the development of AI. Its outcome was the Asilomar AI Principles, which include safety; failure and judicial transparency; responsibility; value alignment; human values; privacy and liberty; shared benefit and prosperity; human control; non-subversion; and avoiding an arms race.
The European Group on Ethics in Science and New Technologies (EGE) published Statement on Artificial Intelligence, Robotics and Autonomous Systems (2017) [55
], in which the following prerequisites are proposed as important when discussing AI ethics: Human dignity; autonomy; responsibility; justice, equality and solidarity; democracy; rule of law and accountability; security, safety, bodily and mental integrity; data protection and privacy; and sustainability. This list is supplemented by, e.g., Dignum [56], who proposes that AI ethics rest on the three design principles of accountability, responsibility and transparency.
The draft ethics guidelines for Trustworthy AI by the European Commission’s High-Level Expert Group on Artificial Intelligence (AI HLEG) (2018) [53] propose a framework for trustworthy AI, consisting of:
Ethical Purpose: Ensuring respect for fundamental rights, principles and values when developing, deploying and using AI.
Realization of Trustworthy AI: Ensuring implementation of ethical purpose, as well as technical robustness when developing, deploying and using AI.
Requirements for Trustworthy AI: To be continuously evaluated, addressed and assessed in design and use through technical and non-technical methods.
The AI4People project (2018) [3] has studied the EGE principles, as well as other relevant principles, and subsumed them under four overarching principles: beneficence, non-maleficence, autonomy (defined as the self-determination and choice of individuals), and justice (defined as fair and equitable treatment for all), complemented by the enabling principle of explicability.
In addition, several other parties have introduced similar principles and guidelines concerning the ethics of artificial intelligence, including the Association for Computing Machinery (ACM, US), Google, the Information Technology Industry Council (US), UNI Global Union (Switzerland), the World Commission on the Ethics of Scientific Knowledge and Technology (COMEST), the Engineering and Physical Sciences Research Council (EPSRC, UK), the Japanese Society for Artificial Intelligence (JSAI), the University of Montreal, and the European Group on Ethics in Science and New Technologies (EGE).
Based on the literature review, the table below (Table 1
) introduces the ethical values and principles of the most relevant documents in the current European discussion of technology ethics.
Based on the workshop discussion [57], and as a synthesis of the guidelines and values presented above, we propose a modified set of values to be considered as a basis for the ethical and responsible development of AIS (Table 2).
In the case of autonomous ships, the list of values could include: Integrity and human dignity; autonomy; human control; responsibility; justice, equality, fairness and solidarity; transparency; privacy; reliability; security and safety; accountability; explicability; sustainability; and role of technology in society. The generic goals of the system to be designed are discussed and analyzed in the light of each identified ethical value.
3.3. Analysis and Understanding of Ethical Issues within the Context
Ethical issues are analyzed further in order to understand them, solve them, and translate them into design language. The outcome contributes to the design requirements. In the first step of the analysis, the goals and requirements may be rather generic, but as the design becomes more detailed, the requirements become more detailed as well.
Ultimately, how ethical dilemmas are resolved depends on the context [58
]. Ethical issues arise from the use of specific features and services rather than from the inherent characteristics of the technology. The principles and values must thus be discussed on a practical level to inform technology design. To enable ethical reasoning in human-driven technology design, usage scenarios (e.g., Reference [59
]) can be used as “cases” to discuss ethical issues. With the help of scenarios, it is possible to consider: (1) What kind of ethical challenges the deployment of technology in the life of people raises; (2) which ethical principles are appropriate to follow; and (3) what kind of context-specific ethical values and design principles should be embedded in the design outcomes.
Therefore, we propose usage scenarios as a tool to describe the aim of the system, the actors and their expectations, the goals of actors’ actions, the technology and the context. The selected principles are cross-checked against each phase of a scenario and the possible arising ethical issues are discussed and reported at each step. Lucivero (2016) [12
] indicates that socio-technical scenarios are important tools for broadening stakeholder understanding through joint discussions, which enhance reflexivity about one’s own role in shaping the future, as well as awareness of stakeholder interdependence and related unintended consequences. The purpose of the scenario-based discussion is to develop ethical human requirements for the requirements specification and for the iterative design process. The discussion needs to be carried out with all relevant stakeholders and the required expertise. The same systematics can also be utilized for assessment of the end result, or of a design decision. The discussion needs to be documented and the agreements made transparent, so that it is later possible to go back and re-assess possibly relevant changes in the environment.
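As a purely illustrative sketch, the systematic cross-checking of selected ethical values against scenario phases can be thought of as a matrix walk that logs each reported concern. The phases, values, and issue-logging structure below are our own hypothetical examples, not part of any cited guideline; the judgment itself remains with the stakeholders, here stood in for by a callback.

```python
from dataclasses import dataclass, field

@dataclass
class EthicalIssue:
    phase: str   # scenario phase in which the issue arises
    value: str   # ethical value potentially affected
    note: str    # stakeholders' description of the concern

@dataclass
class ScenarioReview:
    phases: list                 # ordered phases of the usage scenario
    values: list                 # context-relevant ethical values
    issues: list = field(default_factory=list)

    def cross_check(self, raises_issue):
        """Walk every (phase, value) pair; `raises_issue` stands in for
        the stakeholders' joint judgment, returning a note or None."""
        for phase in self.phases:
            for value in self.values:
                note = raises_issue(phase, value)
                if note:
                    self.issues.append(EthicalIssue(phase, value, note))
        return self.issues

# Hypothetical ferry-crossing scenario (names are illustrative only).
review = ScenarioReview(
    phases=["boarding", "departure decision", "crossing", "disembarkation"],
    values=["safety", "autonomy", "privacy", "accountability"],
)
issues = review.cross_check(
    lambda phase, value: "Who is liable for a departure in bad weather?"
    if (phase, value) == ("departure decision", "accountability")
    else None
)
```

The resulting issue list corresponds to the documented outcome of the discussion: each entry records where in the scenario a value was challenged and why, so that it can be revisited later.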
It is not easy to perceive how the final technological outcome will work in society, what kind of effects it will have, and how it will promote the good for humanity. Discussion of the normative acceptability of the technology is thus needed. Usage scenarios can be used as a participatory design tool to capture the different usage situations of the system and the people and environment bound to it. Scenarios describe the aim of the system, the actors and their expectations, the goals of actors’ actions, the technology and the context [60
]. Socio-technical scenarios can also be used to broaden stakeholders’ understanding of their own role in shaping the future, as well as their awareness of stakeholder interdependence [12
]. In the second step, the scenarios representing different usage situations of the system are discussed with different stakeholders and ethics experts, and examined phase by phase according to the listed ethical values, in order to define potential ethical issues. In addition, the following questions presented by Lucivero (2016, p. 53) [12] can help in comprehending the possible effects of the system in society:
How likely is it that the expected artifact will promote the expected values?
To what extent are the promised values desirable for society?
How likely is it that technology will instrumentally bring about a desirable consequence?
The outcome of the analysis is a list of potential ethical issues, which need to be further deliberated when defining the design and system’s goals.
Case example: Autonomous short-distance electric passenger ship. An initial usage scenario was developed in a series of workshops to serve here as an example of the scenario work. The scenario is an imaginary example, developed from the passenger perspective, which illustrates what kind of qualitative information a scenario can provide to support the identification of ethical issues and the subsequent requirements specification process. The basic elements of the scenario are the following:
Usage situation: Transport passengers between two pre-defined points across a river as a part of city public transportation; journey time—20 min.
Design goals: (1) Enable a reliable, frequent service during operating hours; (2) reduce the costs of the public transport service and/or enable crossing in a location where a bridge cannot be used; and (3) increase the safety of passengers.
Operational model: Guide passengers on-board using relevant automatic barriers, signage, and voice announcements; close the ramp when all passengers are on board; autonomously plan the route, considering other traffic and obstacles; make departure decision according to environmental conditions and technical systems status; detach from dock; cross the river, avoiding crossing traffic and obstacles; attach to opposite dock; open ramp, allow disembarkation of passengers; batteries are charged when docked; maintenance operations carried out during night when there is no service; remote operator monitors the operation in a Shore Control Center (SCC), with the possibility to intervene if needed.
Stakeholders: Remote operator: In an SCC, with access to data provided by ship sensors. Monitors 3 similar vessels simultaneously; passengers (ticket needed to enter the boarding area), max 100 passengers per crossing; maintenance personnel; crossing boat traffic on the route; bystanders on the shore (not allowed to enter the boarding area); people living/having recreational cottages nearby; ship owner/service provider; shipbuilder, insurance company, classification society, traffic authorities.
Environment: A river within a European city (EU regulations applicable); crossing traffic on the river; varying weather conditions (the river does not freeze, but storms, snow, etc. can be expected).
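Purely as an illustrative reading of the operational model above, the crossing can be sketched as a fixed cycle of steps that the Shore Control Center operator may interrupt at any point. The state names and the intervention hook are our own assumptions for the sketch, not part of the scenario specification.

```python
# Ordered states of one crossing cycle, paraphrased from the operational model.
CYCLE = [
    "guide_passengers_on_board",
    "close_ramp",
    "plan_route",
    "departure_decision",
    "detach_from_dock",
    "cross_river",
    "attach_to_dock",
    "open_ramp_disembark",
]

def run_cycle(operator_intervenes):
    """Execute one crossing; `operator_intervenes(state)` models the SCC
    operator's possibility to stop the operation at any step."""
    completed = []
    for state in CYCLE:
        if operator_intervenes(state):
            return completed, f"aborted at {state}"
        completed.append(state)
    return completed, "crossing completed"

# Normal crossing: the SCC operator never intervenes.
states, outcome = run_cycle(lambda s: False)

# Intervention example: abort if environmental conditions fail at departure.
_, aborted = run_cycle(lambda s: s == "departure_decision")
```

Walking such a cycle step by step is exactly where the ethical cross-check applies: each state can be examined against the listed values (e.g., human control at the departure decision, accountability during remote monitoring).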