Special Issue "Standards and Ethics in AI"

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 31 August 2023 | Viewed by 35769

Special Issue Editors

Department of Computer Science, Baylor University, Waco, TX 76706, USA
Interests: AI orthopraxy; AI ethics; AI standards; deep learning
Department of Computer Science, Baylor University, Waco, TX 76706, USA
Interests: AI for good; deep learning; sign language recognition; smart cities; AI policies

Special Issue Information

Dear Colleagues,

A wave of artificial intelligence (AI) ethics standards and regulations is being discussed, developed, and released worldwide. The need for an academic forum in which to discuss the application of such standards and regulations is evident. The research community needs to keep track of updates to these standards, as well as of published use cases and other practical considerations.

This Special Issue of the journal AI on “Standards and Ethics in AI” will publish research papers on applied AI ethics, including standards in AI ethics. This encompasses interactions among technology, science, and society in terms of applied AI ethics and standards; the impact of such standards and ethical issues on individuals and society; and the development of novel ethical practices for AI technology. The Special Issue will also provide a forum for open discussion of the issues that arise when such standards and practices are applied across different social contexts and communities. More specifically, this Special Issue welcomes submissions on the following topics:

  • AI ethics standards and best practices;
  • Applied AI ethics and case studies;
  • AI fairness, accountability, and transparency;
  • Quantitative metrics of AI ethics and fairness;
  • Review papers on AI ethics standards;
  • Reports on the development of AI ethics standards and best practices.
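To illustrate the “quantitative metrics of AI ethics and fairness” topic above, the following is a minimal sketch of one widely used group-fairness metric, the demographic parity difference. The data and the function name are hypothetical and purely illustrative; submissions in this area would typically report such metrics over real model predictions.

```python
# Minimal sketch of a common group-fairness metric.
# All data below are hypothetical; the function name is illustrative.

def demographic_parity_difference(y_pred, groups):
    """Absolute difference in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        preds = [p for p, gr in zip(y_pred, groups) if gr == g]
        rates[g] = sum(preds) / len(preds)
    vals = list(rates.values())
    return abs(vals[0] - vals[1])

# Hypothetical binary predictions (1 = favorable outcome) for groups "A" and "B".
y_pred = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

# Group A receives the favorable outcome at rate 3/4, group B at rate 1/4,
# so the demographic parity difference is 0.5.
print(demographic_parity_difference(y_pred, groups))  # → 0.5
```

A value of 0 indicates that both groups receive favorable predictions at equal rates; larger values indicate greater disparity.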

Note, however, that purely philosophical manuscripts are discouraged in favor of applied-ethics discussions that give readers a clear understanding of the relevant standards, best practices, experiments, quantitative measurements, and case studies, offering actionable insight to readers from academia, industry, and government.

Dr. Pablo Rivas
Dr. Gissella Bejarano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI ethics
  • AI ethics standards
  • AI orthopraxy
  • AI best practices
  • AI fairness

Published Papers (8 papers)


Research


Communication
AI and We in the Future in the Light of the Ouroboros Model: A Plea for Plurality
AI 2022, 3(4), 778-788; https://doi.org/10.3390/ai3040046 - 22 Sep 2022
Viewed by 1295
Abstract
Artificial Intelligence (AI) is set to play an ever more important role in our lives and societies. Here, some boundary conditions and possibilities for shaping and using AI as well as advantageously embedding it in daily life are sketched. On the basis of a recently proposed cognitive architecture that claims to deliver a general layout for both natural intelligence and general AI, a coarse but broad perspective is developed and an emphasis is put on AI ethics. A number of findings, requirements, and recommendations are derived that can transparently be traced to the hypothesized structure and the procedural operation of efficient cognitive agents according to the Ouroboros Model. Including all of the available and possibly relevant information for any action and respecting a “negative imperative” are the most important resulting recommendations. Self-consistency, continual monitoring, equitable considerations, accountability, flexibility, and pragmatic adaptations are highlighted as foundations and, at the same time, mandatory consequences for timely answers to the most relevant questions concerning the embedding of AI in society and ethical rules for this.
(This article belongs to the Special Issue Standards and Ethics in AI)

Communication
Bridging East-West Differences in Ethics Guidance for AI and Robotics
AI 2022, 3(3), 764-777; https://doi.org/10.3390/ai3030045 - 14 Sep 2022
Cited by 2 | Viewed by 1802
Abstract
Societies of the East are often contrasted with those of the West in their stances toward technology. This paper explores these perceived differences in the context of international ethics guidance for artificial intelligence (AI) and robotics. Japan serves as an example of the East, while Europe and North America serve as examples of the West. The paper’s principal aim is to demonstrate that Western values predominate in international ethics guidance and that Japanese values serve as a much-needed corrective. We recommend a hybrid approach that is more inclusive and truly ‘international’. Following an introduction, the paper examines distinct stances toward robots that emerged in the West and Japan, respectively, during the aftermath of the Second World War, reflecting history and popular culture, socio-economic conditions, and religious worldviews. It shows how international ethics guidelines reflect these disparate stances, drawing on a 2019 scoping review that examined 84 international AI ethics documents. These documents are heavily skewed toward precautionary values associated with the West and cite the optimistic values associated with Japan less frequently. Drawing insights from Japan’s so-called ‘moonshot goals’, the paper fleshes out Japanese values in greater detail and shows how to incorporate them more effectively in international ethics guidelines for AI and robotics.
Communication
Does the Use of AI to Create Academic Research Papers Undermine Researcher Originality?
AI 2022, 3(3), 702-706; https://doi.org/10.3390/ai3030040 - 18 Aug 2022
Cited by 2 | Viewed by 6164
Abstract
Manuscript writing support services using AI technology have become increasingly available in recent years. In keeping with this trend, we need to sort out issues related to authorship in academic writing. Authorship is attached to the contribution of researchers who report innovative research, the originality of which forms the core of their identity. The most important originality is demonstrated in the discussion of study findings. In the discussion section of this paper, we argue that if a researcher uses AI-based manuscript writing support to draft the discussion section, this does not necessarily diminish the researcher’s originality. Rather, AI support may allow the researcher to perform creative work in a more refined fashion. Presumably, selecting which AI support to use or evaluating and properly adjusting AI would still remain an important aspect of research for researchers. It is thus reasonable to view a researcher as a cooperative existence realized through a network of cooperative work that includes the use of AI. Discussions on this topic will be scientifically and socially important as AI technology advances in the future.
Article
Shifting Perspectives on AI Evaluation: The Increasing Role of Ethics in Cooperation
AI 2022, 3(2), 331-352; https://doi.org/10.3390/ai3020021 - 19 Apr 2022
Cited by 1 | Viewed by 3955
Abstract
Evaluating AI is a challenging task, as it requires an operative definition of intelligence and the metrics to quantify it, including amongst other factors economic drivers, depending on specific domains. From the viewpoint of AI basic research, the ability to play a game against a human has historically been adopted as a criterion of evaluation, as competition can be characterized by an algorithmic approach. Starting from the end of the 1990s, the deployment of sophisticated hardware identified a significant improvement in the ability of a machine to play and win popular games. In spite of the spectacular victory of IBM’s Deep Blue over Garry Kasparov, many objections still remain. This is due to the fact that it is not clear how this result can be applied to solve real-world problems or simulate human abilities, e.g., common sense, and also exhibit a form of generalized AI. An evaluation based uniquely on the capacity of playing games, even when enriched by the capability of learning complex rules without any human supervision, is bound to be unsatisfactory. As the internet has dramatically changed the cultural habits and social interaction of users, who continuously exchange information with intelligent agents, it is quite natural to consider cooperation as the next step in AI software evaluation. Although this concept has already been explored in the scientific literature in the fields of economics and mathematics, its consideration in AI is relatively recent and generally covers the study of cooperation between agents. This paper focuses on more complex problems involving heterogeneity (specifically, the cooperation between humans and software agents, or even robots), which are investigated by taking into account ethical issues occurring during attempts to achieve a common goal shared by both parties, with a possible result of either conflict or stalemate. The contribution of this research consists in identifying those factors (trust, autonomy, and cooperative learning) on which to base ethical guidelines in agent software programming, making cooperation a more suitable benchmark for AI applications.

Review


Review
Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare
AI 2023, 4(1), 28-53; https://doi.org/10.3390/ai4010003 - 31 Dec 2022
Cited by 1 | Viewed by 4695
Abstract
In modern life, the application of artificial intelligence (AI) has promoted the implementation of data-driven algorithms in high-stakes domains, such as healthcare. However, it is becoming increasingly challenging for humans to understand the working and reasoning of these complex and opaque algorithms. For AI to support essential decisions in these domains, specific ethical issues need to be addressed to prevent the misinterpretation of AI, which may have severe consequences for humans. However, little research has been published on guidelines that systematically addresses ethical issues when AI techniques are applied in healthcare. In this systematic literature review, we aimed to provide an overview of ethical concerns and related strategies that are currently identified when applying AI in healthcare. The review, which followed the PRISMA guidelines, revealed 12 main ethical issues: justice and fairness, freedom and autonomy, privacy, transparency, patient safety and cyber security, trust, beneficence, responsibility, solidarity, sustainability, dignity, and conflicts. In addition to these 12 main ethical issues, we derived 19 ethical sub-issues and associated strategies from the literature.

Review
Cybernetic Hive Minds: A Review
AI 2022, 3(2), 465-492; https://doi.org/10.3390/ai3020027 - 16 May 2022
Viewed by 8403
Abstract
Insect swarms and migratory birds are known to exhibit something known as a hive mind, collective consciousness, and herd mentality, among others. This has inspired a whole new stream of robotics known as swarm intelligence, where small-sized robots perform tasks in coordination. The social media and smartphone revolution have helped people collectively work together and organize in their day-to-day jobs or activism. This revolution has also led to the massive spread of disinformation amplified during the COVID-19 pandemic by alt-right Neo Nazi Cults like QAnon and their counterparts from across the globe, causing increases in the spread of infection and deaths. This paper presents the case for a theoretical cybernetic hive mind to explain how existing cults like QAnon weaponize group think and carry out crimes using social media-based alternate reality games. We also showcase a framework on how cybernetic hive minds have come into existence and how the hive mind might evolve in the future. We also discuss the implications of these hive minds for the future of free will and how different malfeasant entities have utilized these technologies to cause problems and inflict harm by various forms of cyber-crimes and predict how these crimes can evolve in the future.

Other


Commentary
Marketing with ChatGPT: Navigating the Ethical Terrain of GPT-Based Chatbot Technology
AI 2023, 4(2), 375-384; https://doi.org/10.3390/ai4020019 - 10 Apr 2023
Cited by 2 | Viewed by 4242
Abstract
ChatGPT is an AI-powered chatbot platform that enables human users to converse with machines. It utilizes natural language processing and machine learning algorithms, transforming how people interact with AI technology. ChatGPT offers significant advantages over previous similar tools, and its potential for application in various fields has generated attention and anticipation. However, some experts are wary of ChatGPT, citing ethical implications. Therefore, this paper shows that ChatGPT has significant potential to transform marketing and shape its future if certain ethical considerations are taken into account. First, we argue that ChatGPT-based tools can help marketers create content faster and potentially with quality similar to human content creators. It can also assist marketers in conducting more efficient research and understanding customers better, automating customer service, and improving efficiency. Then we discuss ethical implications and potential risks for marketers, consumers, and other stakeholders, that are essential for ChatGPT-based marketing; doing so can help revolutionize marketing while avoiding potential harm to stakeholders.

Commentary
An Ethical Framework for Artificial Intelligence and Sustainable Cities
AI 2022, 3(4), 961-974; https://doi.org/10.3390/ai3040057 - 25 Nov 2022
Viewed by 2357
Abstract
The digital revolution has brought ethical crossroads of technology and behavior, especially in the realm of sustainable cities. The need for a comprehensive and constructive ethical framework is emerging as digital platforms encounter trouble to articulate the transformations required to accomplish the sustainable development goal (SDG) 11 (on sustainable cities), and the remainder of the related SDGs. The unequal structure of the global system leads to dynamic and systemic problems, which have a more significant impact on those that are most vulnerable. Ethical frameworks based only on the individual level are no longer sufficient as they lack the necessary articulation to provide solutions to the new systemic challenges. A new ethical vision of digitalization must comprise the understanding of the scales and complex interconnections among SDGs and the ongoing socioeconomic and industrial revolutions. Many of the current social systems are internally fragile and very sensitive to external factors and threats, which lead to unethical situations. Furthermore, the multilayered net-like social tissue generates clusters of influence and leadership that prevent communities from a proper development. Digital technology has also had an impact at the individual level, posing several risks including a more homogeneous and predictable humankind. To preserve the core of humanity, we propose an ethical framework to empower individuals centered on the cities and interconnected with the socioeconomic ecosystem and the environment through the complex relationships of the SDGs. Only by combining human-centered and collectiveness-oriented digital development will it be possible to construct new social models and interactions that are ethical. Thus, it is necessary to combine ethical principles with the digital innovation undergoing in all the dimensions of sustainability.
