Special Issue "Standards and Ethics in AI"

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: 31 August 2022

Special Issue Editors

Dr. Pablo Rivas
Guest Editor
Department of Computer Science, Baylor University, Waco, TX 76706, USA
Interests: AI orthopraxy; AI ethics; AI standards; AI fairness; deep learning
Dr. Gissella Bejarano
Guest Editor
Department of Computer Science, Baylor University, Waco, TX 76706, USA
Interests: AI for good; deep learning; sign language recognition; smart cities; AI policies
Dr. Javier Orduz
Guest Editor
Department of Computer Science, Baylor University, Waco, TX 76706, USA
Interests: AI fairness; quantum machine learning; quantum AI fairness

Special Issue Information

Dear Colleagues,

A wave of artificial intelligence (AI) ethics standards and regulations is being discussed, developed, and released worldwide, and the need for an academic forum on the application of such standards and regulations is evident. The research community needs to keep track of updates to these standards, as well as of published use cases and other practical considerations for applying them.

This Special Issue of the journal AI on “Standards and Ethics in AI” will publish research papers on applied AI ethics, including standards in AI ethics. This scope covers interactions among technology, science, and society in terms of applied AI ethics and standards; the impact of such standards and ethical issues on individuals and society; and the development of novel ethical practices for AI technology. The Special Issue will also provide a forum for the open discussion of issues arising from the application of such standards and practices across different social contexts and communities. More specifically, this Special Issue welcomes submissions on the following topics:

  • AI ethics standards and best practices;
  • Applied AI ethics and case studies;
  • AI fairness, accountability, and transparency;
  • Quantitative metrics of AI ethics and fairness;
  • Review papers on AI ethics standards;
  • Reports on the development of AI ethics standards and best practices.
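As an illustration of the kind of quantitative fairness metric the topics above refer to, the following minimal sketch computes the demographic parity difference, one widely used measure: the gap in positive-prediction rates between two groups. The function name and example data are illustrative, not taken from the call itself.

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rate between groups 0 and 1.

    y_pred: list of 0/1 model predictions
    group:  list of 0/1 group-membership labels (same length)
    """
    rate = {}
    for g in (0, 1):
        # Collect predictions for members of group g and take their mean.
        preds = [p for p, m in zip(y_pred, group) if m == g]
        rate[g] = sum(preds) / len(preds)
    return abs(rate[0] - rate[1])

# Example: group 0 receives positive predictions 3/4 of the time,
# group 1 only 1/4 of the time, so the disparity is 0.5.
gap = demographic_parity_difference(
    y_pred=[1, 1, 1, 0, 0, 1, 0, 0],
    group=[0, 0, 0, 0, 1, 1, 1, 1],
)
print(gap)  # 0.5
```

A value of 0 indicates parity; values near 1 indicate that one group almost always receives positive predictions while the other almost never does.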

Note, however, that manuscripts that are purely philosophical in nature may be discouraged in favor of applied ethics discussions that give readers a clear understanding of the standards, best practices, experiments, quantitative measurements, and case studies involved, so that readers from academia, industry, and government can find actionable insight.

Dr. Pablo Rivas
Dr. Gissella Bejarano
Dr. Javier Orduz
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for the submission of manuscripts are available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1000 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI ethics
  • AI ethics standards
  • AI orthopraxy
  • AI best practices
  • AI fairness

Published Papers (2 papers)


Research


Article
Shifting Perspectives on AI Evaluation: The Increasing Role of Ethics in Cooperation
AI 2022, 3(2), 331-352; https://doi.org/10.3390/ai3020021 - 19 Apr 2022
Abstract
Evaluating AI is a challenging task, as it requires an operative definition of intelligence and the metrics to quantify it, including, among other factors, economic drivers that depend on specific domains. From the viewpoint of basic AI research, the ability to play a game against a human has historically been adopted as a criterion of evaluation, as competition can be characterized by an algorithmic approach. Starting at the end of the 1990s, the deployment of sophisticated hardware enabled a significant improvement in the ability of a machine to play and win popular games. In spite of the spectacular victory of IBM’s Deep Blue over Garry Kasparov, many objections remain, because it is not clear how this result can be applied to solve real-world problems, simulate human abilities such as common sense, or exhibit a form of generalized AI. An evaluation based uniquely on the capacity to play games, even when enriched by the capability of learning complex rules without any human supervision, is bound to be unsatisfactory. As the internet has dramatically changed the cultural habits and social interactions of users, who continuously exchange information with intelligent agents, it is natural to consider cooperation as the next step in AI software evaluation. Although this concept has already been explored in the economics and mathematics literature, its consideration in AI is relatively recent and generally covers cooperation between agents. This paper focuses on more complex problems involving heterogeneity (specifically, cooperation between humans and software agents, or even robots), investigated by taking into account ethical issues that occur during attempts to achieve a common goal shared by both parties, with a possible result of either conflict or stalemate. The contribution of this research consists in identifying the factors (trust, autonomy, and cooperative learning) on which to base ethical guidelines in agent software programming, making cooperation a more suitable benchmark for AI applications.
(This article belongs to the Special Issue Standards and Ethics in AI)

Review


Review
Cybernetic Hive Minds: A Review
AI 2022, 3(2), 465-492; https://doi.org/10.3390/ai3020027 - 16 May 2022
Abstract
Insect swarms and migratory birds are known to exhibit what is variously called a hive mind, collective consciousness, or herd mentality. This has inspired a whole new stream of robotics known as swarm intelligence, in which small robots perform tasks in coordination. The social media and smartphone revolution has helped people work together and organize in their day-to-day jobs and activism. This revolution has also enabled the massive spread of disinformation, amplified during the COVID-19 pandemic by alt-right neo-Nazi cults such as QAnon and their counterparts across the globe, causing increases in the spread of infection and in deaths. This paper presents the case for a theoretical cybernetic hive mind to explain how existing cults like QAnon weaponize groupthink and carry out crimes using social media-based alternate reality games. We also present a framework for how cybernetic hive minds have come into existence and how they might evolve in the future. Finally, we discuss the implications of these hive minds for the future of free will, examine how different malfeasant entities have used these technologies to inflict harm through various forms of cyber-crime, and predict how these crimes may evolve in the future.
(This article belongs to the Special Issue Standards and Ethics in AI)
