Standards and Ethics in AI

A special issue of AI (ISSN 2673-2688).

Deadline for manuscript submissions: closed (31 August 2023) | Viewed by 109620

Special Issue Editors


Guest Editor
Department of Computer Science, Baylor University, Waco, TX 76706, USA
Interests: AI orthopraxy; AI ethics; AI standards; deep learning

Guest Editor
Department of Computer Science, Baylor University, Waco, TX 76706, USA
Interests: AI for good; deep learning; sign language recognition; smart cities; AI policies

Special Issue Information

Dear Colleagues,

A swarm of artificial intelligence (AI) ethics standards and regulations is being discussed, developed, and released worldwide, and the need for an academic forum on the application of such standards and regulations is evident. The research community needs to keep track of updates to these standards and to publish use cases and other practical considerations for applying them.

This Special Issue of the journal AI on “Standards and Ethics in AI” will publish research papers on applied AI ethics, including standards in AI ethics. This encompasses interactions among technology, science, and society in terms of applied AI ethics and standards; the impact of such standards and ethical issues on individuals and society; and the development of novel ethical practices for AI technology. The Special Issue will also provide a forum for open discussion of the issues that arise when such standards and practices are applied across different social contexts and communities. More specifically, this Special Issue welcomes submissions on the following topics:

  • AI ethics standards and best practices;
  • Applied AI ethics and case studies;
  • AI fairness, accountability, and transparency;
  • Quantitative metrics of AI ethics and fairness;
  • Review papers on AI ethics standards;
  • Reports on the development of AI ethics standards and best practices.

Note, however, that manuscripts that are purely philosophical in nature may be discouraged in favor of applied ethics discussions that give readers a clear understanding of standards, best practices, experiments, quantitative measurements, and case studies, and that lead readers from academia, industry, and government to actionable insight.

Dr. Pablo Rivas
Dr. Gissella Bejarano
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. AI is an international peer-reviewed open access quarterly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI ethics
  • AI ethics standards
  • AI orthopraxy
  • AI best practices
  • AI fairness

Published Papers (15 papers)


Research


8 pages, 215 KiB  
Communication
Ethical Considerations for Artificial Intelligence Applications for HIV
by Renee Garett, Seungjun Kim and Sean D. Young
AI 2024, 5(2), 594-601; https://doi.org/10.3390/ai5020031 - 7 May 2024
Viewed by 535
Abstract
Human Immunodeficiency Virus (HIV) is a stigmatizing disease that disproportionately affects African Americans and Latinos among people living with HIV (PLWH). Researchers are increasingly utilizing artificial intelligence (AI) to analyze large amounts of data such as social media data and electronic health records (EHR) for various HIV-related tasks, from prevention and surveillance to treatment and counseling. This paper explores the ethical considerations surrounding the use of AI for HIV with a focus on acceptability, trust, fairness, and transparency. To improve acceptability and trust towards AI systems for HIV, informed consent and a Federated Learning (FL) approach are suggested. In regard to unfairness, stakeholders should be wary of AI systems for HIV further stigmatizing or even being used as grounds to criminalize PLWH. To prevent criminalization, in particular, the application of differential privacy on HIV data generated by data linkage should be studied. Participatory design is crucial in designing the AI systems for HIV to be more transparent and inclusive. To this end, the formation of a data ethics committee and the construction of relevant frameworks and principles may need to be concurrently implemented. Lastly, the question of whether the amount of transparency beyond a certain threshold may overwhelm patients, thereby unexpectedly triggering negative consequences, is posed. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
12 pages, 814 KiB  
Article
Towards an ELSA Curriculum for Data Scientists
by Maria Christoforaki and Oya Deniz Beyan
AI 2024, 5(2), 504-515; https://doi.org/10.3390/ai5020025 - 11 Apr 2024
Viewed by 661
Abstract
The use of artificial intelligence (AI) applications in a growing number of domains in recent years has put into focus the ethical, legal, and societal aspects (ELSA) of these technologies and the relevant challenges they pose. In this paper, we propose an ELSA curriculum for data scientists aiming to raise awareness about ELSA challenges in their work, provide them with a common language with the relevant domain experts in order to cooperate to find appropriate solutions, and finally, incorporate ELSA in the data science workflow. ELSA should not be seen as an impediment or a superfluous artefact but rather as an integral part of the Data Science Project Lifecycle. The proposed curriculum uses the CRISP-DM (CRoss-Industry Standard Process for Data Mining) model as a backbone to define a vertical partition expressed in modules corresponding to the CRISP-DM phases. The horizontal partition includes knowledge units belonging to three strands that run through the phases, namely ethical and societal, legal and technical rendering knowledge units (KUs). In addition to the detailed description of the aforementioned KUs, we also discuss their implementation, issues such as duration, form, and evaluation of participants, as well as the variance of the knowledge level and needs of the target audience. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

22 pages, 1593 KiB  
Article
From Trustworthy Principles to a Trustworthy Development Process: The Need and Elements of Trusted Development of AI Systems
by Ellen Hohma and Christoph Lütge
AI 2023, 4(4), 904-925; https://doi.org/10.3390/ai4040046 - 13 Oct 2023
Viewed by 1931
Abstract
The current endeavor of moving AI ethics from theory to practice can frequently be observed in academia and industry and indicates a major achievement in the theoretical understanding of responsible AI. Its practical application, however, currently poses challenges, as mechanisms for translating the proposed principles into easily feasible actions are often considered unclear and not ready for practice. In particular, a lack of uniform, standardized approaches that are aligned with regulatory provisions is often highlighted by practitioners as a major drawback to the practical realization of AI governance. To address these challenges, we propose a stronger shift in focus from solely the trustworthiness of AI products to the perceived trustworthiness of the development process by introducing a concept for a trustworthy development process for AI systems. We derive this process from a semi-systematic literature analysis of common AI governance documents to identify the most prominent measures for operationalizing responsible AI and compare them to implications for AI providers from EU-centered regulatory frameworks. Assessing the resulting process along derived characteristics of trustworthy processes shows that, while clarity is often mentioned as a major drawback, and many AI providers tend to wait for finalized regulations before reacting, the summarized landscape of proposed AI governance mechanisms can already cover many of the binding and non-binding demands circulating similar activities to address fundamental risks. Furthermore, while many factors of procedural trustworthiness are already fulfilled, limitations are seen particularly due to the vagueness of currently proposed measures, calling for a detailing of measures based on use cases and the system’s context. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

31 pages, 1261 KiB  
Article
Anthropocentrism and Environmental Wellbeing in AI Ethics Standards: A Scoping Review and Discussion
by Eryn Rigley, Adriane Chapman, Christine Evers and Will McNeill
AI 2023, 4(4), 844-874; https://doi.org/10.3390/ai4040043 - 8 Oct 2023
Cited by 1 | Viewed by 2599
Abstract
As AI deployment has broadened, so too has an awareness of the ethical implications and problems that may ensue from this deployment. In response, groups across multiple domains have issued AI ethics standards that rely on vague, high-level principles to find consensus. One such high-level principle that is common across the AI landscape is ‘human-centredness’, though oftentimes it is applied without due investigation into its merits and limitations and without a clear, common definition. This paper undertakes a scoping review of AI ethics standards to examine the commitment to ‘human-centredness’ and how this commitment interacts with other ethical concerns, namely, concerns for nonhuman animals and environmental wellbeing. We found that human-centred AI ethics standards tend to prioritise humans over nonhumans more so than nonhuman-centred standards. A critical analysis of our findings suggests that a commitment to human-centredness within AI ethics standards accords with the definition of anthropocentrism in moral philosophy: that humans have, at least, more intrinsic moral value than nonhumans. We consider some of the limitations of anthropocentric AI ethics, which include permitting harm to the environment and animals and undermining the stability of ecosystems. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

11 pages, 535 KiB  
Communication
AI and We in the Future in the Light of the Ouroboros Model: A Plea for Plurality
by Knud Thomsen
AI 2022, 3(4), 778-788; https://doi.org/10.3390/ai3040046 - 22 Sep 2022
Cited by 1 | Viewed by 2092
Abstract
Artificial Intelligence (AI) is set to play an ever more important role in our lives and societies. Here, some boundary conditions and possibilities for shaping and using AI as well as advantageously embedding it in daily life are sketched. On the basis of a recently proposed cognitive architecture that claims to deliver a general layout for both natural intelligence and general AI, a coarse but broad perspective is developed and an emphasis is put on AI ethics. A number of findings, requirements, and recommendations are derived that can transparently be traced to the hypothesized structure and the procedural operation of efficient cognitive agents according to the Ouroboros Model. Including all of the available and possibly relevant information for any action and respecting a “negative imperative” are the most important resulting recommendations. Self-consistency, continual monitoring, equitable considerations, accountability, flexibility, and pragmatic adaptations are highlighted as foundations and, at the same time, mandatory consequences for timely answers to the most relevant questions concerning the embedding of AI in society and ethical rules for this. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

14 pages, 293 KiB  
Communication
Bridging East-West Differences in Ethics Guidance for AI and Robotics
by Nancy S. Jecker and Eisuke Nakazawa
AI 2022, 3(3), 764-777; https://doi.org/10.3390/ai3030045 - 14 Sep 2022
Cited by 11 | Viewed by 4132
Abstract
Societies of the East are often contrasted with those of the West in their stances toward technology. This paper explores these perceived differences in the context of international ethics guidance for artificial intelligence (AI) and robotics. Japan serves as an example of the East, while Europe and North America serve as examples of the West. The paper’s principal aim is to demonstrate that Western values predominate in international ethics guidance and that Japanese values serve as a much-needed corrective. We recommend a hybrid approach that is more inclusive and truly ‘international’. Following an introduction, the paper examines distinct stances toward robots that emerged in the West and Japan, respectively, during the aftermath of the Second World War, reflecting history and popular culture, socio-economic conditions, and religious worldviews. It shows how international ethics guidelines reflect these disparate stances, drawing on a 2019 scoping review that examined 84 international AI ethics documents. These documents are heavily skewed toward precautionary values associated with the West and cite the optimistic values associated with Japan less frequently. Drawing insights from Japan’s so-called ‘moonshot goals’, the paper fleshes out Japanese values in greater detail and shows how to incorporate them more effectively in international ethics guidelines for AI and robotics. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
5 pages, 194 KiB  
Communication
Does the Use of AI to Create Academic Research Papers Undermine Researcher Originality?
by Eisuke Nakazawa, Makoto Udagawa and Akira Akabayashi
AI 2022, 3(3), 702-706; https://doi.org/10.3390/ai3030040 - 18 Aug 2022
Cited by 8 | Viewed by 9880
Abstract
Manuscript writing support services using AI technology have become increasingly available in recent years. In keeping with this trend, we need to sort out issues related to authorship in academic writing. Authorship is attached to the contribution of researchers who report innovative research, the originality of which forms the core of their identity. The most important originality is demonstrated in the discussion of study findings. In the discussion section of this paper, we argue that if a researcher uses AI-based manuscript writing support to draft the discussion section, this does not necessarily diminish the researcher’s originality. Rather, AI support may allow the researcher to perform creative work in a more refined fashion. Presumably, selecting which AI support to use or evaluating and properly adjusting AI would still remain an important aspect of research for researchers. It is thus reasonable to view a researcher as a cooperative existence realized through a network of cooperative work that includes the use of AI. Discussions on this topic will be scientifically and socially important as AI technology advances in the future. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
22 pages, 780 KiB  
Article
Shifting Perspectives on AI Evaluation: The Increasing Role of Ethics in Cooperation
by Enrico Barbierato and Maria Enrica Zamponi
AI 2022, 3(2), 331-352; https://doi.org/10.3390/ai3020021 - 19 Apr 2022
Cited by 1 | Viewed by 5631
Abstract
Evaluating AI is a challenging task, as it requires an operative definition of intelligence and the metrics to quantify it, including amongst other factors economic drivers, depending on specific domains. From the viewpoint of AI basic research, the ability to play a game against a human has historically been adopted as a criterion of evaluation, as competition can be characterized by an algorithmic approach. Starting from the end of the 1990s, the deployment of sophisticated hardware identified a significant improvement in the ability of a machine to play and win popular games. In spite of the spectacular victory of IBM’s Deep Blue over Garry Kasparov, many objections still remain. This is due to the fact that it is not clear how this result can be applied to solve real-world problems or simulate human abilities, e.g., common sense, and also exhibit a form of generalized AI. An evaluation based uniquely on the capacity of playing games, even when enriched by the capability of learning complex rules without any human supervision, is bound to be unsatisfactory. As the internet has dramatically changed the cultural habits and social interaction of users, who continuously exchange information with intelligent agents, it is quite natural to consider cooperation as the next step in AI software evaluation. Although this concept has already been explored in the scientific literature in the fields of economics and mathematics, its consideration in AI is relatively recent and generally covers the study of cooperation between agents. This paper focuses on more complex problems involving heterogeneity (specifically, the cooperation between humans and software agents, or even robots), which are investigated by taking into account ethical issues occurring during attempts to achieve a common goal shared by both parties, with a possible result of either conflict or stalemate. 
The contribution of this research consists in identifying those factors (trust, autonomy, and cooperative learning) on which to base ethical guidelines in agent software programming, making cooperation a more suitable benchmark for AI applications. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

Review


13 pages, 288 KiB  
Review
Ethics and Transparency Issues in Digital Platforms: An Overview
by Leilasadat Mirghaderi, Monika Sziron and Elisabeth Hildt
AI 2023, 4(4), 831-843; https://doi.org/10.3390/ai4040042 - 28 Sep 2023
Cited by 1 | Viewed by 3907
Abstract
There is an ever-increasing application of digital platforms that utilize artificial intelligence (AI) in our daily lives. In this context, the matters of transparency and accountability remain major concerns that are yet to be effectively addressed. The aim of this paper is to identify the zones of non-transparency in the context of digital platforms and provide recommendations for improving transparency issues on digital platforms. First, by surveying the literature and reflecting on the concept of platformization, choosing an AI definition that can be adopted by different stakeholders, and utilizing AI ethics, we will identify zones of non-transparency in the context of digital platforms. Second, after identifying the zones of non-transparency, we go beyond a mere summary of existing literature and provide our perspective on how to address the raised concerns. Based on our survey of the literature, we find that three major zones of non-transparency exist in digital platforms. These include a lack of transparency with regard to who contributes to platforms; lack of transparency with regard to who is working behind platforms, the contributions of those workers, and the working conditions of digital workers; and lack of transparency with regard to how algorithms are developed and governed. Considering the abundance of high-level principles in the literature that cannot be easily operationalized, this is an attempt to bridge the gap between principles and operationalization. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
26 pages, 849 KiB  
Review
Ethics & AI: A Systematic Review on Ethical Concerns and Related Strategies for Designing with AI in Healthcare
by Fan Li, Nick Ruijs and Yuan Lu
AI 2023, 4(1), 28-53; https://doi.org/10.3390/ai4010003 - 31 Dec 2022
Cited by 16 | Viewed by 23134
Abstract
In modern life, the application of artificial intelligence (AI) has promoted the implementation of data-driven algorithms in high-stakes domains, such as healthcare. However, it is becoming increasingly challenging for humans to understand the working and reasoning of these complex and opaque algorithms. For AI to support essential decisions in these domains, specific ethical issues need to be addressed to prevent the misinterpretation of AI, which may have severe consequences for humans. However, little research has been published on guidelines that systematically addresses ethical issues when AI techniques are applied in healthcare. In this systematic literature review, we aimed to provide an overview of ethical concerns and related strategies that are currently identified when applying AI in healthcare. The review, which followed the PRISMA guidelines, revealed 12 main ethical issues: justice and fairness, freedom and autonomy, privacy, transparency, patient safety and cyber security, trust, beneficence, responsibility, solidarity, sustainability, dignity, and conflicts. In addition to these 12 main ethical issues, we derived 19 ethical sub-issues and associated strategies from the literature. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

28 pages, 1928 KiB  
Review
Cybernetic Hive Minds: A Review
by Anirban Chowdhury and Rithvik Ramadas
AI 2022, 3(2), 465-492; https://doi.org/10.3390/ai3020027 - 16 May 2022
Cited by 1 | Viewed by 18400
Abstract
Insect swarms and migratory birds are known to exhibit something known as a hive mind, collective consciousness, and herd mentality, among others. This has inspired a whole new stream of robotics known as swarm intelligence, where small-sized robots perform tasks in coordination. The social media and smartphone revolution has helped people collectively work together and organize in their day-to-day jobs or activism. This revolution has also led to the massive spread of disinformation, amplified during the COVID-19 pandemic by alt-right neo-Nazi cults like QAnon and their counterparts from across the globe, causing increases in the spread of infection and deaths. This paper presents the case for a theoretical cybernetic hive mind to explain how existing cults like QAnon weaponize groupthink and carry out crimes using social media-based alternate reality games. We also showcase a framework on how cybernetic hive minds have come into existence and how the hive mind might evolve in the future. We also discuss the implications of these hive minds for the future of free will and how different malfeasant entities have utilized these technologies to cause problems and inflict harm by various forms of cyber-crimes and predict how these crimes can evolve in the future. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

Other


13 pages, 235 KiB  
Essay
AI and Regulations
by Paul Dumouchel
AI 2023, 4(4), 1023-1035; https://doi.org/10.3390/ai4040052 - 29 Nov 2023
Viewed by 1836
Abstract
This essay argues that the popular misrepresentation of the nature of AI has important consequences concerning how we view the need for regulations. Considering AI as something that exists in itself, rather than as a set of cognitive technologies whose characteristics—physical, cognitive, and systemic—are quite different from ours (and that, at times, differ widely among the technologies) leads to inefficient approaches to regulation. This paper aims at helping the practitioners of responsible AI to address the way in which the technical aspects of the tools they are developing and promoting directly have important social and political consequences. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
16 pages, 614 KiB  
Concept Paper
Algorithms for All: Can AI in the Mortgage Market Expand Access to Homeownership?
by Vanessa G. Perry, Kirsten Martin and Ann Schnare
AI 2023, 4(4), 888-903; https://doi.org/10.3390/ai4040045 - 11 Oct 2023
Cited by 1 | Viewed by 3372
Abstract
Artificial intelligence (AI) is transforming the mortgage market at every stage of the value chain. In this paper, we examine the potential for the mortgage industry to leverage AI to overcome the historical and systemic barriers to homeownership for members of Black, Brown, and lower-income communities. We begin by proposing societal, ethical, legal, and practical criteria that should be considered in the development and implementation of AI models. Based on this framework, we discuss the applications of AI that are transforming the mortgage market, including digital marketing, the inclusion of non-traditional “big data” in credit scoring algorithms, AI property valuation, and loan underwriting models. We conclude that although the current AI models may reflect the same biases that have existed historically in the mortgage market, opportunities exist for proactive, responsible AI model development designed to remove the systemic barriers to mortgage credit access. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

10 pages, 340 KiB  
Commentary
Marketing with ChatGPT: Navigating the Ethical Terrain of GPT-Based Chatbot Technology
by Pablo Rivas and Liang Zhao
AI 2023, 4(2), 375-384; https://doi.org/10.3390/ai4020019 - 10 Apr 2023
Cited by 49 | Viewed by 18895
Abstract
ChatGPT is an AI-powered chatbot platform that enables human users to converse with machines. It utilizes natural language processing and machine learning algorithms, transforming how people interact with AI technology. ChatGPT offers significant advantages over previous similar tools, and its potential for application in various fields has generated attention and anticipation. However, some experts are wary of ChatGPT, citing ethical implications. Therefore, this paper shows that ChatGPT has significant potential to transform marketing and shape its future if certain ethical considerations are taken into account. First, we argue that ChatGPT-based tools can help marketers create content faster and potentially with quality similar to human content creators. It can also assist marketers in conducting more efficient research and understanding customers better, automating customer service, and improving efficiency. Then we discuss ethical implications and potential risks for marketers, consumers, and other stakeholders, that are essential for ChatGPT-based marketing; doing so can help revolutionize marketing while avoiding potential harm to stakeholders. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)

14 pages, 1072 KiB  
Commentary
An Ethical Framework for Artificial Intelligence and Sustainable Cities
by David Pastor-Escuredo, Philip Treleaven and Ricardo Vinuesa
AI 2022, 3(4), 961-974; https://doi.org/10.3390/ai3040057 - 25 Nov 2022
Cited by 6 | Viewed by 6696
Abstract
The digital revolution has brought ethical crossroads of technology and behavior, especially in the realm of sustainable cities. The need for a comprehensive and constructive ethical framework is emerging as digital platforms encounter trouble to articulate the transformations required to accomplish the sustainable development goal (SDG) 11 (on sustainable cities), and the remainder of the related SDGs. The unequal structure of the global system leads to dynamic and systemic problems, which have a more significant impact on those that are most vulnerable. Ethical frameworks based only on the individual level are no longer sufficient as they lack the necessary articulation to provide solutions to the new systemic challenges. A new ethical vision of digitalization must comprise the understanding of the scales and complex interconnections among SDGs and the ongoing socioeconomic and industrial revolutions. Many of the current social systems are internally fragile and very sensitive to external factors and threats, which lead to unethical situations. Furthermore, the multilayered net-like social tissue generates clusters of influence and leadership that prevent communities from a proper development. Digital technology has also had an impact at the individual level, posing several risks including a more homogeneous and predictable humankind. To preserve the core of humanity, we propose an ethical framework to empower individuals centered on the cities and interconnected with the socioeconomic ecosystem and the environment through the complex relationships of the SDGs. Only by combining human-centered and collectiveness-oriented digital development will it be possible to construct new social models and interactions that are ethical. Thus, it is necessary to combine ethical principles with the digital innovation undergoing in all the dimensions of sustainability. Full article
(This article belongs to the Special Issue Standards and Ethics in AI)
