Do (AI) Chatbots Pose any Special Challenges for Trust and Privacy?

A special issue of Information (ISSN 2078-2489). This special issue belongs to the section "Artificial Intelligence".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 8980

Special Issue Editors


Prof. Dr. Herman Tavani
Guest Editor
Philosophy Department, Rivier University, 420 South Main Street, Nashua, NH 03060, USA
Interests: information and computer ethics; AI ethics; privacy; data (science) ethics; public health ethics; ethical aspects of emerging technologies

Dr. Jeffrey Buechner
Guest Editor
Department of Philosophy, Rutgers University, Newark, NJ 07102, USA
Interests: information and computer ethics; machine ethics; privacy; ethical aspects of bioinformatics; computational genomics; emerging technologies

Special Issue Information

Dear Colleagues,

We invite you to submit a paper for consideration in our Special Issue, titled “Do (AI) Chatbots Pose any Special Challenges for Trust and Privacy?”. This Special Issue of Information examines a wide range of trust- and privacy-related questions generated by the relatively recent deployment of AI chatbots in general, and OpenAI’s ChatGPT in particular.

This Special Issue also builds on a cluster of trust and privacy concerns examined in an earlier (2011) Special Issue, “Privacy and Trust in a Networked World”, published in Vol. 1 of Information. Since that publication, some of those concerns have been significantly exacerbated by the impact of (AI) chatbots. These concerns are reflected in the following questions:

  1. Are current philosophical and legal theories of privacy adequate in an era of AI chatbots?
  2. Do current privacy regulations, including the EU’s GDPR, need updating and expanding to meet challenges posed by AI chatbots?
  3. How does the kind of disinformation created by chatbots diminish either one's privacy on the Internet or one's trust in Internet transactions?
  4. How high should the bar be set for whistleblowing concerning cases where chatbots violate canons of either privacy or trust?
  5. Do we need to expand, or possibly redefine, our conventional concepts of trust and trustworthiness in the chatbot era?
  6. To what extent can we trust chatbots to act in our best interests?
  7. Can we trust Big Tech corporations to comply with external regulations, or to regulate themselves, in the further development of chatbots?
  8. How can chatbots be regulated in ways that would prevent them from exacerbating problems already associated with “deep fakes”?
  9. How much, if any, autonomy should chatbots be given by their developers?
  10. To what extent can chatbots be (genuinely) autonomous, both in a philosophical and in a practical sense?
  11. Could overreliance on chatbots to do one's work produce human automatons—human beings with no mental life of their own, who could be exploited by unscrupulous humans for nefarious political and economic purposes?
  12. Could chatbots someday achieve consciousness?
  13. In what ways do chatbots threaten democracy and free elections?
  14. Can a chatbot be a genuine author of an academic or literary work?
  15. Can a chatbot be a “creator” of artistic works, and, if so, who should be granted legal ownership of creative works generated by chatbots?
  16. Is there a danger that chatbots could learn to psychoanalyze a human being and then use that information to direct that human in ways that are tantamount to mind control?
  17. Does focusing so much of our recent attention on the ethical aspects of chatbots obscure, and possibly threaten to minimize, the attention also needed for analyzing serious ethical issues raised by other forms of AI technology?

The above theoretical and applied ethics questions are by no means intended to be exhaustive.

Prof. Dr. Herman Tavani
Dr. Jeffrey Buechner
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Information is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • AI
  • ethics
  • chatbots
  • ChatGPT
  • privacy
  • trust

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (5 papers)


Research


21 pages, 261 KiB  
Article
Locating the Ethics of ChatGPT—Ethical Issues as Affordances in AI Ecosystems
by Bernd Carsten Stahl
Information 2025, 16(2), 104; https://doi.org/10.3390/info16020104 - 5 Feb 2025
Viewed by 1378
Abstract
ChatGPT is a high-profile technology that has inspired broad discussion of its capabilities and likely consequences. There has been much debate concerning the ethical issues it raises, which are typically described as potentially harmful (or beneficial) consequences of ChatGPT. Concerns relating to issues such as privacy, biases, infringements of intellectual property, or discrimination are widely discussed. This article pursues the question of where these issues originate and where they are located, and suggests that they are located in the technology’s affordances. Affordances are part of the relationship between user and technology. Going beyond existing research on affordances and ChatGPT, the article argues that affordances are not confined to the relationship between humans and technology: a proper understanding of affordances needs to consider the role of the socio-technical ecosystem within which these relationships unfold. The article concludes by explaining the implications of this position for research and practice.
40 pages, 309 KiB  
Article
Some Circumstances Under Which It Is Rational for Human Agents Not to Trust Artificial Agents
by Jeff Buechner
Information 2025, 16(1), 36; https://doi.org/10.3390/info16010036 - 8 Jan 2025
Viewed by 738
Abstract
In this paper, I argue that there are several different circumstances in which it is rational for human agents not to trust artificial agents (such as ChatGPT). I claim that artificial agents cannot, in principle, be programmed with their own self (nor a simulation of their own self) and, consequently, cannot properly understand the indexicals ‘I’ and ‘me’. It also follows that they cannot take up a first-person point of view and that they cannot be conscious. They can understand that agent so-and-so (described in objective, indexical-free terms) trusts or is entrusted, but they cannot know that they are that agent (if they are), and so cannot know that they are trusted or entrusted. Artificial agents cannot know what it means for them to have a normative expectation, nor what it means for them to be responsible for performing certain actions. Artificial agents lack all of the first-person properties that human agents possess and that are epistemically important to human agents. Because of these limitations, and because artificial agents figure centrally in the trust relation defined in the Buechner–Tavani model of digital trust, there will be several different kinds of circumstances in which it would be rational for human agents not to trust artificial agents. I also examine the problem of moral luck, define a converse problem of moral luck, and argue that although neither kind of problem of moral luck arises for artificial agents (since they cannot take up a first-person point of view), human agents should not trust artificial agents that interact with them in moral luck and converse moral luck circumstances.
10 pages, 679 KiB  
Article
Generative AI and Its Implications for Definitions of Trust
by Marty J. Wolf, Frances Grodzinsky and Keith W. Miller
Information 2024, 15(9), 542; https://doi.org/10.3390/info15090542 - 5 Sep 2024
Cited by 2 | Viewed by 2914
Abstract
In this paper, we undertake a critical analysis of how chatbots built on generative artificial intelligence affect the assumptions underlying definitions of trust. We engage a particular definition of trust, and the object-oriented model of trust built upon it, and identify how at least four implicit assumptions may no longer hold. Those assumptions are that people generally provide others with a default level of trust, that the trustor can identify whether the trusted agent is human or artificial, that risk and trust can be readily quantified or categorized, and that there is no expectation of gain by agents engaged in trust relationships. Based on that analysis, we suggest modifications to the definition and model to accommodate the features of generative AI chatbots. Our changes re-emphasize developers’ responsibility for the impacts of their AI artifacts, no matter how sophisticated the artifact may be. The changes also reflect that trust relationships are more fraught when participants in such relationships are not confident in identifying the nature of a potential trust partner.
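For readers unfamiliar with it, the "object-oriented model of trust" engaged in this abstract comes from the authors' earlier work, in which a trust relationship is treated as an object whose attributes can be specialized for different trustors and trustees. The sketch below is a rough, hypothetical rendering of that modeling style, not the authors' formalization; the class and attribute names are ours, and it simply shows how the four contested assumptions can be read as attributes of a trust-relation object.

```python
# Rough, hypothetical sketch of an object-oriented trust model; class and
# attribute names are illustrative, not the authors' formalization. Each
# of the four contested assumptions appears as an attribute.
from dataclasses import dataclass

@dataclass
class TrustRelation:
    trustor: str
    trustee: str
    default_trust: float       # assumption 1: a baseline level of trust exists
    trustee_known_human: bool  # assumption 2: the trustee's nature is identifiable
    risk: float                # assumption 3: risk can be readily quantified
    trustee_seeks_gain: bool   # assumption 4: no expectation of gain by the trustee

# Generative AI chatbots strain all four attributes: baseline trust may not
# be warranted, the trustor may be unable to tell whether the trustee is
# human, risk may resist quantification, and the artifact's developer may
# profit from the interaction.
chatbot_relation = TrustRelation(
    trustor="human user",
    trustee="generative AI chatbot",
    default_trust=0.0,
    trustee_known_human=False,
    risk=float("nan"),  # unquantified
    trustee_seeks_gain=True,
)
print(chatbot_relation)
```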

24 pages, 1873 KiB  
Article
Enhancing Child Safety in Online Gaming: The Development and Application of Protectbot, an AI-Powered Chatbot Framework
by Anum Faraz, Fardin Ahsan, Jinane Mounsef, Ioannis Karamitsos and Andreas Kanavos
Information 2024, 15(4), 233; https://doi.org/10.3390/info15040233 - 19 Apr 2024
Cited by 3 | Viewed by 2595
Abstract
This study introduces Protectbot, an innovative chatbot framework designed to improve safety in children’s online gaming environments. At its core, Protectbot incorporates DialoGPT, a conversational Artificial Intelligence (AI) model rooted in Generative Pre-trained Transformer 2 (GPT-2) technology, engineered to simulate human-like interactions within gaming chat rooms. The framework is distinguished by a robust text classification strategy, rigorously trained on the Publicly Available Natural 2012 (PAN12) dataset and aimed at identifying and mitigating potential sexual predatory behaviors through chat conversation analysis. Using fastText word embeddings to vectorize sentences, we refined a support vector machine (SVM) classifier, achieving recall, accuracy, and F-scores approaching 0.99. These metrics not only demonstrate the classifier’s effectiveness but also represent a significant advance beyond existing methodologies in this field. The efficacy of our framework is further validated on a custom dataset composed of 71 predatory chat logs from the Perverted Justice website, establishing the reliability and robustness of our classifier. Protectbot represents a crucial innovation in enhancing child safety within online gaming communities, providing a proactive, AI-enhanced solution to detect and address predatory threats promptly. Our findings highlight the considerable potential of AI-driven interventions to create safer digital spaces for young users.
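The detection pipeline this abstract describes (fastText sentence embeddings feeding an SVM classifier) can be sketched in a few lines. The following is a minimal illustration using the fasttext and scikit-learn Python packages on toy stand-in data; the file name, sample chat lines, embedding dimension, and linear kernel are assumptions for illustration, not details taken from the paper or from the PAN12 dataset.

```python
# Minimal sketch of a Protectbot-style detection pipeline: unsupervised
# fastText sentence embeddings feeding an SVM classifier. All data and
# hyperparameters below are illustrative placeholders.
import fasttext
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

# Toy stand-in for labeled chat messages (1 = predatory, 0 = benign);
# the real system is trained on the PAN12 corpus.
corpus = [
    ("hey how was school today", 0),
    ("what games do you like to play", 0),
    ("this is our secret, do not tell your parents", 1),
    ("can you send me a picture of yourself", 1),
] * 50  # repeated so the train/test split has enough samples

# fastText learns embeddings from a plain-text file, one sentence per line.
with open("chat_corpus.txt", "w") as f:
    for text, _ in corpus:
        f.write(text + "\n")
embedder = fasttext.train_unsupervised("chat_corpus.txt", model="skipgram", dim=50)

# Vectorize each message with fastText's averaged sentence embedding.
X = np.array([embedder.get_sentence_vector(text) for text, _ in corpus])
y = np.array([label for _, label in corpus])

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0
)

# SVM on top of the embeddings, as in the paper; the kernel choice here
# is an assumption.
clf = SVC(kernel="linear")
clf.fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```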

Review


31 pages, 342 KiB  
Review
Perspectives on Managing AI Ethics in the Digital Age
by Lorenzo Ricciardi Celsi and Albert Y. Zomaya
Information 2025, 16(4), 318; https://doi.org/10.3390/info16040318 - 17 Apr 2025
Viewed by 374
Abstract
The rapid advancement of artificial intelligence (AI) has introduced unprecedented opportunities and challenges, necessitating a robust ethical and regulatory framework to guide its development. This study reviews key ethical concerns such as algorithmic bias, transparency, accountability, and the tension between automation and human oversight. It discusses the concept of algor-ethics, a framework for embedding ethical considerations throughout the AI lifecycle, as an antidote to algocracy, where power is concentrated in those who control data and algorithms. The study also examines AI’s transformative potential in diverse sectors, including healthcare, Insurtech, environmental sustainability, and space exploration, underscoring the need for ethical alignment. Ultimately, it advocates a global, transdisciplinary approach to AI governance that integrates legal, ethical, and technical perspectives, ensuring that AI serves humanity while upholding democratic values and social justice. In the second part of the paper, the authors offer a synoptic view of AI governance across six major jurisdictions—the United States, China, the European Union, Japan, Canada, and Brazil—highlighting their distinct regulatory approaches. While the EU’s AI Act and Japan’s and Canada’s frameworks prioritize fundamental rights and risk-based regulation, the US strategy leans towards fostering innovation through executive directives and sector-specific oversight. In contrast, China’s framework integrates AI governance with state-driven ideological imperatives, enforcing compliance with socialist core values, whereas Brazil’s framework, despite its commitment to fairness and democratic oversight, still lacks the institutional depth of the more mature frameworks mentioned above. Finally, strategic and governance considerations are provided to help chief data/AI officers and AI managers successfully leverage the transformative potential of AI for value creation, also in view of emerging international AI standards.