Topic Editors

Headingley Campus, Leeds Beckett University, Leeds LS6 3QS, UK
Department of Sciences and Methods for Engineering, University of Modena and Reggio Emilia, 42100 Reggio Emilia, Italy
School of Computer Science and Engineering, South China University of Technology, Guangzhou 510641, China
Department of Computer Science & Engineering (DISI), University of Bologna, 40136 Bologna, Italy
Biomedical Artificial Intelligence Research Unit (BMAI), Institute of Innovative Research, Tokyo Institute of Technology, Yokohama 226-8503, Japan
Department of English Language & Applied Linguistics, University of Reading, Reading RG6 6AH, UK

AI Chatbots: Threat or Opportunity?

Abstract submission deadline
closed (29 February 2024)
Manuscript submission deadline
30 April 2024
Viewed by
35638

Topic Information

Dear Colleagues,

ChatGPT, based on GPT-3.5, was launched by OpenAI in November 2022. On the OpenAI website it is described as ‘a language model … designed to respond to text-based queries and generate natural language responses. It is part of the broader field of artificial intelligence known as natural language processing (NLP), which seeks to teach computers to understand and interpret human language’. More significantly, it is stated that ‘One of the main applications of ChatGPT is in chatbots, where it can be used to provide automated customer service, answer FAQs, or even engage in more free-flowing conversations with users. However, it can also be used in other NLP applications such as text summarization, language translation, and content creation. Overall, ChatGPT represents a significant advancement in the field of NLP and has the potential to revolutionize the way we interact with computers and digital systems’.

These claims, although couched in relatively innocuous terms, have been seen by many as potentially ominous, with far-reaching ramifications. Teachers, already facing the issues of cut-and-paste-off-the-internet plagiarism, ghost-writing, and contract cheating, foresaw that AI chatbots such as ChatGPT, Bard, and Bing would offer students new and more powerful opportunities to produce work for assessment. For some this was not a problem, but for others it appeared to be the beginning of the end for anything other than in-person assessments, such as hand-written exams and vivas.

People began to experiment with ChatGPT, using it to produce computer code, speeches, and academic papers. In some cases, users expressed their astonishment at the high quality of the outputs, but others were far more skeptical. In the meantime, OpenAI released GPT-4, which is now incorporated into ChatGPT Plus. It is expected that GPT-5 will be available later this year; in addition, autonomous AI agents such as Auto-GPT and Agent-GPT are now available. These developments, and others in the general area of AI, have led to calls for a pause, although others have expressed doubts that such calls will have any impact.

The issues raised by AI chatbots such as ChatGPT bear upon a range of practices and disciplines, as well as many facets of our everyday lives and interactions. Hence, this invitation to submit work comes from editors associated with a wide variety of MDPI journals, encompassing a range of inter-related perspectives on the topic. We are keen to receive submissions relating to the technologies behind the advances in these AI chatbots, as well as to the wider implications of their use in social, technical, and educational contexts.

We are open to all manner of submissions, but to give some indication of the aspects of key interest, we list the following questions and issues:

  • The development of AI chatbots has been claimed to herald a new era, offering significant advances in the incorporation of technology into people’s lives and interactions. Is this likely to be the case, and if so, where are these impacts going to be the most pervasive and effective?
  • Is it possible to strike a balance regarding the impact of these technologies so that any potential harms are minimized, while potential benefits are maximized and shared?
  • How should educators respond to the challenge of AI chatbots? Should they welcome this technology and re-orient teaching and learning strategies around it, or seek to safeguard traditional practices from what is seen as a major threat?
  • There is a growing body of evidence that the design and implementation of many AI applications, i.e., their underlying algorithms, incorporate bias and prejudice. How can this be countered and corrected?
  • How can publishers and editors recognize the difference between manuscripts that have been written by a chatbot and "genuine" articles written by researchers? Is training to recognize the difference required? If so, who could offer such training?
  • How can the academic world and the wider public be protected against the creation of "alternative facts" by AI? Should researchers be required to submit their data with manuscripts to show that the data are authentic? What is the role of ethics committees in protecting the integrity of research?
  • Can the technology underlying AI chatbots be enhanced to guard against misuse and vulnerabilities?
  • Novel models and algorithms for using AI chatbots in cognitive computing;
  • Techniques for training and optimizing AI chatbots for cognitive computing tasks;
  • Evaluation methods for assessing the performance of AI chatbot-based cognitive computing systems;
  • Case studies and experiences in developing and deploying AI chatbot-based cognitive computing systems in real-world scenarios;
  • Social and ethical issues related to the use of AI chatbots for cognitive computing.

The potential impact of these AI chatbots on the topics covered by journals is twofold: on the one hand, there is a need for research on the technological bases underlying AI chatbots, including the algorithmic aspects behind the AI; on the other hand, there are many aspects related to the support and assistance that these AI chatbots can provide to algorithm designers, code developers and others operating in the many fields and practices encompassed by this collection of journals.

Prof. Dr. Antony Bryant, Editor-in-Chief of Informatics
Prof. Dr. Roberto Montemanni, Section Editor-in-Chief of Algorithms
Prof. Dr. Min Chen, Editor-in-Chief of BDCC
Prof. Dr. Paolo Bellavista, Section Editor-in-Chief of Future Internet
Prof. Dr. Kenji Suzuki, Editor-in-Chief of AI
Prof. Dr. Jeanine Treffers-Daller, Editor-in-Chief of Languages

Keywords

  • ChatGPT
  • OpenAI
  • AI chatbots
  • natural language processing
 

Participating Journals

Journal Name | Impact Factor | CiteScore | Launched Year | First Decision (median) | APC
AI | - | - | 2020 | 20.8 days | CHF 1600
Algorithms | 2.3 | 3.7 | 2008 | 15 days | CHF 1600
Big Data and Cognitive Computing (BDCC) | 3.7 | 4.9 | 2017 | 18.2 days | CHF 1800
Future Internet | 3.4 | 6.7 | 2009 | 11.8 days | CHF 1600
Informatics | 3.1 | 4.8 | 2014 | 30.3 days | CHF 1800
Information | 3.1 | 5.8 | 2010 | 18 days | CHF 1600
Languages | 0.9 | 1.1 | 2016 | 52.7 days | CHF 1400
Publications | 3.8 | 5.0 | 2013 | 35 days | CHF 1400

Preprints.org is a multidisciplinary preprint platform dedicated to sharing your research from the start and empowering your research journey.

MDPI Topics is cooperating with Preprints.org and has built a direct connection between MDPI journals and Preprints.org. Authors are encouraged to take advantage of these benefits by posting a preprint at Preprints.org prior to publication:

  1. Immediately share your ideas ahead of publication and establish your research priority;
  2. Protect your ideas with a time-stamped preprint record;
  3. Enhance the exposure and impact of your research;
  4. Receive feedback from your peers in advance;
  5. Have it indexed in Web of Science (Preprint Citation Index), Google Scholar, Crossref, SHARE, PrePubMed, Scilit and Europe PMC.

Published Papers (8 papers)

14 pages, 960 KiB  
Article
ChatGPT in Education: Empowering Educators through Methods for Recognition and Assessment
by Joost C. F. de Winter, Dimitra Dodou and Arno H. A. Stienen
Informatics 2023, 10(4), 87; https://doi.org/10.3390/informatics10040087 - 29 Nov 2023
Viewed by 2776
Abstract
ChatGPT is widely used among students, a situation that challenges educators. The current paper presents two strategies that do not push educators into a defensive role but can empower them. Firstly, we show, based on statistical analysis, that ChatGPT use can be recognized from certain keywords such as ‘delves’ and ‘crucial’. This insight allows educators to detect ChatGPT-assisted work more effectively. Secondly, we illustrate that ChatGPT can be used to assess texts written by students. The latter topic was presented in two interactive workshops provided to educators and educational specialists. The results of the workshops, where prompts were tested live, indicated that ChatGPT, provided a targeted prompt is used, is good at recognizing errors in texts but not consistent in grading. Ethical and copyright concerns were raised as well in the workshops. In conclusion, the methods presented in this paper may help fortify the teaching methods of educators. The computer scripts that we used for live prompting are available and enable educators to give similar workshops.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
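The keyword-based recognition strategy described in this abstract can be illustrated with a short sketch. Only ‘delves’ and ‘crucial’ come from the abstract itself; the remaining marker words, the per-1000-word rate, and the threshold below are assumptions for illustration, not the authors' actual word list or statistical model.

```python
import re

# Illustrative marker words: 'delves' and 'crucial' come from the abstract;
# the remaining words are assumptions, not the authors' list.
MARKERS = {"delves", "delve", "crucial", "pivotal", "multifaceted"}

def marker_rate(text: str) -> float:
    """Return occurrences of marker words per 1000 words of text."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    hits = sum(1 for word in words if word in MARKERS)
    return 1000.0 * hits / len(words)

def flag_possible_llm_text(text: str, threshold: float = 5.0) -> bool:
    """Flag text whose marker-word rate exceeds an assumed threshold.

    This is a screening heuristic only, not proof of ChatGPT use.
    """
    return marker_rate(text) >= threshold
```

An educator could run such a screen over a batch of submissions and manually review only the flagged ones; the threshold would need calibrating against a baseline of pre-2022 student writing.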

16 pages, 335 KiB  
Review
AI Chatbots in Digital Mental Health
by Luke Balcombe
Informatics 2023, 10(4), 82; https://doi.org/10.3390/informatics10040082 - 27 Oct 2023
Viewed by 7231
Abstract
Artificial intelligence (AI) chatbots have gained prominence since 2022. Powered by big data, natural language processing (NLP) and machine learning (ML) algorithms, they offer the potential to expand capabilities, improve productivity and provide guidance and support in various domains. Human–Artificial Intelligence (HAI) is proposed to help with the integration of human values, empathy and ethical considerations into AI in order to address the limitations of AI chatbots and enhance their effectiveness. Mental health is a critical global concern, with a substantial impact on individuals, communities and economies. Digital mental health solutions, leveraging AI and ML, have emerged to address the challenges of access, stigma and cost in mental health care. Despite their potential, ethical and legal implications surrounding these technologies remain uncertain. This narrative literature review explores the potential of AI chatbots to revolutionize digital mental health while emphasizing the need for ethical, responsible and trustworthy AI algorithms. The review is guided by three key research questions: the impact of AI chatbots on technology integration, the balance between benefits and harms, and the mitigation of bias and prejudice in AI applications. Methodologically, the review involves extensive database and search engine searches, utilizing keywords related to AI chatbots and digital mental health. Peer-reviewed journal articles and media sources were purposively selected to address the research questions, resulting in a comprehensive analysis of the current state of knowledge on this evolving topic. In conclusion, AI chatbots hold promise in transforming digital mental health but must navigate complex ethical and practical challenges. The integration of HAI principles, responsible regulation and scoping reviews are crucial to maximizing their benefits while minimizing potential risks. Collaborative approaches and modern educational solutions may enhance responsible use and mitigate biases in AI applications, ensuring a more inclusive and effective digital mental health landscape.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
21 pages, 1002 KiB  
Article
Chatbots Put to the Test in Math and Logic Problems: A Comparison and Assessment of ChatGPT-3.5, ChatGPT-4, and Google Bard
by Vagelis Plevris, George Papazafeiropoulos and Alejandro Jiménez Rios
AI 2023, 4(4), 949-969; https://doi.org/10.3390/ai4040048 - 24 Oct 2023
Cited by 3 | Viewed by 5152
Abstract
In an age where artificial intelligence is reshaping the landscape of education and problem solving, our study unveils the secrets behind three digital wizards, ChatGPT-3.5, ChatGPT-4, and Google Bard, as they engage in a thrilling showdown of mathematical and logical prowess. We assess the ability of the chatbots to understand the given problem, employ appropriate algorithms or methods to solve it, and generate coherent responses with correct answers. We conducted our study using a set of 30 questions. These questions were carefully crafted to be clear, unambiguous, and fully described using plain text only. Each question has a unique and well-defined correct answer. The questions were divided into two sets of 15: Set A consists of “Original” problems that cannot be found online, while Set B includes “Published” problems that are readily available online, often with their solutions. Each question was presented to each chatbot three times in May 2023. We recorded and analyzed their responses, highlighting their strengths and weaknesses. Our findings indicate that chatbots can provide accurate solutions for straightforward arithmetic, algebraic expressions, and basic logic puzzles, although they may not be consistently accurate in every attempt. However, for more complex mathematical problems or advanced logic tasks, the chatbots’ answers, although they appear convincing, may not be reliable. Furthermore, consistency is a concern as chatbots often provide conflicting answers when presented with the same question multiple times. To evaluate and compare the performance of the three chatbots, we conducted a quantitative analysis by scoring their final answers based on correctness. Our results show that ChatGPT-4 performs better than ChatGPT-3.5 in both sets of questions. Bard ranks third in the original questions of Set A, trailing behind the other two chatbots. However, Bard achieves the best performance, taking first place in the published questions of Set B. This is likely due to Bard’s direct access to the internet, unlike the ChatGPT chatbots, which, due to their designs, do not have external communication capabilities.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
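The quantitative scoring described in this abstract (repeated attempts per question, with final answers scored for correctness) can be sketched as follows. The result data and the simple fraction-correct metric are illustrative assumptions, not the paper's actual scores or scoring scheme.

```python
from collections import defaultdict

# Hypothetical results: (chatbot, question_id) -> one boolean per repeated
# attempt. These values are made up for illustration; they are not the
# scores reported in the paper.
results = {
    ("ChatGPT-4", "Q1"): [True, True, True],
    ("ChatGPT-4", "Q2"): [True, False, True],
    ("Bard", "Q1"): [False, True, False],
    ("Bard", "Q2"): [False, False, False],
}

def score(results):
    """Return each chatbot's fraction of correct final answers."""
    totals = defaultdict(lambda: [0, 0])  # bot -> [correct, attempts]
    for (bot, _question), attempts in results.items():
        totals[bot][0] += sum(attempts)
        totals[bot][1] += len(attempts)
    return {bot: correct / n for bot, (correct, n) in totals.items()}
```

The same data could also support the consistency concern the authors raise, e.g. by checking whether all three attempts for a question agree.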

26 pages, 4052 KiB  
Article
Fluent but Not Factual: A Comparative Analysis of ChatGPT and Other AI Chatbots’ Proficiency and Originality in Scientific Writing for Humanities
by Edisa Lozić and Benjamin Štular
Future Internet 2023, 15(10), 336; https://doi.org/10.3390/fi15100336 - 13 Oct 2023
Cited by 3 | Viewed by 4348
Abstract
Historically, mastery of writing was deemed essential to human progress. However, recent advances in generative AI have marked an inflection point in this narrative, including for scientific writing. This article provides a comprehensive analysis of the capabilities and limitations of six AI chatbots in scholarly writing in the humanities and archaeology. The methodology was based on tagging AI-generated content for quantitative accuracy and qualitative precision by human experts. Quantitative accuracy assessed the factual correctness in a manner similar to grading students, while qualitative precision gauged the scientific contribution similar to reviewing a scientific article. In the quantitative test, ChatGPT-4 scored near the passing grade (−5) whereas ChatGPT-3.5 (−18), Bing (−21) and Bard (−31) were not far behind. Claude 2 (−75) and Aria (−80) scored much lower. In the qualitative test, all AI chatbots, but especially ChatGPT-4, demonstrated proficiency in recombining existing knowledge, but all failed to generate original scientific content. As a side note, our results suggest that with ChatGPT-4, the size of large language models has reached a plateau. Furthermore, this paper underscores the intricate and recursive nature of human research. This process of transforming raw data into refined knowledge is computationally irreducible, highlighting the challenges AI chatbots face in emulating human originality in scientific writing. Our results apply to the state of affairs in the third quarter of 2023. In conclusion, while large language models have revolutionised content generation, their ability to produce original scientific contributions in the humanities remains limited. We expect this to change in the near future as current large language model-based AI chatbots evolve into large language model-powered software.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

16 pages, 872 KiB  
Article
Qualitative Research Methods for Large Language Models: Conducting Semi-Structured Interviews with ChatGPT and BARD on Computer Science Education
by Andreas Dengel, Rupert Gehrlein, David Fernes, Sebastian Görlich, Jonas Maurer, Hai Hoang Pham, Gabriel Großmann and Niklas Dietrich genannt Eisermann
Informatics 2023, 10(4), 78; https://doi.org/10.3390/informatics10040078 - 12 Oct 2023
Cited by 2 | Viewed by 3749
Abstract
In the current era of artificial intelligence, large language models such as ChatGPT and BARD are being increasingly used for various applications, such as language translation, text generation, and human-like conversation. The fact that these models consist of large amounts of data, including many different opinions and perspectives, could introduce the possibility of a new qualitative research approach: Due to the probabilistic character of their answers, “interviewing” these large language models could give insights into public opinions in a way that otherwise only interviews with large groups of subjects could deliver. However, it is not yet clear if qualitative content analysis research methods can be applied to interviews with these models. Evaluating the applicability of qualitative research methods to interviews with large language models could foster our understanding of their abilities and limitations. In this paper, we examine the applicability of qualitative content analysis research methods to interviews with ChatGPT in English, ChatGPT in German, and BARD in English on the relevance of computer science in K-12 education, which was used as an exemplary topic. We found that the answers produced by these models strongly depended on the provided context, and the same model could produce heavily differing results for the same questions. From these results and the insights throughout the process, we formulated guidelines for conducting and analyzing interviews with large language models. Our findings suggest that qualitative content analysis research methods can indeed be applied to interviews with large language models, but with careful consideration of contextual factors that may affect the responses produced by these models. The guidelines we provide can aid researchers and practitioners in conducting more nuanced and insightful interviews with large language models. From an overall view of our results, we generally do not recommend using interviews with large language models for research purposes, due to their highly unpredictable results. However, we suggest using these models as exploration tools for gaining different perspectives on research topics and for testing interview guidelines before conducting real-world interviews.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

6 pages, 500 KiB  
Communication
Children of AI: A Protocol for Managing the Born-Digital Ephemera Spawned by Generative AI Language Models
by Dirk H. R. Spennemann
Publications 2023, 11(3), 45; https://doi.org/10.3390/publications11030045 - 21 Sep 2023
Cited by 1 | Viewed by 1544
Abstract
The recent public release of the generative AI language model ChatGPT has captured the public imagination and has resulted in a rapid uptake and widespread experimentation by the general public and academia alike. The number of academic publications focusing on the capabilities as well as practical and ethical implications of generative AI has been growing exponentially. One of the concerns with this unprecedented growth in scholarship related to generative AI, in particular, ChatGPT, is that, in most cases, the raw data, which is the text of the original ‘conversations,’ have not been made available to the audience of the papers and thus cannot be drawn on to assess the veracity of the arguments made and the conclusions drawn therefrom. This paper provides a protocol for the documentation and archiving of these raw data.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)
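A documentation-and-archiving protocol of the kind this paper proposes can be sketched minimally as follows. The record fields (model name, UTC timestamp, SHA-256 checksum) are assumptions for illustration and are not the specific protocol set out in the paper.

```python
import hashlib
import json
from datetime import datetime, timezone

def archive_conversation(transcript: str, model: str, path: str) -> dict:
    """Write a chatbot transcript to disk with minimal provenance metadata.

    The field names here are illustrative assumptions, not the specific
    protocol proposed in the paper.
    """
    record = {
        "model": model,
        "archived_utc": datetime.now(timezone.utc).isoformat(),
        # Checksum lets later readers verify the transcript is unaltered.
        "sha256": hashlib.sha256(transcript.encode("utf-8")).hexdigest(),
        "transcript": transcript,
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, ensure_ascii=False, indent=2)
    return record
```

Depositing such records alongside a manuscript would let readers check the raw ‘conversations’ behind the claims made.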

18 pages, 3680 KiB  
Article
Application of ChatGPT-Based Digital Human in Animation Creation
by Chong Lan, Yongsheng Wang, Chengze Wang, Shirong Song and Zheng Gong
Future Internet 2023, 15(9), 300; https://doi.org/10.3390/fi15090300 - 02 Sep 2023
Cited by 2 | Viewed by 3297
Abstract
Traditional 3D animation creation involves a process of motion acquisition, dubbing, and mouth movement data binding for each character. To streamline animation creation, we propose combining artificial intelligence (AI) with a motion capture system. This integration aims to reduce the time, workload, and cost associated with animation creation. By utilizing AI and natural language processing, the characters can engage in independent learning, generating their own responses and interactions, thus moving away from the traditional method of creating digital characters with pre-defined behaviors. In this paper, we present an approach that employs a digital person’s animation environment. We utilized Unity plug-ins to drive the character’s mouth Blendshape, synchronize the character’s voice and mouth movements in Unity, and connect the digital person to an AI system. This integration enables AI-driven language interactions within animation production. Through experimentation, we evaluated the correctness of the natural language interaction of the digital human in the animated scene, the real-time synchronization of mouth movements, the potential for singularity in guiding users during digital human animation creation, and its ability to guide user interactions through its own thought process.
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)

8 pages, 231 KiB  
Editorial
AI Chatbots: Threat or Opportunity?
by Antony Bryant
Informatics 2023, 10(2), 49; https://doi.org/10.3390/informatics10020049 - 12 Jun 2023
Cited by 3 | Viewed by 2898
Abstract
In November 2022, OpenAI launched ChatGPT, an AI chatbot that gained over 100 million users by February 2023 [...]
(This article belongs to the Topic AI Chatbots: Threat or Opportunity?)