Review

Perceptions and Use of AI Chatbots among Students in Higher Education: A Scoping Review of Empirical Studies

by
Odin Monrad Schei
*,
Anja Møgelvang
and
Kristine Ludvigsen
Division of Academic Development, Western Norway University of Applied Sciences, 5063 Bergen, Norway
*
Author to whom correspondence should be addressed.
Educ. Sci. 2024, 14(8), 922; https://doi.org/10.3390/educsci14080922
Submission received: 28 May 2024 / Revised: 27 July 2024 / Accepted: 16 August 2024 / Published: 22 August 2024
(This article belongs to the Special Issue Teaching and Learning with Generative AI)

Abstract:
The recent arrival of publicly available AI chatbots like ChatGPT, Copilot, Gemini, and Claude creates a need for knowledge about how students in higher education perceive and use these tools, and what this might mean for their learning processes. This scoping review analyzes 24 empirical articles published between 1 January 2022 and 5 September 2023 on students’ perceptions and use of AI chatbots in higher education. The articles were reviewed using a five-stage scoping review methodology. The findings underscore a global research interest in how students engage with AI chatbots, which is especially pronounced in Asia. The studies span diverse disciplines, with a predominance in science, technology, engineering, and mathematics disciplines. The empirical findings reveal that students perceive AI chatbots as highly useful and motivating as personal task assistants and for getting immediate feedback and help with writing, coding, and academic tasks. However, students are concerned about the accuracy and reliability of the responses from the chatbots, as well as potential negative impacts on their learning processes, critical thinking, discipline, and creativity. The purpose-driven use of AI chatbots among students and their potentially positive influence on motivation and learning processes offer insights for educators and policymakers. Our research concludes that while positive attitudes, perceptions, and critical use prevail, addressing students’ concerns is crucial for responsible AI integration in higher education.

1. Introduction

The release of ChatGPT-3.5 in November 2022 sparked a surge of research on the potential of artificial intelligence (AI) chatbots in higher education [1,2,3,4]. As of 2024, AI chatbots like GPT-4o, Gemini 1.5 Pro, and Claude 3, harnessing deep learning algorithms, are at the forefront of technological advancements [5]. AI chatbots, a form of Generative AI, use deep learning techniques to generate human-like textual responses to questions and prompts through a complex process involving interactions with a Large Language Model (LLM) [1,6]. The ability to complete complex tasks in seconds and to give answers in a language that is understandable and relatable for humans makes these AI tools a significant innovation in the field of language processing and artificial intelligence [7]. Students’ use of AI chatbots has far-reaching implications for higher education. In this review, we aim to gain knowledge of students’ perceptions and use of AI chatbots in higher education by applying a university pedagogy perspective.
Recent literature reviews underscore the promising role of AI chatbots in higher education [4,8,9,10]. Labadze et al. [10] found that AI-powered chatbots can positively influence students’ learning processes by offering study assistance, personalized learning trajectories, and the development of various academic skills. A recent meta-analysis [11] indicates that AI chatbots, due to their abilities to personalize the learning content and offer interactivity for the user, can enhance students’ learning processes, including their performance, motivation, interest, self-efficacy, and perceived value of learning. This study found that AI chatbots could offer precise, context-sensitive support and facilitate a broad spectrum of academic tasks, potentially enhancing students’ learning processes and task resolution [11] (p. 6). In a recent systematic scoping review, Ansari et al. [12] categorized students’ use of ChatGPT into three main functions: operationalizing learning, writing assistance, and adaptive learning. The study showed that ChatGPT simplified and explained complex concepts in an easily understandable manner, which provided students with access to learning resources that helped clarify their understanding. ChatGPT served as a tool for familiarizing students with unfamiliar concepts and tasks, acting as a foundation for further learning in complex subject areas. Students in the study used ChatGPT for generating ideas as a prewriting strategy, facilitating the initial stages of the writing process, and to generate first drafts. However, the study also highlighted that ChatGPT generated unauthentic information and that students could become overly dependent on AI chatbots [12].
While the pedagogic potential of AI chatbots is frequently emphasized in current studies, many researchers shed light on the academic challenges accompanying students’ use and integration of AI chatbots in higher education [13,14]. The accessibility and efficiency of AI chatbots introduce critical concerns regarding academic integrity [15]. There is a risk that students might see AI chatbots as a means of escaping from the demanding processes of scholarly exploration and writing, opting instead for quick answers from AI, thereby weakening academic thoroughness and personal intellectual development [16]. Such a shift could potentially inhibit the development of critical thinking, creativity, and problem-solving skills, which are cornerstone objectives of higher education [13,17]. AI-enabled shortcuts could also promote Marton and Säljö’s [18] “surface approach to learning” among students at the expense of a “deep approach to learning”. In the deep approach, students try to engage actively in understanding the significance of the text or material they are interacting with, striving to forge connections between new experiences, concepts, and ideas and their pre-existing knowledge and understanding. On the other hand, a surface approach entails a more passive engagement with the material, where students invest minimal cognitive effort to meet task requirements [19,20,21]. Abbas et al. [16] found that students who experienced high academic workload and time pressure to finish their tasks reported higher use of ChatGPT. Furthermore, the students who frequently used ChatGPT were more likely to engage in procrastination than those who rarely used the tool [16] (p. 17). Similarly, Lo’s [4] study on ChatGPT’s impact on higher education revealed challenges relating to a potential for misinformation and student plagiarism. 
The study also showed that the performance of ChatGPT-3.5 varies across academic disciplines, excelling in areas that demand critical and higher-order thinking, such as economics and programming, yet falling short in disciplines like sports sciences and psychology [4] (p. 6). This suggests that generative AI may not presently be equally qualified for use in every field of education but that its use should be critically evaluated for each subject domain and student group. These studies underscore the necessity for higher education institutions not just to reassess their pedagogical approaches and policies in light of AI tool accessibility but, more critically, to actively rethink and reformulate how to scaffold and enhance students’ learning experiences in a situation where academic shortcuts and surface approaches to learning are more accessible than ever before.
To gain a better understanding of how and why students use AI chatbots, we turn to already established models and theories of technology acceptance and use: the Technology Acceptance Model (TAM) [22,23] and the Unified Theory of Acceptance and Use of Technology (UTAUT) [24,25]. (These models have been extended, e.g., as TAM2 and UTAUT2, but in this article, we mainly use the core constructs of each original model.) TAM and UTAUT have traditionally been applied to the use and acceptance of ICT technologies and are now being employed by researchers internationally, including in North America and Asia, to investigate the factors influencing higher education students’ adoption of and interaction with AI chatbots [26,27,28]. Central to TAM are questions of how the perceptions of a technology’s usefulness and ease of use shape the likelihood of its adoption. “Perceived usefulness” is the extent to which an individual believes that a particular technology will improve their job or task performance. This includes perceived improvements in the interaction process, learning activities, feedback, assessment, and learning outcomes. The “perceived ease of use” construct measures the degree to which an individual expects that using the technology will be low-effort or effort-free [22]. In a more complex approach, the UTAUT model merges insights from eight foundational models of information technology acceptance and use [25], including TAM, and introduces four pivotal constructs for technology acceptance and use: performance expectancy, effort expectancy, social influence, and facilitating conditions. “Performance expectancy” is the degree to which an individual believes that using the technology will help them attain gains in job or task performance. “Effort expectancy” relates to the degree of ease associated with the use of the system. “Social influence” is the degree to which an individual perceives that important others believe they should use a new system.
“Facilitating conditions” is the degree to which an individual believes that an organizational and technical infrastructure exists to support use of the system [24] (pp. 447–453). These factors collectively forecast an individual’s behavioral intention to engage with a technology, with performance expectancy often being the most reliable predictor of such intentions [25]. Menon and Shilpa [29] found that the four factors of UTAUT, along with two extended constructs, i.e., perceived interactivity and privacy concerns, could explain users’ interactions and engagement with ChatGPT. Similarly, Strzelecki [28] researched students’ acceptance of ChatGPT, finding that the constructs of habit and performance expectancy in UTAUT had the most impact on the behavioral intention for use. Dahri et al. [30] used an extended TAM model, finding a high acceptance of ChatGPT among university students. The students’ approval was influenced by many factors, including personal competency, social influence, perceived AI usefulness, enjoyment, trust, positive attitudes, and metacognitive self-regulated learning. Al-Abdullatif [26] points out that despite its widespread use, the TAM model is limited in its ability to explain individuals’ decision-making processes when accepting or rejecting new technologies and may not match the emerging criteria of AI chatbot integration in higher education. As AI chatbots are among the most advanced technologies today [31], the interactivity and feedback they offer are likely to influence users’ perceived value of AI chatbots in ways we do not yet understand.
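As a purely illustrative sketch, the UTAUT constructs described above can be thought of as inputs to a predictor of behavioral intention. In practice, UTAUT is estimated from survey data using regression or structural equation modeling; the weights in the toy function below are hypothetical and are not drawn from any of the reviewed studies. Performance expectancy is given the largest hypothetical weight, echoing its role as the most reliable predictor of behavioral intention [25].

```python
def behavioral_intention(performance_expectancy: float,
                         effort_expectancy: float,
                         social_influence: float,
                         facilitating_conditions: float) -> float:
    """Toy linear predictor of behavioral intention from UTAUT constructs.

    Inputs are assumed to be Likert-scale ratings (e.g., 1-7). The weights
    are hypothetical placeholders for coefficients that would normally be
    estimated from survey data.
    """
    weights = {
        "performance_expectancy": 0.40,  # hypothetical: strongest predictor
        "effort_expectancy": 0.25,
        "social_influence": 0.20,
        "facilitating_conditions": 0.15,
    }
    score = (weights["performance_expectancy"] * performance_expectancy
             + weights["effort_expectancy"] * effort_expectancy
             + weights["social_influence"] * social_influence
             + weights["facilitating_conditions"] * facilitating_conditions)
    return round(score, 2)

# A student rating every construct at the scale midpoint (4) scores 4.0;
# raising performance expectancy alone raises the predicted intention.
print(behavioral_intention(4, 4, 4, 4))
```

Because the weights sum to 1, the prediction stays on the same scale as the inputs; this mirrors how UTAUT treats behavioral intention as a weighted combination of the four constructs rather than a simple average.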
A substantial body of research on AI chatbots in higher education has emerged in the short period since the release of ChatGPT in November 2022 [2,3,32,33,34]. However, our understanding of how and why students in higher education adopt and engage with these technologies and how this engagement influences their critical thinking, creativity, problem-solving, and learning processes in general is still limited [16]. We also have limited knowledge about which disciplines, countries, and research methods are prevalent in this field of research. Gaining such knowledge holds critical value not only for policymakers, educational institutions, educators, and students but also for guiding future research. Exploring students’ interactions with AI chatbots is essential for shaping a research-informed approach to teaching and learning in modern higher education. This scoping review systematically examines students’ perceptions and uses of AI in higher education by reviewing empirical research studies and posing the following research questions:
RQ1:
Which disciplines, countries, and research methods are prevalent in studies of students’ perceptions and uses of AI chatbots in higher education?
RQ2:
What are the dominant themes and patterns in studies of students’ perceptions of AI chatbots in higher education?
RQ3:
What do studies reveal about AI chatbots’ role in students’ learning processes and task resolution?

2. Materials and Methods

To understand how AI chatbots influence students’ learning in higher education, we conducted a scoping review of the currently published international literature by scoping out certain parts of the research field using an exploratory approach [35,36]. We reviewed empirical studies using a five-step systematic scoping approach: (1) identifying the review questions, (2) identifying the relevant studies, (3) selecting the studies, (4) charting the data, and (5) collating, summarizing, and reporting the results [35].

2.1. Step 1: Identifying the Review Questions

We initiated our research by defining the problem statement and research questions, our focus being on the population (students in higher education) and interventions (perceptions and use of AI chatbots), allowing an exploratory investigation [35].

2.2. Step 2: Identifying the Relevant Studies

Collaborating with a librarian, we developed search strings based on the research questions and key concepts, continually refining the criteria. The databases searched were ERIC, Scopus, Web of Science, Engineering Village, ArXiv, OSFPreprints, Preprints.org, and Google Scholar (only the first 200 results from Google Scholar were included). We have attached one of the search strings (Appendix A). The searches resulted in 2209 pre-prints and peer-reviewed international articles for screening (Figure 1).

2.3. Step 3: Selecting the Studies

The inclusion and exclusion criteria (Table 1) were developed to guide the screening process. We used Rayyan.ai, a screening tool enabling blind screening processes, to ensure compatibility with the research aims and ethical guidelines. After fine-tuning the criteria, the initial 2209 articles were independently screened at the abstract level, resulting in 47 full-text articles with empirical findings for the final screening [37]. Ultimately, 24 studies were included for analysis (Table 2) following the PRISMA process (Figure 1) recommended by Page et al. [38].

2.4. Step 4: Charting the Data

The problem statement and research questions (RQs) guided the process of charting (extracting and analyzing) data from the included studies. To answer the first RQ, the first author extracted the following information from the reviewed articles: authors, year of publication, study locations, subject/discipline, research methods, and type of publication (Appendix A). To answer research questions 2 and 3, the first author performed a content analysis [63], using NVivo to facilitate and organize the analysis [64]. Initially, open coding was employed to generate descriptive codes from the data, allowing for an open exploration of the content. These codes were instrumental in interpreting the empirical findings from the articles, integrating findings that aligned with existing codes, and developing new codes when necessary. Subsequently, we examined the codes to discern the relationships and connections between them and the data. This analysis led to the identification of themes as we found connections and differences among the codes. Revisiting the codes enabled us to refine and integrate the themes, developing a cohesive understanding of the data.
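The coding procedure above can be sketched programmatically. This is a minimal illustration only: in the study, the analysis was performed interpretively in NVivo, and the codes and themes below are hypothetical examples, not the authors' data. The sketch shows the three moves the paragraph describes: attaching descriptive codes to excerpts (open coding), examining code frequencies to discern connections, and integrating related codes into themes.

```python
from collections import Counter

# Step 1: open coding - descriptive codes attached to article excerpts
# (hypothetical codes, for illustration only).
coded_excerpts = [
    ("article_01", "perceived usefulness"),
    ("article_01", "accuracy concerns"),
    ("article_02", "perceived usefulness"),
    ("article_03", "writing support"),
    ("article_03", "accuracy concerns"),
]

# Step 2: examine code frequencies to find connections in the data.
code_counts = Counter(code for _, code in coded_excerpts)

# Step 3: integrate related codes into broader themes.
themes = {
    "perceived usefulness and adoption": {"perceived usefulness", "writing support"},
    "perceived challenges and concerns": {"accuracy concerns"},
}

# Aggregate code frequencies per theme to see how well each is supported.
theme_counts = {
    theme: sum(code_counts[c] for c in codes)
    for theme, codes in themes.items()
}
print(theme_counts)
```

The point of the sketch is the workflow, not the tallies: qualitative analysis of this kind is iterative, and codes would be revisited, merged, and re-themed as understanding of the data develops.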

2.5. Step 5: Collating, Summarizing, and Reporting the Results

In this scoping review, we mapped and organized the charted data for each of the three research questions. The next chapter provides a report of the findings.

3. Findings and Discussion

This chapter presents and discusses the findings, addressing the previously defined research questions (RQs) in three subchapters. Each RQ is answered by a presentation of the synthesized findings from the reviewed articles and overview figures or tables. For RQs 2 and 3, we also discuss how the findings can be understood in light of the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT). Lastly, we highlight important knowledge gaps that have become clear throughout this research.

3.1. RQ1: Which Countries, Disciplines, and Research Methods Are Prevalent in Studies of Students’ Perceptions and Uses of AI Chatbots in Higher Education?

The findings relevant to this RQ are presented below, with corresponding figures (Figure 2, Figure 3 and Figure 4).

Countries, Disciplines, and Research Methods

Geographically, the reviewed studies reveal a notable concentration of research in Asia, reflecting a shift in the publication rate on AI in higher education from North America to Asia [9,65] (Figure 2). While contributions from Oceania, Europe, North America, and Africa exist, these regions were less represented in our literature sample. Although our search criteria included studies published between 1 January 2022 and 5 September 2023, all the included papers are from 2023, after the release of ChatGPT. The short period between the release of ChatGPT in November 2022 and the start of our review process in September 2023 may explain both why relatively few empirical articles were collected and why certain countries had not yet published much on the technology (Figure 2).
Figure 2. Geographical distribution of included articles.
The reviewed studies show emerging fields of AI research, especially in Science, Technology, Engineering, and Math (STEM) education (Figure 3). This category comprises studies in computer science, computer engineering, programming, data science, mathematics (engineering), information technology, and construction hazard education. Next, interdisciplinary studies (studies researching several disciplines) are prevalent, showcasing how AI chatbots can be utilized and researched across diverse fields of education. Only four studies researched English as a Foreign Language (EFL), reflecting a shift where studies on AI in language learning—once a dominant area of AI research—have now been surpassed by research on AI in engineering and technology education [9,66]. Surprisingly, our review found only one study on the use of AI chatbots in each of the disciplines of medicine, pharmacology, and journalism, highlighting the need for more research in medicine and the social sciences, where AI chatbot use may be prevalent among students. Despite this, the interdisciplinary nature of the reviewed studies emphasizes a widespread interest and applicability of AI chatbots across different academic domains, illustrating AI chatbots’ potential influence on learning processes in multiple educational contexts (Appendix A).
In terms of research methods (Figure 4), mixed methods were the most frequent in the reviewed studies (n = 9), combining questionnaires with interviews and integrating closed and open-ended surveys. Quantitative research (n = 6) is represented by the use of standardized questionnaires, surveys, and specific measurement scales, assessing, for example, computational thinking. Qualitative approaches (n = 3) are characterized by focus group interviews and in-depth interviews. Other studies (n = 6) apply experimental designs to capture pre- and post-intervention effects, quasi-experimental designs for problem-solving tasks, and multi-method qualitative approaches that included case-by-case observations and weekly self-reflections. Methods of teaching with AI chatbots included flipped classrooms and gamification (Appendix A). Diverse methods of inquiry are represented, but there is a scarcity of observational studies, a method that could highlight interesting perspectives on how students use AI in higher education. This diversity of methods reflects the need to embrace varied approaches to capture the complex nature of this research topic, i.e., investigating students’ perceptions and use of AI chatbots in higher education.

3.2. RQ2: What Are Dominant Themes and Patterns in Students’ Perceptions of AI Chatbots in Higher Education?

The syntheses and analyses of findings from the reviewed articles highlight two central themes in students’ perceptions of AI chatbots in higher education: (1) perceived usefulness and adoption and (2) perceived challenges and concerns.

3.2.1. Perceived Usefulness and Adoption of AI Chatbots

A recurring theme in the reviewed articles is students’ positive perceptions of AI chatbots and their perceived usefulness, with ChatGPT being the most prominent example [39,41,42,45,48,55]. Our findings indicate that AI chatbots are perceived as useful tools in higher education due to their ease of use, efficiency, feedback mechanisms, interactivity, and convenience [40,42,44] (Table 3). Students view AI chatbots not only as tools for providing immediate answers to their academic questions but also as personal tutors and assistants, enabling personalized learning support and facilitating interactive activities such as brainstorming and discussion [42,46,48]. For example, the students in Chan and Hu’s [42] study highly valued GenAI technologies like ChatGPT because of their capabilities to provide unique insights and personalized feedback. The students also thought the technologies were user-friendly and appreciated the availability and access to anonymous support. These findings resonate with the Technology Acceptance Model (TAM), according to which technologies perceived as useful and easy to use are more likely to be adopted by users. The students report that AI chatbots improve the efficiency of their academic task resolution and find the tools not only accessible but also easy to interact with. AI chatbot use seems to minimize learners’ cognitive load and make the learning process more engaging and less time-consuming. The ease of engagement with AI chatbots further facilitates the adoption and integration of these technologies into students’ learning practices [22].
Some of the reviewed studies revealed other factors that contribute to the positive attitudes toward AI chatbots among students. Chan and Hu [42] found that students’ knowledge of generative AI chatbots and their frequency of use are positively correlated, emphasizing the importance of familiarity in fostering acceptance and integration. As students become more familiar with AI chatbots, they see a wider range of opportunities to receive personalized learning, writing, and research support and improve their workflow, efficiency, and task resolution [42]. The relationship between familiarity and frequency of use also relates to the UTAUT’s performance expectancy—the degree to which an individual believes that using a particular technology will enhance job performance [24]. As students recognize more possibilities for practical AI chatbot applications, they may develop a stronger belief in the technology’s ability to impact their academic performance positively.
The reviewed studies also confirm the UTAUT factor of social influence, demonstrating that significant others can shape students’ adoption of AI chatbots. In one study, the so-called Generation Z students (born between 1995 and 2008) expressed a more optimistic and swift inclination toward adopting AI chatbots than their teachers. This suggests a generational gap in the immediate adoption and integration of AI chatbot use in higher education [43]. Generation Z students are described as being tech-savvy, aware of the latest trends, and early adopters of new technologies [67,68]. This generational gap may also be linked to younger students’ more frequent use of social media, where significant amounts of knowledge about ChatGPT were disseminated immediately after its release [41]. One of the reviewed studies found that many undergraduate and graduate students first encountered ChatGPT through social media platforms [41]. Social media trends and online personalities encountered through these platforms may function as peers for younger students, suggesting that active engagement on social media may lead students to become aware of, and more readily adopt, AI chatbots.
Table 3. Perceived usefulness and adoption of AI chatbots among students.
Aspect | Function/Code | Representative Quotes | Other Representative Support
Perceived usefulness and adoption | Perceived usefulness | “Students reported that ChatGPT-3 helped them generate new ideas, save time in research, and enhance their writing skills. They appreciated the interactive nature of ChatGPT-3, which facilitated brainstorming and discussion” [46] (p. 360). “Students found ChatGPT’s responses accurate, insightful, logical, reasonable, unbiased, structured, comprehensive, and customized. One STEM student praised ChatGPT’s accuracy in the AI module, while a non-STEM student appreciated its ability to offer ‘a lot of insightful responses from a different perspective’” [62] (p. 19). | [42,43,44,48,52,53,54,55,56]
Perceived usefulness and adoption | Efficiency | “[ChatGPT] has been really fast in generating text. When you give it a topic, the application will return with a short essay of about 300 words in no time” [57] (p. 13). | [42,57,62]
Perceived usefulness and adoption | Feedback, interactivity, and availability | “When I have doubt and couldn’t find other people to help me out, ChatGPT seems like a good option.” [42] (p. 9) | [46,48,54,62]
Perceived usefulness and adoption | Interest | “The students expressed overall positive perceptions including interest … Almost 96% of the students find ChatGPT interesting or very interesting” [54] (p. 38810). | [42,46,53]

3.2.2. Students’ Concerns about AI Chatbots in Higher Education

Out of the 24 reviewed studies, half highlight students’ concerns and perceived challenges related to AI chatbots’ role in higher education (Table 4). The most prominent worry concerns the accuracy and reliability of the information provided by AI chatbots [42,48,52,54,62]. Students in Zhu et al.’s [62] study emphasized the need for fact-checking responses from ChatGPT to prevent misinformation, but also that fact-checking is difficult, as ChatGPT’s responses contain extensive knowledge that users do not always have themselves. One student said: “ChatGPT may sometimes be unable to understand our question and hence not generate an accurate answer” [62] (p. 22). AI chatbots can misinterpret inputs and provide unreliable or false answers, thus creating uncertainty about the quality of responses and whether they are accurate and appropriate for the nuanced nature of higher educational learning tasks [48,62]. Another issue students highlighted was plagiarism; ChatGPT makes plagiarism easier, as the AI generates new texts that are often near impossible to detect, as each response is uniquely created and human-like [57].
Seven of the reviewed studies indicate an uncertainty among students, as they perceive AI chatbots to be highly useful for many of their academic tasks but are also aware of limitations, academic risks, and ethical concerns [46,49]. Students generally acknowledge the strength of AI chatbots but express worries about academic integrity [51,57]. The students in Petricini et al.’s [51] study were uncertain if they should use AI chatbots, and most of them agreed that although AI had value in education, it could be dangerous for students to use, emphasizing how using AI chatbots to complete coursework violated academic integrity. Participants in Liu’s [49] study agreed that there are issues related to plagiarism, information leakage, and inaccurate responses, but they still believed that ChatGPT was helpful for their English learning. Some students felt that ChatGPT would negatively affect their thinking skills because the answers to coursework and questions were so readily available [42,59,62]. Programming students in Yilmaz and Yilmaz’s [59] study thought that ChatGPT use was contributing to student laziness. The students in Chan and Hu’s [42] study expressed reservations about how over-reliance on AI chatbots can lead to hindrances in students’ growth, skills, and development over time and a decrease in critical thinking, decision-making, and creativity [42].
The mixed feelings students harbor towards relying on AI for learning tasks—appreciating the convenience but worrying about the consequences of “shortcuts” to academic task resolution—reflect the UTAUT’s performance expectancy construct [22,24,25]. Students are evaluating whether the immediate benefits of using AI chatbots genuinely translate into long-term increases in performance or academic growth. This ambiguity can be understood through the lens of Marton and Säljö’s “approaches to learning” [21]. If students use AI chatbots as part of a deep approach to learning by probing underlying concepts and spending time understanding connections, the technology could potentially enhance learning and comprehension. Conversely, a surface approach, characterized by using AI chatbots to quickly solve tasks, perhaps with a goal of finishing the task or obtaining good grades, is likely to yield little academic benefit [69]. The efficiency gained in completing tasks swiftly may come at the cost of reduced genuine learning and knowledge retention. These challenges necessitate a balanced approach to integrating AI chatbots in educational settings, addressing issues of accuracy and reliability, and developing ethical guidelines and educational policies to mitigate concerns around plagiarism and ensure the responsible use of AI chatbots.
Table 4. Challenges concerning students’ use of AI chatbots.
Aspect | Function/Code | Representative Quotes | Other Representative Quotes
Perceived challenges and concerns | Academic integrity and plagiarism | “While students believe that AI has value in education (µ = 3.78), they also agree that AI could be dangerous for students (µ = 3.40) and using AI to complete coursework violates academic integrity” [51] (p. 12). | [46,49,62]
Perceived challenges and concerns | Accuracy and reliability of information | “We cannot predict or accurately verify the accuracy or validity of AI generated information. Some people may be misled by false information” [42] (p. 11). “Students have concerns about the limitations of ChatGPT, with some citing its tendency to provide generic, vague, and superficial responses that lack context, specificity, and depth. One student noted that ‘ChatGPT doesn’t do well in terms of giving exact case studies, but only gives a generic answer to questions’” [62] (pp. 22–23). | [48,52,54]
Perceived challenges and concerns | Ethical issues | “AI technologies are too strong so that they can obtain our private information easily.” [42] (p. 11) | [42,54,57]
Perceived challenges and concerns | Impairing learning, critical thinking, and creativity | “The participants raised concerns about how ChatGPT negatively affected their critical thinking and effort to generate ideas. In the AI module, a STEM student noted, ‘ChatGPT gives concise answers which take away the need for critical thinking.’” [62] (p. 24) | [42,59]
Perceived challenges and concerns | Changing teaching and assessment | “Over 50% of students in our study would not feel confident if the instructor used it to create a syllabus, and the number of students who did not feel confident having an AI grade was even higher.” [51] (p. 21) | [56,62]

3.3. RQ3: What Does Current Research Reveal about AI Chatbots’ Role in Students’ Learning Processes and Task Resolution?

The analysis of the reviewed studies highlights two central themes concerning the implications of students’ use of AI chatbots: (1) students’ use of AI chatbots in their learning processes, and (2) AI chatbots’ role in student task resolution, creativity, and critical thinking.

3.3.1. Students’ Use of AI Chatbots in Learning Processes

Eight of the studies show that students’ approaches to AI chatbot use are diverse and purpose-driven [43,60] (Table 5). Chan and Lee [43] found that younger students (Generation Z) intended to use AI technologies for many parts of their learning processes. These intentions included acquiring, compiling, and consolidating information for brainstorming, gaining inspiration, and condensing and summarizing complex ideas. Other uses were language learning, administrative work, and writing support. Students in programming courses used ChatGPT to enhance their code-writing skills [52,59,61]. Furthermore, Sanchez-Ruiz et al. [53] found that ChatGPT functions well as a helper for engineering students by providing solutions to the mathematical problems posed and offering step-by-step guides to the processes required for their solutions. In this way, ChatGPT was an important tool in the students’ problem-solving process. Elkhodr et al. [44] showed that ICT students had positive perceptions and improved performance in functionality, user flow, and content comprehension when using AI chatbots. The study further revealed that the ICT students’ level of reliance on ChatGPT for generating answers was moderate, indicating a balanced approach to using the AI tools for learning. Yan’s [57] research on L2 writing practicum (writing for non-native speakers) revealed that students easily grasped the basic skills needed to use and prompt ChatGPT to improve their writing capabilities. The students did, however, express concerns related to educational equity, as ChatGPT could be used as a shortcut for data gathering, literature reading, and writing. The complex process of “reading-writing-revision” could be simplified into “text-generation and post-editing”, which, as the authors state, demands a lower level of language competence and writing skills [57].
Many of the reviewed studies researched AI chatbots’ influence on students’ motivation and self-efficacy. Four of the studies underscore the positive impact of AI chatbots on student engagement and motivation [40,50,54,60]. For example, Muñoz et al. [50] highlight how using ChatGPT was a motivating factor for EFL students, especially when facilitated by experienced teachers. The students’ listening skills and interest in learning were significantly correlated with the teacher’s experience level, and ChatGPT seemed to meaningfully impact student motivation to develop their listening skills [50]. This matches Shoufan’s [54] findings, which reveal that students perceive AI tools as interesting, motivating, and helpful for learning processes because of the easy-to-grasp interface, well-structured responses, and good explanations. ChatGPT’s ability to motivate students, especially when used in conjunction with experienced teachers, illustrates UTAUT’s constructs of social influence and facilitating conditions [24,25]. Teachers’ validation and integration of AI chatbots in learning activities seem to play a crucial role in shaping students’ attitudes toward these technologies [50]. Ali et al.’s [40] findings suggest that ChatGPT generally motivates students to develop their reading and writing skills, highlighting that experienced users have more faith in ChatGPT than inexperienced users. Urban et al. [56] found that using ChatGPT for solving tasks enhanced students’ self-efficacy. In educational environments, students with high levels of self-efficacy tend to approach complex assignments with more confidence and overcome problems because they perceive the tasks as manageable and within their capabilities, tending to try different approaches and persist in the face of obstacles, often leading to improved performance [70,71]. However, Urban et al. [56] found no association between higher self-efficacy and actual task performance. Higher self-efficacy was, on the other hand, associated with an overestimation of one’s performance: although students believed that they performed better, they did not.
The findings indicating that ChatGPT is perceived as motivating and an enhancing factor for self-efficacy because of its easy-to-grasp interface and well-structured responses reflect the UTAUT construct of effort expectancy—the degree of ease associated with using a technology [22]. Students’ perceptions that AI chatbots require minimal effort to engage with and streamline their task resolution may increase their willingness to adopt the technology. Furthermore, students’ experiences that using AI chatbots improves their capabilities to solve academic tasks align with the UTAUT’s performance expectancy, where technology is valued for its ability to improve job performance—in this case, academic achievement [25]. These studies show that students perceive the tools as useful, and perceived usefulness (TAM) encourages ongoing engagement with a technology, predicting users’ willingness to adopt and continue using it [22,23]. It therefore seems likely that students who have acquired skills in using AI see many more possibilities for use, and thus continue to use AI chatbots.
Table 5. AI chatbots’ role in student learning processes.
Aspect | Function/Code | Representative Quotes | Other Representative Quotes
Learning processes | Helpful for learning processes | “… The majority of students think that ChatGPT can help them learn more effectively” [49] (p. 134). “ChatGPT uncovered information that I usually would not think of; it helps me to see other perspectives/applications. … ChatGPT provides a very broad answer that gives the users a good idea of the different information surrounding the questions they asked.” [62] (p. 20) | [42,43,52,54,59,60,61]
 | Motivation and self-efficacy | “ChatGPT increased students’ enthusiasm and interest in learning. Independence and intrinsic motivation had the highest mean scores, averaging 4.03 and 4, respectively. This demonstrates that ChatGPT gave students a sense of empowerment and increased engagement” [50] (p. 16). “Participants with higher self-efficacy for resolving the task with ChatGPT assistance thought task resolution was both easier and more interesting and that ChatGPT was more useful. … However, it is important to note that there was no association between higher self-efficacy and actual task performance. The higher self-efficacy was on the other hand associated with overestimation of one’s performance” [56] (p. 22). | [40,44,55,60]

3.3.2. Task Resolution, Critical Thinking Skills, and Problem Solving

Nine of the reviewed articles claim that AI chatbots have the potential to improve students’ task resolution, critical thinking skills, or problem-solving [44,46,50,52,53,55,56,59,60] (Table 6). Urban et al. [56] studied creative problem-solving and revealed that students in experimental groups who used ChatGPT created more original solutions than those in the control groups. The experimental groups’ work was more elaborated and better reflected the task goals. Moreover, Uddin’s [55] study on construction students suggested that ChatGPT can be leveraged to improve hazard recognition levels, illustrating the applicability of ChatGPT as a way of quickly obtaining an overview of a specific field: “ChatGPT helped me think about hazards that I would have otherwise overlooked” [55]. One study in the field of computer programming showed that students using ChatGPT had an advantage over those not using it in terms of scores earned on a programming task [52]. However, there were inconsistencies in the code, as ChatGPT faced challenges in comprehending some of the questions and lacked the proficiency to utilize knowledge to generate accurate solutions [52]. AI chatbots currently seem weaker at creativity, evaluation, and nuanced opinion, and better suited as tools for students to use in creative and problem-solving processes. In Elkhodr et al.’s [44] study, the students using ChatGPT performed better in functionality, user flow, content, and information hierarchy than those using traditional search engines. The researchers attributed the improved outcomes to the nature of the task, which involved creativity and critical thinking.
Despite these optimistic findings, we know little about how AI chatbots influence students’ learning processes and knowledge retention [2,14,34]. While task resolution becomes more efficient, the use of these technologies needs to be supplemented with human instruction to ensure that students develop a robust understanding of the subjects at hand, and not just achieve high grades. It is, therefore, critical that educational institutions and educators cultivate an environment that not only fosters academic performance but also critical and independent thinking skills through the use of AI chatbots. Fostering students’ critical thinking skills is a widely accepted educational goal [14,72,73] and involves training the ability to analyze, evaluate, and synthesize information from various sources to make reasoned judgments [73,74]. Such skills can be honed alongside learning to use AI chatbots. For example, Essel et al. [75] found that using ChatGPT for in-class tasks led to improvements in students’ critical openness and reflective skepticism—their willingness to consider alternative perspectives and to be exposed to novel ideas, as well as the ability to question assumptions, pursue evidence, and evaluate arguments [75] (p. 9). Determining how educational institutions should teach, guide, tutor, and assess students’ interactions with AI chatbots is becoming a key pedagogical task [76].
The influence AI chatbots have on students’ learning and writing practices varies depending on how students interact with the tools and the social contexts of these interactions [34,75]. The role of the university teacher becomes pivotal, potentially acting as both a course designer and a mentor to guide students toward the ethical and functional use of AI chatbots, encouraging deep learning approaches and critical reflection. As Biggs states, students’ approaches to learning are profoundly impacted by their perceptions of the learning environment, their abilities, and the employed teaching strategies [21]. Negative perceptions of the learning environment or low self-efficacy may result in a surface learning approach, focusing on memorization, shortcuts, and obtaining good grades [21]. Therefore, it is imperative not to consider the integration of AI chatbots into higher education without also considering the learning environment into which they are integrated. The teaching faculty thus wields significant influence in shaping responsible and effective AI interactions.
Table 6. AI chatbots’ role in student task resolution, creativity, and critical thinking.
Aspect | Function/Code | Representative Quotes | Other Representative Quotes
Task resolution, creativity, and critical thinking | Improvement of critical thinking and problem-solving skills | “… the experimental group’s problem-solving posttest scores (M = 25.00, SD = 2.99) were significantly higher than those of the control group” [60] (p. 5). “… results indicate that the integration of ChatGPT-3 and AI literacy training significantly improved the students’ critical thinking and journalism writing skills” [46] (p. 359). | [53,56,59]
 | Improvement of task resolution | “ChatGPT increased students’ enthusiasm and interest in learning. Independence and intrinsic motivation had the highest mean scores, averaging 4.03 and 4, respectively. This demonstrates that ChatGPT gave students a sense of empowerment and increased engagement” [50] (p. 16). “… the solutions of participants using ChatGPT were of higher quality (i.e., they more closely reflected the general objectives of the task). Moreover, these solutions were better elaborated and more original than the solutions of peers working on their own” [56] (p. 21). | [44,52,53,55]

3.4. Knowledge Gaps and Implications

Research into students’ perceptions and use of AI chatbots, particularly regarding their effects on learning processes and skill development, is still in its early stages, with few definitive conclusions. Thus, all types of research are needed. Nonetheless, this scoping review uncovers critical areas where current knowledge is particularly thin, highlighting significant gaps in our understanding and pointing to opportunities for future study.
One notable knowledge gap from our findings relates to the scarcity of studies on perceptions and use of AI chatbots in Europe, South America, and Africa. Research from these regions could reveal different student perceptions and usage tendencies. There were also few studies in medical and health sciences and social sciences. In addition, we found no observational studies among the reviewed studies. Observational research methods could help uncover unanticipated issues, dynamics, or opportunities that may not be captured through surveys, interviews, interventions, or review studies. For example, researchers could observe students’ problem-solving strategies, communication patterns, and collaborative behaviors in classrooms when utilizing AI chatbots in group learning settings or during independent work. This would also allow investigations into how AI chatbots influence the dynamics of students’ interactions with each other in higher education. AI chatbot use in higher education might carry a risk of inhibiting knowledge-sharing and dialogue among students, potentially diminishing the sense of community vital for student cohorts, inter-student collaboration, and possibilities for learning from each other.
The ethical considerations students weigh when deciding how to use AI chatbots in educational settings also emerge as a compelling area for inquiry. Exploring the decision-making processes involved in using AI chatbots and understanding students’ perceptions of these decisions could provide essential insights. The ethical implications of using and misusing AI chatbots are significant, and the boundaries of acceptable behavior, such as in cases of plagiarism, remain somewhat undefined. By examining how students make key decisions during their academic work, we can better understand how to guide them towards making ethical choices.
Another critical research gap relates to how different uses of AI chatbots influence or are influenced by students’ agency and critical thinking skills. Understanding whether AI chatbots act as catalysts for developing critical thinking or potentially inhibit learning processes due to shortcut-enabled practices emerges as an important and applicable field of inquiry. The terms “deep” and “surface” approaches to learning may become increasingly relevant, offering insights into students’ orientations toward academic material [77]. Although students perceive AI chatbots as beneficial, the depth and retention of knowledge gained through their interactions with AI chatbots require further examination. We recommend investigating what learning strategies and approaches students apply when using AI chatbots compared with those applied by students using more traditional tools.
To address the evolving challenges posed by AI chatbots in educational settings, it becomes apparent that there is a need for new or renewed theoretical models. Existing frameworks like the Technology Acceptance Model (TAM) and the Unified Theory of Acceptance and Use of Technology (UTAUT) provide foundational insights into technology adoption and usage. However, these models may not fully encompass the unique challenges and dynamics introduced by AI chatbots, which do not merely serve as passive sources of information. Instead, they allow students to actively engage with their learning material, potentially reshaping learning behaviors and cognitive processes in ways that previous models were not designed to predict or clarify. Introducing AI chatbots into learning environments presents complexities that include ethical considerations, the potential for dependency, and the impact on critical thinking and learning outcomes. As such, there is a compelling need for developing new theoretical approaches that can better address the nuances of how students interact with, learn from, and are influenced by AI chatbots.
Building on this need for theoretical innovation, another critical area requiring further exploration is the integration of AI chatbots within existing educational frameworks and curricula. Our findings highlight the importance of gaining knowledge about how to teach in ways that scaffold students’ learning while using AI chatbots. Research should focus on how these tools can complement traditional teaching methods and enhance curricular objectives. Privacy and data security concerns related to using AI chatbots in education also demand rigorous scrutiny to establish best practices for protecting students. Addressing these knowledge gaps is essential for leveraging AI chatbots to contribute positively to the educational landscape while aligning with ethical standards and supporting diverse learning needs among students in higher education. Gaining more knowledge of how students interact with AI chatbots will have significant implications for teaching and learning in higher education in the coming years.

3.5. Practical Implications for Educators

Our review shows that many students express uncertainty about how to use AI chatbots in ways that maintain academic integrity and facilitate learning. Consequently, we recommend, in line with other recent literature [4,6], that HE educators incorporate AI chatbots into their teaching practices. Changing teaching practices while also learning how to use new technologies can be a demanding challenge for educators. Therefore, we suggest that higher education institutions support educators by facilitating spaces, both locally and in a broader context, where they can systematically explore, examine, share, and discuss their experiences with using AI to support teaching and learning with colleagues and students. These spaces could be dedicated forums that promote the sharing of knowledge and experiences and encourage conversations between students and educators.
Integrating AI chatbots in teaching can be approached in various ways. Educators can use AI chatbots to design or redesign their study programs, create different types of assignments, and motivate class preparation. When AI chatbots are used to create assignments, students may receive personalized feedback and tips on those assignments, and the responses provided by AI chatbots can further be analyzed to foster critical thinking. The use of AI chatbots to solve topic- or field-specific case assignments may serve to prepare students for some of the challenges they will encounter in their future jobs. In these scenarios, teachers can design cases or lessons where students are presented with complex, open-ended problems and must work collaboratively to find solutions [78,79]. AI chatbots can support such problem-solving activities by providing instant access to problem-specific questions and answers, problem-solving strategies, and personalized back-and-forth feedback on thoughts, ideas, and text. Integrating AI chatbots in students’ problem-solving is in accordance with the findings in some of the reviewed studies. Sánchez-Ruiz et al. [53] found that ChatGPT effectively assisted engineering students with complex problems, while Urban et al. [56] showed it enhanced creative problem-solving. In flipped classroom settings, educators can ask students to prepare by reading articles and other texts as normal, but also use AI chatbots to research the topic, summarize articles, and create example notes before class. Since our study shows that many students perceive AI chatbots as useful, motivating, and interesting tools, this might increase preparation for in-class teaching.
Regardless of which applications are integrated into teaching, focusing on prudent and ethical use is important. One way to ensure proper utilization is to facilitate discussions about the affordances and ethics of using AI-generated texts for different types of academic tasks. Further, exploring and discussing different ways of prompting and re-prompting AI chatbots, and creating a shared bank of the best prompts and questions for specific topics, can facilitate AI literacy and cooperation among the student group while also helping teachers familiarize themselves with these new technologies.

3.6. Limitations

The scoping review methodology used for this study is systematic, thorough, and transparent, ensuring a comprehensive overview of the field. It is, however, important to note some inherent limitations. Notably, the screening process revealed a predominance of studies from Asia and disciplines within STEM education. This geographical and disciplinary skew is not a result of deliberate selection bias but rather reflects the existing body of research that met our inclusion criteria. Such a concentration in the literature suggests a regional and field-specific emphasis in the current research landscape, which may limit the generalizability of our findings to other contexts. However, such findings also highlight important knowledge and research gaps, as is the goal of a scoping review.
Furthermore, we did not conduct formal quality appraisals of the reviewed research articles. This decision aligns with recommended practices for scoping reviews, since extensive quality assessments would extend the timeline significantly [35]. By forgoing these appraisals, we were able to include a broader range of studies and accelerate the review process. In addition, some included articles are pre-prints and not peer-reviewed (Appendix A). These choices enable our review to capture the most recent developments and maintain relevance in the rapidly evolving field of AI in education.
Another limitation is that only one reviewer charted the reviewed data. This practice might have led to fewer nuances in the analysis, but is an approach recommended for efficiency [35]. While this approach may limit the perspectives integrated into our findings, using NVivo in our analysis process helps mitigate potential biases and errors, thus supporting a valid interpretation of the data and ensuring that our findings remain robust and insightful.

4. Conclusions

In conclusion, this scoping review of empirical research on students’ perceptions and use of AI chatbots in higher education has uncovered a growing field of international research. The majority of research is conducted in Asia, with a particular research interest in STEM education. Our findings reveal that students value the flexibility, interactivity, and accessibility AI chatbots provide. The chatbots’ capacity to deliver immediate feedback and individual support is particularly appreciated, aligning with established theoretical models such as TAM and UTAUT. These models highlight the significance of perceived usefulness, ease of use, performance expectancy, and social influence on technology acceptance and continued use. Furthermore, our findings show that using AI chatbots enhanced students’ self-efficacy and motivation for task resolution. In some experimental studies, AI chatbots enhanced certain aspects of students’ task resolution, critical thinking, and problem-solving. Despite positive perceptions, there are notable concerns. Students are apprehensive about the accuracy and reliability of the information provided by AI chatbots, questioning the depth of understanding that can be achieved through these interactions. Ethical concerns, especially regarding plagiarism and potential over-reliance on technological solutions, are also prominent. Our findings reveal an ambivalence among students, who recognize both these technologies’ benefits and potential drawbacks. Higher education institutions are responsible for teaching students how to use AI chatbots in ways that benefit their learning processes.
This review also aimed to identify knowledge gaps in existing research. Notably, we found few studies on students’ perceptions and use of AI chatbots in Europe, South America, and Africa, and within the fields of social sciences and health education. The lack of geographical and disciplinary diversity highlights a need for more research on perceptions and use among students in other parts of the world, ensuring that our understandings are based upon global findings. Although research methods in the reviewed studies were diverse, we found no observational studies and few qualitative studies, highlighting a methodological research gap. Furthermore, there is a pressing need for thorough explorations into how AI chatbots influence students’ learning strategies, critical thinking skills, problem-solving capabilities, and creativity. In addition, our findings highlight a need for new or renewed theoretical models to better understand students’ use of AI chatbots in their learning processes. These insights underscore the need for continuous, in-depth research to better understand the impact of AI chatbots on students’ learning approaches, processes, and outcomes. It is essential to gain knowledge about integrating AI tools into educational frameworks, ensuring they not only meet ethical standards but also genuinely enhance educational outcomes and support students’ learning processes.

Author Contributions

Conceptualization, O.M.S., A.M. and K.L.; methodology, O.M.S., A.M. and K.L.; validation, O.M.S., A.M. and K.L.; formal analysis, O.M.S.; investigation, O.M.S.; resources, O.M.S.; data curation, O.M.S.; writing—original draft preparation, O.M.S.; writing—review and editing, O.M.S., A.M. and K.L.; visualization, O.M.S.; supervision, A.M. and K.L.; project administration, O.M.S., A.M. and K.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Data Availability Statement

We have included one of the search strings for transparency. Upon reasonable request, we will share the detailed list of search strings. For requests, please contact the first author: [email protected].

Acknowledgments

The authors thank librarian Gøril Tvedten for valuable support in the literature search.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Example search string (ERIC).
# | Query | Results
S1 | TI learn* OR AB learn* OR KW learn* | 490,892
S2 | DE “Learning” | 9681
S3 | S1 OR S2 | 492,607
S4 | DE “Higher Education” | 516,243
S5 | DE “Colleges” | 9105
S6 | TI ((high* W0 educat*) or universit* or college*) OR AB ((high* W0 educat*) or universit* or college*) OR KW ((high* W0 educat*) or universit* or college*) | 419,407
S7 | S4 OR S5 OR S6 | 637,434
S8 | DE “Artificial Intelligence” | 3167
S9 | TI ((artificial W0 intelligence) or ai) OR AB ((artificial W0 intelligence) or ai) OR KW ((artificial W0 intelligence) or ai) | 2392
S10 | TI (chatbot* or chatgpt or (chat W0 gpt)) OR AB (chatbot* or chatgpt or (chat W0 gpt)) OR KW (chatbot* or chatgpt or (chat W0 gpt)) | 107
S11 | TI large W0 language W0 model* OR AB large W0 language W0 model* OR KW large W0 language W0 model* | 5
S12 | S8 OR S9 OR S10 OR S11 | 4132
S13 | DE “Students” | 5308
S14 | TI student* OR AB student* OR KW student* | 786,585
S15 | S13 OR S14 | 788,039
S16 | TI (impact* or effect* or influence* or outcome* or result* or consequence* or experience*) OR AB (impact* or effect* or influence* or outcome* or result* or consequence* or experience*) OR KW (impact* or effect* or influence* or outcome* or result* or consequence* or experience*) | 986,762
S17 | S3 AND S7 AND S12 AND S15 AND S16 | 413
S18 | S3 AND S7 AND S12 AND S15 AND S16 | 132
Date of search: 5 September 2023. This search was limited to articles published between 1 January 2022 and 5 September 2023. Hits: 132.
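The search strategy above follows the standard block-building pattern: each concept block (S3, S7, S12, S15, S16) is an OR-union of related term queries, and the final line intersects all blocks with AND. As a minimal, hypothetical sketch of that boolean logic only (the record IDs and sets below are invented placeholders, not ERIC data, and the block names are ours), the structure can be expressed as set operations:

```python
# Hypothetical sketch of the search strategy's boolean structure.
# Each concept block is the OR-union of hits for related terms;
# the final search is the AND-intersection of all blocks.
# Record IDs are invented placeholders, not real ERIC records.

def union(*hit_sets: set) -> set:
    """OR-combine hits for synonymous queries into one concept block."""
    result = set()
    for hits in hit_sets:
        result |= hits
    return result

# Invented per-query hit sets, standing in for the S1-S16 style lines
learning_block = union({"rec1", "rec2", "rec3"}, {"rec2", "rec4"})   # cf. S3
higher_ed_block = union({"rec1", "rec3", "rec5"}, {"rec3"})          # cf. S7
ai_block = union({"rec1", "rec3"}, {"rec3", "rec6"})                 # cf. S12
student_block = union({"rec1", "rec2", "rec3", "rec5"})              # cf. S15
outcome_block = union({"rec1", "rec3", "rec6"})                      # cf. S16

# Final search: only records present in every concept block (cf. S17/S18)
final_hits = (learning_block & higher_ed_block & ai_block
              & student_block & outcome_block)
print(sorted(final_hits))  # → ['rec1', 'rec3']
```

The point of the block-then-intersect structure is that synonyms broaden recall within each concept, while the final intersection narrows results to records touching every concept at once, which is why the hit counts drop sharply at S17/S18.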

References

  1. Tlili, A.; Shehata, B.; Adarkwah, M.A.; Bozkurt, A.; Hickey, D.T.; Huang, R.; Agyemang, B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 2023, 10, 15. [Google Scholar] [CrossRef]
  2. Deng, X.; Yu, Z. A meta-analysis and systematic review of the effect of chatbot technology use in sustainable education. Sustainability 2023, 15, 2940. [Google Scholar] [CrossRef]
  3. Huang, W.; Hew, K.F.; Fryer, L.K. Chatbots for language learning—Are they really useful? A systematic review of chatbot-supported language learning. J. Comput. Assist. Learn. 2022, 38, 237–257. [Google Scholar] [CrossRef]
  4. Lo, C.K. What is the impact of ChatGPT on education? A rapid review of the literature. Educ. Sci. 2023, 13, 410. [Google Scholar] [CrossRef]
  5. Kevian, D.; Syed, U.; Guo, X.; Havens, A.; Dullerud, G.; Seiler, P.; Qin, L.; Hu, B. Capabilities of Large Language Models in Control Engineering: A Benchmark Study on GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra. arXiv 2024, arXiv:2404.03647. [Google Scholar]
  6. Gimpel, H.; Hall, K.; Decker, S.; Eymann, T.; Lämmermann, L.; Mädche, A.; Röglinger, M.; Ruiner, C.; Schoch, M.; Schoop, M. Unlocking the Power of Generative AI Models and Systems Such as GPT-4 and ChatGPT for Higher Education. 2023. Available online: https://www.econstor.eu/handle/10419/270970 (accessed on 10 January 2024).
  7. Lund, B.D.; Wang, T. Chatting about ChatGPT: How May AI and GPT Impact Academia and Libraries? Library Hi Tech News, 2023. Available online: https://ssrn.com/abstract=4333415 (accessed on 15 November 2023).
  8. Imran, A.A.; Lashari, A.A. Exploring the World of Artificial Intelligence: The Perception of the University Students about ChatGPT for Academic Purpose. Glob. Soc. Sci. Rev. 2023, VIII, 375–384. [Google Scholar] [CrossRef]
  9. Crompton, H.; Burke, D. Artificial intelligence in higher education: The state of the field. Int. J. Educ. Technol. High. Educ. 2023, 20, 8. [Google Scholar] [CrossRef]
  10. Labadze, L.; Grigolia, M.; Machaidze, L. Role of AI chatbots in education: Systematic literature review. Int. J. Educ. Technol. High. Educ. 2023, 20, 56. [Google Scholar] [CrossRef]
  11. Wu, R.; Yu, Z. Do AI chatbots improve students learning outcomes? Evidence from a meta-analysis. Br. J. Educ. Technol. 2023, 55, 10–33. [Google Scholar] [CrossRef]
  12. Ansari, A.N.; Ahmad, S.; Bhutta, S.M. Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Educ. Inf. Technol. 2023, 29, 11281–11321. [Google Scholar] [CrossRef]
  13. Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  14. Vargas-Murillo, A.R.; de la Asuncion Pari-Bedoya, I.N.M.; de Jesús Guevara-Soto, F. Challenges and Opportunities of AI-Assisted Learning: A Systematic Literature Review on the Impact of ChatGPT Usage in Higher Education. Int. J. Learn. Teach. Educ. Res. 2023, 22, 122–135. [Google Scholar] [CrossRef]
  15. Farrell, W.C.; Bogodistov, Y.; Mössenlechner, C. Is Academic Integrity at Risk? Perceived Ethics and Technology Acceptance of ChatGPT; Association for Information Systems (AIS) eLibrary: Sacramento, CA, USA, 2023. [Google Scholar]
  16. Abbas, M.; Jam, F.A.; Khan, T.I. Is it harmful or helpful? Examining the causes and consequences of generative AI usage among university students. Int. J. Educ. Technol. High. Educ. 2024, 21, 10. [Google Scholar] [CrossRef]
  17. Baidoo-Anu, D.; Ansah, L.O. Education in the era of generative artificial intelligence (AI): Understanding the potential benefits of ChatGPT in promoting teaching and learning. J. AI 2023, 7, 52–62. [Google Scholar] [CrossRef]
  18. Biggs, J.B. Student Approaches to Learning and Studying. Research Monograph; ERIC: Budapest, Hungary, 1987. [Google Scholar]
  19. Biggs, J.; Tang, C.; Kennedy, G. Ebook: Teaching for Quality Learning at University 5e; McGraw-Hill Education (UK): London, UK, 2022. [Google Scholar]
  20. Richardson, J.T. Approaches to learning or levels of processing: What did Marton and Säljö (1976a) really say? The legacy of the work of the Göteborg Group in the 1970s. Interchange 2015, 46, 239–269. [Google Scholar] [CrossRef]
  21. Marton, F.; Säljö, R. On qualitative differences in learning: I—Outcome and process. Br. J. Educ. Psychol. 1976, 46, 4–11. [Google Scholar] [CrossRef]
  22. Davis, F.D. Perceived usefulness, perceived ease of use, and user acceptance of information technology. MIS Q. 1989, 13, 319–340. [Google Scholar] [CrossRef]
  23. Davis, F.D.; Bagozzi, R.P.; Warshaw, P.R. User acceptance of computer technology: A comparison of two theoretical models. Manag. Sci. 1989, 35, 982–1003. [Google Scholar] [CrossRef]
  24. Venkatesh, V.; Morris, M.G.; Davis, G.B.; Davis, F.D. User acceptance of information technology: Toward a unified view. MIS Q. 2003, 27, 425–478. [Google Scholar] [CrossRef]
  25. Venkatesh, V.; Thong, J.Y.; Xu, X. Consumer acceptance and use of information technology: Extending the unified theory of acceptance and use of technology. MIS Q. 2012, 36, 157–178. [Google Scholar] [CrossRef]
  26. Al-Abdullatif, A.M. Modeling Students’ Perceptions of Chatbots in Learning: Integrating Technology Acceptance with the Value-Based Adoption Model. Educ. Sci. 2023, 13, 1151. [Google Scholar] [CrossRef]
  27. Duong, C.D.; Vu, T.N.; Ngo, T.V.N. Applying a modified technology acceptance model to explain higher education students’ usage of ChatGPT: A serial multiple mediation model with knowledge sharing as a moderator. Int. J. Manag. Educ. 2023, 21, 100883. [Google Scholar] [CrossRef]
  28. Strzelecki, A. Students’ acceptance of ChatGPT in higher education: An extended unified theory of acceptance and use of technology. Innov. High. Educ. 2023, 49, 223–245. [Google Scholar] [CrossRef]
  29. Menon, D.; Shilpa, K. “Chatting with ChatGPT”: Analyzing the factors influencing users’ intention to Use the Open AI’s ChatGPT using the UTAUT model. Heliyon 2023, 9, e20962. [Google Scholar] [CrossRef] [PubMed]
  30. Dahri, N.A.; Yahaya, N.; Al-Rahmi, W.M.; Aldraiweesh, A.; Alturki, U.; Almutairy, S.; Shutaleva, A.; Soomro, R.B. Extended TAM based acceptance of AI-Powered ChatGPT for supporting metacognitive self-regulated learning in education: A mixed-methods study. Heliyon 2024, 10, e29317. [Google Scholar] [CrossRef]
  31. Lin, Y.; Yu, Z. A bibliometric analysis of artificial intelligence chatbots in educational contexts. Interact. Technol. Smart Educ. 2024, 21, 189–213. [Google Scholar] [CrossRef]
  32. Bearman, M.; Ryan, J.; Ajjawi, R. Discourses of artificial intelligence in higher education: A critical literature review. High. Educ. 2023, 86, 369–385. [Google Scholar] [CrossRef]
  33. Baytak, A. The Acceptance and Diffusion of Generative Artificial Intelligence in Education: A Literature Review. Curr. Perspect. Educ. Res. 2023, 6, 7–18. [Google Scholar] [CrossRef]
  34. Imran, M.; Almusharraf, N. Analyzing the role of ChatGPT as a writing assistant at higher education level: A systematic review of the literature. Contemp. Educ. Technol. 2023, 15, ep464. [Google Scholar] [CrossRef]
  35. Arksey, H.; O’Malley, L. Scoping studies: Towards a methodological framework. Int. J. Soc. Res. Methodol. 2005, 8, 19–32. [Google Scholar] [CrossRef]
  36. Papaioannou, D.; Sutton, A.; Booth, A. Systematic Approaches to a Successful Literature Review; SAGE Publications Ltd.: Washington, DC, USA, 2016; pp. 1–336. [Google Scholar]
  37. Nightingale, A. A guide to systematic literature reviews. Surgery 2009, 27, 381–384. [Google Scholar] [CrossRef]
  38. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E. The PRISMA 2020 statement: An updated guideline for reporting systematic reviews. Int. J. Surg. 2021, 88, 105906. [Google Scholar] [CrossRef]
  39. Abouammoh, N.; Alhasan, K.; Raina, R.; Malki, K.A. Exploring perceptions and experiences of ChatGPT in medical education: A qualitative study among medical College faculty and students in Saudi Arabia. medRxiv 2023. [Google Scholar] [CrossRef]
  40. Ali, J.K.M.; Shamsan, M.A.A.; Hezam, T.A. Impact of ChatGPT on learning motivation: Teachers and students’ voices. J. Engl. Stud. Arab. Felix 2023, 2, 41–49. [Google Scholar] [CrossRef]
  41. Bonsu, E.; Baffour-Koduah, D. From the consumers’ side: Determining students’ perception and intention to use ChatGPT in Ghanaian higher education. J. Educ. Soc. Multicult. 2023, 4, 1–29. [Google Scholar] [CrossRef]
  42. Chan, C.K.Y.; Hu, W. Students’ voices on generative AI: Perceptions, benefits, and challenges in higher education. Int. J. Educ. Technol. High. Educ. 2023, 20, 43. [Google Scholar] [CrossRef]
  43. Chan, C.K.Y.; Lee, K.K.W. The AI generation gap: Are Gen Z students more interested in adopting generative AI such as ChatGPT in teaching and learning than their Gen X and Millennial Generation teachers? Smart Learn. Environ. 2023, 10, 60. [Google Scholar] [CrossRef]
  44. Elkhodr, M.; Gide, E.; Wu, R.; Darwish, O. ICT students’ perceptions towards ChatGPT: An experimental reflective lab analysis. STEM. Educ. 2023, 3, 70–88. [Google Scholar] [CrossRef]
  45. Exintaris, B.; Karunaratne, N.; Yuriev, E. Metacognition and Critical Thinking: Using ChatGPT-Generated Responses as Prompts for Critique in a Problem-Solving Workshop (SMARTCHEMPer). J. Chem. Educ. 2023, 100, 2972–2980. [Google Scholar] [CrossRef]
  46. Irfan, M.; Murray, L.; Ali, S. Integration of Artificial Intelligence in Academia: A Case Study of Critical Teaching and Learning in Higher Education. Glob. Soc. Sci. Rev. 2023, VIII, 352–364. [Google Scholar] [CrossRef]
  47. Kelly, A.; Sullivan, M.; Strampel, K. Generative artificial intelligence: University student awareness, experience, and confidence in use across disciplines. J. Univ. Teach. Learn. Pr. 2023, 20, 12. [Google Scholar] [CrossRef]
  48. Limna, P.; Kraiwanit, T.; Jangjarat, K. The use of ChatGPT in the digital era: Perspectives on chatbot implementation. J. Appl. Learn. Teach. 2023, 6, 64–74. [Google Scholar]
  49. Liu, B. Chinese University Students’ Attitudes and Perceptions in Learning English Using ChatGPT. Int. J. Educ. Humanit. 2023, 3, 132–140. [Google Scholar] [CrossRef]
  50. Muñoz, S.A.S.; Gayoso, G.G.; Huambo, A.C. Examining the Impacts of ChatGPT on Student Motivation and Engagement. Soc. Space 2023, 23, 1–27. [Google Scholar]
  51. Petricini, T.; Wu, C.; Zipf, S.T. Perceptions about Generative AI and ChatGPT Use by Faculty and College Students. 2023. Available online: https://osf.io/preprints/edarxiv/jyma4 (accessed on 10 January 2024).
  52. Qureshi, B. Exploring the use of ChatGPT as a tool for learning and assessment in undergraduate computer science curriculum: Opportunities and challenges. arXiv 2023, arXiv:2304.11214. [Google Scholar]
  53. Sánchez-Ruiz, L.M.; Moll-López, S.; Nuñez-Pérez, A. ChatGPT Challenges Blended Learning Methodologies in Engineering Education: A Case Study in Mathematics. Appl. Sci. 2023, 13, 6039. [Google Scholar] [CrossRef]
  54. Shoufan, A. Exploring Students’ Perceptions of ChatGPT: Thematic Analysis and Follow-Up Survey. IEEE Access 2023, 11, 38805–38818. [Google Scholar] [CrossRef]
  55. Uddin, S.M.J.; Albert, A.; Ovid, A.; Alsharef, A. Leveraging ChatGPT to Aid Construction Hazard Recognition and Support Safety Education and Training. Sustainability 2023, 15, 7121. [Google Scholar] [CrossRef]
  56. Urban, M.; Děchtěrenko, F.; Lukavský, J.; Hrabalová, V. Can ChatGPT Improve Creative Problem-Solving Performance in University Students? Comput. Educ. 2024, 215, 105031. [Google Scholar] [CrossRef]
  57. Yan, D. Impact of ChatGPT on learners in a L2 writing practicum: An exploratory investigation. Educ. Inf. Technol. 2023, 28, 13943–13967. [Google Scholar] [CrossRef]
  58. Yifan, W.; Mengmeng, Y.; Omar, M.K. “A Friend or A Foe” Determining Factors Contributed to the Use of ChatGPT among University Students. Int. J. Acad. Res. Progress. Educ. Dev. 2023, 12, 2184–2201. [Google Scholar] [CrossRef] [PubMed]
  59. Yilmaz, R.; Yilmaz, F.G.K. Augmented intelligence in programming learning: Examining student views on the use of ChatGPT for programming learning. Comput. Hum. Behav. Artif. Hum. 2023, 1, 100005. [Google Scholar] [CrossRef]
  60. Yilmaz, R.; Karaoglan Yilmaz, F.G. The effect of generative artificial intelligence (AI)-based tool use on students’ computational thinking skills, programming self-efficacy and motivation. Comput. Educ. Artif. Intell. 2023, 4, 100147. [Google Scholar] [CrossRef]
  61. Zheng, Y. ChatGPT for Teaching and Learning: An Experience from Data Science Education. arXiv 2023, arXiv:2307.16650. [Google Scholar]
  62. Zhu, G.; Fan, X.; Hou, C.; Zhong, T.; Seow, P. Embrace Opportunities and Face Challenges: Using ChatGPT in Undergraduate Students’ Collaborative Interdisciplinary Learning. arXiv 2023, arXiv:2305.18616. [Google Scholar]
  63. Elo, S.; Kyngäs, H. The qualitative content analysis process. J. Adv. Nurs. 2008, 62, 107–115. [Google Scholar] [CrossRef] [PubMed]
  64. Creswell, J.W. Educational Research; Pearson: London, UK, 2012. [Google Scholar]
  65. Li, D.; Tong, T.W.; Xiao, Y. Is China emerging as the global leader in AI? Harv. Bus. Rev. 2021, 18. [Google Scholar]
  66. Chu, H.-C.; Hwang, G.-H.; Tu, Y.-F.; Yang, K.-H. Roles and research trends of artificial intelligence in higher education: A systematic review of the top 50 most-cited articles. Australas. J. Educ. Technol. 2022, 38, 22–42. [Google Scholar]
  67. Mládková, L. Learning habits of generation Z students. In European Conference on Knowledge Management; Academic Conferences International Limited: Reading, UK, 2017; pp. 698–703. [Google Scholar]
  68. Hampton, D.C.; Keys, Y. Generation Z students: Will they change our nursing classrooms. J. Nurs. Educ. Pract. 2017, 7, 111–115. [Google Scholar] [CrossRef]
  69. Trigwell, K.; Prosser, M. Improving the quality of student learning: The influence of learning context and student approaches to learning on learning outcomes. High. Educ. 1991, 22, 251–266. [Google Scholar] [CrossRef]
  70. Bandura, A. Self-Efficacy: The Exercise of Control; Macmillan: London, UK, 1997. [Google Scholar]
  71. Zimmerman, B.J. Self-efficacy: An essential motive to learn. Contemp. Educ. Psychol. 2000, 25, 82–91. [Google Scholar] [CrossRef] [PubMed]
  72. Smolansky, A.; Cram, A.; Raduescu, C.; Zeivots, S.; Huber, E.; Kizilcec, R.F. Educator and Student Perspectives on the Impact of Generative AI on Assessments in Higher Education; Association for Computing Machinery, Inc.: New York, NY, USA, 2023. [Google Scholar] [CrossRef]
  73. Tang, T.; Vezzani, V.; Eriksson, V. Developing critical thinking, collective creativity skills and problem solving through playful design jams. Think. Ski. Creat. 2020, 37, 100696. [Google Scholar] [CrossRef]
  74. Lai, E.R. Critical thinking: A literature review. Pearson’s Res. Rep. 2011, 6, 40–41. [Google Scholar]
  75. Essel, H.B.; Vlachopoulos, D.; Essuman, A.B.; Amankwa, J.O. ChatGPT effects on cognitive skills of undergraduate students: Receiving instant responses from AI-based conversational large language models (LLMs). Comput. Educ. Artif. Intell. 2024, 6, 100198. [Google Scholar] [CrossRef]
  76. Kim, J.; Lee, S.-S. Are Two Heads Better than One?: The Effect of Student-AI Collaboration on Students’ Learning Task Performance. TechTrends Link. Res. Pract. Improv. Learn. 2023, 67, 365–375. [Google Scholar] [CrossRef] [PubMed]
  77. Biggs, J. What the student does: Teaching for enhanced learning. High. Educ. Res. Dev. 1999, 18, 57–75. [Google Scholar] [CrossRef]
  78. Bransford, J.D.; Stein, B.S. The IDEAL Problem Solver: A Guide for Improving Thinking, Learning, and Creativity; W.H. Freeman: New York, NY, USA, 1993. [Google Scholar]
  79. Jonassen, D. Supporting problem solving in PBL. Interdiscip. J. Probl.-Based Learn. 2011, 5, 95–119. [Google Scholar] [CrossRef]
Figure 1. PRISMA flow diagram.
Figure 3. Examined disciplines in the included studies.
Figure 4. Frequency of research methodologies.
Table 1. Inclusion and exclusion criteria.
Article topic/study focus
  Inclusion: Empirical studies that discuss:
    - AI chatbots’ effect/impact/influence on students’ learning processes or outcomes in the field of higher education (HE)
    - Students’ perceptions of AI chatbots in their learning processes
    - Students’ use of AI chatbots in their learning processes
    - Assessment and feedback directly relating to students’ learning with AI
  Exclusion:
    - Non-empirical articles
    - Online learning related to AI
    - Overviews, predictions, and strategies about AI in HE
    - Teaching in HE
    - Specialized AI tools (created) for specific fields of education

Population
  Inclusion: Students in HE
  Exclusion: Teachers, general public, and pupils in K12

Publication type
  Inclusion: Peer-reviewed publications in academic journals and preprints
  Exclusion: All other types of publications, including books, grey literature, and conference papers

Study type
  Inclusion: Primary studies
  Exclusion: Secondary studies

Time period
  Inclusion: Studies conducted between 1 January 2022 and 5 September 2023
  Exclusion: Articles outside the time period

Language
  Inclusion: English, Norwegian, Swedish, and Danish
  Exclusion: Publications in languages other than English, Norwegian, Swedish, and Danish
Table 2. Overview of articles.
Study | Discipline | Country | Method | Data Collection | Type
[39] Abouammoh et al. (2023) | Medicine | Saudi Arabia | Qualitative | Focus group interviews | Preprint
[40] Ali et al. (2023) | EFL students | Saudi Arabia | Quantitative | Questionnaire | Peer-reviewed
[41] Bonsu and Baffour-Koduah (2023) | Multiple disciplines | Ghana | Mixed | Questionnaire and interviews | Peer-reviewed
[42] Chan and Hu (2023) | Multiple disciplines | China | Mixed | Closed-ended and open-ended survey | Peer-reviewed
[43] Chan and Lee (2023) | Multiple disciplines | China | Mixed | Closed-ended and open-ended questionnaire | Peer-reviewed
[44] Elkhodr et al. (2023) | Information Technology | Australia | Mixed | Examination of students’ responses, instructors’ notes, and quantitative analysis of performance metrics | Peer-reviewed
[45] Exintaris et al. (2023) | Pharmaceutical science | Australia | Workshop | Problem solving and evaluation of solutions | Peer-reviewed
[46] Irfan et al. (2023) | Journalism | Tajikistan | Mixed | Intervention (pre-test, post-test) and interviews | Peer-reviewed
[47] Kelly et al. (2023) | Multiple disciplines | Australia | Quantitative | Questionnaire | Peer-reviewed
[48] Limna et al. (2023) | Multiple disciplines | Thailand | Qualitative | In-depth interviews | Peer-reviewed
[49] Liu (2023) | EFL students | China | Quantitative | Questionnaire | Peer-reviewed
[50] Muñoz et al. (2023) | EFL students | Not specified (South America) | Quantitative | Questionnaire | Peer-reviewed
[51] Petricini et al. (2023) | Multiple disciplines | USA | Quantitative | Survey | Preprint
[52] Qureshi (2023) | Computer science | Saudi Arabia | Quasi-experimental design | Problem-solving and evaluation of solutions | Preprint
[53] Sánchez-Ruiz et al. (2023) | Mathematics (Engineering) | Spain | Blended | Flipped teaching, gamification, problem-solving, laboratory sessions, and exams with a computer algebraic system | Peer-reviewed
[54] Shoufan (2023) | Computer engineering | United Arab Emirates | Mixed | Open-ended and closed-ended questionnaire | Peer-reviewed
[55] Uddin et al. (2023) | Construction hazard | USA | Experimental design | Pre- and post-intervention activity and questionnaire | Peer-reviewed
[56] Urban et al. (2023) | Multiple disciplines | Czech Republic | Experimental design | Results from experiments and survey | Preprint
[57] Yan (2023) | EFL students | China | Multi-method qualitative approach | Interview, case-by-case observation, and thematic analysis | Peer-reviewed
[58] Yifan et al. (2023) | Multiple disciplines | Malaysia | Quantitative | Questionnaire | Peer-reviewed
[59] Yilmaz and Yilmaz (2023a) | Programming | Turkey | Case study and mixed | Closed-ended and open-ended survey | Peer-reviewed
[60] Yilmaz and Yilmaz (2023b) | Programming | Turkey | Experimental design | Computational thinking scale, computer programming self-efficacy scale, and motivation scale | Peer-reviewed
[61] Zheng (2023) | Data science | USA | Mixed | Experiments and questionnaire | Peer-reviewed
[62] Zhu et al. (2023) | STEM and non-STEM | Singapore | Quasi-experiment | Weekly surveys and online written self-reflections | Preprint

Share and Cite

Schei, O.M.; Møgelvang, A.; Ludvigsen, K. Perceptions and Use of AI Chatbots among Students in Higher Education: A Scoping Review of Empirical Studies. Educ. Sci. 2024, 14, 922. https://doi.org/10.3390/educsci14080922