What Is the Impact of ChatGPT on Education? A Rapid Review of the Literature

Abstract: An artificial intelligence-based chatbot, ChatGPT, was launched in November 2022 and is capable of generating cohesive and informative human-like responses to user input. This rapid review of the literature aims to enrich our understanding of ChatGPT's capabilities across subject domains, how it can be used in education, and potential issues raised by researchers during the first three months of its release (i.e., December 2022 to February 2023). A search of the relevant databases and Google Scholar yielded 50 articles for content analysis (i.e., open coding, axial coding, and selective coding). The findings of this review suggest that ChatGPT's performance varied across subject domains, ranging from outstanding (e.g., economics) and satisfactory (e.g., programming) to unsatisfactory (e.g., mathematics). Although ChatGPT has the potential to serve as an assistant for instructors (e.g., to generate course materials and provide suggestions) and a virtual tutor for students (e.g., to answer questions and facilitate collaboration), there were challenges associated with its use (e.g., generating incorrect or fake information and bypassing plagiarism detectors). Immediate action should be taken to update the assessment methods and institutional policies in schools and universities. Instructor training and student education are also essential to respond to the impact of ChatGPT on the educational environment.


Introduction
Artificial intelligence (AI) has developed rapidly in recent years, leading to various applications in different disciplines, such as healthcare [1] and education [2]. AI systems can be trained to simulate the human brain and carry out routine work using large amounts of data [3]. In healthcare, for example, AI can aid professionals in their work by synthesising patient records, interpreting diagnostic images, and highlighting health concerns [4]. AI applications have also been utilised in education to enhance administrative services and academic support [2]. One representative example is intelligent tutoring systems (ITS), which can be used to simulate one-to-one personal tutoring. The results of a meta-analysis indicated that ITS generally had a moderately positive effect on the academic achievement of college students [5]. However, the development of ITS can be challenging, as it involves not only content creation and design but also the refinement of feedback phrasing and dialogue strategies [6].
ChatGPT, a recently developed conversational chatbot created by OpenAI [7], may make it easier for instructors to apply AI in teaching and learning. ChatGPT uses natural language processing to generate human-like responses to user input. It has gained attention worldwide for its impressive performance in generating coherent, systematic, and informative responses [8]. In a surprising achievement, ChatGPT passed four separate examinations at the University of Minnesota Law School [9]. Although its scores were not (yet) very good, the results demonstrate that this AI application is capable of earning a university degree [10]. Since its release on 30 November 2022, ChatGPT has become the fastest-growing user application in history, reaching 100 million active users as of January 2023, just two months after its launch [11].
This review addresses the following research questions:

• RQ1: How does ChatGPT perform in different subject domains?
• RQ2: How can ChatGPT be used to enhance teaching and learning?
• RQ3: What are the potential issues associated with ChatGPT, and how can they be addressed?

The Rapid Review Approach
As ChatGPT continues to receive great attention and is increasingly used by students, there is a pressing need to understand its impact on education and take immediate action in response to its possible threats. However, a comprehensive systematic review can take several months or even years to conduct [16,17], which is not ideal for catching up with the rapidly evolving ChatGPT landscape. Therefore, a rapid review approach was used. According to Tricco et al. [17], "a rapid review is a type of knowledge synthesis in which components of the systematic review process are simplified or omitted to produce information in a short period of time" (p. 2). This approach enabled a timely synthesis and overview of recently published articles and their key findings. Accordingly, this review could provide valuable insights enabling researchers, practitioners, and policymakers to respond promptly to the influence of ChatGPT on the field of education.

Search Strategies
This rapid review followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) statement when selecting relevant articles [18]. The final search was conducted on 28 February 2023 (i.e., three months after the release of ChatGPT). Therefore, the ChatGPT involved in these articles was its original release (dated 30 November 2022) based on Generative Pre-trained Transformer 3.5 (GPT-3.5). Seven electronic databases were used: (1) Academic Search Ultimate, (2) ACM Digital Library, (3) Education Research Complete, (4) ERIC, (5) IEEE Xplore, (6) Scopus, and (7) Web of Science. The search string "ChatGPT" was used in each database to search for relevant articles that included the term "ChatGPT" in their title, abstract, or keywords. The publication period was specified as 2022 to the present. Despite the use of multiple databases, only a limited number of relevant articles were found. Therefore, a title search for the term "ChatGPT" was conducted using Google Scholar within the same publication period. This approach enabled the retrieval of additional relevant articles that were not captured in the initial database search.

Inclusion and Exclusion Criteria
Academic articles published between 1 January 2022 and 28 February 2023 (the date of the final search) were reviewed, including advanced online publications and preprints. Non-academic articles (e.g., articles from mass and social media) were excluded. At the time of writing, this period covered all of the articles that had been published about ChatGPT, as it was released on 30 November 2022. To be included in this rapid review, articles had to discuss ChatGPT in the field of education, with no constraints on any specific educational contexts. Literature reviews, if retrieved, were used as background references. However, they were excluded from the synthesis to avoid duplicate findings. In addition, only English-language articles were included in this review. Table 1 summarises the inclusion and exclusion criteria for article selection.

Content Analysis
Guided by the research questions set forth above, a content analysis of the included articles was conducted. Creswell's [19] coding techniques were employed in the process of analysis and interpretation. His approach comprised the following three main stages.

1. Open coding: Open coding is the initial stage of coding, in which the researcher examines the data and generates codes to describe and categorise the data. It allows a deep exploration and understanding of the data without imposing preconceived ideas.
2. Axial coding: After the initial process of open coding, axial coding is used to analyse the relationships and connections between the codes generated during open coding. It involves organising the codes into broader categories or themes and establishing the relationships between them.
3. Selective coding: Selective coding is the final stage of coding. It involves further refining and integrating the categories or themes identified during open and axial coding to develop a comprehensive picture. It identifies the core category or central phenomenon that emerged from the data.
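As an illustration of this three-stage process, the following sketch shows how open codes might be grouped into themes and then into core categories. The codes, categories, and excerpts are entirely hypothetical examples, not data from the review itself:

```python
# Illustrative sketch of the open -> axial -> selective coding process.
# All codes and categories below are hypothetical.

# Open coding: assign free-form codes to excerpts from the articles.
open_codes = {
    "excerpt_1": ["generates course materials"],
    "excerpt_2": ["answers student questions"],
    "excerpt_3": ["fabricates citations"],
}

# Axial coding: group related codes into broader themes,
# merging redundant codes along the way.
axial_themes = {
    "instructor support": ["generates course materials"],
    "student support": ["answers student questions"],
    "accuracy concerns": ["fabricates citations"],
}

# Selective coding: integrate themes into core categories that
# answer the research questions.
core_categories = {
    "benefits": ["instructor support", "student support"],
    "issues": ["accuracy concerns"],
}

# Consistency check: every open code belongs to exactly one theme,
# and every theme to exactly one core category.
all_theme_codes = [c for codes in axial_themes.values() for c in codes]
assert sorted(all_theme_codes) == sorted(
    c for codes in open_codes.values() for c in codes
)
all_core_themes = [t for themes in core_categories.values() for t in themes]
assert sorted(all_core_themes) == sorted(axial_themes)
print("coding hierarchy is consistent")
```

The final asserts mirror the multiple review passes described below: each stage should account for everything produced by the previous stage, with no codes left unclassified.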
Following Creswell's [19] approach, codes were assigned to pieces of data without using predominant frameworks in the stage of open coding. After the first 15 articles had been coded, all of the assigned codes were reviewed and grouped by similarity, thereby reducing redundant codes in the stage of axial coding. The preliminary list of codes was used to analyse the remaining articles, and any new codes that emerged were added to the list. Specific quotes were identified to further support the codes. Finally, similar codes were organised into categories in the stage of selective coding. To enhance the consistency of classification, several exemplary quotes that clearly illustrated each category were identified. Multiple reviews of the data were performed to maintain consistency in coding.

Figure 1 shows that 50 and 367 outcomes were retrieved through the database search and Google Scholar, respectively. Due to duplication across the databases and Google Scholar, some of the articles were removed, yielding 363 unique records for screening. Notably, many of the retrieved records were outside the scope of this review, including articles from mass and social media found by Google Scholar. After reviewing the unique records' titles, abstracts, and publication sources, 55 full-text articles were assessed for eligibility. Five articles were then excluded: (1) two articles that were not related to education, (2) one preprint article that was incomplete, and (3) two articles that were literature reviews. However, some of the excluded articles were used as background references. Ultimately, 50 articles [8,10, …] were selected for synthesis. The list of the included articles is provided in the Supplementary Materials. Figure 1 provides an overview of the article selection process.

As shown in Figure 2, the majority of the included articles were written by researchers from the United States (N = 19), followed by the United Kingdom (N = 4) and Austria (N = 3). Figure 3 shows that 16 of the 50 included articles (32%) were published in journals, and two were published as an eBook (2%) and a report (2%), respectively. The remaining 32 articles (64%) were preprints uploaded on SSRN [68] (N = 12), followed by arXiv [69] (N = 7) and ResearchGate [70] (N = 5), among others.
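The selection flow reduces to simple arithmetic. The sketch below recomputes the counts reported in the text; the number of duplicates removed is inferred from the totals rather than stated explicitly:

```python
# Article selection counts reported in the text.
database_hits = 50
scholar_hits = 367
total_retrieved = database_hits + scholar_hits            # 417 records

unique_records = 363
duplicates_removed = total_retrieved - unique_records     # 54 (inferred)

full_text_assessed = 55
# Exclusions: two not education-related, one incomplete preprint,
# two literature reviews.
excluded_full_text = 2 + 1 + 2
included = full_text_assessed - excluded_full_text

assert included == 50  # matches the 50 articles selected for synthesis
print(f"{duplicates_removed} duplicates removed, {included} articles included")
```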

RQ1: How Does ChatGPT Perform in Different Subject Domains?
As shown in Table 2, ChatGPT-3.5's performance was evaluated in 21 studies using tests, exams, or interviews. All of these studies, except for de Winter's [20] study that used high school exams, were conducted in the context of higher education. ChatGPT's performance varied across subject domains. It demonstrated outstanding results in critical and higher-order thinking [21] and economics [22]. However, its performance was not entirely satisfactory in other subject domains, such as law [10,23], medical education [24-32], and mathematics [33]. These results are consistent with Newton's [34] study, which used multiple-choice question-based exams to assess ChatGPT's performance in economics, medical education, law, and physics. In economics, ChatGPT outperformed average students, but in other subjects, it underperformed average students by 8 to 40 points (out of 100).
Notably, medical education was a subject domain for which there was relatively more evidence of ChatGPT's performance. These results came from multiple countries. Kung et al. [24] and Gilson et al. [25] evaluated ChatGPT's performance using the United States Medical Licensing Examination. Their results suggested that ChatGPT could yield moderate accuracy and, thus, a passing score. However, Fijačko [26] found that ChatGPT failed the exams of the American Heart Association. Han et al. [27] noted that ChatGPT provided incorrect and insufficient information about cardiovascular diseases. In the context of pharmacology education in Malaysia, Nisar and Aslam [28] found that although ChatGPT could provide relevant and accurate answers, these answers lacked references and sources. In China, Wang et al. [29] tested ChatGPT using the Chinese National Medical Licensing Examination. They reported that ChatGPT's performance was lower than medical students' average score. Similar results were reported by researchers in Korea [30], India [31], and Singapore [32]. The findings of these studies suggested that, in general, ChatGPT's performance was not entirely satisfactory in the domain of medical education.

Programming [35-37] — Outstanding to satisfactory: "Nearly all answers of ChatGPT were completely correct and nicely explained . . . the explanations by ChatGPT were amazingly clear and goal-oriented" [35] (p. 1). "The lecturer graded the assignment submission with a total score of 71 out of 100 points, resulting in a grade of 'Satisfactory'" [36] (p. 4).

Software testing [38] — Unsatisfactory: "ChatGPT is not able, by itself, to pass a software testing course. In total, ChatGPT was able to correctly answer 37.5% of the questions we posed" [38] (p. 8).

MCQ-based exams across subjects [34] — Unsatisfactory: "ChatGPT fails to meet the passing grade on almost every MCQ exam that it is tested against, and performs significantly worse than the average human student" [34] (p. 1).

RQ2: How Can ChatGPT Be Used to Enhance Teaching and Learning?
The findings of this review suggested that ChatGPT can serve as an assistant for both instructors and students. Concerning instructors, Table 3 categorises ChatGPT's five major functions into two main aspects, namely teaching preparation (i.e., generating course materials, providing suggestions, and performing language translation) and assessment (i.e., generating assessment tasks and evaluating student performance).
Concerning the teaching preparation aspect, ChatGPT can make suggestions that assist instructors. In the words of one instructor, "ChatGPT can be a useful tool for teachers and educators to remind them of what knowledge and skills should be included in their curriculum, by providing an outline" [40] (pp. 7-8). Megahed et al. [37] asked ChatGPT to generate a course syllabus for an undergraduate statistics course. They noted that its teaching suggestions could be adopted without the need for major changes. Zhai [41] found that ChatGPT was able to provide recommendations related to special education. He commented, "These recommendations are beneficial for students with special learning needs" (p. 4). Concerning the assessment aspect, ChatGPT can help instructors generate exercises, quizzes, and scenarios for student assessment [29,45]. However, Al-Worafi et al. [49] cautioned that the assessment tasks suggested by ChatGPT might not cover all targeted learning objectives. Therefore, they recommended using ChatGPT to guide instructors in their preparation of assessments rather than to completely replace their efforts. For example, Han et al. [27] instructed ChatGPT to create a multiple-choice question with a vignette and lab values for a medical topic. The generated question was "a reasonable basic question to assess students' knowledge" (p. 7). Instructors could then refine the language of and information contained in the question to increase its relevance to their course requirements.
For students, ChatGPT can serve as a virtual tutor to support their learning. Table 4 categorises its six major functions into two main aspects, namely learning (i.e., answering questions, summarising information, and facilitating collaboration) and assessment (i.e., concept checking and exam preparation, drafting assistance, and providing feedback).
Consider ChatGPT's ability to facilitate collaboration in the learning aspect. Rudolph et al. [52] suggested that ChatGPT can generate different scenarios for students to work collaboratively in group activities. It can then provide a discussion structure, real-time feedback, and personalised guidance to facilitate group discussions and debates [44]. As noted by Gilson et al. [25], the enhanced small-group discourse in problem-solving benefits student learning.
Concerning the assessment aspect, students can benefit from using ChatGPT as a scaffolding tool for their initial draft and then refining the draft by correcting errors and adding references to the final versions of their written assignments [10,23]. Gilson et al. [25] noted that ChatGPT's initial answer could prompt further questioning and encourage students to apply their knowledge and reasoning skills. However, Rudolph et al. [52] cautioned that ChatGPT should not replace critical thinking and original work but should instead serve as an aid to improve writing and research skills.

This review identified five major issues associated with ChatGPT in education. Table 5 categorises these issues into two main aspects, namely accuracy and reliability (i.e., relying on biased data, having limited up-to-date knowledge, and generating incorrect or fake information) and plagiarism prevention (i.e., student plagiarism and bypassing plagiarism detectors).

Table 5. Major potential issues associated with ChatGPT.

Aspect: Accuracy and reliability
- Relying on biased data: "These biases stem from research performed in high-income countries and textbooks [i.e., the training data of ChatGPT]" [57] (p. 2). Other support: [40,46,54].
- Having limited up-to-date knowledge: "ChatGPT has no idea of the world after 2021 and hence it could not add any references or information after 2021" [46] (p. 14). Other support: [25,45,56].
- Generating incorrect/fake information: "ChatGPT included a make-up article which does not exist and even provided full bibliographic details of the article with a non-functional URL" [46] (p. 14). Other support: [32,38,58].

Aspect: Plagiarism prevention
- Student plagiarism: "Our experimental group [with ChatGPT support] had slightly more problems with plagiarism than the control group [without ChatGPT support]" [59] (p. 7). Other support: [40,51,60].
- Bypassing plagiarism detectors: "Of the 50 essays inspected, the plagiarism-detection software considered 40 of them with a high level of originality, as evidenced by a similarity score of 20% or less" [61] (p. 10). Other support: [36,52,62].

Consider the issue of generating incorrect or fake information in the accuracy and reliability aspect. Mogali [32] raised concerns about ChatGPT's well-written but inaccurate information. It has commonly been observed that the bibliographic citations generated by ChatGPT can be fake [32,39,46,56,58]. Concerning subject knowledge, Megahed et al. [37] found that it generated incorrect programming code and was unable to detect and resolve its errors. This concern was echoed by Jalil et al. [38], who commented that "ChatGPT is a poor judge of its own correctness" (p. 8). ChatGPT's accuracy and reliability have also been found to be questionable in other subject domains, such as mathematics [33], sports science and psychology [39], and health professions [26,27,32].
Concerning the plagiarism prevention aspect, ChatGPT-generated texts can bypass conventional plagiarism detectors. For example, Ventayen [62] asked ChatGPT to write an essay based on existing publications and checked its output for originality using Turnitin (a plagiarism detection application [71]). However, the application found a low similarity index between the document and existing work, and no plagiarism could be detected. Khalil and Er [61] asked ChatGPT to generate 50 essays based on different open-ended questions. Half of the essays were checked using Turnitin [71], which gave an average similarity score of 13.72%. The other half were checked using iThenticate [72], another plagiarism detection application, which gave an average similarity score of 8.76%. With these scores, the ChatGPT documents were considered highly original.
The included articles proposed several strategies to address the potential issues associated with ChatGPT in education. Table 6 categorises these strategies into three main aspects, namely task design (i.e., incorporating multimedia resources, adopting novel question types, and using digital-free assessment formats), the identification of AI writing (i.e., using AI-based writing detection tools and checking references), and institutional policy (i.e., establishing anti-plagiarism guidelines and providing student education).

Aspect: Task design
- Incorporating multimedia resources: [22,34,52].
- Adopting novel question types: "[Instructors] might also reconsider the types of questions they pose to students, focusing on those that require analysis rather than those that simply require [a] recall of legal rules" [10] (p. 12). Other support: [8,22,63].
- Employing digital-free assessment formats: "blanket solution would be to make all assessments of the 'in-class' variety . . . it will be essentially impossible for ChatGPT to be leveraged in an unscrupulous fashion" [23] (p. 19). Other support: [21,31,36].

Aspect: Identification of AI writing
- Using AI-based writing detection tools: "Plagiarism detectors could not identify the AI-originated text, but AI detectors did" [39] (p. 2). Other support: [21,55,56].
- Checking references: "Although in text citations and references were provided . . . , these were all fabricated . . . This does open a potential avenue of detection by academic staff" [56] (p. 5). Other support: [39,50,52].

Aspect: Institutional policy
- Establishing anti-plagiarism guidelines: "Administrations should consider how to reshape honor codes to regulate the use of language models in general" [10] (p. 12). Other support: [52,56,61].
- Providing student education: "Our recommendations for students are to . . . be aware of academic integrity policies and understand the consequences of academic misconduct . . . provide training on academic integrity for students" [52] (pp. 14-15). Other support: [54,64,65].

Consider the strategy of adopting novel question types in the task design aspect.
Zhai [8] proposed exploring innovative formats that encourage students to be creative and engage in critical thinking. Choi et al. [10] emphasised the importance of requiring students to analyse cases rather than simply recalling knowledge. Similarly, Geerling et al. [22] suggested requiring students to apply the concepts that they learn in their courses and even to create new materials that AI cannot replicate. Stutz et al. [36] concluded that future assessments should focus on the higher levels of Bloom's taxonomy, such as application, analysis, and creation [73].
Other strategies concerned the identification of AI writing and institutional policy aspects. In the former aspect, the findings of this review suggested that using AI-based writing detection tools and checking references were the two major strategies. Szabo [39] reported that although conventional plagiarism detectors failed to identify ChatGPT-generated texts, AI detectors were (still) able to detect them. Moreover, ChatGPT's possible failure to generate a correct reference list (see Table 5) can be a telltale sign for instructors seeking to identify whether a student has used ChatGPT [50,51,56]. In addition to detecting student plagiarism, researchers have emphasised the importance of establishing anti-plagiarism guidelines and educating students about academic integrity [52,56,61].

Discussion
In this review, 50 articles published on or before 28 February 2023 were analysed. Therefore, the ChatGPT involved in these articles was its original version based on GPT-3.5, instead of GPT-4. The findings suggested that ChatGPT has the potential to enhance teaching and learning (see Tables 3 and 4). However, its knowledge and performance were not entirely satisfactory across subject domains (see Table 2). The use of ChatGPT also presents various potential issues, such as generating incorrect or fake information and student plagiarism (see Table 5). Therefore, immediate action is needed to address these potential issues (see Table 6) and optimise the use of ChatGPT in education.

Leveraging ChatGPT in Teaching and Learning
ChatGPT can be a valuable tool for instructors, providing a starting point for creating course syllabi, teaching materials, and assessment tasks. However, concerns regarding the accuracy of its generated content must be addressed. One possible solution would be to use ChatGPT to generate raw materials to train course-specific chatbots. For example, Topsakal and Topsakal [42] used ChatGPT to create dialogues to aid students' English language learning. After verifying the accuracy of the materials, instructors can ask ChatGPT to convert them into a format suitable for use with AI-based chatbots, such as Google Dialogflow [75], providing students with an interactive and personalised learning environment.
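As a sketch of this workflow, verified dialogue pairs could be reshaped into a machine-readable intent structure before being imported into a chatbot platform. The dialogue content and the schema below are illustrative assumptions only; a real platform such as Google Dialogflow defines its own import formats:

```python
import json

# Hypothetical dialogue pairs generated by ChatGPT and already verified
# for accuracy by the instructor (illustrative content only).
verified_dialogues = [
    ("How do I order coffee in English?",
     "You could say: 'Could I have a small latte, please?'"),
    ("How do I ask for the bill?",
     "You could say: 'Could we have the bill, please?'"),
]

# Reshape into a generic intent structure. This schema is an assumption
# for illustration, not Dialogflow's actual import format.
intents = [
    {
        "intent": f"english_practice_{i}",
        "training_phrases": [question],
        "responses": [answer],
    }
    for i, (question, answer) in enumerate(verified_dialogues)
]

print(json.dumps(intents, indent=2))
```

The key design point is the verification step in the middle: only instructor-checked material enters the chatbot, so students interact with content whose accuracy has already been confirmed.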
ChatGPT can also enhance active learning approaches. For example, Rudolph et al. [52] suggested using flipped learning, in which students are required to prepare for lessons by studying pre-class materials. This instructional approach can free up class time for interactive learning activities, such as group discussions. In conventional flipped classes, however, students may encounter difficulties in pre-class learning [76]. In-class participation also needs to be improved [77]. This issue became apparent during the COVID-19 pandemic, during which fully online flipped learning led to poor in-class participation and disengagement from peer sharing [78,79]. As a virtual tutor, ChatGPT can assist students in online independent study by answering their questions [28] and enhance group dynamics by suggesting a discussion structure and providing real-time feedback [44].

Challenges and Threats Posed by ChatGPT in Education
According to Sallam [14], the use of ChatGPT in education poses challenges related to its accuracy and reliability. Because ChatGPT is trained on a large corpus of data, it may be biased or contain inaccuracies. Mbakwe et al. [57] noted that bias may stem from the use of research primarily conducted in high-income countries or textbooks that are not universally applicable. As evidenced by Pavlik [54], for example, ChatGPT is not familiar with hedge fund ownership of news media. In addition, ChatGPT's knowledge is limited and has not (yet) been updated with data after 2021 [25,45,46]. Therefore, its responses may not always be accurate or reliable, particularly for specialised subject matters and recent events. Furthermore, ChatGPT may generate incorrect or even fake information [37,46,51]. This issue can be problematic for students who rely on ChatGPT to inform their learning.
Student plagiarism has become a significant concern in education. Plagiarism detection applications (e.g., Turnitin [71] and iThenticate [72]) are commonly used to identify copied content in student assignments. However, studies have found that ChatGPT can bypass these detectors by generating seemingly original content [61,62]. Bašić et al. [59] provided evidence that students who used ChatGPT were more likely to commit plagiarism than those who did not use it. ChatGPT's ability to facilitate plagiarism not only impairs academic integrity but also defeats the purpose of assessment, which is to evaluate student learning fairly. According to Cotton et al. [50], students who use ChatGPT to generate high-quality work gain an unfair advantage over their peers who do not have access to it. More importantly, instructors cannot accurately evaluate student performance when ChatGPT is involved, making it difficult to follow up on students' learning problems.

Immediate Action in Response to the Impact of ChatGPT
Immediate action must be taken to mitigate the impact of ChatGPT on education. Assessment methods and institutional policies need to be updated to address the challenges posed by the emergence of AI-generated content in student assignments. Before the launch of GPT-4 (dated 14 March 2023), instructors could refine the design of their assessment tasks by incorporating multimedia resources to reduce the risk of plagiarism. The original release of ChatGPT could not process images and videos; the resulting missing context raised barriers for students seeking to use it to cheat [21,34,52]. However, GPT-4, a large multimodal model created by OpenAI [74], is able to process images. Therefore, instructors have to consider other strategies, as shown in Table 6, such as incorporating digital-free components (e.g., oral presentations [40,52,58]) into their assessment tasks.
These components require students to demonstrate their abilities in real-time and in person. At the institutional level, AI-based writing detection tools should be made available to instructors. Furthermore, anti-plagiarism guidelines should be established to clarify the boundaries of ChatGPT's involvement in student learning.
Instructor training and student education are also critical in responding to the impact of ChatGPT [64]. It is essential to train instructors on how to identify the use of ChatGPT in student assignments, which can be achieved by using AI detection tools. Instructors should also be trained on how to fully use ChatGPT in their teaching preparation and course assessment, as shown in Table 3. For students, it is crucial to introduce them to the limitations of ChatGPT, such as its reliance on biased data, limited up-to-date knowledge, and potential for generating incorrect or fake information. Therefore, instructors should teach students to use other authoritative sources (e.g., reference books) to verify, evaluate, and corroborate the factual correctness of information provided by ChatGPT [39,44]. It is also important to increase students' awareness of academic integrity policies and their understanding of the consequences of academic misconduct [52,60]. To achieve this goal, instructors should openly discuss ChatGPT in their courses and emphasise the importance of academic honesty.

Limitations of this Rapid Review
This rapid review has limitations that must be considered when interpreting the findings. First, similar to the reviews by Mhlanga [13] and Sallam [14], the majority of the included articles were preprints, meaning that they have not undergone rigorous peer review. The quality of their evidence is, therefore, questionable. Follow-up systematic reviews are needed once more peer-reviewed articles on ChatGPT are published.
Second, most of the included articles were written in Western contexts, particularly those of medical and higher education. Thus, the findings of this review may be biased towards these specific contexts. Further studies in other subject domains (e.g., mathematics and language education) and educational contexts (e.g., primary and secondary education) are recommended.
Third, this review only focused on the original release of ChatGPT. As a result, the findings might not be applicable to other applications of GPT and GPT-4 [74] that have been launched beyond the time period of this review. Future research can test the performance of GPT-4, explore its possibilities in supporting teaching and learning, and discuss its potential threats and solutions. Furthermore, future reviews can expand their scope to include other applications of GPT in education. By conducting more in-depth research and comprehensive reviews, educators can better understand the capabilities and limitations of GPT technology, as well as develop appropriate guidelines and policies to ensure its responsible and ethical use.

Limitations of the Included Articles
In addition to the limitations of this review, there are three major limitations associated with the included articles. First, very few studies empirically examined the influence of ChatGPT on student performance and behaviour. From the study by Bašić et al. [59], for example, we learnt that using ChatGPT to support student writing might not improve performance but could instead lead to more plagiarism issues. However, these researchers acknowledged that the generalisability of their study was limited due to the small number of research participants (N = 18). Further research is needed to evaluate the benefits and potential problems of ChatGPT-assisted learning for students.
Second, some of the suggestions made in the included articles were based on the researchers' intuitive beliefs rather than empirical evidence. For example, some researchers [8,52,62] suggested designing assessment tasks that focus on creativity and critical thinking. However, specific strategies to achieve this goal were not always thoroughly discussed. Therefore, more rigorous studies are needed to provide evidence-based recommendations for using ChatGPT in education.
Finally, some researchers used ChatGPT to generate some of their content or suggestions, leading to repetitive ideas across articles. For example, researchers in different articles asked ChatGPT how it can benefit teaching and learning, resulting in very similar content, such as personalised learning and language translation [8,43,45,46,48,51]. This observation provides another reason to discourage the direct adoption of ChatGPT- or AI-generated texts as personal ideas in research articles, in addition to Thorp's [80] concern about "plagiarism of existing works" (p. 313). When ChatGPT is used as a co-author, it may generate similar points in different articles that do not acknowledge each other, which is the equivalent of self-plagiarism. If researchers plan to involve ChatGPT in writing, additional analysis of its ideas should be conducted.

Conclusions
This rapid review of 50 articles highlighted ChatGPT's varied performance across different subject domains and its potential benefits when serving as an assistant for instructors and as a virtual tutor for students. However, its use raises various concerns, such as its generation of incorrect or fake information and the threat it poses to academic integrity. The findings of this review call for immediate action by schools and universities to update their guidelines and policies for academic integrity and plagiarism prevention. Furthermore, instructors should be trained on how to use ChatGPT effectively and detect student plagiarism. Students should also be educated on the use and limitations of ChatGPT and its potential impact on academic integrity.
Funding: This research received no external funding.

Institutional Review Board Statement: Not applicable.
Informed Consent Statement: Not applicable.

Data Availability Statement: The data samples and detailed coding procedures can be accessed by contacting the corresponding author.

Conflicts of Interest: The author declares no conflict of interest.