Systematic Review

Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda

Department of Information Systems, College of Business and Information Systems, Dakota State University, Madison, SD 57042, USA
* Author to whom correspondence should be addressed.
Information 2025, 16(4), 296; https://doi.org/10.3390/info16040296
Submission received: 13 February 2025 / Revised: 27 February 2025 / Accepted: 28 March 2025 / Published: 8 April 2025
(This article belongs to the Special Issue Advancing Educational Innovation with Artificial Intelligence)

Abstract:
This systematic literature review rigorously evaluates the impact of Generative AI (GenAI) on academic integrity within higher education settings. The primary objective is to synthesize how GenAI technologies influence student behavior and academic honesty, assessing the benefits and risks associated with their integration. We defined clear inclusion and exclusion criteria, focusing on studies explicitly discussing GenAI’s role in higher education from January 2021 to December 2024. Databases included ABI/INFORM, ACM Digital Library, IEEE Xplore, and JSTOR, with the last search conducted in May 2024. A total of 41 studies met our precise inclusion criteria. Our synthesis methods involved qualitative analysis to identify common themes and quantify trends where applicable. The results indicate that while GenAI can enhance educational engagement and efficiency, it also poses significant risks of academic dishonesty. We critically assessed the risk of bias in included studies and noted a limitation in the diversity of databases, which might have restricted the breadth of perspectives. Key implications suggest enhancing digital literacy and developing robust detection tools to effectively manage GenAI’s dual impacts. No external funding was received for this review. Future research should expand database sources and include more diverse study designs to overcome current limitations and refine policy recommendations.

Graphical Abstract

1. Introduction

In higher education, the pervasive rise of Generative Artificial Intelligence (GenAI) technologies, including Large Language Models (LLMs) like ChatGPT (Version 4, https://chatgpt.com), presents unparalleled opportunities and formidable challenges. This systematic literature review critically evaluates the impact of GenAI on academic integrity within higher education institutions. While dramatically enhancing learning through customized educational experiences, these technologies pose significant risks to the fundamental tenets of academic honesty, such as originality and ethical student behavior.
The necessity for this research emerges from several critical gaps in current academic understanding. First, comprehensive analyses that balance the beneficial aspects of GenAI against its potential to facilitate academic dishonesty remain scarce, for example, dishonesty through ghostwritten assignments or other forms of cheating that evade detection by conventional plagiarism tools. Second, while individual studies have explored isolated aspects of GenAI’s impact, a synthesized overview that integrates these findings into actionable insights and guidelines for educators and policymakers is still lacking. Third, because GenAI technologies evolve rapidly, existing research has not kept pace with assessing their long-term implications for educational practices and integrity standards.
To address these gaps, this review adopts a structured approach, utilizing the PRISMA method to ensure a rigorous and systematic analysis of the literature. This method aids in transparent reporting and comprehensive synthesis of existing studies, and it helps identify underexplored areas that require further scholarly attention. The review aims to categorize the existing research into coherent themes that delineate how GenAI impacts student learning processes, the authenticity of academic outputs, and the ethical challenges posed in educational settings.
This structured approach draws on interdisciplinary perspectives, integrating theoretical constructs from educational psychology, ethics in technology, and instructional design to provide a comprehensive base for exploring the multifaceted impacts of GenAI on academic integrity. The scope of the review, encompassing studies published from 2021 to 2024, captures the most recent insights into the rapid advancements and applications of GenAI in education. By employing this comprehensive approach, the review offers a detailed understanding of the current state of Generative AI in higher education. It not only facilitates the classification and analysis of diverse studies but also supports the development of a coherent narrative around the capabilities and challenges of GenAI, ensuring that the insights generated are actionable and relevant to educators, policymakers, and academic leaders.
Furthermore, this review is crucial as it will outline a research agenda to fill the identified gaps. By highlighting these areas, the paper seeks to guide future research efforts that can provide deeper insights into how educational systems can adapt to and integrate GenAI responsibly. This will ultimately aid in developing robust strategies to leverage the advantages of GenAI while safeguarding against its risks to academic integrity. Through this exploration, the review will contribute to formulating evidence-based policies and practices that ensure the ethical use of GenAI in education, maintaining the integrity of academic evaluations, and fostering an environment of genuine learning and innovation.

2. Methodology

The methodological framework for this systematic literature review is structured into three crucial phases, ensuring a thorough examination of Generative AI’s (GenAI) impact on higher education, particularly concerning academic integrity. The initial phase, Research Definition and Scope, focuses on defining the scope of inquiry to include GenAI’s influence on student behavior, the authenticity of academic outputs, and the broader implications for educational practices, with an emphasis on academic integrity issues such as plagiarism and cheating. This phase also outlines the ethical considerations surrounding the use of GenAI tools in academic settings. In the Literature Classification phase, studies are systematically categorized to assess GenAI’s impact on student learning, engagement, and the integrity of assessments. This includes reviewing the effectiveness of existing plagiarism detection tools, developing ethical guidelines, and exploring how pedagogical practices are adapting to integrate GenAI responsibly.
The final phase, Analysis and Synthesis, consolidates the findings to highlight prevalent trends, challenges, and potential solutions, pinpointing gaps in current research that necessitate the formulation of new academic policies or technological innovations to align GenAI with educational integrity standards. The review adheres to the current PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) 2020 guidelines, based on the framework first proposed by Liberati et al. [1]. PRISMA’s structured approach facilitates detailed and clear documentation of the review process—from data extraction to analysis and interpretation—ensuring a comprehensive, transparent, and unbiased aggregation of existing evidence. This systematic approach not only bolsters the credibility of the research but also aids the academic community’s clear understanding and practical application of the study methods. The results of the PRISMA review are illustrated in Figure 1 in the following section. This figure was generated using the PRISMA2020 flow diagram R package developed by Haddaway et al. [2].

Search and Filtering Strategy

The search string below was utilized across four databases: ABI/INFORM, ACM Digital Library, IEEE Xplore, and JSTOR. Google Scholar was also used for an initial probing search of the research topic. Depending on the database, criteria were either placed in single quotation marks or removed from quotation marks and left in parentheses. This was done to account for each database’s unique search capabilities and achieve the maximum yield of results from the query. Searches were also limited to English-language papers published between 2021 and 2024 to filter out irrelevant articles.
Additionally, papers needed to be published in a scholarly journal or in conference proceedings during that time. It should be noted that the ABI/INFORM database provided two unique search filters that were applied to the query: “Peer reviewed” and “Full text.” The search string was as follows:

(“Generative AI” OR “Artificial Intelligence” OR “AI” OR “Large Language Model” OR “LLM”) AND (“academic integrity”) AND (“higher education” OR “university” OR “college”) AND (“impact” OR “influence”)
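For illustration only, a Boolean string of this shape can be assembled programmatically, which keeps the quoted and unquoted variants required by different databases consistent. The helper names below are our own and are not part of the review’s protocol.

```python
# Illustrative sketch (not part of the review protocol): building the
# four-group Boolean search string used across the databases.

def or_group(terms, quote=True):
    """Join terms with OR; quote each term for databases that support phrase search."""
    fmt = '"{}"'.format if quote else str
    return "(" + " OR ".join(fmt(t) for t in terms) + ")"

def build_query(quote=True):
    """AND together the four concept groups from the review's search string."""
    groups = [
        ["Generative AI", "Artificial Intelligence", "AI", "Large Language Model", "LLM"],
        ["academic integrity"],
        ["higher education", "university", "college"],
        ["impact", "influence"],
    ]
    return " AND ".join(or_group(g, quote) for g in groups)

print(build_query())
```

Passing `quote=False` yields the unquoted variant for databases whose syntax rejects phrase quoting, as described above.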
An exploratory search was first conducted on Google Scholar, which produced approximately 8930 results, although no specific criteria beyond date ranges were applied due to the site’s limitations. This search was performed to gauge the relevance of the research topic. After this initial assessment, four reputable scholarly databases were utilized for the review: ABI/INFORM, ACM Digital Library, IEEE Xplore, and JSTOR. These databases were selected for their comprehensive coverage of information technology and education disciplines. Peer-reviewed publications from January 2021 to December 2024 were specifically chosen to focus on the latest developments and applications of AI technologies in academic settings, as the pace of technological change has rendered earlier works less representative of the current challenges and capabilities. All searches, including the scoping search, were conducted the week of 15 May 2024.
After confirming no duplicates existed, each title and abstract was meticulously screened against specific inclusion criteria (See Table 1). Articles were included if they pertained to GenAI, or LLMs, were set within higher education contexts, addressed academic integrity, and discussed the impact of GenAI on academic integrity. Conversely, articles were excluded if they did not relate to GenAI, occurred outside of higher education settings, did not address academic integrity, or failed to mention the impact of GenAI on academic integrity.
Data extraction was conducted by multiple reviewers to ensure accuracy and minimize bias, focusing on authors, year of publication, study design, participant characteristics, GenAI technology used, and main findings related to academic integrity. Next, a qualitative synthesis was performed to identify common themes and divergent views regarding the influence of GenAI on academic integrity. This process followed Braun and Clarke’s [3] framework for systematically identifying and analyzing patterns across the data.
The final categorization emerged through an inductive coding approach, in which a detailed reading of each study facilitated the identification of key observations. Initial codes, such as “GenAI-assisted cheating”, “ethical use in pedagogy”, and “effects on student engagement”, captured specific aspects of GenAI’s impact on academic integrity. Through an iterative process of refinement, these codes were synthesized into broader themes that provided a structured understanding of the field’s current challenges and discussions. This data-driven approach ensured that themes emerged organically rather than being shaped by preconceived categories. The results of this analysis are discussed in the following section.
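The inductive roll-up described above can be sketched as a simple code-to-theme mapping. The initial codes mirror the examples given in the text, but the mapping and the sample studies below are purely illustrative, not the review’s actual coding data.

```python
# Hypothetical sketch of rolling initial codes up into broader themes.
from collections import Counter

# Illustrative mapping; the review derived its themes inductively.
CODE_TO_THEME = {
    "GenAI-assisted cheating": "Risks of Academic Dishonesty and Cheating",
    "detection tool limits": "Risks of Academic Dishonesty and Cheating",
    "ethical use in pedagogy": "Pedagogical Implications and Ethical Use of GenAI",
    "effects on student engagement": "Impacts on Student Learning and Educational Practices",
}

def roll_up(coded_studies):
    """Count how many coded observations fall under each broader theme."""
    themes = Counter()
    for study, codes in coded_studies.items():
        for code in codes:
            themes[CODE_TO_THEME[code]] += 1
    return themes

sample = {
    "Study A": ["GenAI-assisted cheating"],
    "Study B": ["ethical use in pedagogy", "effects on student engagement"],
}
print(roll_up(sample))
```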

3. Results

The comprehensive searches conducted across four key academic databases—ABI/INFORM, ACM Digital Library, IEEE Xplore, and JSTOR—aimed to gather recent publications on the impact of Generative AI and Large Language Models (LLMs) on academic integrity within higher education. The search, spanning from 2021 to 2024, initially identified 255 records: 62 from ABI/INFORM, 165 from ACM Digital Library, 16 from IEEE Xplore, and 12 from JSTOR. A thorough verification confirmed no duplicates, maintaining the integrity of the dataset. As previously mentioned, the screening process continued by evaluating studies based on their relevance to GenAI in higher education and its impact on academic integrity. This screening led to the exclusion of 211 records due to their irrelevance to the defined topic or focus area, leaving 44 full-text articles for a more detailed assessment. Of these, 10 articles were excluded for either targeting the wrong population or having insufficient relevance to the study topic, narrowing the selection to 34 articles. An example of an “irrelevant” study would be one that discussed GenAI in an academic setting but collected data from middle school students rather than those in higher education, thus violating the inclusion rule under the Academic Integrity criteria in Table 1. The review was further enriched by examining the reference lists of the 34 remaining articles, which identified 7 additional relevant studies. Consequently, 41 articles were included in the final systematic review.
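The screening counts reported above can be checked arithmetically; this short sketch simply re-derives each stage of the PRISMA flow from the reported figures.

```python
# Sanity check of the PRISMA flow counts reported in the text.
identified = {"ABI/INFORM": 62, "ACM Digital Library": 165,
              "IEEE Xplore": 16, "JSTOR": 12}
total = sum(identified.values())
assert total == 255                    # records identified, no duplicates found

after_screening = total - 211          # 211 excluded at title/abstract screening
assert after_screening == 44           # full-text articles assessed

after_fulltext = after_screening - 10  # wrong population / insufficient relevance
assert after_fulltext == 34

included = after_fulltext + 7          # studies added from reference-list checks
assert included == 41                  # studies in the final synthesis
print(included)
```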
Overall, the results showed that the rapid integration of generative artificial intelligence into educational environments brings with it a spectrum of both potential benefits and inherent risks that educators and institutions must navigate. As GenAI technologies like ChatGPT become more pervasive, they offer unprecedented opportunities to enhance the learning experience through more personalized, accessible, and engaging educational content. However, these technologies also introduce complex challenges to the foundational principles of academic integrity. The dual capacity of GenAI to both support and compromise educational standards necessitates a careful assessment of how these tools are implemented in educational settings to ensure they enhance student learning without compromising ethical standards.
Thematic analysis of the literature revealed three recurring topical categories, presented in the following subsections. Representing the literature in this way lends a more organized structure to the results and the subsequent discussion. These categories are: Risks of Academic Dishonesty and Cheating; Pedagogical Implications and Ethical Use of GenAI; and Impacts on Student Learning and Educational Practices. Table 2 further illustrates these categories and organizes the literature within them by author. The remainder of this section analyzes the literature within the context of these categorical themes.

3.1. Risks of Academic Dishonesty and Cheating

Eke [5] delves into the complexities of GenAI, such as ChatGPT, in higher education, highlighting its ability to quickly generate sophisticated texts, which can be misused to create undetectable, ghostwritten assignments. This capacity poses significant risks to academic honesty, and countering it may require multi-stakeholder efforts to train users in the ethical use of GenAI and in identifying misuses of the technology.
Opinions on how best to integrate GenAI are divided. Lau and Guo [8] showed that university instructors have concerns about GenAI-assisted cheating in the short term, prompting reactive measures such as bans and grading adjustments. In the long term, instructors are divided between resisting GenAI to maintain traditional programming education and integrating it to prepare students for future industry demands [8].
Similarly, Güner et al. [6] explore student attitudes toward ChatGPT, noting positive aspects like enhanced learning experiences while raising serious concerns about the potential for academic dishonesty, mainly the ease of cheating and plagiarism. These technologies could inadvertently foster academic laziness and overreliance if not carefully managed [11,14]. Echoing this concern, Hannan and Liu [7] discuss the increase in cheating and plagiarism due to the ease of access to personalized content and automated grading facilitated by GenAI. They call for a balanced approach to technology use in academic settings that does not widen the digital divide or lead to privacy violations in the pursuit of automation. However, they note the benefits of GenAI-powered proctoring solutions that may mitigate academic dishonesty in online courses [7]. Slomp et al. [12] also mention the potential of GenAI models for personalized learning but caution against privacy violations, biased output, and other ethical misuse of the technology.
Several authors lament the current state of popular AI detection tools, such as iThenticate, MOSS, and Turnitin, stating they are inadequate for the needs of educators [4,9,10]. Park and Ahn [9], for instance, reveal that while ChatGPT can enhance information accessibility and efficiency, it threatens academic integrity by potentially lowering learning engagement and standards. The authors instead advocate for a holistic socio-technical solution that considers human psychology, organizational culture, and social norms [9]. Researchers such as Denny et al. [4], Khalil and Er [10], and Susnjak [13] also address the limitations of current plagiarism detection tools, which fail to flag essays produced by ChatGPT because the generated text is novel rather than copied, indicating a need for advancements in detection technologies and updated classroom policies. One proposed remedy is a two-step detection process: first verifying the origin of the content, then running a similarity check [10]. This suggestion arose after testing showed that ChatGPT was able to detect AI-generated content more accurately than traditional tools like Turnitin [10].
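The two-step process suggested by Khalil and Er [10] can be sketched structurally as follows. Both checkers are hypothetical placeholders, a real pipeline would call an AI-text classifier and a similarity service such as the tools the authors tested, so this is a sketch of the control flow, not an implementation of any named product.

```python
# Structural sketch of a two-step screening pipeline: origin check first,
# similarity check second. Both checks are toy stand-ins for real services.

def looks_ai_generated(text: str) -> bool:
    # Placeholder heuristic; a real system would query an AI-text classifier.
    return "as an ai language model" in text.lower()

def similarity_score(text: str, corpus: list[str]) -> float:
    # Placeholder: fraction of corpus documents containing the text's opening.
    prefix = text[:40]
    if not corpus or not prefix:
        return 0.0
    return sum(1 for doc in corpus if prefix in doc) / len(corpus)

def screen_submission(text: str, corpus: list[str], sim_threshold: float = 0.2) -> str:
    """Step 1: verify origin. Step 2: run a conventional similarity check."""
    if looks_ai_generated(text):
        return "flag: possible AI-generated origin"
    if similarity_score(text, corpus) > sim_threshold:
        return "flag: high similarity to existing sources"
    return "pass"
```

The point of the ordering is that a similarity check alone passes novel AI-generated text, so the origin check must come first.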

3.2. Pedagogical Implications and Ethical Use of GenAI

In separate studies, Lan and Chen [16] and Xie and Ding [44] both critically analyze the implications of students’ dependency on GenAI for academic tasks, which they argue could reduce critical thinking and engagement. However, both sets of authors see the potential for GenAI to support academic integrity through strategic pedagogical alignments. Educators must fully understand the capabilities and limitations of large language models to prevent their misuse in academic settings [19]. Likewise, promoting ethical GenAI use through proven frameworks and educational strategies may prevent issues like plagiarism and bias [17].
Generative AI tools, such as GPT-4o CoPilot and OpenAI’s GPT-4, enable the generation of sophisticated code from natural-language prompts, raising concerns about student overreliance and potential academic misconduct, as reported by Denny et al. [4]. Similarly, Shoufan [27,28] underscores both the benefits and risks of using ChatGPT in educational contexts, particularly its ability to provide accurate answers that could be exploited in exams. Additionally, Liu et al. [18] suggest that GenAI could serve as a tool to reduce academic dishonesty by offering guidance that encourages meaningful engagement with the material rather than providing direct answers.
GenAI is likened to a “tool in the toolbox”, with its use in summative assessments helping to normalize its pedagogical usefulness, as discussed by Petrovska et al. [22]. Furthermore, Kendon et al. [20] advocate for integrating GenAI tools into the learning process, transforming them into a supplemental extension of the curriculum that enhances the development of higher-order skills with a focus on academic integrity. Isaac et al. [15] also support a positive stance on the use of Chatbots in the classroom, highlighting that their proper use offers benefits that outweigh the risks to academic integrity for both students and faculty.
Malinka et al. [21] provide similar advice on GenAI adoption in the classroom: “If students use this technology responsibly, their performance might be boosted. In addition, educators might benefit from ChatGPT’s abilities in curriculum creation or boosting their performance.” Others recommend teaching ethical GenAI use, explicitly stating restrictions in syllabi and requiring students to disclose GenAI use [23]. This highlights the need for ongoing adaptation in educational practices to balance the benefits of GenAI tools with maintaining academic integrity.
Wang and Cornely [32] provide a related argument that the use of advanced detection technologies, the development of clear guidelines on GenAI usage, and the promotion of ethical awareness among students and educators is key to maintaining the integrity of the educational process. Similarly, Tlili et al. [31] advocate for cautious integration of GenAI in education to mitigate ethical and practical challenges, underlining the importance of establishing clear usage guidelines. Rajabi et al. [24] come to the same conclusion, saying that “…integration must be paired with clear guidelines, redesigned assessment methods, and transparent GenAI policies to ensure responsible usage and mitigate potential drawbacks”.
Concerns about AI-generated content facilitating cheating and undermining the learning process are highlighted by Țală et al. [30], who emphasize the need for explicit guidelines to prevent plagiarism and maintain academic integrity. They, along with Saxena et al. [26], advocate for a balanced approach to GenAI in education, underscoring the importance of educating students on the responsible use of these tools to foster genuine intellectual growth. Conversely, Șerban et al. [29] argue for more stringent regulatory frameworks to ensure that GenAI’s deployment in assessments does not lead to unethical practices like cheating and plagiarism, thereby supporting the integrity of educational evaluations.
Policy reform may be a more immediate solution to these challenges, as discussed by Rudolph et al. [25]. They highlight the transformative potential of generative AI in revolutionizing academic practices but also caution about its ability to facilitate academic dishonesty, such as by generating test answers [25]. The authors further stress the importance of developing comprehensive academic honesty policies, training, and tools to detect GenAI use in academic submissions, ensuring that technological integration supports authentic learning experiences and maintains high educational standards [25]. All stakeholders should be aware of these policies and understand the consequences of breaching them through unethical use of GenAI tools.

3.3. Impacts on Student Learning and Educational Practices

Richards et al. [39] address using GenAI in academic assessments, pointing out that although GenAI tools like ChatGPT can generate adequate responses for various assessment formats, they struggle with tasks requiring higher cognitive skills. This observation calls for reevaluating assessment methods to ensure they genuinely measure student learning [38]. Other authors note the risk of students becoming overly reliant on these tools, which might hinder their independent problem-solving skills [23].
While educators favor adapted assessments integrating GenAI to maintain academic integrity and encourage critical thinking, students express mixed reactions, particularly concerning the loss of creativity [40]. Other authors see the benefits of GenAI in enhancing student learning through innovative assessment designs [41], potentially addressing such student concerns in the long term. One example is a study in which Ilic and Carr [34] applied an innovative rubric to distinguish between human-written and AI-generated text by evaluating students’ ability to deconstruct and reconstruct academic language frames. However, the study’s results were mixed, prompting further revision and testing before standardized implementation.
Finally, Tu [42] explores students’ varying interactions with ChatGPT, noting that those with a growth mindset may benefit more from its capabilities, suggesting the need for educational strategies that harness GenAI’s benefits while preventing potential misuse. Wang et al. [43] analyze the support large language models provide in academic tasks like literature review, accentuating the need for proper training in their use to maximize benefits while managing risks effectively. Likewise, Qureshi [37] and Ali et al. [33] emphasize that institutions should revise academic integrity policies to include clear rules and consequences for GenAI tool usage, provide faculty training on GenAI adoption, and educate students on maintaining ethical standards. This aligns with Liu’s [35] prediction that students and faculty will widely accept tools like ChatGPT due to their convenience and rapid growth. He suggests that educators should update their teaching methods to “meet the changing needs of the field” [35].

4. Discussion and Research Agenda

Integrating Generative AI, such as ChatGPT, into educational environments presents opportunities and challenges that must be carefully navigated. This research agenda aims to identify existing gaps in the literature and propose future research directions across three key themes: Risks of Academic Dishonesty and Cheating, Pedagogical Implications and Ethical Use of GenAI, and Impacts on Student Learning and Educational Practices. A central, overarching theme is the balance between leveraging GenAI to enhance learning and ensuring it does not undermine academic integrity. This issue is discussed in depth at the end of the section and presents an opportunity for a fourth thematic category to explore.

4.1. Emerging Challenges of GenAI-Induced Academic Dishonesty

Existing literature highlights significant concerns about the potential for GenAI to facilitate academic dishonesty [5,7]. Future research should focus on developing advanced detection tools and strategies to mitigate these risks [4]. This includes enhancing current plagiarism detection technologies and exploring new methodologies that can identify GenAI-generated content [10]. Moreover, longitudinal studies are needed to understand the evolving nature of cheating behaviors as GenAI technologies become more sophisticated.
Additionally, exploring the socio-technical aspects of implementing these tools within academic institutions can provide insights into their effectiveness and acceptance. Furthermore, research should investigate the role of GenAI-driven proctoring solutions in reducing academic dishonesty in online courses and explore the development of holistic socio-technical solutions to address the inadequacies of current GenAI detection methods [7,9]. Establishing robust regulatory frameworks to support these technological advancements and ensure academic integrity [29] is also essential.

4.2. Pedagogical Frameworks and Ethical GenAI Integration

Multiple authors discuss GenAI’s dual impact on student engagement and critical thinking [4,16,19]. Future research should investigate the development of pedagogical frameworks that integrate GenAI in a way that supports rather than detracts from these skills. This includes creating educational programs that instruct students about the ethical use of GenAI and its limitations.
Additionally, studies should explore how regulatory frameworks can be effectively implemented to ensure GenAI is used responsibly in academic settings [29]. There is also a need to examine the development of GenAI-driven tools that support teachers in monitoring and promoting ethical GenAI use among students. Curriculum creation and enhancement would be one such example [20]. Rudolph et al. [25] highlight the importance of comprehensive academic honesty policies, training, and tools to detect GenAI use in academic submissions. Future research should focus on developing these policies and ensuring they are well-communicated and understood by all stakeholders in the educational process.

4.3. Redesigning Assessments for a GenAI-Enhanced Learning Environment

Richards et al. [39] and Tu [42] highlight the need to reassess assessment methods to ensure they accurately measure student learning in a GenAI-enhanced environment. Future research should focus on designing assessments that require higher-order thinking skills, which GenAI currently struggles to replicate. This includes exploring alternative assessment formats that reduce reliance on GenAI-generated content and promote genuine student engagement.
Furthermore, research should examine the differential impacts of GenAI on students with varying mindsets and learning styles to develop inclusive educational strategies. Sullivan et al. [41] discuss the potential for GenAI to enhance student learning through innovative assessment designs. For example, studies could investigate how these designs can be implemented effectively while maintaining academic integrity. Wang et al. [43] emphasize the necessity of proper training in the use of large language models to maximize their benefits while managing risks effectively. Research should explore best practices for this training and how it can be integrated into educational curricula. Put simply, there is a pressing need for clear academic integrity policies and faculty training on GenAI adoption [33,37].

4.4. Balancing GenAI Benefits and Academic Integrity

The most pertinent issue throughout these themes is finding the balance between leveraging GenAI to enhance educational experiences and maintaining academic integrity. Future research must focus on creating a comprehensive framework that addresses this balance, including:
  • Developing and validating new detection technologies and methodologies.
  • Designing ethical guidelines and regulatory frameworks for GenAI use in education.
  • Reassessing and redesigning assessment methods to promote higher-order thinking.
  • Investigating the impacts of GenAI on different student demographics to ensure inclusive education.
Addressing these gaps will help educators and institutions harness the benefits of Generative AI while safeguarding the foundational principles of academic integrity. Below, we delve deeper into these points to outline the future research directions needed to achieve this balance.

4.5. The Path Forward

The overarching issue that permeates all themes is the need for a balanced approach to integrating GenAI into education. The rapid advancement of Generative AI technologies such as ChatGPT presents promising opportunities and significant challenges for academic integrity. On the one hand, GenAI tools have the potential to transform learning by providing personalized, accessible, and engaging educational content. On the other hand, they pose substantial risks to the core values of honesty and fairness that underpin academic integrity. This duality necessitates a nuanced understanding and careful management of GenAI in educational contexts.
The problem at the heart of this issue is the potential for misusing GenAI, leading to academic dishonesty and a decline in genuine student learning. Likewise, the sophisticated capabilities of AI to generate human-like text can easily be exploited to produce ghostwritten assignments that current detection tools may fail to recognize [5,10]. This scenario raises critical questions about the effectiveness of traditional plagiarism detection methods and the need for new, more advanced systems that can identify AI-generated content.
Moreover, the dependency on GenAI to complete academic tasks could reduce students’ critical thinking and independent problem-solving skills [16,36]. This reliance on GenAI tools risks academic integrity and undermines the educational process by fostering academic laziness and impeding the development of essential cognitive skills.
Ethical considerations further complicate the integration of GenAI in education, exacerbated by an urgent need for robust regulatory frameworks and moral guidelines to ensure that GenAI is used responsibly and does not facilitate dishonest behaviors such as cheating and plagiarism [29,31]. These guidelines should be informed by empirical research and should focus on promoting transparency, accountability, and fairness in GenAI applications.
Going forward, research must take a multifaceted approach to address these challenges. First, there is a need to develop advanced detection tools capable of identifying AI-generated content. As highlighted by Khalil and Er [10], current plagiarism detection tools are insufficient for this purpose. Future research should focus on developing and validating new detection technologies and methodologies that leverage the latest advances in AI and machine learning to counter potential misuse. This involves extensive testing and validation across diverse educational contexts to ensure reliability and accuracy.
Second, educational institutions should invest in professional development programs for educators to enhance their understanding of GenAI technologies and their implications for teaching and learning. Lan & Chen [16] and Kasneci et al. [19] emphasize the need for educators to fully understand the capabilities and limitations of large language models to prevent their misuse. Enhancing educator competencies will help them design and implement GenAI-integrated pedagogies that support critical thinking and genuine learning.
Third, ethical guidelines and best practices for GenAI use in education must be developed and rigorously evaluated in real-world settings. Șerban et al. [29] and Tlili et al. [31] stress the importance of creating clear policies that promote transparency, accountability, and fairness. Research should focus on how these policies can be effectively communicated and enforced. This involves engaging multiple stakeholders, including educators, students, policymakers, and GenAI developers, to ensure that guidelines are practical and comprehensive.
Fourth, reassessing and redesigning assessment methods to promote higher-order thinking is crucial. As Richards et al. [39] and Tu [42] noted, traditional assessment methods may not be effective in a GenAI-enhanced learning environment. Future research should explore innovative assessment designs that challenge students to demonstrate higher-order thinking skills, creativity, and problem-solving abilities. This might include project-based assessments, oral exams, and other formats less susceptible to GenAI assistance.
Finally, ongoing research should continuously evaluate the long-term impacts of GenAI integration on educational practices and student learning. This includes conducting longitudinal studies to monitor changes in academic integrity, learning outcomes, and student attitudes toward GenAI over time. Understanding the differential impacts of GenAI on various student demographics is critical for developing inclusive educational strategies [42,43].
In summary, balancing the benefits and risks of GenAI in education requires a coordinated and sustained research effort. By developing advanced detection tools, enhancing educator competencies, establishing ethical guidelines, and continuously evaluating the impacts of GenAI, researchers can contribute to a more balanced and responsible integration of GenAI in higher education. This multi-pronged approach will help to ensure that GenAI technologies enhance learning without compromising academic integrity.

4.6. Forward-Thinking Research Questions

To effectively advance the research agenda, it is essential to formulate forward-thinking research questions that explore advancements in AI detection tools, address issues of academic integrity as influenced by AI, enhance pedagogical frameworks, ensure the ethical use of Generative AI, and assess impacts on student learning, as detailed in Table 3. These questions span high-level and specific issues: from developing GenAI detection methods and verifying the authorship of AI-generated work to crafting pedagogical frameworks that integrate GenAI to bolster critical thinking. They also explore the creation of ethical guidelines for GenAI in education and the assessment of GenAI’s long-term effects on educational practices and outcomes. This holistic approach will help ensure the responsible and effective use of GenAI technologies in educational settings.
The following list is not meant to be exhaustive. Rather, it is intended to provide a series of logical future research starting points predominantly guided by the themes qualitatively identified and discussed in the present study. Questions are intentionally posited from a variety of methodological perspectives to increase applicability.

5. Limitations and Future Work

This review faces several limitations that may impact the comprehensiveness and depth of the findings yet provide avenues for future work. The results were sourced exclusively from four primary databases selected for their strong reputation in hosting scholarly articles on Information Systems, particularly concerning Generative AI. However, the inclusion of additional databases or alternative sources (white papers, pre-prints, etc.) could potentially yield more diverse and insightful results. Future studies should consider expanding their database sources to enhance the breadth of the literature reviewed. Furthermore, the review process was carried out solely by the two authors. The involvement of additional researchers could enrich the analysis and synthesis of the data, particularly in developing the discussion and shaping a more robust research agenda. In addition, themes such as “Pedagogical Frameworks and Ethical Use of AI”, as well as several related research questions, e.g., “How can assessments be redesigned to better measure creativity and higher-order thinking?”, emerged from the review. Future research may explore these questions in relation to AI ethics, educational psychology, Bloom’s Taxonomy, and the impact on learning. Finally, the review focused on English-language papers only. Future studies could extend their search to papers written in other languages and from non-Western literature sources.

6. Conclusions

Integrating Generative AI (GenAI) in higher education offers transformative possibilities but also introduces significant challenges, particularly in maintaining academic integrity. While GenAI tools like ChatGPT can enrich personalized learning experiences and make education more accessible, they pose risks, such as enabling academic dishonesty and diminishing critical thinking skills. Future research must develop sophisticated detection technologies that distinguish between human-written and AI-generated content.
Educators must be equipped with the skills to integrate GenAI into their teaching practices responsibly, ensuring these tools enhance rather than undermine genuine learning. Establishing comprehensive ethical guidelines and regulatory frameworks will be vital, promoting transparency, accountability, and fairness in the use of GenAI within educational settings. Additionally, engaging various stakeholders—including educators, students, policymakers, and GenAI developers—is essential in crafting practical and inclusive guidelines. Future research should also explore innovative assessment methods that foster higher-order thinking and creativity, addressing the shortcomings of traditional assessments and tailoring GenAI tools to support diverse learning needs and styles. This balanced approach is necessary to harness the benefits of GenAI while safeguarding foundational educational principles.

Author Contributions

Conceptualization, K.B. and O.E.-G.; methodology, K.B. and O.E.-G.; validation, K.B. and O.E.-G.; formal analysis, K.B. and O.E.-G.; investigation, K.B. and O.E.-G.; writing—original draft preparation, K.B.; writing—review and editing, K.B. and O.E.-G.; supervision, O.E.-G. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author. A protocol was not registered for this review.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
GenAI: Generative Artificial Intelligence
LLM: Large Language Model

References

  1. Liberati, A.; Altman, D.G.; Tetzlaff, J.; Mulrow, C.; Gøtzsche, P.C.; Ioannidis, J.P.A.; Clarke, M.; Devereaux, P.J.; Kleijnen, J.; Moher, D. The PRISMA Statement for Reporting Systematic Reviews and Meta-Analyses of Studies That Evaluate Health Care Interventions: Explanation and Elaboration. BMJ 2009, 339, b2700. [Google Scholar] [CrossRef]
  2. Haddaway, N.R.; Page, M.J.; Pritchard, C.C.; McGuinness, L.A. PRISMA2020: An R package and Shiny app for producing PRISMA 2020-compliant flow diagrams, with interactivity for optimised digital transparency and Open Synthesis. Campbell Syst. Rev. 2022, 18, e1230. [Google Scholar] [CrossRef] [PubMed]
  3. Braun, V.; Clarke, V. Using thematic analysis in psychology. Qual. Res. Psychol. 2006, 3, 77–101. [Google Scholar] [CrossRef]
  4. Denny, P.; Prather, J.; Becker, B.A.; Finnie-Ansley, J.; Hellas, A.; Leinonen, J.; Luxton-Reilly, A.; Reeves, B.N.; Santos, E.A.; Sarsa, S. Computing Education in the Era of Generative AI. Commun. ACM 2024, 67, 3624720. [Google Scholar] [CrossRef]
  5. Eke, D.O. ChatGPT and the rise of generative AI: Threat to academic integrity? J. Responsible Technol. 2023, 13, 100060. [Google Scholar] [CrossRef]
  6. Güner, H.; Er, E.; Akçapınar, G.; Khalil, M. From chalkboards to AI-powered learning. Educ. Technol. Soc. 2024, 27, 386–404. [Google Scholar]
  7. Hannan, E.; Liu, S. AI: New source of competitiveness in higher education. Compet. Rev. Int. Bus. J. 2023, 33, 265–279. [Google Scholar] [CrossRef]
  8. Lau, S.; Guo, P. From “Ban It Till We Understand It” to “Resistance is Futile”: How University Programming Instructors Plan to Adapt as More Students Use AI Code Generation and Explanation Tools such as ChatGPT and GitHub Copilot. In Proceedings of the 2023 ACM Conference on International Computing Education Research V.1, Chicago, IL, USA, 7–11 August 2023; pp. 106–121. [Google Scholar] [CrossRef]
  9. Park, H.; Ahn, D. The Promise and Peril of ChatGPT in Higher Education: Opportunities, Challenges, and Design Implications. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–21. [Google Scholar] [CrossRef]
  10. Khalil, M.; Er, E. Will ChatGPT get you caught? Rethinking of Plagiarism Detection. In Learning and Collaboration Technologies; Springer: Cham, Switzerland, 2023. [Google Scholar]
  11. Sheard, J.; Denny, P.; Hellas, A.; Leinonen, J.; Malmi, L.; Simon. Instructor Perceptions of AI Code Generation Tools—A Multi-Institutional Interview Study. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V.1, New York, NY, USA, 7 March 2024; pp. 1223–1229. [Google Scholar] [CrossRef]
  12. Slomp, E.M.; Ropelato, D.; Bonatti, C.; Da Silva, M.D. Adaptive Learning in Engineering Courses: How Artificial Intelligence (AI) Can Improve Academic Outcomes. In Proceedings of the 2024 IEEE World Engineering Education Conference (EDUNINE), Guatemala City, Guatemala, 10–13 March 2024; pp. 1–6. [Google Scholar] [CrossRef]
  13. Susnjak, T. ChatGPT: The End of Online Exam Integrity? arXiv 2022, arXiv:2212.09292. [Google Scholar] [CrossRef]
  14. Zastudil, C.; Rogalska, M.; Kapp, C.; Vaughn, J.; MacNeil, S. Generative AI in Computing Education: Perspectives of Students and Instructors. arXiv 2023, arXiv:2308.04309. [Google Scholar]
  15. Isaac, M.; Ateeq, M.; Hafizh, H.; Hu, B.; Shodipo, D. Leveraging Artificial Intelligence with Zone of Proximal Development: An ARCS Motivational E-Learning Model. In Proceedings of the 2023 IEEE International Conference on Teaching, Assessment and Learning for Engineering (TALE), Auckland, New Zealand, 28 November–1 December 2023; pp. 1–8. [Google Scholar] [CrossRef]
  16. Lan, Y.-J.; Chen, N.-S. Teachers’ agency in the era of LLM and generative AI. Educ. Technol. Soc. 2024, 27, I–XVIII. [Google Scholar]
  17. Li, Z.; Dhruv, A.; Jain, V. Ethical Considerations in the Use of AI for Higher Education: A Comprehensive Guide. In Proceedings of the 2024 IEEE 18th International Conference on Semantic Computing (ICSC), Laguna Hills, CA, USA, 5–7 February 2024; pp. 218–223. [Google Scholar] [CrossRef]
  18. Liu, R.; Zenke, C.; Liu, C.; Holmes, A.; Thornton, P.; Malan, D.J. Teaching CS50 with AI: Leveraging Generative Artificial Intelligence in Computer Science Education. In Proceedings of the 55th ACM Technical Symposium on Computer Science Education V. 1, Portland, OR, USA, 20–23 March 2024; pp. 750–756. [Google Scholar] [CrossRef]
  19. Kasneci, E.; Sessler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for good? On opportunities and challenges of large language models for education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  20. Kendon, T.; Wu, L.; Aycock, J. AI-Generated Code Not Considered Harmful. In Proceedings of the 25th Western Canadian Conference on Computing Education, Vancouver, BC, Canada, 4–5 May 2023; pp. 1–7. [Google Scholar] [CrossRef]
  21. Malinka, K.; Peresíni, M.; Firc, A.; Hujnák, O.; Janus, F. On the Educational Impact of ChatGPT: Is Artificial Intelligence Ready to Obtain a University Degree? In Proceedings of the 2023 Conference on Innovation and Technology in Computer Science Education V. 1, Turku, Finland, 7–12 July 2023; pp. 47–53. [Google Scholar] [CrossRef]
  22. Petrovska, O.; Clift, L.; Moller, F.; Pearsall, R. Incorporating Generative AI into Software Development Education. In Proceedings of the 8th Conference on Computing Education Practice, Durham, UK, 5 January 2024; pp. 37–40. [Google Scholar] [CrossRef]
  23. Prather, J.; Denny, P.; Leinonen, J.; Becker, B.A.; Albluwi, I.; Craig, M.; Keuning, H.; Kiesler, N.; Kohn, T.; Luxton-Reilly, A.; et al. The Robots Are Here: Navigating the Generative AI Revolution in Computing Education. In Proceedings of the 2023 Working Group Reports on Innovation and Technology in Computer Science Education, Turku, Finland, 7–12 July 2023; pp. 108–159. [Google Scholar] [CrossRef]
  24. Rajabi, P.; Taghipour, P.; Cukierman, D.; Doleck, T. Exploring ChatGPT’s impact on post-secondary education: A qualitative study. In Proceedings of the 25th Western Canadian Conference on Computing Education, Vancouver, BC, Canada, 4–5 May 2023; pp. 1–6. [Google Scholar] [CrossRef]
  25. Rudolph, J.; Tan, S.; Tan, S. ChatGPT: Bullshit spewer or the end of traditional assessments in higher education? J. Appl. Learn. Teach. 2023, 6, 342–363. [Google Scholar] [CrossRef]
  26. Saxena, N.; Kumar, A.; Makwana, P.; Band, G.; Teltumbade, G.R.; Gomathinayagam, I. Artificial Intelligence’s (AI) Role in Higher Education—Challenges and Applications. Acad. Mark. Stud. J. 2024, 28, 1–9. [Google Scholar]
  27. Shoufan, A. Can Students without Prior Knowledge Use ChatGPT to Answer Test Questions? An Empirical Study. ACM Trans. Comput. Educ. 2023, 23, 1–29. [Google Scholar] [CrossRef]
  28. Shoufan, A. Exploring Students’ Perceptions of ChatGPT: Thematic Analysis and Follow-Up Survey. IEEE Access 2023, 11, 38805–38818. [Google Scholar] [CrossRef]
  29. Serban, D.; Cristache, S.E.; Ciobotar, N.G.; Francu, L.G.; Mansou, J. Quantitative Evaluation of Willingness to Use Artificial Intelligence within Business and Economic Academic Environment. Amfiteatru Econ. 2024, 26, 259–274. [Google Scholar] [CrossRef]
  30. Tala, M.L.; Muller, C.N.; Nastase, I.A.; State, O.; Gheorghe, G. Exploring University Students Perceptions of Generative Artificial Intelligence in Education. Amfiteatru Econ. 2024, 26, 71–88. [Google Scholar] [CrossRef]
  31. Tlili, A.; Shehata, B.; Adarkwah, M.A.; Bozkurt, A.; Hickey, D.T.; Huang, R.; Agyemang, B. What if the devil is my guardian angel: ChatGPT as a case study of using chatbots in education. Smart Learn. Environ. 2023, 10, 15. [Google Scholar] [CrossRef]
  32. Wang, J.; Cornely, P.-R. Addressing Academic Misconduct in the Age of ChatGPT: Strategies and Solutions. In Proceedings of the 2023 7th International Conference on Education and E-Learning, Tokyo, Japan, 25–27 November 2023. [Google Scholar]
  33. Ali, W.; Alami, R.; Alsmairat MA, K.; AlMasaeid, T. Consensus or Controversy: Examining AI’s Impact on Academic Integrity, Student Learning, and Inclusivity Within Higher Education Environments. In Proceedings of the 2024 2nd International Conference on Cyber Resilience (ICCR), Dubai, United Arab Emirates, 26–28 February 2024; pp. 1–5. [Google Scholar] [CrossRef]
  34. Ilic, P.; Carr, N. Work in Progress: Safeguarding Authenticity: Strategies for Combating AI-Generated Plagiarism in Academia. In Proceedings of the 2023 IEEE Frontiers in Education Conference (FIE), College Station, TX, USA, 18–21 October 2023; pp. 1–5. [Google Scholar] [CrossRef]
  35. Liu, Y. Leveraging the Power of AI in Undergraduate Computer Science Education: Opportunities and Challenges. In Proceedings of the 2023 IEEE Frontiers in Education Conference (FIE), College Station, TX, USA, 18–21 October 2023; pp. 1–5. [Google Scholar] [CrossRef]
  36. Prather, J.; Reeves, B.N.; Denny, P.; Becker, B.A.; Leinonen, J.; Luxton-Reilly, A.; Powell, G.; Finnie-Ansley, J.; Santos, E.A. “It’s Weird That it Knows What I Want”: Usability and Interactions with Copilot for Novice Programmers. ACM Trans. Comput.-Hum. Interact. 2024, 31, 1–31. [Google Scholar] [CrossRef]
  37. Qureshi, B. ChatGPT in Computer Science Curriculum Assessment: An analysis of Its Successes and Shortcomings. In Proceedings of the 2023 9th International Conference on E-Society, e-Learning and e-Technologies, Portsmouth, UK, 9–11 June 2023; pp. 7–13. [Google Scholar] [CrossRef]
  38. Raza, M.R.; Hussain, W. Preserving Academic Integrity in Teaching with ChatGPT: Practical Strategies. In Proceedings of the 2023 IEEE International Conference on Web Intelligence and Intelligent Agent Technology (WI-IAT), Venice, Italy, 26–29 October 2023; pp. 158–162. [Google Scholar] [CrossRef]
  39. Richards, M.; Waugh, K.; Slaymaker, M.; Petre, M.; Woodthorpe, J.; Gooch, D. Bob or Bot: Exploring ChatGPT’s Answers to University Computer Science Assessment. ACM Trans. Comput. Educ. 2024, 24, 1–32. [Google Scholar] [CrossRef]
  40. Smolansky, A.; Cram, A.; Raduescu, C.; Zeivots, S.; Huber, E.; Kizilcec, R.F. Educator and Student Perspectives on the Impact of Generative AI on Assessments in Higher Education. In Proceedings of the Tenth ACM Conference on Learning @ Scale, Copenhagen, Denmark, 20–22 July 2023; pp. 378–382. [Google Scholar] [CrossRef]
  41. Sullivan, M.; Andrew, K.; McLaughlan, P. ChatGPT in higher education: Considerations for academic integrity and student learning. J. Appl. Learn. Teach. 2023, 6, 1–10. [Google Scholar] [CrossRef]
  42. Tu, Y.-F. Roles and functionalities of ChatGPT for students with different growth mindsets. Educ. Technol. Soc. 2024, 27, 198–214. [Google Scholar]
  43. Wang, J.; Hu, H.; Wang, Z.; Yan, S.; Sheng, Y.; He, D. Evaluating Large Language Models on Academic Literature Understanding and Review: An Empirical Study among Early-stage Scholars. In Proceedings of the CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 11–16 May 2024; pp. 1–18. [Google Scholar] [CrossRef]
  44. Xie, X.; Ding, S. Opportunities, Challenges, Strategies, and Reforms for ChatGPT in Higher Education. In Proceedings of the 2023 International Conference on Educational Knowledge and Informatization (EKI), Guangzhou, China, 22–24 September 2023; pp. 14–18. [Google Scholar] [CrossRef]
Figure 1. PRISMA Flow Diagram.
Table 1. Inclusion/Exclusion Criteria.
Criteria | Included | Excluded
AI, Generative AI, or LLMs | The article pertains to AI, generative AI, or LLMs | The article does not pertain to AI, generative AI, or LLMs
Higher Education | The article took place within a higher education setting | The article did not take place within a higher education setting
Academic Integrity | The article addressed academic integrity as an area of concern | The article did not address academic integrity as an area of concern
Impact/Influence | The study mentioned AI’s effects on academic integrity | The study did not mention AI’s effects on academic integrity
Table 2. Categorical Themes from the Literature.
Category | Authors and Publication Year (Alphabetically by Category)
Risks of Academic Dishonesty and Cheating | Denny et al. [4], Eke [5], Güner et al. [6], Hannan and Liu [7], Lau and Guo [8], Park and Ahn [9], Khalil and Er [10], Sheard et al. [11], Slomp et al. [12], Susnjak [13], Zastudil et al. [14]
Pedagogical Implications and Ethical Use of AI | Denny et al. [4], Isaac et al. [15], Lan & Chen [16], Li et al. [17], Liu et al. [18], Kasneci et al. [19], Kendon et al. [20], Malinka et al. [21], Petrovska et al. [22], Prather et al. [23], Rajabi et al. [24], Rudolph et al. [25], Saxena et al. [26], Shoufan [27,28], Șerban et al. [29], Țală et al. [30], Tlili et al. [31], Wang and Cornely [32]
Impacts on Student Learning and Educational Practices | Ali et al. [33], Ilic and Carr [34], Liu [35], Prather et al. [36], Qureshi [37], Raza and Hussein [38], Richards et al. [39], Smolansky et al. [40], Sullivan et al. [41], Tu [42], Wang et al. [43], Xie and Ding [44]
Table 3. Research Agenda.
Theme: AI-Induced Academic Dishonesty
  • How can new AI detection tools effectively identify AI-generated academic content?
  • What emerging patterns of academic dishonesty are associated with AI tools?
  • How can behavioral modeling improve the effectiveness of AI detection systems in academic settings?
  • What impact does transparency in AI detection methods have on academic honesty?
  • What is the statistical effectiveness of AI detection tools in identifying ghostwritten assignments compared to traditional methods?
  • How does the frequency of academic dishonesty incidents change with the implementation of AI surveillance technologies?
Theme: Pedagogical Frameworks and Ethical Use of AI
  • How can pedagogical frameworks be designed to integrate AI ethically?
  • What are the best practices for training educators to incorporate AI technologies ethically?
  • What strategies can maintain student engagement with extensive GenAI integration?
  • How can interdisciplinary collaborations enhance the ethical use of AI in educational settings?
  • What is the correlation between the use of AI in teaching and the ethical understanding of AI among educators?
  • How does student performance differ in courses that integrate AI tools versus those that do not, as measured by standardized assessments?
Theme: Assessment Methods in an AI-Enhanced Environment
  • How can assessments be redesigned to better measure creativity and higher-order thinking?
  • How can assessment strategies be tailored to accommodate diverse student learning styles in an AI-enhanced environment?
  • What are the implications of using AI for creating adaptive assessment tasks?
  • How can AI-integrated curricula measure deep learning and critical thinking effectively?
  • What percentage of assessments can be effectively automated with AI without loss in assessment quality?
  • How does the integration of AI in assessments affect the distribution of student grades across various cognitive levels?
Theme: Balancing AI Benefits and Academic Integrity
  • What frameworks can balance the educational benefits of AI with the need for academic integrity?
  • What are the long-term effects of AI integration on educational practices and academic integrity?
  • How can the effectiveness of AI in maintaining academic integrity be monitored across various disciplines?
  • How can institutions promote a culture of academic integrity that effectively incorporates AI?
  • What is the impact of AI tools on academic integrity violations year-over-year in institutions that have adopted AI?
  • How do quantitative measures of student satisfaction and learning outcomes vary before and after the implementation of AI-driven educational tools?
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Bittle, K.; El-Gayar, O. Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda. Information 2025, 16, 296. https://doi.org/10.3390/info16040296

