Entry

Artificial Intelligence in Higher Education: A State-of-the-Art Overview of Pedagogical Integrity, Artificial Intelligence Literacy, and Policy Integration

by
Manolis Adamakis
* and
Theodoros Rachiotis
School of Physical Education and Sport Science, National and Kapodistrian University of Athens, 17237 Dafne, Greece
*
Author to whom correspondence should be addressed.
Encyclopedia 2025, 5(4), 180; https://doi.org/10.3390/encyclopedia5040180
Submission received: 3 August 2025 / Revised: 1 October 2025 / Accepted: 20 October 2025 / Published: 28 October 2025
(This article belongs to the Collection Encyclopedia of Social Sciences)

Definition

Artificial Intelligence (AI), particularly Generative AI (GenAI) and Large Language Models (LLMs), is rapidly reshaping higher education by transforming teaching, learning, assessment, research, and institutional management. This entry provides a state-of-the-art, comprehensive, evidence-based synthesis of established AI applications and their implications within the higher education landscape, emphasizing mature knowledge aimed at educators, researchers, and policymakers. AI technologies now support personalized learning pathways, enhance instructional efficiency, and improve academic productivity by facilitating tasks such as automated grading, adaptive feedback, and academic writing assistance. The widespread adoption of AI tools among students and faculty members has created a critical need for AI literacy—encompassing not only technical proficiency but also critical evaluation, ethical awareness, and metacognitive engagement with AI-generated content. Key opportunities include the deployment of adaptive tutoring and real-time feedback mechanisms that tailor instruction to individual learning trajectories; automated content generation, grading assistance, and administrative workflow optimization that reduce faculty workload; and AI-driven analytics that inform curriculum design and early intervention to improve student outcomes. At the same time, AI poses challenges related to academic integrity (e.g., plagiarism and misuse of generative content), algorithmic bias and data privacy, digital divides that exacerbate inequities, and risks of “cognitive debt” whereby over-reliance on AI tools may degrade working memory, creativity, and executive function. The lack of standardized AI policies and fragmented institutional governance highlight the urgent necessity for transparent frameworks that balance technological adoption with academic values. 
Anchored in several foundational pillars (such as a brief description of AI higher education, AI literacy, AI tools for educators and teaching staff, ethical use of AI, and institutional integration of AI in higher education), this entry emphasizes that AI is neither a panacea nor an intrinsic threat but a “technology of selection” whose impact depends on the deliberate choices of educators, institutions, and learners. When embraced with ethical discernment and educational accountability, AI holds the potential to foster a more inclusive, efficient, and democratic future for higher education; however, its success depends on purposeful integration, balancing innovation with academic values such as integrity, creativity, and inclusivity.

1. Introduction

Artificial Intelligence (AI), especially Generative AI (GenAI) systems (i.e., systems capable of generating novel content in response to prompts) like Large Language Models (LLMs), is increasingly transforming higher education worldwide by reshaping teaching, learning, assessment, research, and institutional management. These applications introduce both new opportunities and critical challenges. Universities are exploring ways to integrate these technologies to enhance learning processes while simultaneously addressing concerns about academic integrity, algorithmic bias, tool transparency, and data protection [1,2,3]. While a growing body of research examines individual applications of AI in universities, the literature remains fragmented across diverse disciplinary and policy perspectives. Few works provide an integrated, state-of-the-art synthesis that consolidates established knowledge about pedagogical integrity, AI literacy, and policy integration. This entry seeks to address this gap by offering a comprehensive, evidence-based overview of mature knowledge that can serve as a reference point for educators, researchers, and policymakers engaged in shaping the future of AI-enhanced higher education.
The adoption of AI in higher education has accelerated significantly, with a wide range of applications now actively reshaping the educational landscape [4,5]. Previously published work, such as Crompton and Burke’s [6] review of AI’s potential in universities, highlights benefits such as personalized learning and enhanced instruction, while also addressing ethical concerns related to academic integrity and plagiarism detection. Although several studies, e.g., [7,8], report that AI improves learning performance and critical thinking and acknowledge the risks of over-reliance on such systems, these often rely on short-term or cross-sectional designs, limiting the ability to draw causal conclusions. Meta-analyses, e.g., [9], further suggest positive impacts of ChatGPT on academic performance, but the presence of novelty effects and lack of long-term follow-up reduce the strength of the evidence.
In addition, Ganjavi et al. [10] emphasize the urgent need for academic journals to develop explicit policies on the ethical use of GenAI in scholarly writing, advocating for greater transparency in AI-assisted authorship. Similarly, Cheng et al. [3] propose concrete recommendations for responsible AI usage in academic composition, underscoring the importance of human oversight and critical evaluation of AI-generated content.
A primary dimension of AI integration in higher education involves the practical adoption of AI technologies to enhance personalized learning, instructional efficiency, and administrative processes. An additional essential dimension of GenAI integration is student education on its ethical and effective use. Hazari [11] and Vashishth et al. [12] highlight the need for dedicated curricula focused on AI literacy, aiming to equip students with a robust understanding of both the capabilities and limitations of such technologies. Ajani et al. [13] further call on educational institutions to reform their academic programs by incorporating both theoretical and practical training in AI, ensuring institutional readiness.
Furthermore, safeguarding academic integrity remains a central concern. Jarrah et al. [14] examine the complex relationship between ChatGPT and plagiarism, urging the implementation of strict, transparent policies on the use of generative AI in academic work. These challenges, as Farahani and Ghasmi [15] also argue, necessitate well-structured governance strategies that account for the pedagogical, ethical, and social consequences of AI deployment in higher education. Additionally, AI’s impact varies markedly across disciplines, with STEM fields benefiting differently compared to humanities and social sciences. These findings underscore the necessity of careful pedagogical design, continuous human oversight, and comprehensive institutional policies to harness AI’s benefits without compromising educational values. In sum, GenAI presents a promising yet demanding technological advancement—one that calls for balanced, carefully managed integration into university environments.
Based on the above, this entry constitutes a state-of-the-art, evidence-based synthesis of current uses of AI in higher education. A state-of-the-art review addresses primarily current matters, in contrast to the combined retrospective and contemporary approach of a traditional literature review; it intends to offer new perspectives on an issue and to highlight potential areas in need of further research [16]. By avoiding speculation or futuristic projections, this entry outlines established practices, institutional policies, educational applications, and pedagogical implications, serving as a reference point for educators, researchers, and policymakers alike. However, as an Encyclopedia entry, the purpose of this manuscript is not to provide exhaustive analysis of any single domain, but rather to integrate mature knowledge across pedagogy, ethics, and policy.
This entry is based on a targeted review of the recent scholarly literature, institutional reports, and policy documents published primarily between 2023 and 2025, with selective inclusion of earlier seminal works (e.g., on Intelligent Tutoring Systems) to provide historical context. Sources were identified through academic databases such as Scopus, Web of Science, and PubMed, as well as policy documents from international organizations (e.g., UNESCO and OECD). Sources were selected to provide a comprehensive overview of mature, evidence-based findings that illustrate established applications of AI in higher education. Emphasis was placed on well-established findings and expert consensus to inform educators, researchers, and policymakers. This approach ensures that the entry synthesizes robust and representative knowledge for reference purposes.

2. A Brief AI History in (Higher) Education

Over the decades, the history of AI in education illustrates a gradual expansion in functionality and capacity (Figure 1). Early systems such as Intelligent Tutoring Systems (ITS) primarily offered rule-based, adaptive learning focused on specific knowledge domains. With advances in machine learning, data availability, and natural language processing, more sophisticated AI tools emerged, enabling personalized learning paths, automated grading, and plagiarism detection. The recent introduction of generative AI and LLMs represents a pivotal shift, bringing AI capable of generating coherent academic text, providing real-time feedback, and supporting complex research tasks. This evolution reflects a broader trend from narrow, task-specific applications to versatile, user-facing systems that directly shape how students and educators engage with knowledge.
Although current public discourse is heavily focused on the latest developments in GenAI tools, the use of AI in education spans several decades (Figure 1). As early as the 1970s, AI-based systems were first implemented in support of personalized and adaptive instruction, giving rise to the field of ITS [17,18,19]. The first documented ITS was SCHOLAR, developed by Jaime Carbonell in 1970, which paved the way for future innovation in AI-driven pedagogy [20]. In the years that followed, several highly influential systems emerged, including Stanford University’s BIP (1977), MIT’s WUMPUS (1977), and the SOPHIE and DEBUGGY systems [21], as well as the later AutoTutor [22].
Research has shown that the instructional effectiveness of ITS closely approximates that of human instructors [23], while also significantly enhancing the performance of both students and educators [24]. Consequently, ITS are currently regarded as one of the most promising mechanisms for the future of education [19]. While early ITS were limited in scale and subject scope, they laid the groundwork for the idea of adaptive computer-assisted instruction [15,25].
The 21st century, and especially the post-2010 era, marked a turning point in the broader adoption of AI in educational settings. Two key developments facilitated this shift: (1) the explosion of educational data from e-learning platforms, MOOCs, and educational software, which gave rise to learning analytics; and (2) advances in machine learning algorithms. In the context of higher education, applications began to proliferate, including course recommendation systems, automated grading tools for programming and mathematics, and advanced plagiarism detection mechanisms. Platforms such as Turnitin began integrating early forms of AI, such as paraphrase recognition, to support academic integrity. Even prior to 2020, several universities had already experimented with AI-based chatbots for student services (e.g., FAQs about programs and services) and with prototype virtual teaching assistants [1,7].
A notable pre-LLM example of AI use in higher education is the case of Jill Watson, an AI teaching assistant implemented at Georgia Tech in 2016. Developed by Professor Ashok Goel using IBM’s Watson platform, Jill was deployed in an online course forum to automatically answer frequently asked student questions. Remarkably, students did not realize that Jill was not a human assistant until the end of the semester. The perceived effectiveness of this initiative illustrated the potential of AI to scale up educational support in large-enrollment online courses. Similar initiatives soon followed at other institutions, foreshadowing the recent explosion of interest in educational chatbots [15,26,27].
The 2020s represent a clear inflection point. The public release of ChatGPT in late 2022 brought AI to the forefront of university discussions, as it marked the first time that a powerful, generative AI tool became widely accessible and capable of producing academic-level output in response to general prompts. This development ushered in a new phase of AI integration. Whereas AI in higher education had previously operated in the background, often unnoticed, it has now become directly accessible to end-users—students and faculty alike—as an everyday tool. University communities now face a technological leap wherein students can use a chatbot to do everything from clarifying difficult concepts to drafting full assignments. This shift necessitates the re-evaluation of multiple educational parameters—from assignment design and assessment strategies to the graduate skills required in a GenAI-informed academic and professional environment [4,6,13,14].
Understanding the historical development of AI in higher education is crucial because it reveals the evolving ambitions and capacities of these technologies. Early AI applications aimed primarily to mimic and support human tutoring through ITS, achieving measurable improvements in personalized and adaptive learning. Subsequent milestones included expanded machine learning capabilities, large-scale data analytics, and automated assessment tools that enhanced scalability and efficiency. The emergence of generative AI and LLMs marks a transformative turning point, enabling more sophisticated interactions, content creation, and administrative support. This historical perspective underscores that today’s AI tools are the result of incremental innovations and reflective learning, providing essential context for their responsible and effective deployment in universities.

3. AI in Higher Education

3.1. The Need for AI Literacy in Higher Education

The term AI literacy refers to the ability of individuals to understand, use, and critically engage with AI tools and systems, while recognizing both their potential and their limitations [28]. This concept is gaining increasing significance within higher education, as GenAI applications become rapidly embedded in teaching and learning processes [11,28]. As noted by Roschelle et al. [29], AI offers novel opportunities but also introduces profound pedagogical challenges for universities, necessitating the adoption of innovative approaches to instruction and curriculum design (Figure 2).
In higher education, the imperative to cultivate such capacities has intensified, especially as GenAI tools—such as LLMs (e.g., ChatGPT-4o)—become increasingly embedded in most facets of the educational process. The EDUCAUSE [30] report delineates AI literacy through four interconnected dimensions: comprehending the foundational concepts of AI systems, using AI tools actively, critically assessing AI-generated outputs, and participating ethically in public discourse regarding AI applications.
Artificial Intelligence literacy diverges in important ways from other digital literacies that have emerged within digital and information education. Unlike digital literacy, which focuses on the ability to use digital technologies and participate in digital environments with civic and social awareness, or information literacy, which involves locating, evaluating, and applying information critically [31], AI literacy requires a deeper understanding of algorithmic logic, the opacity of black-box systems, and the social and ethical implications of their deployment [32].
Similarly, media literacy pertains to the analysis and interpretation of media messages and representations [33], while data literacy focuses on the interpretation, analysis, and critical use of data—a vital skill in the context of learning analytics and AI-enabled education [30]. Lastly, metaliteracy provides a holistic framework that integrates all these literacies, combining metacognitive awareness, collaborative learning, and socially responsible engagement in the digital ecosystem [34].
Within this multilayered framework, AI literacy draws on all these literacies, yet adds a distinct critical and ethical dimension, emphasizing the ways in which AI technologies shape cognitive processes, creativity, and decision-making [35]. What sets AI literacy apart is its focus not only on how individuals use AI, but also on how AI, in turn, influences their language, memory, and reasoning. For this reason, AI literacy may be considered the most complex—and perhaps one of the most essential—forms of literacy in the current educational landscape. As a pedagogical imperative, AI literacy transcends technical proficiency; it functions as a framework for empowering students and citizens to coexist, collaborate, and reflect within an ecosystem where AI is an increasingly influential force in knowledge production, judgment, and human agency.
To operationalize these competencies in day-to-day study, AI literacy should explicitly embed self-regulated learning, so that, beyond technical, ethical, and critical competencies, students can manage the cognitive, metacognitive, and motivational demands of AI-assisted learning. Recent psychometric work proposes a four-pillar self-regulated learning profile tailored to AI contexts—motivational components (intrinsic/extrinsic motivation, self-efficacy), cognitive/metacognitive strategies, time and task management, and environmental/technological self-regulation—with solid construct validity and criterion links to GPA, technology interest, and digital literacy [36]. These findings imply that goal setting, monitoring/evaluating AI outputs, attention control, time-boxing, resource vetting, and technological adaptation are core competencies for responsible, independent learning with LLMs and learning analytics. Disciplinary cases (e.g., GeoAI, NLP) show that effective tool use requires students to regulate misconceptions, plan workflows, and sustain motivation while interacting with adaptive systems [37]. Embedding self-regulated learning micro-skills within AI-literacy curricula—prompt-planning and verification checklists, think-aloud monitoring of AI feedback, time-management protocols for tool use, and explicit resource-credibility routines—can mitigate over-reliance and support deep learning rather than mere performance boosts [36,37].
The incorporation of AI literacy into university curricula signifies not just an acquaintance with technology tools, but a fundamental reconfiguration of students’ understanding, use, and assessment of artificial intelligence. Farrelly and Baker [2] emphasize that GenAI technologies are altering the functions of students and educators, necessitating the integration of systematic teaching on ethics, transparency, and responsible usage. Ganjavi et al. [10] assert that it is essential for students to be instructed on the ethical frameworks regulating the application of AI in academic writing, especially concerning plagiarism. Cheng et al. [3] further emphasize transparency as a fundamental ethical principle in the implementation of instructional AI.
Alongside technical training, the development of critical thinking is equally essential to evaluate AI-generated content. As Vashishth et al. [12] mention, education should not be limited to tool usage but must enhance students’ capacity to assess the accuracy, reliability, and potential biases of AI outputs. Rodafinos [38] illustrates the creative and productive affordances of tools like ChatGPT, while also warning of their potential to undermine academic integrity when used uncritically.
Nonetheless, heightened reliance on AI tools—particularly without human oversight or metacognitive evaluation—may yield profound cognitive repercussions. The research conducted by Kosmyna et al. [39] indicates that the frequent utilization of tools like ChatGPT for academic writing correlates with substantial decreases in brain activation in areas related to working memory, creativity, and executive function. Participants utilizing the LLM demonstrated significantly reduced brain connectivity across the theta, alpha, and beta frequency bands in contrast to those who operated without technology assistance. Furthermore, when prompted to retrieve passages from their AI-generated texts, 83% were unable to repeat even a single sentence, signifying a significant deterioration in episodic memory. This effect is termed “cognitive debt”: the degradation of inherent cognitive functions resulting from the delegation of intellectual tasks to algorithmic systems [39]. Therefore, although AI can improve performance, its indiscriminate use may undermine cognitive independence and genuine learning.
The recent study by Tripathi [40] reveals that many undergraduate students possess low levels of AI knowledge, highlighting the urgent need to institutionalize AI literacy courses at all levels of higher education. This need is even more pronounced in developing countries, where, as Iskandarova et al. [41] observe, students often lack access to technological resources and trained academic staff. In a more systematic review, Bittle and El Gayar [42] argue that developing AI literacy strategies is essential to safeguarding academic integrity and preventing instances of plagiarism and ethical misconduct. To assess the effectiveness of such programs, Gander and Harris [43] advocate the use of diagnostic instruments for measuring AI-related skills and literacy, thus enabling targeted student support.
In summary, the development of AI literacy in higher education is both urgent and multidimensional. It is not merely about the competent use of digital tools, but rather about strengthening ethical judgment, pedagogical responsibility, and conscious engagement in the digital transformation of education [2,11,29]. This can be achieved through the systematic integration of AI literacy across disciplines, utilizing interdisciplinary and experiential approaches that foster technological awareness, metacognitive skills, and democratic reflection. Most importantly, educational environments must be cultivated in which students are not passive users of AI tools but critical interlocutors engaging with both the capabilities and limitations of algorithmic systems. Adopting frameworks such as the ETHICAL AI Framework promotes responsible use, enhances transparency, and upholds academic integrity within an increasingly dynamic technological ecosystem [44]. At the same time, faculty training is equally critical: educators must be equipped to integrate AI responsibly into their teaching practices, guide student use, and assess its impact on learning and academic culture [45,46]. Therefore, AI literacy is not a technological luxury, but a pedagogical necessity—one that is intrinsically linked to the future of university education and democratic knowledge production.

3.2. General Use of AI in Higher Education

Artificial Intelligence has become a foundational tool in higher education, influencing a wide range of academic functions—from teaching and learning to writing, research, and student assessment. Particularly since 2020, with the global proliferation of generative AI tools (e.g., ChatGPT, Copilot, Claude, Gemini, etc.), the use of these technologies has expanded rapidly, with minimal delay between their public release and their uptake in academic settings [1]. According to Crompton and Song [25], AI holds substantial potential to transform higher education by enhancing personalized learning and improving instructional accessibility (Figure 3).
Recent systematic reviews converge on a balanced picture: AI not only augments teaching methods and individualizes learning (adaptive pathways, timely feedback), but also streamlines administrative work (e.g., scheduling, communications) and cultivates creativity and critical thinking when embedded in thoughtful pedagogy [47]. At the same time, these syntheses flag data-privacy and algorithmic-bias risks, digital inequities/infrastructure gaps, and the possibility of diminished human interaction, underscoring the need for careful design, educator training, and cross-stakeholder collaboration so that implementations remain aligned with educational values [47]. When applied with these safeguards, AI can make higher education more adaptive, inclusive, and effective across disciplines, an effect already visible in domains such as geography, where NLP, learning analytics, and ITS support student-centred learning yet still face ethics and access constraints [37].

3.2.1. Student Uses

The adoption of AI extends across all levels of the academic community—students, teaching staff, researchers, and institutional administrators. A recent survey by Jereb and Urh [48] found that nearly 90% of students had already used some form of AI tool for information retrieval, assignment writing, or studying. Most students employ these tools as “cognitive assistants” for summarizing content, interpreting material, translating text, and reformulating ideas, while a significant portion also uses them to paraphrase academic terminology or even generate entire paragraphs [7]. Crompton and Burke [6] assert that colleges adopting AI technology have attained enhanced learning personalization, more effective control of faculty workload, and superior tracking of academic success. Similarly, Roschelle et al. [29] observe that AI improves accessibility, flexibility, and active student engagement, especially in mixed and remote learning contexts.

3.2.2. Student Evaluation

Artificial Intelligence tools are also utilized in the evaluation of students. Shi and Xuwei [49] describe the use of platforms like Gradescope and EvalAI for automated feedback provision. Jacques et al. [50] warn, however, that in the absence of pedagogical supervision these instruments could undermine deep learning. This raises the essential question of “cognitive responsibility displacement”: who bears ultimate responsibility for meaning-making—the learner or the algorithm? Furthermore, among teaching faculty, AI has become embedded in many instructional workflows. Professors now use AI to generate quizzes, grade written assignments, provide feedback, and tailor content for students with learning differences [51]. A noteworthy application is the use of the Copilot AI companion to generate teaching scripts with customizable tone and content [50].

3.2.3. Academic Writing

Artificial Intelligence has also been adopted for academic writing. Rodafinos [38] and Cheng et al. [3] examine the positive contributions of AI to assignment writing, particularly in structuring, language precision, and statistical support. However, they also note the risk of depersonalized thinking and formulaic academic expression. Hicks et al. [52], in a more critical perspective, argue that unchecked use of tools like ChatGPT results in linguistic homogenization and superficial argumentation.

3.2.4. Ethical and Institutional Dilemmas

On the other hand, extensive utilization is not free from institutional and ethical dilemmas. Concerns about content authenticity, transparency of AI contributions, and evaluative accountability persist as significant issues. Cheng et al. [3] advocate for the incorporation of AI disclosure statements in scholarly work, while entities like UNESCO and the European University Association (EUA) have urged the creation of formal ethical frameworks for AI application in higher education [2]. The international study by Crain et al. [51] reveals that AI use correlates positively with heightened student worry about potential replacement and academic devaluation, especially within the humanities. These findings raise concerns about whether AI bolsters or undermines the fundamental principles of the academic experience. From an administrative perspective, AI is employed for performance forecasting, accessibility improvement (e.g., automated lecture transcription), and enhancing the student experience. According to Jacques et al. [50], insufficient algorithmic transparency has resulted in “black box” systems that undermine institutional accountability.
In conclusion, the general use of AI in higher education offers promising opportunities for innovation, efficiency, and learning personalization. At the same time, it introduces new ethical, pedagogical, and institutional responsibilities that demand a holistic response from universities, instructors, and students alike.

3.3. Teaching, Learning and Assessment Performance and Improvement

3.3.1. Pedagogical Mechanisms, Personalization, and Higher-Order Learning

Artificial Intelligence is fundamentally reshaping teaching, learning, and assessment in higher education. From generative AI tools like ChatGPT to adaptive learning systems and intelligent agents, universities are entering a new era of personalization, automation, and instructional efficiency. Institutions increasingly leverage AI for quiz generation, automated grading, personalized feedback, and the development of interactive educational content. ITS enable real-time monitoring of student progress, identification of cognitive gaps, and the design of tailored learning trajectories. As Roschelle et al. [29] note, such tools support experiential learning and enhance student engagement, particularly in blended or distance learning environments.
Artificial Intelligence also holds strong potential to personalize the learning experience. Algorithms adapt content delivery to student profiles, learning performance, and individual preferences, enabling differentiated difficulty levels and targeted scaffolding [53]. Studies have linked such tools to increased student engagement, self-regulated learning, and academic achievement [8,54]. In particular, Wang and Fan’s [8] meta-analysis found that the use of AI tools improved learning outcomes, enhanced content comprehension, and fostered higher-order cognitive skills such as critical thinking.

3.3.2. The Need for Causal Evidence and Methodological Limits

The use of ChatGPT in higher education has sparked growing academic interest, especially regarding evolving perceptions and attitudes among students and faculty. While existing studies suggest that students generally view ChatGPT positively, instructors tend to express more skepticism or ambivalence. However, subjective attitudes alone cannot establish the true impact of ChatGPT on learning. Current cross-sectional findings remain inconclusive: some studies associate ChatGPT use with improved academic performance, while others indicate potential negative effects. The absence of causal evidence highlights the need for controlled experimental studies.
To address this gap, Deng et al. [9] conducted a systematic review and meta-analysis of the available experimental evidence. Their findings were substantial: interventions using ChatGPT positively influenced students' academic performance, with a markedly large overall effect size. These interventions also improved emotional and motivational states, such as self-confidence, concentration, and intrinsic motivation, while reducing cognitive load. However, methodological limitations, such as the rare use of power analysis and insufficient separation of the tool's influence from the quality of AI-generated output, weaken the reliability of these conclusions. The absence of long-term trials also suggests that the observed benefits may reflect a novelty effect rather than durable learning gains. The potential of ChatGPT to enhance student learning is promising, but it requires further investigation through more rigorous experimental designs and objective measures of higher-order skills. Future research should prioritize distinguishing tool-assisted performance from genuine cognitive development to ensure the pedagogical validity and lasting effectiveness of AI-supported interventions in university settings.

3.3.3. Neurocognitive Considerations and Cognitive Debt

The integration of LLMs into university teaching has introduced new dynamics into learning, but it also poses cognitive and pedagogical dilemmas. The pioneering experimental study by Kosmyna et al. [39], also mentioned previously, explored how the use of ChatGPT affects both the content of written work and students' internalization and recall of knowledge. Their research investigated the neurocognitive and learning effects of LLM-assisted writing. Using a sample of 54 participants, the researchers employed EEG to track brain activity and natural language processing (NLP) tools to analyze written output under three conditions: (1) LLM-assisted writing, (2) search-engine-supported writing, and (3) unaided writing ("brain-only"). The results were striking: the LLM group demonstrated significantly reduced neural connectivity in the theta, alpha, and beta frequency bands, activity associated with working memory, creativity, and executive function. In contrast, participants in the "brain-only" group showed greater neural engagement, broader thematic variety, stronger recall, and a heightened sense of authorship. Moreover, when students in the LLM condition were asked to write without AI support in a follow-up session, they showed impaired recall and remained anchored to linguistic patterns previously suggested by the model. This illustrates the emergence of cognitive debt: a condition in which over-reliance on AI gradually diminishes one's capacity for independent thinking and memory.
Yet, other studies suggest that LLMs can support learning when embedded within structured pedagogical frameworks. Kasneci et al. [55] argue that when AI tools are framed as “assistants” or “curators” rather than content generators, they can promote metacognitive engagement. Likewise, Sajja et al. [56] emphasize that LLMs can enhance personalization in learning, provided they augment—rather than replace—human judgment. Kosmyna et al. [39] thus offer a critical warning: although LLMs may provide short-term ease, their unchecked use in academic writing may ultimately erode core cognitive capacities.

3.3.4. Metacognition, Self-Regulated Learning, and Disciplinary Variation

Artificial intelligence has proven particularly effective in supporting the development of metacognitive skills such as self-regulation, performance monitoring, and strategic study planning. Platforms equipped with learning analytics mechanisms provide students with real-time feedback on their mistakes, the time spent on each task, and patterns of success or failure, thereby enhancing their capacity to reflect on their learning processes [57]. Moreover, AI integration varies significantly across disciplines. In the STEM fields, for example, AI is typically used to solve structured problems and automate assessment, while in the humanities and social sciences, its use is more reflective and creative—serving as a dialogic partner or a tool for enhancing argumentation and critical analysis [29,58].

3.3.5. Motivation, Autonomy, and Design Caveats

Artificial Intelligence tools also contribute to increased intrinsic motivation, student autonomy, and psychological resilience. The 24/7 availability of personalized learning support enhances student confidence while reducing anxiety often associated with traditional instruction [59,60]. However, Huang et al. [61] caution that overdependence on AI may weaken students’ conceptual depth and mastery of foundational learning skills. A systematic review by Sumbal et al. [62] highlights that while ChatGPT performs well on standardized tasks, it often fails in complex cognitive challenges when not paired with sound pedagogical design. Moreover, AI is not a panacea for all educational needs. Prillaman [63] warns that excessive reliance on AI may intensify performance expectations for both students and instructors and may contribute to cognitive fatigue due to continuous assessment pressures.

3.3.6. Institutional Exemplars and Research Workflows

At the University of Edinburgh, AI is used in programming education as an “on-demand tutor,” capable of analyzing syntax errors in real time [58]. The MITx platform at the Massachusetts Institute of Technology incorporates AI to analyze learner behavior, while in humanities departments, ChatGPT is employed to support writing development and argumentation skills [29,57]. AI has also deeply transformed academic writing and research processes. Students utilize tools such as ChatGPT, Jasper, and Scispace to draft outlines, rephrase paragraphs, and generate article summaries, thereby improving writing flow and conceptual understanding [38]. Meanwhile, researchers and faculty use advanced platforms like Research Rabbit and Elicit for automated literature mapping, identifying research gaps, and synthesizing reviews in a matter of minutes [15]. AI also contributes to language editing and document formatting, particularly for non-native English speakers, through tools like Paperpal and Writefull [54].
Overall, AI is redefining how students learn, how instructors teach, and how universities assess knowledge. Its benefits in terms of personalization, autonomy, and performance enhancement are evident. However, its responsible implementation requires clear pedagogical strategies, continuous training of users, and alignment with the principles of academic integrity. Only through this lens can AI function not as a threat, but as a catalyst for a more equitable, flexible, and high-quality higher education landscape.

4. AI Tools for Educators and Teaching Staff in Higher Education (with Examples)

The rapid evolution of AI has led to the development of many tools tailored to the needs of university-level instructors. These tools support teaching, assessment, administrative management, and the enhancement of the learning experience, with a focus on practical and effective use by instructors. A summary table of all presented AI tools is included in Table S1.

4.1. The Importance of Responsible Tool Selection

The integration of AI into university teaching is neither a neutral nor a purely technical process. As Cheng et al. [3] emphasize, selecting and using AI tools demands conscious pedagogical judgment, active supervision, and critical reflection. AI tools are not value-free; they embody design choices, implicit biases, and algorithmic constraints. Therefore, instructors must assess not only a tool’s usability or user-friendliness, but also parameters such as data security, operational transparency, ethical compliance, and alignment with institutional or curricular values.
The notion of responsible AI in higher education encompasses policies that ensure AI technologies do not compromise genuine learning, promote unethical conduct, or supplant pedagogical relationships. When intentionally integrated, AI tools can enhance educational effectiveness by facilitating personalization, feedback, accessibility, and empathetic teaching.
Nonetheless, realizing this promise requires institutional investment. Universities must formulate explicit AI usage rules, offer faculty specialized professional development, and integrate AI literacy as a fundamental pedagogical competency [64,65]. In this context, educators who use digital resources critically, reflectively, and flexibly gain a strategic advantage, both in shaping their instruction and in fostering a more resilient, equitable, and relevant academic environment.

4.2. Literature Review and Scientific Mapping

Conducting literature reviews constitutes a critical phase in course development, curriculum design, and evidence-based instructional planning. AI tools developed to support this process allow university educators to locate, evaluate, and visualize relevant academic sources with greater precision and efficiency.
Elicit employs LLMs to retrieve and summarize academic articles in response to natural language queries. It functions as a conceptual mapping assistant, helping instructors identify theoretical trends, synthesize contrasting perspectives, and generate evidence matrices automatically. This tool is particularly useful for educators who aim to design lectures grounded in current theoretical frameworks [66].
ResearchRabbit enables the visualization of literature via interactive network graphs, illustrating the connections across articles, authors, and topic clusters. It assists educators in tracking the development of a research domain or proposing structured inquiry routes for student-directed investigation [66,67].
Connected Papers, in contrast, creates "families of papers" by grouping a chosen publication with related, preceding, or derivative works according to co-citation patterns. This tool is well suited to faculty designing theoretical or methodological courses who wish to demonstrate the lineage of scientific knowledge.
Collectively, these tools augment instructors' metacognitive abilities, promote the clarity and justification of pedagogical decisions, and cultivate academic information literacy in higher education. Although they do not replace critical assessment, they enhance it through sophisticated visualization and automated feedback, thereby reinforcing the educator's role as both curator and facilitator of knowledge.

4.3. Personalized Teaching and Instructional Design

Teaching in higher education requires continuous content development, adaptation to diverse student cohorts, and the creation of multimodal instructional materials. Modern AI tools offer university instructors the ability to generate structured, differentiated, and interactive teaching content, significantly reducing preparation time.
EduAide.AI functions as a comprehensive course design assistant. It enables the creation of learning objectives, worksheets, comprehension questions, and both formative and summative assessment scenarios. Particularly notable is its automated adjustment feature, which aligns instructional content to various levels of difficulty—thus facilitating differentiated teaching strategies [68].
MagicSchool AI is designed for educators seeking to produce quizzes, case studies, essay prompts, or even classroom dialogue scripts. The platform allows customization of tone, knowledge depth, and pedagogical alignment, making it a useful resource for integrating AI into curriculum frameworks [55].
Curipod, by contrast, specializes in the development of interactive presentations that incorporate real-time polling, open-ended questions, and collaborative student input. Its features promote engagement and conceptual understanding through digital interactivity, particularly suited to blended and flipped learning environments [69]. Taken together, these platforms, along with other similar tools, empower university educators to design and implement high-quality courses with flexibility, efficiency, and a strong student-centered orientation.

4.4. Assessment and Feedback

Assessment is an essential element of higher education pedagogy, and AI now provides instructors with sophisticated tools to enhance this process. Modern AI tools streamline assignment grading, provide precise and prompt feedback, and alleviate the administrative load typically linked to personalized assessment.
The e-rater® system, created by ETS, was one of the first operational automated essay scoring (AES) tools and is extensively used in language assessment, applying statistical models and natural language processing [70]. Pearson's Intelligent Essay Assessor (IEA) utilizes latent semantic analysis (LSA) to assess essays and short-answer responses, demonstrating high agreement with human evaluators [71].
A prominent LLM-based example is the Zero-Shot LLM Framework for Assignment Grading, which employs models such as GPT-4o to evaluate open-ended responses without prior training, relying exclusively on prompt engineering [72]. The Human–AI Collaborative Essay Scoring Framework employs a dual-layer methodology that combines LLM-generated ratings with human supervision to guarantee reliability and transparency [73]. The RATAS system (Rubric-Automated Tree-based Answer Scoring) utilizes generative AI for rubric-based qualitative evaluations, yielding explainable and consistent results [74].
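The core of such zero-shot grading is a carefully engineered prompt that embeds the rubric and constrains the model's output so it can be checked mechanically. The sketch below illustrates this idea under stated assumptions: the rubric, the JSON response format, and the function names are hypothetical, and the model call itself is stubbed with a canned reply rather than tied to any specific provider's API.

```python
# Minimal sketch of a zero-shot, rubric-based grading prompt.
# Rubric, scale, and JSON response format are illustrative assumptions;
# the frameworks cited in the text may differ in detail.
import json

def build_grading_prompt(question: str, rubric: dict[str, int], answer: str) -> str:
    """Assemble a prompt asking an LLM to score an answer against a rubric."""
    criteria = "\n".join(f"- {name} (max {pts} pts)" for name, pts in rubric.items())
    return (
        "You are a strict grader. Score the student answer against each "
        "criterion and reply ONLY with JSON of the form "
        '{"scores": {<criterion>: <points>}, "feedback": <string>}.\n\n'
        f"Question: {question}\n\nRubric:\n{criteria}\n\nStudent answer: {answer}"
    )

rubric = {"accuracy": 5, "reasoning": 3, "clarity": 2}
prompt = build_grading_prompt("Explain overfitting.", rubric, "Overfitting is ...")

# In practice the prompt would be sent to a model via a provider's chat API;
# here a canned reply stands in for the model. The JSON is parsed and
# range-checked before any human review.
reply = '{"scores": {"accuracy": 4, "reasoning": 2, "clarity": 2}, "feedback": "Solid."}'
scores = json.loads(reply)["scores"]
assert all(0 <= v <= rubric[k] for k, v in scores.items())
print(sum(scores.values()))  # prints 8 (out of 10)
```

The range check and the human-review step matter as much as the prompt itself: they are what the dual-layer human–AI frameworks described above formalize.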
Furthermore, several platforms have emerged that incorporate LLMs and AI agents for individualized assessment and feedback in higher education. Notably, the collaboration between OpenAI and Instructure introduced AI-powered grading into the Canvas LMS. This allows for the creation of LLM-enabled assignments: interactive, chat-based evaluations that align with specific learning objectives, with integrated insights in the gradebook and final grading subject to human review. Perusall, initially designed as an annotation tool, now provides AI-based grading for collaborative reading assignments, integrating LMS support and automating the evaluation of participation and commentary [75]. These platforms not only automate the grading of large volumes of student work but also promote transparency, rubric alignment, and the maintenance of human oversight as a core principle [76]. For example, Copilot for Marking uses natural language processing to identify the argumentative structure of student responses and helps produce enriched commentary. The tool draws on rhetorical and structural concepts to strengthen students' metacognitive engagement. Its use has been linked to more effective formative feedback and reduced cognitive load for educators [77].
As an alternative to Gradescope, Crowdmark offers collaborative grading for large courses. It allows multiple evaluators to work simultaneously with shared rubrics and consolidated feedback. This system has proven effective in large-enrollment institutions by enhancing transparency and grading accuracy [66].
For open-ended responses and essay grading, tools like EssayGrader and CoGrader provide rubric-based frameworks, automated analysis of coherence and topical alignment, and administrative functionalities such as bulk grading and report exports. These tools support grading consistency and save time in resource-constrained academic settings.
Effective deployment of AI in assessment requires faculty training in rubric design, ethical considerations (e.g., algorithmic bias), and proper interpretation of AI-generated outputs. When implemented thoughtfully, these tools do not merely automate grading but actively enhance the learning experience by providing immediate and meaningful feedback.

4.5. Organization and Administrative Management

Teaching at university demands not only subject-matter expertise but also strong organizational skills. Instructors must juggle multiple responsibilities, including lesson planning, student communication, administrative compliance, and documentation of instructional activities. In this context, AI tools contribute to the streamlining of daily workflows through automation and cognitive augmentation.
Notion AI serves as a multifunctional platform for organizing notes, creating structured lesson plans, and managing academic syllabi. Faculty can use it to build personal databases of instructional materials, activity calendars, task lists, and generative-AI-enhanced course modules [56]. It also enables the transformation of raw notes into summaries or presentation slides, facilitating collaborative teaching and the management of multiple courses.
Otter.ai provides real-time transcription for lectures, virtual meetings, and voice recordings. This is particularly beneficial for accessibility, offering students immediate textual representations of spoken material and thereby facilitating understanding and reflective learning. It also serves as a documented record of instructors' pedagogical practice.
Tactiq integrates with video conferencing services such as Zoom and Google Meet to provide real-time captioning and concise bullet-point summaries. It facilitates the extraction of key discussion points from meetings or mentoring sessions, improving decision-making and the oversight of student groups [78]. Together, these tools enhance instructors' metacognitive and administrative autonomy, fostering an organizational model based on information flow, transparency, and accessibility [79].

4.6. Reflective Teaching Support

Beyond assessment and content management, AI can also enhance teaching practice as a reflective tool for educators themselves. Feedback systems, dialogue analysis tools, and pattern recognition engines are increasingly being deployed in university environments to support more targeted, adaptive, and empathetic forms of instruction.
In early childhood and secondary education, a study by Meyer et al. [80] demonstrated that using LLMs to generate tailored feedback on student writing significantly improved academic performance and students’ engagement. Similar affordances can be leveraged in higher education, where AI can assist instructors in producing reflective feedback, identifying student misconceptions in open-ended activities, and generating “engagement prediction models” via discourse analysis in learning management systems (LMS) [81].
In more advanced implementations, tools such as AI Teaching Assistants are integrated into interactive environments (e.g., Inq-ITS, Write&Improve), and have been piloted in teacher education programs to monitor instructional scenarios and analyze participation trends in live classroom discussions [82]. The adoption of these tools enhances instructors’ capacity to perform metacognitive analyses of their teaching through data-informed dashboards, promoting a model of instruction that is reflective, adaptive, and socioemotionally responsive.

5. Ethical Use of AI in Teaching, Learning, Course Design, and Assessment

5.1. Institutional and Publishing Policies

The integration of GenAI into higher education has led to substantial shifts in academic practices worldwide, intensifying the discourse around usage policies, academic integrity, and ethical guidelines. As tools such as ChatGPT and other LLMs become increasingly embedded in writing, assessment, and research, universities, publishers, and global organizations are called upon to define clear frameworks for their responsible and ethical use.
A key priority for many universities is the institutionalization of policies that balance the adoption of AI with the preservation of academic integrity. Farrelly and Baker [2] caution that integrating AI without strategic planning may result in overdependence and diminished expectations for authentic thinking and writing. They recommend the development of ethical codes of conduct, explicit delineation of acceptable uses, and a distinction between “collaborative” and “substitutive” applications of AI as core safeguards.
In academic publishing, the study by Ganjavi et al. [10] reveals a wide variety of practices globally. While 87% of journals have adopted AI-related policies, only 24% of publishers offer clear guidance to authors. Most policies prohibit listing AI as an author but allow the use of AI tools, provided that their contributions are transparently acknowledged. This lack of standardization poses challenges for authors and underscores the need for harmonized international guidelines [10].

5.2. Detection, Fairness, and Disclosure-by-Design

The rising use of AI by students for academic assistance and assignment writing has led numerous universities to explore strategies for identifying unauthorized AI use. Common solutions include AI detection tools (e.g., GPTZero, Originality.ai, Turnitin's AI functionality). Nevertheless, independent evidence highlights substantial false-positive risks: such tools often misclassify human-written text as AI-generated, with a disproportionate impact on non-native English writers, raising fairness and equity concerns [83,84]. Reflecting these risks, the University of Pittsburgh disabled Turnitin's AI detection, noting the potential for erroneous accusations [85]. In line with this shift, institutions are moving from punitive detection toward preventive and pedagogical strategies, including assessment redesign (e.g., prompts centered on personal experience, iterative or oral components, and reflective, practice-based tasks) and disclosure-by-design approaches.
An increasing number of universities require or encourage disclosure of AI use in coursework, typically via an appendix that documents the tools employed, the prompts issued, the purpose, and the extent of integration, in line with the ETHICAL principles for coursework [3]: Embrace awareness of capabilities and limits; ensure Transparency; Highlight the student's own intellectual contribution; preserve Integrity through proper attribution; Cultivate deep engagement over tool dependence; Append specific usage details; and Learn to distinguish appropriate from inappropriate uses (Figure 4). Together, disclosure and process documentation increase transparency and trust while preserving academic voice and originality.
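As a concrete illustration, such a disclosure appendix can be kept as structured records and rendered automatically, which makes it auditable and consistent across a cohort. The record fields and wording below are a hypothetical sketch, not a mandated or institutionally endorsed format.

```python
# Hypothetical sketch: render an AI-use disclosure appendix from structured
# records (tool, prompt, purpose, extent), as recommended in the text.
# Field names and wording are illustrative, not an official template.

def render_appendix(entries: list[dict]) -> str:
    """Render disclosure records as a plain-text appendix."""
    lines = ["Appendix: Declared AI Use"]
    for i, e in enumerate(entries, start=1):
        lines.append(
            f"{i}. Tool: {e['tool']} | Prompt: {e['prompt']} | "
            f"Purpose: {e['purpose']} | Extent: {e['extent']}"
        )
    return "\n".join(lines)

entries = [
    {"tool": "ChatGPT", "prompt": "Suggest an outline for ...",
     "purpose": "brainstorming structure",
     "extent": "outline only; all prose my own"},
]
print(render_appendix(entries))
```

Keeping the records structured (rather than free prose) also lets instructors aggregate declared uses across submissions when reviewing how a cohort actually engages with AI tools.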
The academic community has begun adopting additional preventive measures that promote transparency and accountability. According to Cheng et al. [3], three core principles guide the ethical use of AI in academic writing: (1) human oversight and evaluation; (2) substantial human contribution to content creation; and (3) explicit disclosure of AI involvement. These principles are essential to sustaining trust in academic systems. Complementarily, recent policy surveys indicate widespread student use of GenAI in assessments, reinforcing the urgency of assessment redesign and clear disclosure norms to safeguard authenticity while supporting learning.
Uzun [44] links unregulated AI use to an increase in academic misconduct. A lack of awareness regarding acceptable practices, combined with limited training on evaluating AI-generated content, has led to a rise in plagiarism cases. Educational initiatives and AI literacy workshops are thus recommended to equip students and instructors with the competencies required for responsible AI engagement.

5.3. International Frameworks and Equity Considerations

At the level of international policy, both the UNESCO report [86] and the COPE [87] recommendations advocate for global frameworks grounded in ethical AI use, transparency, and reliability. Likewise, Moya et al. [88] propose international guidelines for AI integration, with particular attention to protecting academic integrity in cross-border education settings. In developing countries, the implementation of such policies requires adaptation to local capabilities. Sallu et al. [89] emphasise the need for investment in infrastructure, technical expertise, and equitable access to AI, so that technology becomes a driver of inclusion rather than a mechanism of exclusion.

5.4. Assessment Reform and Adaptive Governance

Policy considerations also encompass the necessity of reforming assessment practices. Rodzi et al. [90] contend that conventional evaluation methods may become outdated in the age of generative models. Proposed alternatives, including oral examinations, multi-phase projects, and reflective analyses, can help preserve authenticity. UNESCO [90,91] similarly advocates using AI as a helper rather than a replacement, a stance echoed across institutional recommendations.
Ultimately, ongoing policy evaluation and adjustment to technological progress are needed. Bozkurt et al. [92,93], in their "AI Teaching and Learning Manifesto," promote adaptive frameworks that evolve with educational requirements and AI advancements. Formulating standards for the ethical application of AI in higher education is not solely a technical issue; it is a pedagogical and institutional necessity. Universities must achieve equilibrium between innovation and ethics, ensuring that AI enhances, rather than undermines, the fundamental ideals of higher education: autonomy, critical thinking, trustworthiness, and authenticity.

6. Institutional Integration of AI in Higher Education: Policy Recommendations and Strategic Implementation

6.1. Policy and Governance

The strategic integration of AI in higher education requires not only technological readiness and well-developed AI literacy (as previously discussed), but also institutional coherence, ethical awareness, and pedagogical foresight. As tools such as ChatGPT and LLMs become increasingly prevalent in academic settings, universities are called upon to establish clear, unified, and ethically informed policy frameworks that harness the potential of AI without compromising the core values of education.
A foundational step is the development of transparent and adaptable institutional policies governing AI use. These frameworks should explicitly define acceptable applications (e.g., for stylistic or syntactic refinement) and prohibit inappropriate uses (e.g., submission of content fully generated by AI). Policies should embed disclosure-by-design (e.g., AI-use statements/appendix) and require human-in-the-loop review for high-stakes decisions, with simple model/version logs for traceability. It is recommended that institutions establish technology monitoring committees and adopt standardized terminology across departments to prevent confusion and inconsistency [10,38].
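The "model/version log" recommended above can be made concrete with a minimal sketch. The field names and the example values are illustrative assumptions, not a prescribed schema; real deployments would likely add course identifiers, retention metadata, and access controls.

```python
# Hypothetical sketch of a minimal model/version log entry for traceability
# in AI-assisted academic decisions. Field names are illustrative only.
import datetime
import json

def log_entry(model: str, version: str, task: str, reviewer: str) -> str:
    """Serialize one traceability record as a JSON line."""
    record = {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "model": model,
        "version": version,
        "task": task,
        "human_reviewer": reviewer,  # human-in-the-loop sign-off
    }
    return json.dumps(record)

print(log_entry("gpt-4o", "2024-08-06",
                "draft feedback on essay 17", "A. Instructor"))
```

Appending such JSON lines to an institutional log gives committees a simple, queryable trail of which model and version informed which decision, and who reviewed it.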

6.2. Literacy and Capacity Building

Equally essential is the strengthening of AI literacy among both students and faculty. Introductory seminars (e.g., “AI for Academic Users”) should be embedded across curricula, while faculty should receive professional training in prompting, ethical dilemmas, and pedagogical integration of AI tools [2,3]. Consistent with previous reports, literacy initiatives can integrate self-regulated learning micro-skills (goal setting, verification of AI outputs, time management, credibility checks) and short professional development micro-credentials for staff who deploy AI-enabled activities.

6.3. Curriculum, Assessment, and Academic Integrity

Pedagogically grounded AI integration into learning and assessment activities can mitigate covert usage while promoting critical thinking. For example, instructors can design assignments that require students to compare their own responses to those generated by AI tools and reflect on the differences. Such strategies reposition AI as a learning ally rather than a means of circumvention [46]. Consistent with earlier sections of this entry, courses can include brief AI-use statements and an AI appendix in assessed work, while assessment redesign (e.g., iterative or oral components, process portfolios with transparent rubrics) helps preserve academic voice and authenticity.

6.4. Infrastructure, Data Protection, and Collaboration

Institutional investment in AI infrastructure is likewise encouraged. This may involve offering campus-wide subscriptions to reputable AI platforms or creating internal chatbots connected to institutional libraries. Experimental environments such as AI Labs facilitate responsible and innovative exploration of AI technologies. Data protection and digital sovereignty are essential elements. Institutional regulations must guarantee adherence to the General Data Protection Regulation (GDPR), specify where student and faculty data are stored, and mandate transparency in the operation of AI tools [94]. Where appropriate, institutions should conduct Data Protection Impact Assessments for core integrations, document data residency and retention, and maintain concise model/version registries for automated decision support; where sovereignty is a priority, open-source or locally hosted options with equitable campus licensing merit consideration. Ongoing assessment of AI integration outcomes is crucial. Empirical research within institutions, coupled with qualitative and quantitative stakeholder feedback, can guide iterative enhancements in policy and practice.
Finally, inter-university collaboration is a key enabler. Networks such as the European University Association (EUA) and global organizations like UNESCO [86] encourage the alignment of good practices and a unified academic voice in shaping regulatory frameworks (e.g., the EU AI Act). Through such cooperation, institutions can advocate education-specific exceptions and safeguards that reflect the unique dynamics of teaching and research.
In summary, the integration of AI into higher education should not occur passively, but rather through deliberate implementation rooted in ethical discernment, institutional vision, and a deep respect for the values of academia: autonomy, academic freedom, equity, and quality. This section provides the connective policy and practice tissue so that ethical principles, learning evidence, and tools function coherently at institutional scale.

7. Discussion

The incorporation of AI into higher education represents not just technological advancement but a profound alteration of pedagogical methods, learning processes, and student-instructor engagement. The empirical findings presented in this entry demonstrate that AI, particularly GenAI, influences various aspects of academic life, including written expression support, language revision, productivity enhancement, pedagogical models, assessment strategies, and institutional teaching policies. AI tools offer personalized learning experiences, streamline assessment, and support academic productivity. They facilitate higher student engagement, autonomy, and accessibility, especially in blended and remote learning environments.
Although the benefits are apparent, the discussion of AI utilization must not neglect significant problems, such as ethical implementation, authenticity, equitable access, and the necessity of institutional safeguards. Crain et al. [51] underscore prevalent apprehensions among teachers and students about the decline of cognitive abilities, and further stress the necessity of oversight, transparency, and disclosure via mechanisms such as AI usage declarations. Excessive dependence on unregulated AI and LLMs threatens to standardize academic discourse and restrict critical inquiry in educational contexts, potentially fostering overreliance and cognitive deficits ("cognitive debt") [39]. In short, opportunities and risks are co-present and require coordinated academic, technological, and policy responses.
Policies for the integration of AI at both institutional and governmental levels have gained considerable importance; however, institutional governance remains fragmented, with inconsistent policies and uneven adoption of AI literacy initiatives contributing to uncertainty. Comparative case studies demonstrate various levels of preparedness: nations like Finland and Singapore have integrated AI into their curricula and national strategies, while others, such as Greece and Germany, show deficiencies in institutional coordination. This illustrates a wider trend of uneven adaptation, intensifying the disparity between "digitally mature" and "technologically vulnerable" institutions. Thus, the absence of standardized ethical frameworks for AI use in academic contexts complicates effective implementation and risks confusion or misuse. A coherent institutional pathway helps reduce this variability by aligning governance, literacy, curriculum, and data practices.
Artificial Intelligence is not a neutral tool. It may reproduce biases, perpetuate social inequalities, raise fairness issues—especially for non-native speakers—and exacerbate digital divides between resource-rich and vulnerable institutions or student populations. For these reasons, it requires interpretive vigilance by the academic community. The ability of LLMs to fabricate citations or produce content lacking authentic grounding raises serious questions about learning quality and the reliability of student submissions.
An especially pressing concern is equal access. Research conducted by Farahani and Ghasmi [15] and the relevant sections of this entry on AI literacy highlight the potential for exacerbating the digital gap. The commercialization of AI technologies and the inequitable distribution of digital resources may deepen the marginalization of students from disadvantaged backgrounds, thereby limiting their capacity to engage fully in educational activities. Regulatory and ethical frameworks lag behind swift technological progress. Ganjavi et al. [10] highlight the absence of standardized regulations for AI utilization in academic production, resulting in varying interpretations of permitted activities by instructors and heightening the risk of confusion or misapplication. This inconsistency is reflected within universities, where AI integration frequently relies on the judgment of individual faculty members rather than cohesive institutional strategies. Consequently, the strategic implementation of AI in higher education must be based on transparency, pedagogical insight, equity, and human supervision. Neither uncritical technological zeal nor excessive caution can sufficiently address these complex challenges. What is necessary is sustained reflection, institutional development, and collaborative dedication to maintaining the quality, authenticity, and democratic nature of education. In practice, this means pairing ethical policies with a staged institutional roll-out rather than ad hoc adoption.
To move from diagnosis to solutions, we recommend a package of evidence-based practices: adopt disclosure-by-design (AI-use statements, model/version logs) and mandate human-in-the-loop review for any high-stakes or graded use [42,95]; redesign assessments toward iterative, authentic, and oral/defense-based tasks with transparent rubrics and planned AI affordances [77,96,97]; build AI literacy and staff professional development covering bias awareness, prompt planning, verification, and data-privacy basics; and establish governance guardrails (bias audits, Data Protection Impact Assessments, GDPR-aligned data minimization, and multi-stakeholder oversight) to ensure equitable, transparent implementation across disciplines [95]. Taken together, these steps balance innovation with integrity and make AI deployments more robust, fair, and educationally meaningful.
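The disclosure-by-design practice recommended above can be made concrete as a machine-readable AI-use declaration attached to each submission. The following Python sketch is purely illustrative: the `AIUseRecord` structure and its field names are our assumptions, not an established standard, and institutions would adapt them to their own policies.

```python
from dataclasses import dataclass, field, asdict
from datetime import date
import json

@dataclass
class AIUseRecord:
    """Hypothetical AI-use declaration for one assignment submission."""
    tool: str                 # e.g., "ChatGPT"
    model_version: str        # e.g., "GPT-4.5"
    purpose: str              # e.g., "language and grammar improvement"
    prompts_logged: bool      # whether the prompt history is archived
    human_reviewed: bool      # human-in-the-loop check before submission
    declared_on: str = field(default_factory=lambda: date.today().isoformat())

def declaration_statement(records: list[AIUseRecord]) -> str:
    """Render the records as a short disclosure sentence plus a JSON log."""
    tools = ", ".join(f"{r.tool} ({r.model_version}) for {r.purpose}"
                      for r in records)
    log = json.dumps([asdict(r) for r in records], indent=2)
    return f"AI-use declaration: {tools}.\nMachine-readable log:\n{log}"

# Example: a student declares grammar assistance on an essay.
record = AIUseRecord(
    tool="ChatGPT",
    model_version="GPT-4.5",
    purpose="language and grammar improvement",
    prompts_logged=True,
    human_reviewed=True,
)
print(declaration_statement([record]))
```

Pairing a human-readable statement with a structured log of this kind would let instructors audit declared use at a glance while enabling institution-level analytics on how AI tools are actually employed.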
Various evaluation models and tools have been developed to assess the use and impact of AI in higher education, ranging from diagnostic tools for measuring AI literacy to frameworks for monitoring AI’s pedagogical effectiveness and ethical implications. While a detailed examination of these approaches is beyond the scope of this entry, which focuses on synthesizing established knowledge about AI’s role in education, it is important to recognize them as part of the broader ecosystem supporting responsible AI integration and to underscore the importance of systematic evaluation. Future work should continue to refine these tools and adapt them to diverse educational contexts, ensuring that measurement strategies remain aligned with evolving AI capacities and the values of academic integrity, inclusivity, and transparency.
Future research should prioritize rigorous longitudinal and/or experimental designs to empirically measure AI’s long-term pedagogical impact, disentangling short-term tool effects from genuine cognitive development. Comparative research across diverse educational contexts is also crucial to ensure that AI solutions are culturally responsive and equitable. Investigations into effective AI literacy instruction and equitable access strategies are essential. Lastly, the development of adaptive policy and governance frameworks that evolve with technological advances and reflect diverse institutional needs is critical.

8. Conclusions

Artificial intelligence represents a transformative force in higher education, offering powerful tools to enhance learning, teaching, and research. However, its deployment is accompanied by significant ethical, institutional, and pedagogical challenges. The lack of regulatory consensus, the absence of policy frameworks in many countries, and technological asymmetries between students and institutions underscore the need for strategic institutional integration.
Based on the current state-of-the-art review, this integration should be anchored in four foundational pillars:
1. Academic Integrity: AI use must be accompanied by clearly defined ethical standards and rules. Disclosure statements, human oversight, and the avoidance of misuse are prerequisites for maintaining scientific credibility.
2. AI Literacy Development: Educating students and staff about AI’s capabilities, limitations, and ethical dimensions is essential for conscious, critical engagement with these technologies.
3. Pedagogical Integration: AI should not merely serve as a productivity tool but be meaningfully embedded in teaching and assessment to promote critical thinking, creativity, and learner autonomy.
4. Institutional and Policy Adaptation: Universities must develop coherent strategies that combine the protection of academic ethics with support for innovation. Policies should be adaptive, transparent, and responsive to the specific needs of institutions.
Operationalizing this agenda requires evidence-informed governance: disclosure-by-design with human-in-the-loop oversight for high-stakes uses; AI literacy development for students and staff that incorporates self-regulated learning; assessment redesign toward iterative, authentic tasks with transparent rubrics; and routine bias and privacy audits under GDPR-aligned data stewardship, coupled with equitable access provisions. Together, these measures align innovation with academic integrity, equity, and durable learning value.
As a concluding remark, AI is neither a cure-all nor an intrinsic danger. It is, instead, a “technology of selection.” Its successful, ethical, and strategic incorporation into higher education depends on the choices made by institutions, educators, and learners. The opportunity is unparalleled: if embraced with institutional commitment and educational accountability, AI can facilitate a more inclusive, efficient, and democratic future for education.

Supplementary Materials

The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/encyclopedia5040180/s1, Table S1: Comparative overview of AI tools used by educators in higher education.

Author Contributions

Conceptualization, M.A.; methodology, M.A.; writing—original draft preparation, M.A. and T.R.; writing—review and editing, M.A. and T.R.; visualization, T.R.; supervision, M.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

During the preparation of this manuscript, the authors used ChatGPT (GPT-4.5 model) and Perplexity (Perplexity AI App v4.0.0) for the purposes of language and grammar improvement, formatting and style guidance, as well as clarity checks. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI      Artificial Intelligence
GenAI   Generative Artificial Intelligence
LLMs    Large Language Models
LMS     Learning Management Systems
ITS     Intelligent Tutoring Systems
GDPR    General Data Protection Regulation
NLP     Natural Language Processing
EEG     Electroencephalography

References

  1. Liu, Y.; Han, T.; Ma, S.; Zhang, J.; Yang, Y. Summary of ChatGPT-Related Research and Perspective Towards the Future of Large Language Models. Meta-Radiology 2023, 1, 100017. [Google Scholar] [CrossRef]
  2. Farrelly, T.; Baker, N. Generative Artificial Intelligence: Implications and Considerations for Higher Education Practice. Educ. Sci. 2023, 13, 1109. [Google Scholar] [CrossRef]
  3. Cheng, A.; Calhoun, A.; Reedy, G. Artificial Intelligence–Assisted Academic Writing: Recommendations for Ethical Use. Adv. Simul. 2025, 10, 22. [Google Scholar] [CrossRef] [PubMed]
  4. O’Donnell, F.; Porter, M.; Rinella Fitzgerald, D. The Role of Artificial Intelligence in Higher Education. Ir. J. Technol. Enhanc. Learn. 2024, 8. [Google Scholar] [CrossRef]
  5. Shishavan, H.B. AI in Higher Education. In ASCILITE 2024 Conference Proceedings; The University of Melbourne: Melbourne, Australia, 2024. [Google Scholar] [CrossRef]
  6. Crompton, H.; Burke, D. Artificial Intelligence in Higher Education: The State of the Field. Int. J. Educ. Technol. High. Educ. 2023, 20, 22. [Google Scholar] [CrossRef]
  7. Dempere, J.; Modugu, K.; Hesham, A.; Ramasamy, L.K. The Impact of ChatGPT on Higher Education. Front. Educ. 2023, 8, 1206936. [Google Scholar] [CrossRef]
  8. Wang, J.; Fan, W. The Effect of ChatGPT on Students’ Learning Performance, Learning Perception, and Higher-Order Thinking: Insights from a Meta-Analysis. Humanit. Soc. Sci. Commun. 2025, 12, 621. [Google Scholar] [CrossRef]
  9. Deng, R.; Jiang, M.; Yu, X.; Lu, Y.; Liu, S. Does ChatGPT Enhance Student Learning? A Systematic Review and Meta-Analysis of Experimental Studies. Comput. Educ. 2025, 227, 105224. [Google Scholar] [CrossRef]
  10. Ganjavi, C.; Eppler, M.B.; Pekcan, A.; Biedermann, B.; Abreu, A.; Collins, G.S.; Gill, I.S.; Cacciamani, G.E. Publishers’ and Journals’ Instructions to Authors on Use of Generative AI in Academic and Scientific Publishing: Bibliometric Analysis. BMJ 2024, 384, e077192. [Google Scholar] [CrossRef] [PubMed]
  11. Hazari, S. Justification and Roadmap for Artificial Intelligence (AI) Literacy Courses in Higher Education. J. Educ. Res. Pract. 2024, 14, 7. [Google Scholar] [CrossRef]
  12. Vashishth, T.K.; Sharma, V.; Sharma, K.K.; Kumar, B. Enhancing Literacy Education in Higher Institutions with AI: Opportunities and Challenges. In Advances in Educational Technologies and Instructional Design; IGI Global: Palmdale, PA, USA, 2024; pp. 198–215. [Google Scholar] [CrossRef]
  13. Ajani, O.A.; Akintolu, M.; Afolabi, S.O. The Emergence of Artificial Intelligence in the Higher Education. Int. J. Res. Bus. Soc. Sci. 2024, 13, 157–165. [Google Scholar] [CrossRef]
  14. Jarrah, A.M.; Wardat, Y.; Fidalgo, P. Using ChatGPT in Academic Writing Is (Not) a Form of Plagiarism: What Does the Literature Say? Online J. Commun. Media Technol. 2023, 13, e202346. [Google Scholar] [CrossRef]
  15. Farahani, M.S.; Ghasmi, G. Artificial Intelligence in Education: A Comprehensive Study. Forum Educ. Stud. 2024, 2, 1379. [Google Scholar] [CrossRef]
  16. Grant, M.J.; Booth, A. A Typology of Reviews: An Analysis of 14 Review Types and Associated Methodologies. Health Inf. Libr. J. 2009, 26, 91–108. [Google Scholar] [CrossRef] [PubMed]
  17. Conati, C.; Gertner, A.; VanLehn, K. Using Bayesian Networks to Manage Uncertainty in Student Modeling. User Model. User-Adap. Interact. 2002, 12, 371–417. [Google Scholar] [CrossRef]
  18. Conati, C.; Maclaren, H. Empirically Building and Evaluating a Probabilistic Model of User Affect. User Model. User-Adap. Interact. 2009, 19, 267–303. [Google Scholar] [CrossRef]
  19. Luckin, R.; Holmes, W. Intelligence Unleashed: An Argument for AI in Education; UCL Knowledge Lab: London, UK, 2016. [Google Scholar]
  20. Dargue, B.; Biddle, E. Just Enough Fidelity in Student and Expert Modeling for ITS. In International Conference on Augmented Cognition; Springer International Publishing: Cham, Switzerland, 2014. [Google Scholar]
  21. Sleeman, D.H.; Brown, J.S. Intelligent tutoring systems: An overview. In Intelligent Tutoring Systems; Sleeman, D.H., Brown, J.S., Eds.; Academic Press: New York, NY, USA, 1982; pp. 1–11. [Google Scholar]
  22. Graesser, A.C.; Chipman, P.; Haynes, B.C.; Olney, A. AutoTutor: An Intelligent Tutoring System with Mixed-Initiative Dialogue. IEEE Trans. Educ. 2005, 48, 612–618. [Google Scholar] [CrossRef]
  23. VanLehn, K.; Graesser, A.C.; Jackson, G.T.; Jordan, P.; Olney, A.; Rosé, C.P. When are tutorial dialogues more effective than reading? Cogn. Sci. 2007, 31, 3–62. [Google Scholar] [CrossRef] [PubMed]
  24. Spector, J.; Merrill, M.; David, M. Handbook of Research on Educational Communications and Technology, 3rd ed.; Springer: New York, NY, USA, 2014. [Google Scholar]
  25. Crompton, H.; Song, D. The Potential of Artificial Intelligence in Higher Education. Rev. Virtual Univ. Católica Norte 2021, 62, 1–4. [Google Scholar] [CrossRef]
  26. Goel, A.; Polepeddi, L. Jill Watson: A Virtual Teaching Assistant for Online Education. Georgia Institute of Technology Research Report. 2018. Available online: https://dilab.gatech.edu/test/wp-content/uploads/2022/06/GoelPolepeddi-DedeRichardsSaxberg-JillWatson-2018.pdf (accessed on 28 September 2025).
  27. Abdurohman, N.R. Artificial Intelligence in Higher Education: Opportunities and Challenges. Eurasian Sci. Rev. 2025, 2, 1683–1695. [Google Scholar] [CrossRef]
  28. Betül, C.; Durgut, G. AI Literacy in Higher Education: Knowledge, Skills, and Competences; Association for the Advancement of Computing in Education (AACE): Waynesville, NC, USA, 2024; Available online: https://www.learntechlib.org/primary/p/224519 (accessed on 28 September 2025).
  29. AI and the Future of Learning: Expert Panel Report; Roschelle, J., Lester, J., Fusco, J., Eds.; Digital Promise: Washington, DC, USA, 2020; Available online: https://circls.org/reports/ai-report (accessed on 3 August 2025).
  30. EDUCAUSE. Defining AI Literacy for Higher Education; EDUCAUSE: Washington, DC, USA, 2024; Available online: https://www.educause.edu/content/2024/ai-literacy-in-teaching-and-learning/defining-ai-literacy-for-higher-education (accessed on 28 September 2025).
  31. Zhang, B.; Dafoe, A. Artificial Intelligence: American Attitudes and Trends; Center for the Governance of AI, University of Oxford: Oxford, UK, 2019; Available online: https://governanceai.github.io/US-Public-Opinion-Report-Jan-2019/ (accessed on 3 August 2025).
  32. Long, D.; Magerko, B. What Is AI Literacy? Competencies and Design Considerations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems, Honolulu, HI, USA, 25–30 April 2020; Association for Computing Machinery: New York, NY, USA, 2020; pp. 1–16. [Google Scholar] [CrossRef]
  33. Media and Information Literacy in the Global South: Politics, Policies and Pedagogies; Ragnedda, M., Mutsvairo, B., Eds.; Routledge: London, UK, 2018. [Google Scholar]
  34. Mackey, T.P.; Jacobson, T.E. Metaliteracy: Reinventing Information Literacy to Empower Learners; ALA Neal-Schuman: Chicago, IL, USA, 2014. [Google Scholar]
  35. Touretzky, D.S.; Gardner-McCune, C.; Martin, F.; Seehorn, D. Envisioning AI for K-12: What Should Every Child Know about AI? Proc. AAAI Conf. Artif. Intell. 2019, 33, 9795–9799. [Google Scholar] [CrossRef]
  36. Yurt, E. The Self-Regulation for AI-Based Learning Scale: Psychometric Properties and Validation. Int. J. Curr. Educ. Stud. 2025, 4, 95–118. [Google Scholar] [CrossRef]
  37. Şanlı, C. Artificial Intelligence in Geography Teaching: Potentialities, Applications, and Challenges. Int. J. Curr. Educ. Stud. 2025, 4, 47–76. [Google Scholar] [CrossRef]
  38. Rodafinos, A. The Integration of Generative AI Tools in Academic Writing: Implications for Student Research. Soc. Educ. Res. 2025, 6, 250–258. [Google Scholar] [CrossRef]
  39. Kosmyna, N.; Hauptmann, E.; Yuan, Y.T.; Situ, J.; Liao, X.-H.; Beresnitzky, A.V.; Braunstein, I.; Maes, P. Your Brain on ChatGPT: Accumulation of Cognitive Debt When Using an AI Assistant for Essay Writing Task. arXiv 2025, arXiv:2506.08872. [Google Scholar] [CrossRef]
  40. Tripathi, C.R. Awareness of Artificial Intelligence (AI) among Undergraduate Students. NPRC J. Multidiscip. Res. 2024, 1, 126–142. [Google Scholar] [CrossRef]
  41. Iskandarova, S.; Yusif-Zada, K.; Mukhtarova, S. Integrating AI into Higher Education Curriculum in Developing Countries. In Proceedings of the 2024 IEEE Frontiers in Education Conference (FIE), Washington, DC, USA, 13–16 October 2024; pp. 1–9. [Google Scholar] [CrossRef]
  42. Bittle, K.; El Gayar, O. Generative AI and Academic Integrity in Higher Education: A Systematic Review and Research Agenda. Information 2025, 16, 296. [Google Scholar] [CrossRef]
  43. Gander, T.; Harris, G. Understanding AI Literacy for Higher Education Students: Implications for Assessment. AI High. Educ. Symp. 2024, 1, 8. [Google Scholar] [CrossRef]
  44. Uzun, L. ChatGPT and Academic Integrity Concerns: Detecting Artificial Intelligence Generated Content. Lang. Educ. Technol. 2023, 3, 45–54. [Google Scholar]
  45. Kudiabor, H. How AI-Powered Science Search Engines Can Speed Up Your Research. Nature 2024, 621, 688. [Google Scholar] [CrossRef] [PubMed]
  46. Mollick, E.; Mollick, L. Assigning AI: Seven Approaches for Students, with Prompts. arXiv 2023, arXiv:2306.10052. [Google Scholar] [CrossRef]
  47. Takona, J.P. AI in Education: Shaping the Future of Teaching and Learning. Int. J. Curr. Educ. Stud. 2024, 3, 1–25. [Google Scholar] [CrossRef]
  48. Jereb, E.; Urh, M. The Use of Artificial Intelligence among Students in Higher Education. Organizacija 2024, 57, 333–345. [Google Scholar] [CrossRef]
  49. Shi, J.; Xuwei, Z. Integration of AI with Higher Education Innovation: Reforming Future Educational Directions. Int. J. Sci. Res. 2023, 12, 1727–1731. [Google Scholar] [CrossRef]
  50. Jacques, P.H.; Moss, H.K.; Garger, J. A Synthesis of AI in Higher Education: Shaping the Future. J. Behav. Appl. Manag. 2024, 25, 103–111. [Google Scholar] [CrossRef]
  51. Crain, C.; Ewing, A.; Billy, I.; Anush, H. The Advantages and Disadvantages of AI in Higher Education. Bus. Manag. Rev. 2025, 15, 160–169. [Google Scholar] [CrossRef]
  52. Hicks, M.T.; Humphries, J.; Slater, J. ChatGPT Is Bullshit. Ethics Inf. Technol. 2024, 26, 38. [Google Scholar] [CrossRef]
  53. Krishnakumar, M.; Balasubramanian, K. Effectiveness of AI in Enhancing Quality Higher Education: A Survey Study. Int. J. Multidiscip. Res. 2024, 6. [Google Scholar] [CrossRef]
  54. Mohamed, A.M.; Shaaban, T.S.; Bakry, S.H.; Guillén-Gámez, F.D.; Strzelecki, A. Empowering the Faculty of Education Students: Applying AI’s Potential for Motivating and Enhancing Learning. Innov. High. Educ. 2024, 50, 587–609. [Google Scholar] [CrossRef]
  55. Kasneci, E.; Seßler, K.; Küchemann, S.; Bannert, M.; Dementieva, D.; Fischer, F.; Gasser, U.; Groh, G.; Günnemann, S.; Hüllermeier, E.; et al. ChatGPT for Good? On Opportunities and Challenges of Large Language Models for Education. Learn. Individ. Differ. 2023, 103, 102274. [Google Scholar] [CrossRef]
  56. Sajja, R.; Sermet, Y.; Cikmaz, M.; Cwiertny, D.; Demir, I. Artificial Intelligence-Enabled Intelligent Assistant for Personalized and Adaptive Learning in Higher Education. Information 2024, 15, 596. [Google Scholar] [CrossRef]
  57. Msambwa, M.M.; Wen, Z.; Daniel, K. The Impact of AI on the Personal and Collaborative Learning Environments in Higher Education. Eur. J. Educ. 2025, 60, e12909. [Google Scholar] [CrossRef]
  58. Velázquez-García, L. AI-Based Applications Enhancing Computer Science Teaching in Higher Education. J. Inf. Syst. Eng. Manag. 2025, 10, 14–32. [Google Scholar] [CrossRef]
  59. Alasgarova, R.; Rzayev, J. The Role of Artificial Intelligence in Shaping High School Students’ Motivation. Int. J. Technol. Educ. Sci. 2024, 8, 311–324. [Google Scholar] [CrossRef]
  60. Wang, W.; Li, W. The Impact of AI Usage on University Students’ Willingness for Autonomous Learning. Behav. Sci. 2024, 14, 956. [Google Scholar] [CrossRef] [PubMed]
  61. Huang, X.; Zhang, Y.; Lyu, X. Assessment of the Impact of Artificial Intelligence on College Student Learning Based on the CRITIC Method. In Proceedings of the 2023 IEEE International Conference on Education, Applications and Standards of Converging Technologies (EASCT), Lonavla, India, 7–9 April 2023; pp. 1–12. [Google Scholar] [CrossRef]
  62. Sumbal, A.; Sumbal, R.; Amir, A. Can ChatGPT-3.5 Pass a Medical Exam? A Systematic Review of ChatGPT’s Performance in Academic Testing. J. Med. Educ. Curric. Dev. 2024, 11, 23821205241238641. [Google Scholar] [CrossRef] [PubMed]
  63. Prillaman, M. Is ChatGPT Making Scientists Hyper-Productive? The Highs and Lows of Using AI. Nature 2024, 627, 16–17. Available online: https://www.nature.com/articles/d41586-024-00592-w (accessed on 3 August 2025). [CrossRef] [PubMed]
  64. OECD. OECD Digital Education Outlook 2023: Towards an Effective Digital Education Ecosystem; OECD Publishing: Paris, France, 2023. [Google Scholar]
  65. OECD. Empowering Learners for the Age of AI: An AI Literacy Framework for Primary and Secondary Education; OECD Publishing: Paris, France, 2025; Available online: https://ailiteracyframework.org (accessed on 3 August 2025).
  66. Kankam, M.; Nazari, E.; Owan, H.D. Artificial Intelligence Tools in Higher Education Institutions: Review of Adoption and Impact. Humanit. Soc. Sci. Commun. 2024, 11, 912. [Google Scholar] [CrossRef]
  67. Owan, H.D.; Akinwalere, T.K.; Ivanov, V. AI tools for motivation, engagement and performance in higher education. Learn. Anal. J. 2023, 9, 67–84. [Google Scholar]
  68. Cacicio, S.; Riggs, R. ChatGPT: Leveraging AI to Support Personalized Teaching and Learning. Adult Lit. Educ. 2023, 5, 70–74. [Google Scholar] [CrossRef]
  69. Chan, C.K.Y.; Tsi, L.H.Y. Will Generative AI Replace Teachers in Higher Education? A Study of Teacher and Student Perceptions. Stud. Educ. Eval. 2024, 83, 101395. [Google Scholar] [CrossRef]
  70. Hussein, M.A.; Hassan, H.; Nassef, M. Automated Language Essay Scoring Systems: A Literature Review. PeerJ Comput. Sci. 2019, 5, e208. [Google Scholar] [CrossRef] [PubMed]
  71. Burrows, S.; Gurevych, I.; Stein, B. The Eras and Trends of Automatic Short Answer Grading. Int. J. Artif. Intell. Educ. 2015, 25, 60–117. [Google Scholar] [CrossRef]
  72. Grévisse, C. LLM-Based Automatic Short Answer Grading in Undergraduate Medical Education. BMC Med. Educ. 2024, 24, 1060. [Google Scholar] [CrossRef] [PubMed]
  73. Tobler, S. Smart Grading: A Generative AI-Based Tool for Knowledge-Grounded Answer Evaluation in Educational Assessments. MethodsX 2024, 12, 102345. [Google Scholar] [CrossRef] [PubMed]
  74. Gao, R.; Merzdorf, H.E.; Anwar, S.; Hipwell, M.C.; Srinivasa, A.R. Automatic Assessment of Text-Based Responses in Post-Secondary Education: A Systematic Review. Comput. Educ. Artif. Intell. 2024, 6, 100206. [Google Scholar] [CrossRef]
  75. Craig, C.D.; Kay, R.H. A Systematic Review of the Perusall Application: Exploring the Benefits and Challenges of Social Annotation Technology in Higher Education. In INTED 2024 Proceedings; IATED: Valencia, Spain, 2024; pp. 5829–5839. [Google Scholar] [CrossRef]
  76. Instructure. Instructure and OpenAI Announce Global Partnership to Embed AI Learning Experiences Within Canvas. Instructure, 2025. Available online: https://www.instructure.com/press-release/instructure-and-openai-announce-global-partnership-embed-ai-learning-experiences (accessed on 5 August 2025).
  77. Kofinas, A.K.; Tsay, C.H.-H.; Pike, D. The Impact of Generative AI on Academic Integrity of Authentic Assessments within a Higher Education Context. Br. J. Educ. Technol. 2025, 56, e13585. [Google Scholar] [CrossRef]
  78. Gerlich, M. AI Tools in Society: Impacts on Cognitive Offloading and the Future of Critical Thinking. Societies 2025, 15, 6. [Google Scholar] [CrossRef]
  79. Stanford Report. AI-Assisted Productivity Tools in Higher Education: Trends and Recommendations; Stanford University Publications: Stanford, CA, USA, 2023. [Google Scholar]
  80. Meyer, J.; Jansen, T.; Schiller, R.; Liebenow, L.W.; Steinbach, M.; Horbach, A.; Fleckenstein, J. Using LLMs to Bring Evidence-Based Feedback into the Classroom: AI-Generated Feedback Increases Secondary Students’ Text Revision, Motivation, and Positive Emotions. Comput. Educ. Artif. Intell. 2024, 6, 100199. [Google Scholar] [CrossRef]
  81. MentalUP. Top 15 AI Grading Tools for Teachers in 2025; MentalUP: London, UK, 2025; Available online: https://www.mentalup.co/blog/ai-grading-tools-for-teachers (accessed on 20 August 2025).
  82. Jeon, J.; Lee, S. Large Language Models in Education: A Focus on the Complementary Relationship between Human Teachers and ChatGPT. Educ. Inf. Technol. 2023, 28, 15873–15892. [Google Scholar] [CrossRef]
  83. Weber-Wulff, D.; Anohina-Naumeca, A.; Bjelobaba, S.; Foltýnek, T.; Guerrero-Dib, J.; Popoola, O.; Šigut, P.; Waddington, L. Testing of Detection Tools for AI-Generated Text. Int. J. Educ. Integr. 2023, 19, 26. [Google Scholar] [CrossRef]
  84. Springer Nature. Why AI Writing Detectors Fall Short: Scientific Critique of Detection Tools; Springer White Papers; Springer Nature: Cham, Switzerland, 2023. [Google Scholar]
  85. University of Pittsburgh Teaching Center. Why We Disabled AI Detection in Turnitin. 2024. Available online: https://teaching.pitt.edu (accessed on 3 August 2025).
  86. UNESCO. Artificial Intelligence and the Future of Teaching and Learning: Insights and Recommendations. UNESCO Report. 2023. Available online: https://unesdoc.unesco.org/ark:/48223/pf0000386351 (accessed on 3 August 2025).
  87. Committee on Publication Ethics (COPE). Guidance for Editors: AI Tools and Authorship; COPE: London, UK, 2023; Available online: https://publicationethics.org (accessed on 28 September 2025).
  88. Moya, B.; Eaton, S.E.; Pethrick, H.; Hayden, K.A.; Brennan, R.; Wiens, J.; McDermott, B.; Lesage, J. Academic Integrity and Artificial Intelligence in Higher Education Contexts: A Rapid Scoping Review. Can. Perspect. Acad. Integr. 2024, 7. [Google Scholar] [CrossRef]
  89. Sallu, S.; Raehang, R.; Qammaddin, Q. Exploration of Artificial Intelligence (AI) Application in Higher Education. J. Comput. Netw. Arch. High Perform. Comput. 2024, 6, 315–327. [Google Scholar] [CrossRef]
  90. Rodzi, Z.; Rahman, A.A.; Razali, I.N.B.; Nazri, I.S.B.M.; Abd Gani, A.F. Unraveling the Drivers of Artificial Intelligence (AI) Adoption in Higher Education. In Proceedings of the 2023 International Conference on University Teaching and Learning (InCULT) 2023, Shah Alam, Malaysia, 18–19 October 2023; pp. 1–6. [Google Scholar] [CrossRef]
  91. UNESCO. Artificial Intelligence Needs Assessment Survey—Africa; UNESCO: Paris, France, 2023; Available online: https://www.unesco.org/en/articles/unesco-launches-findings-artificial-intelligence-needs-assessment-survey-africa (accessed on 3 August 2025).
  92. Bozkurt, A.; Hollands, F.; Mishra, S. AI Teaching and Learning Manifesto: A Global Vision; UNESCO Institute for Information Technologies in Education: Paris, France, 2024. [Google Scholar]
  93. Bozkurt, A.; Xiao, J.; Farrow, R.; Bai, J.Y.; Nerantzi, C.; Moore, S.; Dron, J.; Stracke, C.M.; Singh, L.; Crompton, H.; et al. The Manifesto for Teaching and Learning in a Time of Generative AI: A Critical Collective Stance to Better Navigate the Future. Open Prax. 2024, 16, 487–513. [Google Scholar] [CrossRef]
  94. Mariam, G.; Adil, L.; Zakaria, B. The Integration of Artificial Intelligence (AI) into Education Systems and Its Impact on the Governance of Higher Education Institutions. Int. J. Prof. Bus. Rev. 2024, 9, 13. [Google Scholar] [CrossRef]
  95. An, Y.; Yu, J.H.; James, S. Investigating Higher Education Institutions’ Guidelines and Policies on Generative AI. Int. J. Educ. Technol. High. Educ. 2025, 22, 57. [Google Scholar] [CrossRef]
  96. Khlaif, Z.N. Redesigning Assessments for AI-Enhanced Learning. Educ. Sci. 2025, 15, 174. [Google Scholar] [CrossRef]
  97. Khlaif, Z. Rethinking Educational Assessment in the Age of Artificial Intelligence. In Fostering Inclusive Education with AI and Emerging Technologies; IGI Global Scientific Publishing: Hershey, PA, USA, 2025; pp. 131–144. [Google Scholar] [CrossRef]
Figure 1. Important milestones of AI development in higher education [Created with GNU Image Manipulation Program (GIMP), version 3.0.4].
Figure 2. Core AI literacy competencies for students and faculty members [created with GNU Image Manipulation Program (GIMP), version 3.0.4].
Figure 3. Graphical representation of AI general use in higher education [created with GNU Image Manipulation Program (GIMP), version 3.0.4].
Figure 4. ETHICAL use of AI in university coursework [Based on Cheng et al. [3]; created with GNU Image Manipulation Program (GIMP), version 3.0.4].
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
