Review

Ethical Problems in the Use of Artificial Intelligence by University Educators

by Roman Chinoracky and Natalia Stalmasekova *
Faculty of Operation and Economics of Transport and Communications, University of Zilina, Univerzitná 8215/1, 010 26 Žilina, Slovakia
* Author to whom correspondence should be addressed.
Educ. Sci. 2025, 15(10), 1322; https://doi.org/10.3390/educsci15101322
Submission received: 31 July 2025 / Revised: 23 September 2025 / Accepted: 2 October 2025 / Published: 6 October 2025

Abstract

This study examines the ethical problems of using artificial intelligence (AI) applications in higher education, focusing on activities performed by university educators. Drawing on Slovak legislation that defines educators’ responsibilities, the study classifies their activities into three categories: teaching, scientific research, and other (academic management and self-directed professional development). From a methodological standpoint, a thematic review of 42 open-access, peer-reviewed articles published between 2022 and 2025 was conducted across the Web of Science and Scopus databases. Relevant AI applications and their associated ethical issues were identified and thematically categorized. The results show that AI applications are extensively used across all analysed areas of university educators’ activities. The most widely used are generative language models, editing and paraphrasing tools, learning and assessment software, management and search tools, visualization and design tools, and analysis and management systems. Their adoption raises ethical concerns that can be thematically grouped into six categories: privacy and data protection, bias and fairness, transparency and accountability, autonomy and oversight, governance gaps, and integrity and plagiarism. The results provide universities with a structured analytical framework for assessing and addressing ethical risks related to AI use in specific academic activities. Although the study is limited to open-access literature, it offers a conceptual foundation for future empirical research and for the development of ethical, institutionally grounded AI policies in higher education.

1. Introduction

The year 2022 marked the public introduction of OpenAI’s ChatGPT platform. This platform relies on deep learning-based models (known as large language models) that enable the generation of coherent text, code, images, and other content at a level almost indistinguishable from human-generated content. The launch of ChatGPT significantly reshaped public perceptions of artificial intelligence (AI), paving the way for its widespread adoption and transformative impact across multiple industries. At the same time, academic interest in the application of AI to the work processes of university educators continues to grow. This trend is closely linked to the needs of digital transformation in higher education, a process accelerated by the COVID-19 pandemic. To understand the research gaps concerning the extent of AI penetration into the university educator environment, we conducted an analysis of scientific review papers. We used the Web of Science database with the search phrases “AI applications for teaching in higher education” and “Use of AI in higher education”. Search conditions included review article (as document type), open access, and publication years from 2022 to 2025. This query identified a total of 51 relevant studies, from which our own screening yielded a sample of 27 review studies.
According to the findings of these studies, AI in higher education teaching can be understood as software with the potential to enhance instructional efficiency by supporting personalized learning (Allam et al., 2025) and language education (Deep et al., 2025). Another strength of AI lies in its ability to reduce teachers’ administrative workload (Lee & Moore, 2024). In practice, AI improves efficiency through systems for student performance prediction, automated assessment, chatbots, and intelligent assistants that provide feedback and detect patterns in student behaviour (Allam et al., 2025; Deep et al., 2025; Llurba & Palau, 2024). AI also holds distinct value in the context of social robots and agents acting as tutors, which may impact teaching effectiveness through cognitive learning support (Pai et al., 2024).
Looking at the gaps in AI usage: AI integration is often reduced to technological functionality, without considering how the use of AI applications transforms teaching and learning processes within specific subjects. This insight is further supported by other studies (Garlinska et al., 2023; Lopez-Regalado et al., 2024), which show that research on AI applications tends to focus on technical outcomes and rarely systematically considers the learning process. There is no conceptual framework for AI as an application within educational theory. In many cases, AI is applied to support factual learning, while its impact on the development of argumentative skills or independent reasoning remains largely unexplored (Demiröz & Tıkız-Ertürk, 2025; Khairullah et al., 2025). In addition, there is no classification of AI applications based on pedagogical purpose, which significantly hinders their targeted didactic application (Celik et al., 2022; Dempere et al., 2023). For example, there is a lack of systematic analysis of differences in the use of AI applications across medical, technical, and humanities disciplines (Ma et al., 2025). Another unresolved issue is the use of learning analytics. While learning analytics have the potential to improve instructional design, there is a lack of standardized frameworks for their pedagogical interpretation and for involving teachers in data-driven decision-making (Drugova et al., 2024).
Another research gap concerns the role of the teacher in AI-mediated teaching processes. Many studies reduce teachers to the role of data providers or passive users of AI outputs, overlooking their active and creative involvement in the design of instructional scenarios (Bozkurt, 2023). Such an approach fails to reflect the complexity and professionalism of teaching-related decision-making and diminishes the importance of human pedagogical intuition and value-based judgment. It remains insufficiently explored how AI transforms teacher identity, what new competencies are required of educators, and to what extent they are prepared for these changed conditions (Chee et al., 2024; Khairullah et al., 2025). Further findings suggest that teachers lack adequate applications, methodological guidelines, and support to interpret AI system outputs, which contributes to mistrust and caution in adopting these technologies in pedagogical practice (Celik et al., 2022; Ocen et al., 2025; Ansari et al., 2024). Professional development in the field of AI tends to focus primarily on technical aspects, with limited emphasis on pedagogical skills, values, and teachers’ decision-making competencies in the instructional process (Chee et al., 2024; Khairullah et al., 2025).
Although immediate and individualized feedback is available (Lee & Moore, 2024; Sembey et al., 2024), there is little research on how students interpret this feedback, what impact it has on their self-regulation, or whether they find it credible (Sharadgah & Sa’di, 2022; Aljuaid, 2024). The ethical and methodological requirements of using AI to mark complex academic work such as essays, arguments, or works of art are equally unresolved (Allam et al., 2025). Several works report low explainability of algorithmic judgments, which can threaten the fairness and transparency of the marking process (Celik et al., 2022; Llurba & Palau, 2024).
The area of academic integrity has been identified as another research challenge. Several studies point to the limited effectiveness of existing plagiarism detection systems, especially when dealing with texts generated by large language models. At the same time, there is concern about the lack of standardized applications and the low transparency of these systems’ decision-making processes, which can lead to false accusations and a loss of user trust in digital control mechanisms (Pudasaini et al., 2024).
The impact of AI on student demographics remains an insufficiently covered area of research. Most studies operate with an implicit model of a “standard student,” rarely addressing the needs and experiences of students with disabilities, language barriers, or neurodiverse profiles (Chee et al., 2024; Deep et al., 2025). There is also a lack of analysis regarding the accessibility of AI applications in terms of socioeconomic status or digital infrastructure (W. Pang & Wei, 2025; Salas-Pilco & Yang, 2022). In this context, it is important to recall that technologies can not only eliminate inequalities but also reproduce and deepen them, especially when designed without the participation of marginalized groups (Phokoye et al., 2024; Tapullima-Mori et al., 2024). From a technological perspective, similar insights apply to robotics: robotic technologies are unevenly distributed across geographic regions, and technical development is insufficiently connected to pedagogical practice. Findings indicate that in developing countries, the main barriers are inadequate infrastructure and a lack of teacher training (Phokoye et al., 2024).
Research on the ethical problems of using AI in higher education remains limited. Existing studies focus on specific areas of interest. Several of these studies highlight inadequate personal data protection, non-transparent algorithmic decision-making, and the absence of informed consent (Aljuaid, 2024; Ocen et al., 2025; Bozkurt, 2023). Risks such as algorithmic bias, the potential for automated stigmatization of students, and the loss of trust in educational institutions largely remain theoretical concerns without empirical grounding (Farrelly & Baker, 2023; Dempere et al., 2023). Moreover, research is lacking on the impact of AI on academic rights, such as the right to explanation, copyright of AI-assisted output, or the right to human judgment (Sharadgah & Sa’di, 2022; Sengul et al., 2024).
From an institutional governance perspective, AI applications are no longer merely supplementary to teaching but are increasingly involved in decision-making processes at the level of university management. However, findings show that research in this area is lagging. There is a lack of analysis on how AI influences decisions related to admissions, performance evaluations, scholarship allocation, or strategic planning (Khairullah et al., 2025; Ocen et al., 2025). Data on the return on investment in AI systems and their impact on the workload of academic and administrative staff are also missing (Tapullima-Mori et al., 2024).
Research gaps were also identified in the evaluation of AI system effectiveness. Some studies report successful outcomes of pilot projects, but few are based on research designs that would permit generalization of their findings. Research on the impact on learning, teaching, engagement, or employability is lacking (Bozkurt, 2023). Comparative studies of different applications and methodologies, as well as summative or evaluative studies of pedagogical impact, are non-existent (Llurba & Palau, 2024).
Based on the synthesis of findings from the analysed review papers, we identified 11 groups of research gaps.
Despite the advancement of AI platforms and applications, these research gaps remain unaddressed within academia. Each of these 11 research gaps represents an area for potential future research. However, it is not the aim of this article to examine all identified areas. Instead, this article focuses on one specific domain, research gap number 9: “Limited systematic review of the ethical problems associated with AI integration in higher education”.
An ethical issue, a key term in framing this gap, is defined in the work of Kovac (2018) and Colnerud (2013) as a situation in which one must choose between conflicting moral values or principles, often with no clear-cut solution available. Ethical issues are distinguished from factual or technical issues in that they involve qualified judgments about what is right, just, or responsible in particular circumstances. In scientific and medical practice, ethical issues may require weighing cost against safety, and individual rights against broader societal interests. Unlike yes-or-no problems, moral dilemmas often involve trade-offs between conflicting goods or a choice of the lesser of two evils.
The objective of this study is to investigate the extent to which, and the problems with which, the ethical dimensions of AI have been explored in the context of higher education. Our goal is not to be as narrowly focused as many existing studies. Instead, we aim to offer a broad and general review that outlines the ethical issues associated with the use of AI applications across the full range of university educators’ professional activities.
This research gap represents a combination of a theoretical and an empirical gap, as it addresses the insufficient conceptualization and limited systematic investigation of the ethical dimensions of AI use in the context of higher education.
For the purposes of conducting research that aims to address the selected research gap (“Limited systematic review of the ethical problems associated with AI integration in higher education”), it is necessary to define the range of activities performed by educators working in higher education institutions. A clear understanding of these activities enables the identification of those that may be supported or transformed by AI, thereby providing a foundation for the analysis of the ethical problems arising from AI-assisted practices of university educators.
As the authors of this article, we are from Slovakia and thus draw upon Act No. 131/2002 Coll. on Higher Education Institutions (National Council of the Slovak Republic, 2002), which defines the scope of activities carried out by university educators within the Slovak context. The research itself is financed by the Slovak Research and Development Agency under project VV-MVP-24-0375, titled “An ethical concept for the use of artificial intelligence in higher education”. Our focus is therefore primarily on the activities defined by Slovak legislation, and the output of this study forms the basis for further research within this project. It should be noted that the core responsibilities of university educators are likely to remain consistent across countries, so the outcomes of this research may be generalizable and applicable to educators beyond the Slovak higher education system.
According to Act No. 131/2002 Coll. on Higher Education Institutions (National Council of the Slovak Republic, 2002), the work of every university educator employed at a higher education institution can be divided into teaching, scientific research, and other activities that are not classified as teaching or research.
Teaching activities include:
  • Preparation of study materials—Developing syllabi, presentations, worksheets, and supporting materials used in lectures, seminars, and practical sessions.
  • Conducting lectures, seminars, and practical classes—Delivering in-person or distance-based lectures, seminars, and exercises, during which the educator explains course content and assigns tasks for students to complete.
  • Student assessment—Evaluating tests, term papers, oral and written exams, and recording grades in electronic gradebooks.
  • Providing student consultations—Offering support at designated times to explain course content, assignment instructions, and requirements for term and final theses.
  • Supervising final theses—Designing topics and annotations for final theses, announcing available topics, and guiding students through regular consultations, supervision, and academic support during the writing process.
  • Writing opponent reviewer reports for final theses—Preparing assessments of thesis quality from a scientific and professional perspective based on in-depth reading of students’ work. The reviewer provides a written evaluation along with a final grade.
  • Coordinating internships, collaboration with professional practice, and field trips—Organizing and facilitating internships, partnerships with professional environments, and excursions as supporting components of lectures, seminars, and practical training.
Scientific research activities include:
  • Conducting research and development activities—Engaging in basic and applied research based on the educator’s area of expertise, including the preparation of studies and analyses, as well as participation in team-based research projects.
  • Publishing research findings—Presenting research outputs through peer-reviewed journals, conference proceedings, monographs, academic books, and teaching texts.
  • Submitting and managing scientific research project proposals—Applying for research grants and managing related administrative agendas.
  • Cooperation with industry and practice—Conducting research activities in collaboration with external partners and facilitating knowledge transfer between academia and practice.
  • Organizing research events—Planning and coordinating activities aimed at supporting scientific research, such as conferences, workshops, and symposia.
Other activities, not classified as teaching or research, include:
  • Academic management—Serving on academic senates, scientific boards, faculty or university leadership bodies, and participating in departmental, faculty, or institutional management.
  • Professional development—Engaging in self-education and training in teaching and research, both in-person and online, including attending courses and workshops.
For each of these activities, in accordance with the identified research gap, we formulate two research questions: What types of AI applications are being used to support key academic activities performed by university educators? What are the ethical problems of integrating AI into the key educational, research, and academic activities carried out by university educators?
The two research questions complement each other: answering the first provides the basis for answering the second, and answering the second addresses the identified research gap and the research objective of this study. The research questions thus guide the study’s objective.
To identify the AI applications used by university educators, it is necessary to conduct a comprehensive analysis of scientific studies. At the same time, these scientific studies must be focused on identifying ethical problems of AI use by university educators.

2. Materials and Methods

For the purposes of conducting the analysis, we followed the methodological recommendations for in-depth analysis of scientific studies outlined in the publications “Enhancing Transparency in Reporting the Synthesis of Qualitative Research: ENTREQ” by Tong et al. (2012) and “Moving Qualitative Synthesis Research Forward in Education: A Methodological Systematic Review” by Maeda et al. (2022).
The search strategy we adopted was systematic. First, we selected the databases to which we, as researchers affiliated with the University of Zilina, have access: the scientific databases Web of Science and Scopus. In both databases, we searched for studies using keywords and Boolean operators (each database has its own specific syntax). The tables with the queries we used are provided in Appendix A and Appendix B. Each query contains keywords divided into three groups. The first group was designed to cover, as broadly as possible, each work activity and its associated sub-activities of university educators. The second group filters scientific papers on whether they examine ethical aspects. The third group filters studies that focus on higher education.
In the Web of Science database, additional filtering of results was carried out outside the query: within the list of records, a filter was applied for Open Access and for publication years from 2022 to 2025. The lower bound of the publication year range was chosen because the AI application ChatGPT was launched in that year; this application represents a turning point in how AI began to be applied in public, making 2022 the natural lower limit of our publication year filter. The query applied in the Scopus database was the same as the one used in Web of Science, with the criteria that publications must be open access and published between 2022 and 2025 included directly in the query. The list of records used in the identification step is shown in Table 1.
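To make the three-group structure of the queries concrete, the following minimal Python sketch shows how such a query string could be assembled; the keywords are hypothetical placeholders rather than our actual search terms, which are listed with their database-specific syntax in Appendix A and Appendix B.

# Minimal sketch of the three-group Boolean query structure described above.
# All keywords below are illustrative placeholders; the real queries are
# provided in Appendix A (Web of Science) and Appendix B (Scopus).

# Group 1: work activities and sub-activities of university educators
activities = ["teaching", "assessment", "thesis supervision", "peer review"]
# Group 2: ethical aspects
ethics = ["ethics", "ethical issues", "academic integrity", "privacy"]
# Group 3: higher-education context
context = ["higher education", "university", "academia"]

def or_group(terms):
    # Join the terms of one group with OR, quoting multi-word phrases.
    return "(" + " OR ".join(f'"{t}"' if " " in t else t for t in terms) + ")"

# The three groups are combined with AND, so every record must match
# at least one keyword from each facet.
query = " AND ".join(or_group(g) for g in [activities, ethics, context])
print(query)
# (teaching OR assessment OR "thesis supervision" OR "peer review") AND
# (ethics OR "ethical issues" OR "academic integrity" OR privacy) AND
# ("higher education" OR university OR academia)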
In the next step, we proceeded with screening and filtering of the records on the basis of their titles and abstracts. To facilitate the screening of a large dataset (n = 5517 records), we utilized ChatGPT as a pre-screening tool. The bibliographic records (titles and abstracts) exported from Web of Science and Scopus were uploaded into the model in structured Excel (xlsx) format. ChatGPT was instructed to identify whether each record explicitly addressed either of the two research questions: (1) the use of AI applications to support the academic activities of university educators, and (2) the ethical issues associated with such use. If no explicit reference was found, the model was prompted to detect implicit relevance based on thematic and contextual indicators (e.g., inferred ethical implications, indirect mentions of AI use in teaching or research). Based on ChatGPT’s classification and annotations, we manually reviewed and validated each record to confirm its inclusion. Articles lacking sufficient relevance or falling outside the scope were excluded. This hybrid process allowed us to reduce the dataset efficiently while maintaining human oversight over all final inclusion decisions. Although this method increased scalability and efficiency, we acknowledge that it may introduce bias due to reliance on model interpretation. Therefore, sample-based manual validation was conducted during screening to verify that the inclusion process remained consistent with the study’s objectives. We present this methodological approach as an experimental enhancement to traditional review processes rather than a substitute for systematic screening protocols.
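As a minimal sketch, this hybrid pre-screening step could be implemented roughly as follows; the file name, column names, model choice, prompt wording, and answer format are illustrative assumptions rather than the exact setup used in this study, and every flagged record still goes through manual review.

# Sketch of LLM-assisted pre-screening of bibliographic records (assumed
# xlsx export with "Title" and "Abstract" columns). Final inclusion
# decisions remain with the human reviewers.
import pandas as pd
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = (
    "Does the following record explicitly or implicitly address either "
    "(1) the use of AI applications to support academic activities of "
    "university educators, or (2) the ethical issues of such use? "
    "Answer RELEVANT or NOT_RELEVANT, then give a one-sentence reason.\n\n"
    "Title: {title}\nAbstract: {abstract}"
)

def classify(title: str, abstract: str) -> str:
    # One classification call per record; returns the model's annotation.
    response = client.chat.completions.create(
        model="gpt-4o",  # illustrative model choice
        messages=[{"role": "user",
                   "content": PROMPT.format(title=title, abstract=abstract)}],
    )
    return response.choices[0].message.content

records = pd.read_excel("records_wos_scopus.xlsx")  # hypothetical file name
records["llm_annotation"] = [
    classify(row.Title, row.Abstract) for row in records.itertuples()
]
# Records whose annotation starts with "RELEVANT" become candidates; they
# are exported for manual validation, not included automatically.
records["candidate"] = records["llm_annotation"].str.startswith("RELEVANT")
records[records["candidate"]].to_excel("candidates_for_manual_review.xlsx")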
In the final step of document filtering, the articles were reviewed, and those meeting the inclusion criteria were included for further analysis (Table 2). In this phase, we, as authors, searched the articles for information explicitly or implicitly related to the research questions and the overall study goal. Records not relevant to the topic were excluded.
In total, findings from 42 of the 103 studies identified in the previous step were incorporated into the analysis (Table 3). These articles are presented in the Results section and served as the basis for addressing the research questions. The final sample provided sufficient information to support the study’s overall goal. In some cases, individual articles contributed to multiple activities and sub-activities, resulting in overlapping entries—particularly when the same study described the use of AI applications and related ethical implications in various contexts.

3. Results

The outputs from the analysed articles are presented as a sequence of answers to the research questions listed at the end of the Introduction. This means that we first present which AI applications academia identifies as supportive of selected activities carried out by university teachers, and then present the ethical issues that arise when using these applications. The text is supplemented by an overview table that summarizes the findings. Activities for which no relevant records were identified are not discussed further in the analysis.

3.1. Teaching

3.1.1. Preparation of Study Materials

The creation of study materials is an area where the use of artificial intelligence (AI) in higher education is developing at a steady pace. Alrayes et al. (2024) report that large language models such as ChatGPT influence how new content is generated, how scientific articles are summarized, and how personalized assignments and tests are prepared.
Tapalova and Zhiyenbayeva (2022) also note that smart chatbots and machine learning software are used to automate assignment grading and track students’ performance. Certain tools such as Gradescope, Knewton Alta, Duolingo, and Knowji enable higher education teachers to design personalized study plans, develop multimodal materials, and respond adaptively to the needs of diverse groups of students.
As Alzakwani et al. (2025) observe, the inclusion of AI applications within learning environments enables the customization of resources, automates many organizational aspects of instruction, and supports the assessment of written work. Haroud and Saqri (2025) further note that educators often rely on generative AI to create visual materials, handle everyday teaching tasks, and provide quick feedback to students. They describe AI as a supporting resource that allows instructors to devote more time to mentoring and student interaction.
Findings from a study by Shakib Kotamjani et al. (2023), conducted among university instructors in Uzbekistan, indicate that 13 out of 15 respondents use AI applications such as ChatGPT or chatbots daily to proofread texts and prepare teaching materials.
The use of AI applications in the preparation of educational content raises several ethical concerns. Alzakwani et al. (2025) identify the issue of accuracy and reliability. Although software like ChatGPT can instantly generate syllabi, teaching materials, or test questions, the output may not be up-to-date or factually correct. Teachers are therefore obliged to diligently check and edit AI-generated work so that incorrect or outdated information is not disseminated and instruction remains of high quality.
There is then the matter of copyright and ownership of work. Generative AI software is trained on vast datasets whose licensing and provenance are often unclear. Without proper safeguards, teachers may unknowingly disseminate copyrighted material used without permission. To guard against this threat, institutions should establish clear operating procedures that require teachers to review AI-generated content and reference sources explicitly (Alzakwani et al., 2025).
Qadhi et al. (2024) state the importance of transparency toward students and the academic community. Using AI-generated content without making its origin transparent may lead to concerns about academic integrity and damage the institution’s credibility. The authors recommend standardized policies that specify when and how AI-generated content should be disclosed to promote openness and responsibility.
Shakib Kotamjani et al. (2023) also point to unequal access to advanced AI applications, which influences the quality of educational materials. Educators who lack access to premium versions must rely on less advanced, freely available alternatives. This results in lower levels of personalization and overall quality, potentially disadvantaging students in less developed regions over the long term.
Haroud and Saqri (2025) highlight the danger of excessive reliance on AI. Although it can free up teachers’ time by automating laborious tasks, it can dampen the motivation to create original materials and deprive students of opportunities to exercise their own analysis and judgment. AI is therefore at its best when used as a complement to teaching, never as a substitute (Table 4).

3.1.2. Conducting Lectures, Seminars, and Practical Classes

Several AI applications have been introduced into higher education to enhance interaction with students and optimize the delivery of lectures, seminars, and exercises. Retscher (2025), in his study on AI and geomatics education, describes a range of applications. One is the use of virtual teaching advisors and chatbots, such as ChatGPT, which can respond to students’ questions immediately. They can be used both during and outside class time, reducing repetitive questions directed at lecturers and freeing up space for one-to-one or more substantial face-to-face interactions.
Automated grading systems are also used to evaluate essays and written assignments quickly. These systems make feedback in seminar or lab-based teaching more responsive and support the monitoring of students’ progress. Learning analytics platforms gather data from quizzes, forums, and other interactive elements to help instructors adjust the content and structure of lessons according to the group’s needs. Intelligent tutoring systems can dynamically distribute tasks and provide additional guidance, adjusting materials to students’ individual levels (Retscher, 2025).
Kazimova et al. (2025) offer further examples. Their review highlights intelligent tutoring systems that offer adaptive support during seminars and lab sessions, simulating the effect of personal tutoring. Natural language processing software like WriteLab and Grammarly is used to provide real-time corrections of style and grammar in students’ texts. Adaptive learning software like Knewton and Smart Sparrow responds instantly to students’ results and continually adapts materials to maintain a balanced level of interest and challenge.
Although AI applications can support teaching, they introduce ethical risks that must be considered. Retscher (2025) and Kazimova et al. (2025) discuss several concerns linked to specific applications and their implementation:
Chatbots and virtual assistants offer immediate and convenient responses and may reduce instructor workload. Nevertheless, students who are better at formulating their questions are likely to receive better responses. This raises fairness concerns and makes it difficult to assign responsibility when inaccurate or confusing responses are given (Retscher, 2025; Kazimova et al., 2025).
Computerized grading programs offer rapid turnaround of marks but rely on pattern matching that may penalize students who use nonstandard organization or language. This can introduce algorithmic bias and discourage innovative or non-Western styles of presentation. Unless the scoring process is clearly explained, both teachers and students can lose faith in the fairness of the marking process (Retscher, 2025; Kazimova et al., 2025).
Learning analytics use online activity data such as login details, use of online discussion boards or forums, and performance on quizzes. While this facilitates pedagogic adaptation, it can violate student privacy if data are not anonymized. There is further concern that data-informed profiling may exacerbate uneven allocation of resources or assistance and entrench existing inequalities (Retscher, 2025; Kazimova et al., 2025).
Adaptive learning and intelligent tutoring software individualize content based on students’ current performance. While this is highly personalized, it may disadvantage students with limited internet access or those inexperienced with the online environment. Mistakes also occur in identifying skill levels, with tasks pitched too low or too high, and with no clear responsibility for these misalignments (Kazimova et al., 2025).
While natural language processing applications such as Grammarly and WriteLab offer immediate writing corrections, they risk fostering excessive reliance, hindering genuine skill development, and raising concerns about originality and intellectual property (Kazimova et al., 2025) (Table 5).

3.1.3. Student Assessments

Generative AI applications are becoming a frequent component of formative assessment and pedagogical support in higher education. As described by Ulla et al. (2024), software such as ChatGPT gives students access to real-time, individualized feedback while they write. It detects grammatical errors and suggests paraphrasing or elaboration, allowing students to self-assess continuously and avoid submitting substandard work. Generative AI can additionally tailor tasks to a student’s ability, create personalized exercises, conduct realistic dialogues, and recommend context-specific tasks, supporting a learner-centred approach.
Williams (2024) takes these applications a step further, describing how students at risk of academic failure can be identified by analysing their online learning behaviour. Teachers can then intervene at the right time or generate new tasks appropriate to students’ interests and ability levels. AI is equally engaged at the institutional level, especially in admissions departments, where applicant data are processed and used to create suitable study plans.
T. Y. Pang et al. (2024), in a study at RMIT University, describe how large language models (LLMs) such as ChatGPT and Bard accelerate language correction and style editing in student texts. These applications also enable the use of programmable feedback templates based on grading rubrics, reducing the time required for teachers to provide structured feedback and allowing greater attention to qualitative aspects of evaluation.
Isiaku et al. (2024) identify other educational uses of AI, including automatic quiz generation, essay scoring based on preset criteria, and the preparation of lesson plans and case studies that align with students’ learning needs.
Another example comes from Lünich et al. (2024), who discuss predictive Academic Performance Prediction systems. These systems evaluate students’ past academic performance and other relevant data to forecast their future outcomes. Based on this analysis, support such as mentoring or tutorials is distributed according to predefined fairness models (equality, equity, or need).
The integration of AI into student assessment practices presents several ethical and even legal risks. Both studies by Ulla et al. (2024) and Williams (2024) draw attention to the risk of academic dishonesty linked to the use of generative models. These applications can produce entire essays or partial responses that students may submit as original work. This challenges the validity of assessment and requires university educators to rethink assignment design. Both research teams suggest focusing on the process of student work—such as through staged submissions or reflective components—and recommend the use of AI-detection applications to discourage and detect misuse.
Ulla et al. (2024) and T. Y. Pang et al. (2024) also raise concerns about student data privacy. When student work is processed by cloud-based AI systems, detailed digital records are created, including information about performance and personal learning notes. Although such frameworks technically guarantee individuals’ rights of access and data erasure, both papers argue that in practice such information cannot be deleted entirely from future training of the models. To address this issue, the authors emphasize the need for institutional safeguards such as anonymization mechanisms, stringent access controls, and constant vigilance regarding service provider agreements.
T. Y. Pang et al. (2024) highlight the persistence of algorithmic bias in large language models. Since these models are trained on broad and poorly documented datasets, they may unconsciously perpetuate stereotypical or biased patterns in the output they generate. This may lead to biased treatment of students based on race, ethnicity, or gender.
The question of ethical and legal responsibility for inaccurate or misleading feedback is discussed by T. Y. Pang et al. (2024) and Cowling et al. (2023). These authors point out that although AI applications can support evaluation, the final responsibility must always lie with the human educator. They propose that institutional policies should clearly prohibit the use of unverified AI feedback and require all content to be validated before being shared with students, to prevent potential legal disputes or harm to students.
Williams (2024) emphasizes the importance of transparency in academic assessment involving AI. Students should be informed when and why AI is used in reviewing their work, and this should be stated in course syllabi or institutional policy documents. Such transparency, Williams argues, maintains trust between instructors and students and promotes the appropriate use of digital tools in teaching (Table 6).

3.1.4. Supervising Final Theses

Generative AI applications such as ChatGPT are used to aid the supervision of undergraduate and postgraduate theses. As described by Cowling et al. (2023), ChatGPT can provide students with instant feedback at multiple points along the research continuum, from helping students formulate research questions to writing literature reviews, correcting spelling and style inconsistencies, and generating ideas related to the topic. Students use the tool to review their own work before discussing it with a supervisor. The authors argue that ChatGPT contributes to building students’ autonomy and expertise by serving as their first port of call for advice. It helps reduce supervisors’ workload by handling repetitive or technical feedback while enhancing students’ confidence and engagement with academic writing and research preparation.
Cowling et al. (2023) make the key point that while AI-enhanced supervision has many benefits, generative programs such as ChatGPT cause a series of ethical and practical problems. Foremost among these is the software’s failure to understand the unique context of a specific student researcher’s work. This can produce overly generic and possibly inaccurate advice that ignores the subtleties of a specific academic topic.
The authors also point to the biased nature of ChatGPT’s training data. As Cowling et al. (2023) explain, because the model is trained on large datasets that include historically embedded biases, it may reproduce stereotypes—for example, in how gender or culture are represented in generated responses. This can lead to subtle forms of discrimination or unequal framing of research content.
A further issue discussed by Cowling et al. (2023) concerns the application’s lack of connection to ethical principles in academic research. Since ChatGPT does not operate according to established scientific or professional codes of conduct, its suggestions may deviate from the norms of research integrity. The authors argue that this raises questions about the appropriateness of relying on AI in research preparation and stress the need for educators to provide students with clear pedagogical guidance. This should include not only technical instruction but also a deeper understanding of the ethical limits of using AI in thesis writing and supervision (Table 7).

3.1.5. Preparation of Opponent Reviews

In the academic environment, AI applications are increasingly being explored as a form of support in evaluative tasks, including the peer review process. Farber (2025) describes the use of the Claude-3 model, developed by Anthropic, which was tasked with reviewing ten manuscripts in the social sciences and humanities. The model operated under identical instructions and evaluation criteria as human reviewers. According to Farber, Claude-3 was able to assess the structure of academic texts, their methodological rigor, and formal presentation, while offering consistent and time-efficient results. This use of AI accelerated the peer review process and demonstrated potential in terms of replicable evaluation.
In a related context, Francis et al. (2025) examine how generative models such as ChatGPT and Gemini are used to produce feedback, summarize content, and assess written work. Although their study does not focus explicitly on peer review, these activities closely resemble it in function. The authors suggest that educators and academic reviewers may increasingly adopt such applications for preliminary analysis and qualitative assessment of scholarly writing.
Farber (2025) outlines several ethical concerns that emerged during the testing of the Claude-3 model in the peer review process. One prominent issue is the presence of bias inherited from the model’s training data. According to Farber, this bias may lead to the systematic undervaluing of certain research topics or authors, particularly those from underrepresented fields. The author also draws attention to the model’s limited transparency. Because Claude-3 provides no rationale for its evaluative decisions, reviewers and editors cannot determine how the system arrived at specific recommendations.
In addition, Farber (2025) notes that the model often displayed a tendency toward leniency and idealism in its assessments. This created inconsistencies when compared with human reviews and risked undermining the quality and reliability of scholarly evaluation. The system occasionally failed to recognize important studies or recommended unrelated sources. Farber warns that over-reliance on AI-generated reviews may erode human critical thinking if reviewers begin to accept outputs without thorough scrutiny.
Further ethical risks are discussed by Francis et al. (2025), who focus on the broader use of generative AI models in academic assessment. One of the primary issues they raise is the phenomenon of hallucination, where AI applications generate information that is factually incorrect or entirely fabricated. These models operate probabilistically and do not provide verifiable references, which poses a direct threat to the credibility of peer review outputs. The authors also express concern about embedded cultural, gender, and ethnic biases that the models may reproduce unconsciously.
Francis et al. (2025) also raise legal concerns. They contend that AI reviewers may inadvertently violate data protection legislation or intellectual property rights, especially when pre-publication work is run through AI programs without the authors’ consent. A further difficulty they identify is cognitive offloading, where reviewers rely too heavily on automated programs. This can undercut the reviewer’s active judgment and threaten academic integrity. They call for the development of clear methodological and ethical guidelines in which transparency and accountability are central and final authority lies with the reviewer (Table 8).

3.2. Scientific Research

3.2.1. Research and Development Activities

A broad range of AI applications is now being applied to academic research processes across higher education. Yaroshenko and Iaroshenko (2023) describe how generative models such as ChatGPT, Llama-2, Jasper Chat, Google Bard, and Microsoft Bing are used to help researchers design research plans, choose variables, and process data. They are equally useful for data and text mining with big data.
Alqahtani et al. (2023) characterize the use of natural language processing technologies in research areas of bioinformatics, pharmaceutical science, and public health. In their analysis, large language models like ChatGPT are used to examine electronic health records, interpret clinical information, and convert molecular structures. They also serve as programming assistants when integrated via APIs.
According to Dzogovic et al. (2024), machine and deep learning models are essential in processing immense datasets in genetics, ecology, economics, and health sciences. The authors note that natural language processing tools like Amazon Alexa and Google Assistant supplement scholarship by identifying research gaps, carrying out literature reviews, and guiding research questions. These technologies also automate repetitive tasks and help researchers formulate scientifically relevant problems.
Sobaih (2024) emphasizes that generative applications like ChatGPT, Bard (now Google Gemini), Bing Chat, and Ernie are commonly used to support translation, text editing, data analysis, idea generation, and journal selection. They are used by peer reviewers and journal editors too to assist with manuscript evaluation.
Acosta-Enriquez et al. (2025) reference platforms like Claude, Gemini, ScopusAI, Elicit, and ResearchRabbit that are used by researchers to design projects, perform systematic reviews of the literature, and write manuscripts. According to the authors, these applications enhance research planning and increase productivity.
Kurtz et al. (2024) describe how generative AI models including ChatGPT, Midjourney, Microsoft Copilot, and Gemini are used for hypothesis generation, data visualization, and literature synthesis. Butson and Spronken-Smith (2024) add that applications like Rayyan, Scite, Elicit, Covidence, AskYourPDF, and Papers are useful in systematic reviews and editorial processes. Nartey (2024) explains that these technologies also assist with structuring arguments, developing research questions, and drafting academic manuscripts.
Several ethical risks are linked to the use of AI applications in research. Yaroshenko and Iaroshenko (2023) highlight the generation of hallucinated or inaccurate content, algorithmic bias, black-box decision-making, reliance on outdated data, and unclear responsibility for errors. According to the authors, these factors can threaten research integrity and undermine trust in AI-supported results.
Alqahtani et al. (2023) point to the generation of misleading or plagiarized content as another major concern. The authors note that AI applications lack contextual understanding and operate without clear attribution, increasing the risk of unintentional academic misconduct. They also emphasize the problem of system opacity and the potential for data privacy breaches during automated processing.
Dzogovic et al. (2024) add that many AI applications rely on biased or non-transparent datasets, making their outputs unreliable or exclusionary. In their view, risks also include weak data protection, unequal access to advanced AI applications, and excessive reliance on automation that may reduce researchers’ engagement and critical thinking.
Sobaih (2024) lists further risks of utilizing AI applications, such as infringement of privacy, blurring of authorship, plagiarism, and information distortion. He goes on to describe how the misuse of AI applications can discourage team collaboration and cause mental fatigue when researchers are asked to read or correct machine output.
According to Acosta-Enriquez et al. (2025), algorithmic opacity, data leakage, falsified outputs, and the erosion of researcher autonomy are major ethical issues. They warn that some uses of AI can violate core academic values if not strictly regulated and recommend that ethical awareness be embedded in institutional research practice.
Kurtz et al. (2024) refer to inaccuracies in AI-generated work, questionable authorship, compromised critical thinking, and unresolved data and intellectual property issues. They argue that clear institutional guidelines are necessary to ensure responsible use of AI applications within research communities.
Although Al-Zahrani (2024) discusses AI primarily in education, the moral issues addressed apply directly to scholarship: non-transparent algorithmic processing, intrinsic biases in training datasets, and unclear data provenance and ownership, all of which jeopardize scholarly accountability.
Butson and Spronken-Smith (2024) warn about systemic threats posed by AI applications. These include undermined authorship attribution, opacity in decision-making, marginalization of qualitative research methods, ethical issues related to informed consent and data reuse, and the deterioration of originality and peer review quality.
Nartey (2024) also raises concerns about threats to academic integrity caused by hallucinated outputs, distortion of scholarly content, loss of creativity, and insufficient training in ethical AI use. According to the author, the absence of clear institutional guidance risks blurring the boundary between human-generated and AI-generated research (Table 9).

3.2.2. Publication of Research Results

In academic publishing, university educators increasingly integrate generative AI applications across various stages of manuscript preparation and submission. According to Robinson et al. (2025), Yaroshenko and Iaroshenko (2023), Ekundayo et al. (2024), and Giray (2024), large language models (LLMs) such as ChatGPT (versions 3.5 and 4), Google Bard (Gemini), Microsoft Bing Chat, Jasper Chat, and LLaMA-2 are among the most commonly used systems. In addition, specialized applications like WordAI, CopyAI, Wordtune, QuillBot, and Grammarly are employed for generating text, drafting abstracts, translating academic writing, editing grammar and style, paraphrasing, and improving coherence and structure. These applications are also used to prepare responses to peer review comments.
Ekundayo et al. (2024) and Shorey et al. (2024) explain how applications like ChatGPT function as editing aides and are particularly effective when authors’ first or native language is not English. They are of great help in compiling complex sections of manuscripts such as introductions, methodology sections, and discussions.
Generative AI is also applied in managing academic sources. As noted by Yaroshenko and Iaroshenko (2023), platforms such as Semantic Scholar, SciFact, Consensus, Research Rabbit, and Semantic Reader facilitate source discovery and evaluate academic relevance. ChatPDF assists with engaging with full-text articles. For visualization purposes, applications like Canva AI, Designs.ai, and DesignerBot are used to produce graphical elements that support research communication.
Giray (2024) adds that Microsoft Office Dictation is used to transcribe spoken language, after which AI-assisted editing improves the written output. According to Sobaih (2024), AI is now present throughout the entire publication process, including in drafting responses to reviewers.
The concept of “human-AI collaboration” is explored by Roxas and Recario (2024), who describe co-authoring practices that include knowledge summarization, outlining, hypothesis formulation, and language editing. They argue that these applications help democratize publishing opportunities for early-career researchers and academics from developing countries.
In terms of ethical concerns: Robinson et al. (2025), Giray (2024), and Mahrishi et al. (2024) warn that AI applications raise serious concerns related to plagiarism, especially source-based plagiarism. This occurs when generated text reflects patterns from training data without proper attribution. The authors emphasize the importance of clearly disclosing which parts of a manuscript were AI-generated and the extent of human input. They also note that AI-reworded content can evade plagiarism detection, undermining editorial oversight.
Yaroshenko and Iaroshenko (2023) and Shorey et al. (2024) address the issue of hallucinated content. They explain that AI systems can fabricate citations or data that appear credible but lack empirical grounding. This practice threatens research objectivity and may spread misinformation in scholarly literature.
Mahrishi et al. (2024) and Roxas and Recario (2024) raise concerns about the opacity of algorithmic decision-making. They note that users cannot identify which training patterns influenced the model’s output, making it difficult to assess the reasoning behind AI-generated text. This lack of transparency complicates academic evaluation and weakens attribution and accountability.
According to Wilkinson et al. (2024) and Giray (2024), another unresolved problem is the question of responsibility. When AI contributes to the writing process, it is unclear who is accountable for the content’s accuracy, originality, and ethical compliance. These authors warn that overreliance on AI may blur authorship and reduce individual academic responsibility.
Sobaih (2024) and Roxas and Recario (2024) introduce the notion of the “AI divide,” referring to the unequal access to advanced AI applications. They argue that subscription-driven models like ChatGPT disadvantage researchers at low-resourced institutions or countries and exacerbate academic publishing disparities globally.
Giray (2024) and Wilkinson et al. (2024) reference the problem of unacknowledged AI use. When researchers fail to disclose the use of AI applications, transparency of authorship is lost and the work can give the false impression of being entirely human-generated.
Robinson et al. (2025) further mention that AI paraphrasing tools can be used to bypass plagiarism detection tools, which raises both editorial and ethical concerns.
Giray (2024) details how AI-written papers, indistinguishable from human work, can be published in questionable journals, aiding the proliferation of substandard journals and eroding confidence in academic publishing.
Mahrishi et al. (2024) and Giray (2024) further point to the erosion of academic skills. Excessive reliance on AI-generated work may erode researchers’ skills in critical thinking, argumentation, and collaboration.
Shorey et al. (2024) and Sobaih (2024) are concerned about the ambiguous ownership of AI-created works. They note that intellectual property rights are usually unspecified in such cases, making it unclear who can claim ownership of the work and who holds the right to publish it (Table 10).

3.3. Other Activities

3.3.1. Academic Management

AI applications are increasingly being adopted in the context of academic management, where they support strategic planning, administrative coordination, and operational decision-making. According to Soodan et al. (2024), commonly used tools include predictive analytics platforms, administrative chatbots, scheduling software, and performance tracking platforms. Predictive models support university administration in capacity planning, budgeting, and curriculum development by providing data-based insights and increasing the flexibility and responsiveness of decision-making processes.
Chatbots improve communication between the administration and its stakeholders, primarily by answering frequent questions from employees and students. In conjunction with scheduling software, these applications help automate repetitive tasks and reduce the pressure on staff. Performance monitoring systems are another important element in this landscape. Soodan et al. (2024) describe how these applications are used to assess employee productivity and the overall operational efficiency of academic institutions.
Nong et al. (2024) provide a concrete example from university-affiliated medical centres, where externally developed AI models (such as those created by the company Epic Systems) are used to support both clinical workflows and institutional strategy. These systems analyse data from electronic records to produce risk assessments or behavioural predictions, which allow academic administrators to respond proactively to various internal and external demands.
The ethical and legal challenges of implementing AI systems in academic management are addressed in detail by two studies, Nong et al. (2024) and Alzakwani et al. (2025), each of which identifies specific risks related to transparency, accountability, and data protection that affect the integrity of decision-making processes in higher education institutions. The findings of Nong et al. (2024) can be summarized as follows:
  • Various commercial AI systems used within university environments may suffer from a lack of transparency, which complicates auditing and verification.
  • Decisions about deploying AI models are often made by narrow expert teams without input from broader academic governance structures.
  • Institutions with limited resources often rely on externally developed systems that lack local validation, deepening inequality across the sector.
  • The authors highlight the importance of “equity literacy” among academic managers, defined as the capacity to identify and address structural injustices amplified by AI-supported decision-making.
Findings from Alzakwani et al. (2025) state the following:
  • The use of AI software in higher education carries substantial risks of data privacy breaches, unauthorised data use, leakage, and hacking.
  • Overdependence on AI has the potential to undermine academic autonomy by replacing human judgment and blurring institutional responsibilities (Table 11).

3.3.2. Professional Development and Self-Learning

Several studies show that, at the higher education level, teachers are increasingly integrating generative AI applications, particularly large language models such as ChatGPT, into their self-directed learning. According to Luckin et al. (2024), ChatGPT serves both as an individualized tutor and as a thinking partner, used to simulate conversation, explore complex ideas, and obtain critique tailored to personal learning needs. van den Berg and du Plessis (2023), in turn, show teachers using ChatGPT to reflect on and edit their lessons, thereby autonomously adjusting their teaching methods.
Teachers in Nikoçeviq-Kurti and Bërdynaj-Syla’s (2024) research use generative AI to keep abreast of academic developments and to strengthen their teaching preparation and, with it, their professional knowledge. Kamali et al. (2024) add empirical evidence that teachers rely on language models to test their hypotheses and deepen their subject-matter knowledge. Al-Zahrani (2024) provides quantitative support for these findings: the transparency of an AI system is significantly and positively related to teachers’ ability to use AI applications productively for their professional development.
Although generative AI has pedagogical strengths for professional learning, the literature identifies a range of ethical issues. Luckin et al. (2024) and Kamali et al. (2024) observe that programs such as ChatGPT can produce text that is persuasive and grammatically correct yet factually wrong. This unreliability can lead to the internalization of inaccurate knowledge, particularly among teachers who are not specialists in every subject area.
Nikoçeviq-Kurti and Bërdynaj-Syla (2024) and van den Berg and du Plessis (2023) emphasize the risk of dependence on AI and the resulting loss of pedagogical autonomy and critical thinking skills.
Other authors (Kamali et al., 2024; Nikoçeviq-Kurti & Bërdynaj-Syla, 2024; van den Berg & du Plessis, 2023) cite the absence of institutional guidelines as a major issue: most teachers make their own judgments about AI use without ethical guidance or defined parameters.
Al-Zahrani (2024) further notes that inadequate transparency and explainability of AI systems reduce teachers’ confidence and make them reluctant to apply these applications in their professional development practice. These results highlight the importance of institutional policy and systematic ethical training in ensuring the appropriate use of AI within self-directed learning (Table 12).

4. Discussion

4.1. Summarisation of Findings

The results of this study answer the stated research questions: (1) What types of AI applications are being used to support key academic activities performed by university educators? (2) What are the ethical implications and issues of integrating AI into key educational, research, and academic activities carried out by university educators?
In summary, for the category of teaching we identified 19 types of AI applications that the literature reports as being used by university educators; for scientific research, 32; and for other activities relating to academic management and self-learning, 7. We expect these numbers to continue growing as the AI market expands year on year. It should be noted that not all AI applications were identified by name. Where a study did not mention a specific application (e.g., ChatGPT), we used the descriptive name employed by its authors, such as automated grading systems, learning analytics platforms, or intelligent tutoring systems.
The categorization of the identified AI applications in Table 13 draws on findings from several open-access studies by Gozalo-Brizuela and Garrido-Merchán (2023), Ye et al. (2024), and Gama and Magistretti (2023). These sources helped us semantically group the identified AI applications into six categories: (1) generative AI models and language models, which produce human-like text and serve as foundational engines for diverse educational and research applications; (2) text generation and editing applications, which assist users in refining language, improving clarity, and rephrasing content for tone, grammar, or style; (3) educational and assessment platforms, which use AI to personalize learning, automate grading, and provide feedback, supporting scalable instruction and adaptive teaching; (4) research support and source management applications, which help academics discover, organize, and synthesize scholarly literature efficiently; (5) visualization and design applications, which leverage AI to generate charts, infographics, and educational visuals from text or data, enhancing communication and learning; and (6) analytical and managerial AI applications, which enable prediction, monitoring, and decision support through dashboards, early warning systems, and performance analytics.
A typology of ethical issues offers a viable way of organizing the AI-related concerns raised in the scholarship. Drawing on works that delineate core ethical considerations in AI development, these concerns can be grouped into six areas: (1) privacy and data protection, relating to data leaks and anonymization failures (Neyigapula, 2024); (2) bias and fairness, relating to discriminatory algorithms and unequal access (Agrawal, 2024; Sargiotis, 2024); (3) transparency and accountability, relating to black-box decision-making and undisclosed AI assistance (Chopra, 2024); (4) autonomy and oversight, relating to instructors’ and reviewers’ undue dependence on AI (Kumar, 2024); (5) governance gaps, relating to absent institutional policies and exclusionary decision-making (Shukla, 2024; Chintoh et al., 2024); and (6) integrity and plagiarism, relating to fabricated citations and untraceable AI output (Floridi, 2024). The typology captures common, actionable ethical concerns rather than hypothetical problems and thereby enables workable institutional responses.
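To illustrate how the two taxonomies above could be operationalized, the following minimal Python sketch (our illustration, not part of the review’s methodology) encodes the six application categories and the six ethical risk areas as plain data structures, so that an institution could tag each AI application it uses with the risk areas to audit. The register entries shown are hypothetical examples.

# Illustrative sketch: category labels follow Table 13 and the six-area
# typology above; the register assignments below are hypothetical examples.
APPLICATION_CATEGORIES = [
    "Generative AI models and language models",
    "Text generation and editing applications",
    "Educational and assessment platforms",
    "Research support and source management",
    "Visualization and design applications",
    "Analytical and managerial AI applications",
]

ETHICAL_RISK_AREAS = [
    "Privacy and data protection",
    "Bias and fairness",
    "Transparency and accountability",
    "Autonomy and oversight",
    "Governance gaps",
    "Integrity and plagiarism",
]

# Hypothetical institutional register: application -> (category, risk areas to audit)
ai_register = {
    "ChatGPT": (
        "Generative AI models and language models",
        ["Integrity and plagiarism", "Transparency and accountability"],
    ),
    "Automated grading systems": (
        "Educational and assessment platforms",
        ["Bias and fairness", "Autonomy and oversight"],
    ),
}

for app, (category, risks) in ai_register.items():
    # Guard against entries that fall outside the two taxonomies
    assert category in APPLICATION_CATEGORIES and set(risks) <= set(ETHICAL_RISK_AREAS)
    print(f"{app} [{category}]: audit for {', '.join(risks)}")

A register of this kind is one concrete form the “structured analytical framework” could take when an institution inventories its AI applications against the ethical risk areas.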
Ethical problems in the teaching activities of university educators fell into five domains (Table 14): bias and fairness included six problems; transparency and accountability, five; autonomy and oversight and integrity and plagiarism, three each; and privacy and data protection, two.
Ethical problems in the scientific research activities of university educators again fell into five domains (Table 15): integrity and plagiarism included ten problems; transparency and accountability, six; bias and fairness and autonomy and oversight, three each; and privacy and data protection, two.
Lastly, ethical problems in the other activities of university educators (academic management and self-learning) once again fell into five domains (Table 16): transparency and accountability included five problems; bias and fairness, three; privacy and data protection and integrity and plagiarism, two each; and governance gaps, one.

4.2. Policy Recommendations and Agenda for Future Research

The findings of this study have several practical implications. From an institutional standpoint, the identification of ethical problems enables discussion of their possible consequences, which in turn drives the development of targeted policies responding to specific challenges in teaching, research, and other activities such as academic management and self-learning. It should be noted that the proposed policy recommendations are based on secondary data, so further research is required to verify their practical applicability.

4.2.1. Teaching

Institutions can mitigate the harms of AI-related ethical problems in teaching by taking a forward-thinking approach founded on openness, inclusivity, and accountability. Below are specific policy recommendations and future research agendas organized by the categories of teaching-related ethical problems (Table 14):
  • Bias and fairness: Higher education institutions should (1) integrate AI literacy modules across curricula to empower all students, regardless of their digital proficiency; (2) require, where feasible, AI application providers to undergo independent audits to assess algorithmic fairness, including cultural, gender, and linguistic biases; (3) mandate regular updates to classroom AI systems to prevent the accumulation of static bias; (4) ensure that AI-generated outputs used for grading are reviewed by humans, particularly for subjective or open-ended tasks; and (5) establish institutional policies that support equitable access to educational AI applications, including open-source alternatives for under-resourced settings. In terms of future research, educators should focus on (1) studying the effects of digital literacy inequality on AI-assisted learning outcomes; (2) examining how students from diverse backgrounds interpret and respond to biased feedback; and (3) exploring the long-term impacts of adaptive learning tools on student performance across a range of disciplines.
  • Transparency and accountability: Higher education institutions should (1) require clear disclosure when AI is used in grading, feedback, or the creation of instructional content; (2) implement institutional policies that ensure AI-driven decisions affecting student outcomes are properly documented (see the audit-record sketch after this list); (3) offer dashboards that allow students to see how their personal data is used in learning analytics; and (4) establish ethics review boards or committees responsible for overseeing the use of AI in teaching environments. Future research should (1) study how different disciplines approach AI accountability; and (2) map institutional practices for documenting AI decisions and the extent to which they align with students’ expectations.
  • Autonomy and oversight: Higher education institutions’ policies should (1) integrate AI literacy into professional development programs to equip educators with evaluative and oversight capabilities; (2) promote collaborative content creation workflows in which educators co-edit AI-generated materials; (3) develop governance frameworks that specify liability in the case of AI errors or content misuse; and (4) encourage reflective teaching practices to counterbalance AI-driven decision-making. Within their future research agenda, researchers should (1) assess how reliance on AI affects educators’ self-efficacy and motivation over time; and (2) examine how AI recommendations alter curriculum planning and educator-student interactions.
  • Integrity and plagiarism: Policy recommendations for higher education institutions are to (1) create clear guidelines for permissible and impermissible uses of generative AI in educational content; (2) provide training on authorship attribution when incorporating AI-generated outputs; (3) require citation or acknowledgment of AI assistance in educational and scholarly deliverables; and (4) establish protocols for verifying the originality of AI-influenced materials. From the researchers’ perspective, future research should (1) evaluate educators’ understanding of copyright implications when using AI applications; (2) investigate best practices in plagiarism detection adapted to the generative AI environment; and (3) assess institutional readiness to respond to violations involving AI-assisted plagiarism.
  • Privacy and data protection: Policy recommendations for higher education institutions are to (1) mandate that third-party AI vendors comply with institutional and regional data protection policies; (2) prohibit long-term storage of sensitive educational data beyond necessary use cases; (3) require anonymization and encryption by default in AI learning systems; and (4) educate both students and educators on the risks of data sharing and surveillance. In terms of the future research agenda, researchers should focus on (1) analysing institutional compliance with the GDPR or equivalent national or institutional frameworks in the context of educational AI; (2) exploring students’ awareness of and attitudes toward data usage in AI-assisted learning; and (3) tracking incidents of privacy breaches associated with educational AI in order to identify systemic vulnerabilities.
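One way to make recommendation (2) of the transparency and accountability item concrete is a minimal audit record stored alongside each AI-assisted decision that affects a student outcome. The Python sketch below is our own illustration of what “properly documented” might look like; all field names are assumptions, not an established standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone

# Minimal sketch of an audit record for an AI-assisted decision affecting a
# student outcome (e.g., an automatically scored essay). Field names are
# illustrative assumptions, not an established standard.
@dataclass
class AIDecisionRecord:
    student_id: str                  # pseudonymized identifier, never a name
    task: str                        # e.g., "essay grading"
    tool: str                        # AI application used
    ai_output_summary: str           # what the system proposed
    human_reviewed: bool             # whether an educator checked the output
    reviewer: str | None = None      # who performed the human review
    disclosed_to_student: bool = False
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat()
    )

record = AIDecisionRecord(
    student_id="stu-0042",
    task="essay grading",
    tool="Automated grading system",
    ai_output_summary="proposed grade B; flagged weak argumentation in part 2",
    human_reviewed=True,
    reviewer="course lecturer",
    disclosed_to_student=True,
)
print(record)

Records of this kind would give ethics review boards a concrete artefact to inspect and would let the student-facing dashboards mentioned above surface the same information.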

4.2.2. Scientific Research

In response to the ethical problems arising from the use of AI in scientific research, higher education institutions are encouraged to adopt a forward-looking strategy that mitigates the consequences of these problems. The following list presents a set of policy recommendations and proposals for future research, systematically organized according to the ethical issues and categories identified within scientific research activities (Table 15):
  • Autonomy and oversight: For educators performing scientific research tasks, higher education institutions should (1) establish training programs focused on developing researchers’ critical thinking and independent reasoning in AI-rich environments; (2) encourage team-based research formats that emphasize interpersonal collaboration to counter AI-driven isolation; (3) create institutional guidance for balanced AI use, including norms for acceptable levels of dependency; (4) incorporate burnout prevention strategies for monitoring cognitive overload among research staff; and (5) promote shared authorship protocols to maintain equitable contributions in AI-assisted writing. Future research should (1) investigate the long-term effects of AI-reliant scholarship on interdisciplinary research culture; and (2) assess strategies that protect human autonomy in AI-augmented academic work.
  • Bias and fairness: University and higher education institution policies should (1) design and enforce institutional standards for detecting and mitigating representational bias in AI outputs; (2) ensure diverse training data inputs for institutional AI applications to reduce cultural and topical bias; (3) implement transparent review protocols for AI-generated content affecting publication and citation equity; and (4) include training for educators on recognizing and challenging stereotypical outputs from AI systems. Future research should (1) examine trust in AI systems when known biases are disclosed versus hidden; and (2) design tools to evaluate whether AI systems recommend resources and topics in a way that treats less prominent or marginal research fields fairly.
  • Transparency and accountability: Higher education policies should be updated to (1) require metadata tagging of AI-generated content and disclosure in all research outputs; (2) create transparent tools that allow human reviewers to see and understand how AI was used in a given academic process; (3) establish independent oversight bodies to review AI-influenced decisions in academic workflows; and (4) create protocols for disclosing AI involvement at all stages of manuscript development, peer review, and revision. Future research should (1) examine how metadata-driven transparency affects scholarly norms of authorship and peer feedback; and (2) study how AI disclosure practices influence reviewer confidence and editorial acceptance rates.
  • Integrity and plagiarism: Policies for higher education institutions should (1) enforce institutional guidelines on the verification of citations produced or recommended by AI applications; (2) clearly define who can be considered an author when AI contributes substantively to content generation; (3) revise anti-plagiarism protocols to cover AI-sourced paraphrasing, reuse, and disguised authorship; (4) regulate AI use in environments vulnerable to predatory publication schemes; (5) develop a taxonomy of academic misconduct scenarios involving AI manipulation; and (6) provide institutional support for educators and researchers encountering ambiguous or borderline use cases. Future research should focus on (1) tracking how forged citations spread across academic disciplines and uncovering their sources; (2) investigating how AI-assisted plagiarism tactics vary between fields and how effective detection methods are; (3) examining the influence of AI on editorial standards, particularly regarding originality and rigor in peer-reviewed journals; and (4) identifying key intervention points for upholding academic integrity in research areas with high exposure to AI applications.
  • Privacy and data protection: Policy recommendations for higher education institutions should (1) establish clear rules for the ethical handling of student and researcher data by AI systems; (2) implement consent mechanisms and provide opt-out options for AI-driven profiling or usage tracking; (3) create anonymization and encryption protocols specifically designed for academic settings; and (4) ensure that third-party tools comply with institutional data governance standards. Future research should (1) explore how students and faculty perceive transparency in the way AI applications handle their data.

4.2.3. Other Activities

For other activities, which this section categorizes as the academic management and self-learning activities of university educators, we again encourage forward-looking strategies. The list of policy recommendations and the future research agenda is as follows:
  • Transparency and accountability: Policy should focus on (1) requiring mandatory AI disclosure statements for all academic outputs assisted by AI; (2) developing institutional guidelines for acceptable and unacceptable AI uses in academic management; and (3) promoting human verification of all content produced or augmented by AI applications. Research should focus on (1) studying the effectiveness of human oversight models in mitigating misinformation introduced by AI.
  • Bias and fairness: Policy should focus on (1) implementing bias testing protocols before the institutional adoption of generative AI applications; and (2) ensuring that procurement criteria for AI applications include fairness audits and equity assessments. Research should be dedicated to (1) developing methods for measuring the distributive impacts of AI adoption in under-resourced institutions; and (2) analysing disparities in access to high-quality AI applications across disciplines and institutions.
  • Integrity and plagiarism: Policy should focus on (1) introducing integrity training that addresses AI authorship, citation, and originality; and (2) clarifying copyright and data licensing rules for AI-assisted outputs at the institutional level. Future research should be dedicated to (1) investigating educator perceptions of academic misconduct linked to AI use.
  • Privacy and data protection: Policy should focus on (1) including AI-related data risks in institutional ethics review protocols; (2) requiring institutional approval and risk assessment before cloud-based AI applications are used for sensitive projects; and (3) educating researchers on anonymization, secure data entry, and the limits of data deletion in cloud-based AI services. Future research should focus on (1) studying how long-term data storage by AI systems may affect the protection of intellectual property and the confidentiality of sensitive research; and (2) analysing how GDPR requirements and institutional data protection policies shape the use of AI in academic settings, and where gaps may arise.
  • Autonomy and oversight: Policy should focus on (1) reinforcing that AI should serve as a support tool, not a replacement for scholarly decision-making; (2) creating institutional guidelines on healthy levels of AI use and the associated cognitive risks; and (3) providing psychological and organizational support for researchers navigating AI-induced changes to their work. Future research should focus on (1) analysing how AI affects educators’ autonomy and cognitive engagement; (2) exploring the psychological impacts of prolonged reliance on generative AI applications; and (3) investigating the role of institutional governance in preserving academic autonomy in the age of AI.

4.3. Theoretical Implications

The findings of this review, which identified both the scope of AI applications in higher education and the ethical challenges associated with them, can be further interpreted through established educational theory frameworks.
From the perspective of transformative learning theory, education involves more than knowledge transmission. It requires critical reflection that can lead to a transformation of meaning perspectives (Mezirow, 1991). AI applications such as automated feedback programs and intelligent tutoring systems can initiate this process by exposing students to novel ways of thinking and thereby challenging existing assumptions. However, algorithmic bias, opacity, and data privacy risks can thwart this transformative potential. Rather than prompting deep reflection, uncritical acceptance of AI outputs can produce shallow engagement, highlighting both the facilitating and the limiting roles of AI in supporting transformative learning.
Equally relevant are the implications for teacher professional identity. Professional identity reflects how educators perceive and construct their roles in relation to teaching, research, and institutional responsibilities (Beijaard et al., 2004). As AI increasingly contributes to assessment, thesis supervision, and even research writing, educators are shifting from being the sole producers of academic outputs to becoming curators, validators, and ethical gatekeepers of AI-generated content. This change, while not explicitly addressed by Kelchtermans, resonates conceptually with his argument that “who teachers are as persons is inseparable from how they teach” (Kelchtermans, 2009, p. 257). The incorporation of AI places new strains on teachers’ self-understanding, as they must now confront complex questions of autonomy, integrity, and authorship that have become central to professional identity in the digital age.
These findings can further be situated within the Technological Pedagogical Content Knowledge (TPACK) framework, which emphasizes that technology-mediated instruction demands a triadic, interactive balance among pedagogical knowledge, subject-matter knowledge, and technological knowledge (Mishra & Koehler, 2006). While AI clearly strengthens the technological dimension, this study shows that without pedagogical and ethical alignment such tools remain isolated applications. The ethical issues identified, such as transparency, fairness, and plagiarism, demonstrate that AI must be integrated not only technically but also pedagogically and ethically if it is to support meaningful learning while preserving scholarly integrity.
Viewed through sociocultural learning theory, and especially Vygotsky’s (1978) construct of mediational tools, AI can be understood as a new cultural instrument supporting learning and interaction within higher education. Just as language and symbols enable cognitive growth, AI now supports access to knowledge, collaboration, and decision-making. However, as this study shows, inequitable access to advanced systems and the risks of overreliance raise the question of whether these technologies will democratize higher education or further entrench its structural inequalities.

4.4. Limitations of the Study

We acknowledge the limitations of this study and the potential subjective biases that may have affected it. They are as follows:
  • The analysis focused solely on open-access publications indexed in the Web of Science and Scopus databases and published between 2022 and 2025. This scope may have excluded region-specific insights, grey literature, and relevant research published in non-indexed or non-English sources. The same applies to the background analysis conducted in the introduction. The study may therefore be affected by language bias (inclusion of predominantly English-language publications) and publication bias (inclusion of open-access publications only).
  • A further methodological limitation is the timeframe. The study analyses publications from 2022 onwards, coinciding with the public launch of ChatGPT and the point at which AI use became a dominant topic of academic interest. By excluding research papers published before 2022, the review does not capture earlier trends of AI use within higher education institutions. While ethical concerns before 2022 were likely similar, reviews covering wider timeframes could more accurately depict how the ethical concerns of AI use in higher education have evolved.
  • The study relied on title and abstract screening supported by ChatGPT to identify records that explicitly or implicitly addressed ethical concerns. While this approach enhanced scalability, it may have overlooked subtler forms of ethical discussion embedded in full texts (a minimal sketch of such a screening step is shown after this list).
  • We as authors acknowledge the role of subjective judgment during the screening and inclusion phases, particularly in determining whether individual articles were relevant to the defined research questions. Although the process followed a structured methodology, our interpretation may have influenced which records entered the final sample of analysed articles.
  • The policy recommendations and directions for future research presented in Section 4.2, as in the case of point 3, represent a subjective construct developed by the authors of this article. We acknowledge this and therefore present it as a limitation. These claims are not empirically substantiated and should be regarded as proposals for future research, in which they could be examined through empirical methods such as expert panel evaluations, focus group discussions with professionals specializing in education ethics and law, or triangulation with fieldwork approaches such as interviews with university educators and institutional case studies.
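To make point 3 above concrete, the following is a minimal Python sketch of how a ChatGPT-supported title-and-abstract screening step could be scripted. It is an illustration only: the prompt wording, model identifier, and decision labels are our assumptions, not the exact procedure used in this study.

from openai import OpenAI  # assumes the openai Python package and a configured API key

client = OpenAI()

SYSTEM_PROMPT = (
    "You screen academic records for a review on the ethics of AI use by "
    "university educators. Answer INCLUDE if the title and abstract explicitly "
    "or implicitly address ethical concerns of such AI use, otherwise EXCLUDE."
)

def screen(title: str, abstract: str) -> bool:
    """Return True if the record should proceed to full-text review."""
    response = client.chat.completions.create(
        model="gpt-4.5-preview",  # model identifier is an assumption
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": f"Title: {title}\nAbstract: {abstract}"},
        ],
    )
    answer = response.choices[0].message.content.strip().upper()
    return answer.startswith("INCLUDE")

Because such a step sees only titles and abstracts, ethical discussion confined to full texts can slip through, which is precisely the limitation noted above; in our workflow, inclusion decisions were additionally reviewed by the authors.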

5. Conclusions

This study addressed the research gap concerning the ethical problems of AI integration and use in higher education. While prior research often emphasized technological efficiency and pedagogical applications, our study explicitly targeted the ethical problems of AI use across the full spectrum of academic responsibilities undertaken by university educators. By focusing on this area, we contribute to current knowledge with a comprehensive and structured review that identifies both the types of AI applications currently in use and the associated ethical challenges across three domains: teaching, scientific research, and other activities covering the academic management and self-learning of university educators.
In response to our first research question, “What types of AI applications are being used to support key academic activities performed by university educators?”, we identified a wide range of applications categorized into six functional groups: generative AI models and language models, text generation and editing applications, educational and assessment platforms, research support and source management applications, visualization and design applications, and analytical and managerial AI applications. Their application was observed in diverse educator activities such as preparing study materials, supervising theses, conducting research, publishing scholarly outputs, and engaging in institutional planning.
With respect to the second research question, “What are the ethical implications and issues of integrating AI into key educational, research, and academic activities carried out by university educators?”, our analysis revealed a typology of recurring ethical risks. These were organized systematically into six areas: privacy and data protection; fairness and bias; transparency and accountability; oversight and autonomy; governance gaps; and integrity and plagiarism. Each of these areas encompasses context-specific concerns such as opaque algorithmic decision-making in student assessment, data protection violations in research, and uneven access to AI applications in administrative decision-making.
The results reveal that ethical risks do not arise from a single type of academic work but pervade many educator tasks. For example, integrity and plagiarism issues were most prominent in scientific research, while transparency and accountability requirements were widespread across teaching, research, and academic management. These findings point to the need for system-wide AI governance in higher education institutions, supported by purpose-built policies, institutional infrastructure, and ethics training for teaching staff.
In fulfilling the stated aim of providing a general review, our study delivers a thematically organized analysis of AI applications and ethical problems. This dual mapping provides the analytical foundation required to design ethical frameworks, foster responsible AI adoption, and identify priority areas for policy intervention. Findings also suggest that the implementation of ethical safeguards should be differentiated based on the type of educator activity and the function of the AI application employed.

Funding

This research was funded by the Slovak Research and Development Agency, grant number VV-MVP-24-0375; the APC was also funded by the Slovak Research and Development Agency.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Acknowledgments

During the preparation of this study, the authors used ChatGPT (model GPT-4.5) for the screening and filtering phase of selecting records suitable for the study’s analysis. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

ActivitySub-ActivityWoS Query
TeachingPreparation of study materialsKP=(“AI” OR “generative AI” OR “AI applications in education” OR “educational technology” OR “intelligent tutoring systems” OR “teaching materials” OR “instructional materials” OR “syllabus design” OR “curriculum development” OR “course planning” OR “lecture slides” OR “lesson planning” OR worksheets OR “learning resources” OR “content creation in education”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR “bias” OR “fairness” OR “responsible AI” OR “accountability” OR “transparency”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Conducting lectures, seminars, and practical classesKP=(“AI” OR “generative AI” OR “AI in teaching” OR “AI-powered instruction” OR “university lectures” OR “classroom teaching” OR “seminar facilitation” OR “in-person teaching” OR “face-to-face instruction” OR “online teaching” OR “remote instruction” OR “virtual classrooms” OR “synchronous teaching” OR “hybrid learning” OR “lecture delivery” OR “student engagement” OR “digital pedagogy”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR “bias” OR “fairness” OR “responsible AI” OR “accountability” OR “transparency”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Student assessmentKP=(“AI” OR “generative AI” OR “AI-assisted assessment” OR “automated grading” OR “AI in student evaluation” OR “digital assessment applications” OR “exam scoring” OR “essay grading” OR “test correction” OR “academic assessment” OR “formative assessment” OR “summative assessment” OR “e-assessment” OR “online exams” OR “electronic grading” OR “feedback automation” OR “university assessment”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR “fairness” OR “bias” OR “responsible AI” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Providing student consultationsKP=(“AI” OR “generative AI” OR “AI tutoring” OR “academic advising” OR “student consultations” OR “AI-assisted feedback” OR “support for academic writing” OR “thesis guidance” OR “assignment help” OR “digital tutoring” OR “intelligent tutoring systems” OR “personalized support” OR “academic mentoring” OR “learning support” OR “one-on-one teaching”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “student privacy” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Supervising final thesesKP=(“AI” OR “generative AI” OR “thesis supervision” OR “academic advising” OR “AI-assisted thesis writing” OR “research project mentoring” OR “student supervision” OR “academic writing support” OR “dissertation guidance” OR “thesis topic generation” OR “AI applications in academic writing” OR “supervisor-student interaction” OR “guidance for final projects” OR “digital support in thesis writing” OR “AI in research training”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Writing reviewer reports for final thesesKP=(“AI” OR “generative AI” OR “thesis evaluation” OR “academic review” OR “AI-assisted assessment” OR “reviewing student theses” OR “peer evaluation in higher education” OR “dissertation feedback” OR “AI applications for academic assessment” OR “academic critique” OR “quality assessment of final projects” OR “AI in academic writing evaluation” OR “academic judgment” OR “expert review of theses”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “academic integrity” OR “responsible AI” OR “fairness” OR “bias” OR “accountability” OR “transparency”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Coordinating internships, collaboration with professional practice, and field tripsKP=(“AI” OR “generative AI” OR “work-based learning” OR “internships in higher education” OR “professional practice coordination” OR “AI in experiential learning” OR “university-industry collaboration” OR “field trips in education” OR “AI-supported student placement” OR “practical training in university” OR “vocational training” OR “practice-based learning” OR “AI in curriculum integration” OR “academic-industry partnership”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Scientific researchConducting research and development activitiesKP=(“AI” OR “generative AI” OR “AI in research” OR “AI-supported scientific research” OR “AI applications for data analysis” OR “academic research with AI” OR “applied research” OR “basic research” OR “research methodology” OR “AI in study design” OR “AI-driven analysis” OR “scholarly writing” OR “AI in academic publishing” OR “scientific development” OR “university research activities”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Publishing research findingsKP=(“AI” OR “generative AI” OR “AI-assisted academic writing” OR “AI in scientific publishing” OR “AI in manuscript preparation” OR “AI applications for literature review” OR “scholarly communication” OR “academic publishing” OR “scientific writing with AI” OR “AI-supported paper writing” OR “AI in research dissemination” OR “AI in monograph writing” OR “publication process in higher education” OR “research communication applications” OR “university research outputs”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “plagiarism” OR “academic integrity” OR “responsible AI” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Submitting and managing research project proposalsKP=(“AI” OR “generative AI” OR “AI-assisted grant writing” OR “AI in research proposal development” OR “research funding applications” OR “AI applications for project writing” OR “grant proposal preparation” OR “academic funding support” OR “digital applications for research planning” OR “university research funding” OR “AI in academic project design”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR “transparency” OR “accountability” OR “intellectual property”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Cooperation with industry and practiceKP=(“AI” OR “generative AI” OR “university-industry collaboration” OR “knowledge transfer” OR “AI in practice-based research” OR “applied research partnerships” OR “AI-supported technology transfer” OR “collaboration between academia and industry” OR “AI in research-practice integration” OR “research impact” OR “academic-industry cooperation” OR “translational research with AI” OR “real-world applications of academic research”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR “fairness” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Organizing research eventsKP=(“AI” OR “generative AI” OR “academic event organization” OR “scientific event planning” OR “AI-supported conference management” OR “research seminars” OR “academic workshops” OR “digital applications for academic event coordination” OR “AI in event logistics” OR “university research events” OR “organizing academic symposia” OR “technology-enhanced academic events” OR “higher education research dissemination”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR “fairness” OR “transparency” OR “accountability”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Other activitiesAcademic managementKP=(“AI” OR “generative AI” OR “academic governance” OR “university management” OR “AI in academic leadership” OR “decision-making in higher education” OR “faculty administration” OR “academic councils” OR “academic senate” OR “scientific boards” OR “AI-supported institutional management” OR “higher education administration” OR “strategic planning in academia” OR “digital applications in university governance”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR “transparency” OR “accountability” OR “fairness” OR “bias”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)
Professional developmentKP=(“AI” OR “generative AI” OR “professional development” OR “academic upskilling” OR “AI in teacher training” OR “self-directed learning” OR “lifelong learning in academia” OR “faculty development programs” OR “online courses for educators” OR “AI-supported professional learning” OR “higher education staff training” OR “digital competencies” OR “university teacher education” OR “technology-enhanced learning”)
AND
TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR “fairness” OR “bias” OR “transparency” OR “accountability” OR “data protection”)
AND
TS=(“higher education” OR “university” OR “college” OR “tertiary education”)

Appendix B

ActivitySub-ActivityScopus Query
TeachingPreparation of study materials(KEY(“AI” OR “generative AI” OR “AI applications in education” OR “educational technology” OR “intelligent tutoring systems” OR “teaching materials” OR “instructional materials” OR “syllabus design” OR “curriculum development” OR “course planning” OR “lecture slides” OR “lesson planning” OR worksheets OR “learning resources” OR “content creation in education”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR bias OR fairness OR “responsible AI” OR accountability OR transparency))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Conducting lectures, seminars, and practical classes(KEY(“AI” OR “generative AI” OR “AI in teaching” OR “AI-powered instruction” OR “university lectures” OR “classroom teaching” OR “seminar facilitation” OR “in-person teaching” OR “face-to-face instruction” OR “online teaching” OR “remote instruction” OR “virtual classrooms” OR “synchronous teaching” OR “hybrid learning” OR “lecture delivery” OR “student engagement” OR “digital pedagogy”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR bias OR fairness OR “responsible AI” OR accountability OR transparency))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Student assessment(KEY(“AI” OR “generative AI” OR “AI-assisted assessment” OR “automated grading” OR “AI in student evaluation” OR “digital assessment applications” OR “exam scoring” OR “essay grading” OR “test correction” OR “academic assessment” OR “formative assessment” OR “summative assessment” OR “e-assessment” OR “online exams” OR “electronic grading” OR “feedback automation” OR “university assessment”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR fairness OR bias OR “responsible AI” OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Providing student consultations(KEY(“AI” OR “generative AI” OR “AI tutoring” OR “academic advising” OR “student consultations” OR “AI-assisted feedback” OR “support for academic writing” OR “thesis guidance” OR “assignment help” OR “digital tutoring” OR “intelligent tutoring systems” OR “personalized support” OR “academic mentoring” OR “learning support” OR “one-on-one teaching”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “student privacy” OR “responsible AI” OR bias OR fairness OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Supervising final theses(KEY(“AI” OR “generative AI” OR “thesis supervision” OR “academic advising” OR “AI-assisted thesis writing” OR “research project mentoring” OR “student supervision” OR “academic writing support” OR “dissertation guidance” OR “thesis topic generation” OR “AI applications in academic writing” OR “supervisor-student interaction” OR “guidance for final projects” OR “digital support in thesis writing” OR “AI in research training”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR “responsible AI” OR bias OR fairness OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Writing reviewer reports for final theses(KEY(“AI” OR “generative AI” OR “thesis evaluation” OR “academic review” OR “AI-assisted assessment” OR “reviewing student theses” OR “peer evaluation in higher education” OR “dissertation feedback” OR “AI applications for academic assessment” OR “academic critique” OR “quality assessment of final projects” OR “AI in academic writing evaluation” OR “academic judgment” OR “expert review of theses”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “academic integrity” OR “responsible AI” OR fairness OR bias OR accountability OR transparency))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Coordinating internships, collaboration with professional practice, and field trips(KEY(“AI” OR “generative AI” OR “work-based learning” OR “internships in higher education” OR “professional practice coordination” OR “AI in experiential learning” OR “university-industry collaboration” OR “field trips in education” OR “AI-supported student placement” OR “practical training in university” OR “vocational training” OR “practice-based learning” OR “AI in curriculum integration” OR “academic-industry partnership”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR bias OR fairness OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Scientific researchConducting research and development activities(KEY(“AI” OR “generative AI” OR “AI in research” OR “AI-supported scientific research” OR “AI applications for data analysis” OR “academic research with AI” OR “applied research” OR “basic research” OR “research methodology” OR “AI in study design” OR “AI-driven analysis” OR “scholarly writing” OR “AI in academic publishing” OR “scientific development” OR “university research activities”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR bias OR fairness OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Publishing research findings(KEY(“AI” OR “generative AI” OR “AI-assisted academic writing” OR “AI in scientific publishing” OR “AI in manuscript preparation” OR “AI applications for literature review” OR “scholarly communication” OR “academic publishing” OR “scientific writing with AI” OR “AI-supported paper writing” OR “AI in research dissemination” OR “AI in monograph writing” OR “publication process in higher education” OR “research communication applications” OR “university research outputs”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR plagiarism OR “academic integrity” OR “responsible AI” OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Submitting and managing research project proposals(KEY(“AI” OR “generative AI” OR “AI-assisted grant writing” OR “AI in research proposal development” OR “research funding applications” OR “AI applications for project writing” OR “grant proposal preparation” OR “academic funding support” OR “digital applications for research planning” OR “university research funding” OR “AI in academic project design”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR transparency OR accountability OR “intellectual property”))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Cooperation with industry and practice(KEY(“AI” OR “generative AI” OR “university-industry collaboration” OR “knowledge transfer” OR “AI in practice-based research” OR “applied research partnerships” OR “AI-supported technology transfer” OR “collaboration between academia and industry” OR “AI in research-practice integration” OR “research impact” OR “academic-industry cooperation” OR “translational research with AI” OR “real-world applications of academic research”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR fairness OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Organizing research events(KEY(“AI” OR “generative AI” OR “academic event organization” OR “scientific event planning” OR “AI-supported conference management” OR “research seminars” OR “academic workshops” OR “digital applications for academic event coordination” OR “AI in event logistics” OR “university research events” OR “organizing academic symposia” OR “technology-enhanced academic events” OR “higher education research dissemination”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR fairness OR transparency OR accountability))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Other activitiesAcademic management(KEY(“AI” OR “generative AI” OR “academic governance” OR “university management” OR “AI in academic leadership” OR “decision-making in higher education” OR “faculty administration” OR “academic councils” OR “academic senate” OR “scientific boards” OR “AI-supported institutional management” OR “higher education administration” OR “strategic planning in academia” OR “digital applications in university governance”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR transparency OR accountability OR fairness OR bias))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
Professional development(KEY(“AI” OR “generative AI” OR “professional development” OR “academic upskilling” OR “AI in teacher training” OR “self-directed learning” OR “lifelong learning in academia” OR “faculty development programs” OR “online courses for educators” OR “AI-supported professional learning” OR “higher education staff training” OR “digital competencies” OR “university teacher education” OR “technology-enhanced learning”))
AND
(TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR fairness OR bias OR transparency OR accountability OR “data protection”))
AND
(TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”))
AND PUBYEAR > 2021 AND PUBYEAR < 2026
AND (LIMIT-TO(OA, “all”))
AND (LIMIT-TO(PUBSTAGE, “final”))
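For readers who wish to re-run or extend these searches programmatically rather than through the Scopus web interface, a query string of the form above can be submitted to Elsevier’s Scopus Search API. The Python sketch below is a minimal illustration under stated assumptions: it presumes a valid API key from Elsevier’s developer portal, uses an abbreviated version of the professional development query, and omits pagination, the open-access refinement, and error recovery.

import requests

API_KEY = "YOUR-ELSEVIER-API-KEY"  # assumption: obtained via dev.elsevier.com
URL = "https://api.elsevier.com/content/search/scopus"

# Abbreviated form of the "Professional development" query above
query = (
    'KEY("AI" OR "generative AI" OR "professional development") '
    'AND TITLE-ABS-KEY("AI ethics" OR "ethical concerns") '
    'AND TITLE-ABS-KEY("higher education" OR university) '
    "AND PUBYEAR > 2021 AND PUBYEAR < 2026"
)

response = requests.get(
    URL,
    headers={"X-ELS-APIKey": API_KEY, "Accept": "application/json"},
    params={"query": query},
)
response.raise_for_status()

# Print the titles on the first page of results
for entry in response.json()["search-results"].get("entry", []):
    print(entry.get("dc:title"))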

References

  1. Acosta-Enríquez, B. G., Arbulu Ballesteros, M., Vilcapoma Pérez, C. R., Huamaní Jordan, O., Martín Vergara, J. A., Martel Acosta, R., Arbulú Pérez Vargas, C. G., & Arbulú Castillo, J. C. (2025). AI in academia: How do social influence, self-efficacy, and integrity influence researchers’ use of AI models. Social Sciences and Humanities Open, 1(1), 100579.
  2. Agrawal, T. S. (2024). Ethical implications of AI in decision-making: Exploring bias, accountability, and transparency in autonomous systems. International Journal of Science and Research (IJSR), 13, 20–21.
  3. Aljuaid, H. (2024). The impact of AI applications on academic writing instruction in higher education: A systematic review. Arab World English Journal, 26–55.
  4. Allam, H. M., Gyamfi, B., & Al Omar, B. (2025). Sustainable innovation: Harnessing AI and living intelligence to transform higher education. Education Sciences, 15(4), 398.
  5. Alqahtani, T., Badreldin, H. A., Alrashed, M., Alshaya, A. I., Alghamdi, S. S., bin Saleh, K., Alowais, S. A., Alshaya, O. A., Rahman, I., Al Yami, M. S., & Albekairy, A. M. (2023). The emergent role of AI, natural learning processing, and large language models in higher education and research. Research in Social and Administrative Pharmacy, 19(8), 1236–1242.
  6. Alrayes, A., Henari, T. F., & Ahmed, D. A. (2024). ChatGPT in education—Understanding the Bahraini academics’ perspective. The Electronic Journal of e-Learning, 22(2), 112–134.
  7. Al-Zahrani, A. M. (2024). Unveiling the shadows: Beyond the hype of AI in education. Heliyon, 10(9), e30696.
  8. Alzakwani, M. H. H., Zabri, S. M., & Ali, R. R. (2025). Enhancing university teaching and learning through integration of AI in information and communication technology. Edelweiss Applied Science and Technology, 9(1), 1345–1357.
  9. Ansari, A. N., Ahmad, S., & Bhutta, S. M. (2024). Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Education and Information Technologies, 29(9), 11281–11321.
  10. Beijaard, D., Meijer, P. C., & Verloop, N. (2004). Reconsidering research on teachers’ professional identity. Teaching and Teacher Education, 20(2), 107–128.
  11. Bozkurt, A. (2023). Unleashing the potential of generative AI, conversational agents and Chatbots in educational praxis: A systematic review and bibliometric analysis of GenAI in education. Open Praxis, 15(4), 261–270.
  12. Butson, R., & Spronken-Smith, R. (2024). AI and its implications for research in higher education: A critical dialogue. Higher Education Research and Development, 43(3), 563–577.
  13. Celik, I., Dindar, M., Muukkonen, H., & Jarvela, S. (2022). The promises and challenges of AI for teachers: A systematic review of research. TechTrends, 66(4), 616–630.
  14. Chee, H., Ahn, S., & Lee, J. (2024). A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. British Journal of Educational Technology, 56(5), 2146–2182.
  15. Chintoh, G. A., Segun-Falade, O. D., Odionu, C. S., & Ekeh, A. H. (2024). Legal and ethical challenges in AI governance: A conceptual approach to developing ethical compliance models in the U.S. International Journal of Social Science Exceptional Research, 3(1), 103–109.
  16. Chopra, P. (2024). Ethical implications of AI in financial services: Bias, transparency, and accountability. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 10(5), 306–314.
  17. Colnerud, G. (2013). Brief report: Ethical problems in research practice. Journal of Empirical Research on Human Research Ethics, 8(4), 37–41.
  18. Cowling, M., Crawford, J., Allen, K.-A., & Wehmeyer, M. (2023). Using leadership to leverage ChatGPT and AI for undergraduate and postgraduate research supervision. Australasian Journal of Educational Technology, 39(4), 89–103.
  19. Deep, P. D., Martirosyan, N., Ghosh, N., & Rahaman, M. S. (2025). ChatGPT in ESL higher education: Enhancing writing, engagement, and learning outcomes. Information, 16(4), 316.
  20. Demiröz, H., & Tıkız-Ertürk, G. (2025). A review on conversational AI as an application in academic writing. Eskiyeni, 56, 469–496.
  21. Dempere, J., Modugu, K., Allam, H., & Ramasamy, L. K. (2023). The impact of ChatGPT on higher education. Frontiers in Education, 8, 1206936.
  22. Drugova, E., Zhuravleva, I., Zakharova, U., & Latipov, A. (2024). Learning analytics driven improvements in learning design in higher education: A systematic literature review. Journal of Computer Assisted Learning, 40(2), 510–524.
  23. Dzogovic, S. A., Zdravkovska-Adamova, B., & Serpil, H. (2024). From theory to practice: A holistic study of the application of AI methods and techniques in higher education and science. Human Research in Rehabilitation, 14(2), 293–311.
  24. Ekundayo, T., Khan, Z., & Ali Chaudhry, S. (2024). ChatGPT’s integration in GCC higher education: Bibliometric analysis of trends. Educational Process: International Journal, 13(3), 69–84.
  25. Farber, S. (2025). Comparing human and AI expertise in the academic peer review process: Towards a hybrid approach. Higher Education Research and Development, 44(4), 871–885.
  26. Farrelly, T., & Baker, N. (2023). Generative AI: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109.
  27. Floridi, L. (2024). The ethics of artificial intelligence: Exacerbated problems, renewed problems, unprecedented problems—Introduction to the special issue of the American Philosophical Quarterly dedicated to the ethics of AI. SSRN Electronic Journal.
  28. Francis, N. J., Jones, S., & Smith, D. P. (2025). Generative AI in higher education: Balancing innovation and integrity. British Journal of Biomedical Science, 81(1), 152.
  29. Gama, F., & Magistretti, P. (2023). A review of innovation capabilities and a taxonomy of AI applications. Journal of Product Innovation Management, 42(1), 76–111.
  30. Garlinska, M., Osial, M., Proniewska, K., & Pregowska, A. (2023). The influence of emerging technologies on distance education. Electronics, 12(7), 1550.
  31. Giray, L. (2024). Negative effects of generative AI on researchers: Publishing addiction, Dunning-Kruger effect and skill erosion. Journal of Applied Learning and Teaching, 7(2), 398–405.
  32. Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). A taxonomy of generative AI applications. arXiv, arXiv:2306.02781.
  33. Haroud, S., & Saqri, N. (2025). Generative AI in higher education: Teachers’ and students’ perspectives on support, replacement, and digital literacy. Education Sciences, 15(4), 396.
  34. Isiaku, L., Muhammad, A. S., Kefas, H. I., & Ukaegbu, F. C. (2024). Enhancing technological sustainability in academia: Leveraging ChatGPT for teaching, learning and evaluation. Quality Education for All, 1(1), 385–416.
  35. Kamali, J., Alpat, M. F., & Bozkurt, A. (2024). AI ethics as a complex and multifaceted challenge: Decoding educators’ AI ethics alignment through the lens of activity theory. International Journal of Educational Technology in Higher Education, 21(1), 62.
  36. Kazimova, D., Tazhigulova, G., Shraimanova, G., Zatyneyko, A., & Sharzadin, A. (2025). Transforming university education with AI: A systematic review of technologies, applications, and implications. International Journal of Engineering Pedagogy, 15(1), 4–24.
  37. Kelchtermans, G. (2009). Who I am in how I teach is the message: Self-understanding, vulnerability and reflection. Teachers and Teaching, 15(2), 257–272.
  38. Khairullah, S. A., Harris, S., Hadi, H. J., Sandhu, R. A., Ahmad, N., & Alshara, M. A. (2025). Implementing AI in academic and administrative processes through responsible strategic leadership in the higher education institutions. Frontiers in Education, 10, 1548104.
  39. Kovac, J. (2018). Ethical problem solving. In The ethical chemist. Oxford University Press.
  40. Kumar, R. (2024). Ethics of artificial intelligence and automation: Balancing innovation and responsibility. Journal of Computer, Signal, and System Research, 1, 1–8.
  41. Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G., & Barak-Medina, E. (2024). Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities. Education Sciences, 14(5), 503.
  42. Lee, S. S., & Moore, R. L. (2024). Harnessing Generative AI (GenAI) for automated feedback in higher education: A systematic review. Online Learning, 28(3), 82–106.
  43. Llurba, C., & Palau, R. (2024). Real-time emotion recognition for improving the teaching–learning process: A scoping review. Journal of Imaging, 10(12), 313.
  44. Lopez-Regalado, O., Nunez-Rojas, N., Lopez-Gil, O. R., & Sanchez-Rodriguez, J. (2024). Analysis of the use of AI in university education: A systematic review. Pixel-Bit: Revista de Medios y Educación, 70, 97–122.
  45. Luckin, R., Rudolph, J., Grünert, M., & Tan, S. (2024). Exploring the future of learning and the relationship between human intelligence and AI: An interview with professor Rose Luckin. Journal of Applied Learning and Teaching, 7(1), 346–363.
  46. Lünich, M., Keller, B., & Marcinkowski, F. (2024). Fairness of academic performance prediction for the distribution of support measures for students: Differences in perceived fairness of distributive justice norms. Technology, Knowledge and Learning, 29(2), 1079–1107.
  47. Ma, J., Wen, J., Qiu, Y., Wang, Y., Xiao, Q., Liu, T., Zhang, D., Zhao, Y., Lu, Z., & Sun, Z. (2025). The role of AI in shaping nursing education: A comprehensive systematic review. Nurse Education in Practice, 84, 104345.
  48. Maeda, Y., Caskurlu, S., Kenney, R. H., Kozan, K., & Richardson, J. C. (2022). Moving qualitative synthesis research forward in education: A methodological systematic review. Educational Research Review, 35, 100424.
  49. Mahrishi, M., Abbas, A., Radovanović, D., & Hosseini, S. (2024). Emerging dynamics of ChatGPT in academia: A scoping review. Journal of University Teaching and Learning Practice, 21(1), 8.
  50. Mezirow, J. (1991). Transformative dimensions of adult learning. Adult Education Quarterly, 42(3), 195–197.
  51. Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054.
  52. Nartey, E. K. (2024). Guiding principles of generative AI for employability and learning in UK universities. Cogent Education, 11(1), 2357898.
  53. National Council of the Slovak Republic. (2002). Act No. 131/2002 Coll. on higher education institutions and on amendments and supplements to certain Acts. Ministry of Justice of the Slovak Republic. Available online: https://www.slov-lex.sk/pravne-predpisy/SK/ZZ/2002/131/ (accessed on 2 July 2025).
  54. Neyigapula, B. S. (2024). Ethical considerations in AI development: Balancing autonomy and accountability. Journal of Advances in Artificial Intelligence, 2, 138–148.
  55. Nikoçeviq-Kurti, E., & Bërdynaj-Syla, L. (2024). ChatGPT integration in higher education: Impacts on teaching and professional development of university professors. Educational Process: International Journal, 13(3), 22–39.
  56. Nong, P., Hamasha, R., & Platt, J. (2024). Equity and AI governance at academic medical centers. American Journal of Managed Care, 30, 468–472.
  57. Ocen, S., Elasu, J., Aarakit, S. M., & Olupot, C. (2025). AI in higher education institutions: Review of innovations, opportunities and challenges. Frontiers in Education, 10, 1530247.
  58. Pai, R. Y., Shetty, A., Dinesh, T. K., Shetty, A. D., & Pillai, N. (2024). Effectiveness of social robots as a tutoring and learning companion: A bibliometric analysis. Cogent Business & Management, 11(1).
  59. Pang, T. Y., Kootsookos, A., & Cheng, C.-T. (2024). AI use in feedback: A qualitative analysis. Journal of University Teaching and Learning Practice, 21(6), 108–125.
  60. Pang, W., & Wei, Z. (2025). Shaping the future of higher education: A technology usage study on generative AI innovations. Information, 16(2), 95.
  61. Phokoye, S. P., Epizitone, A., Nkomo, N., Mthalane, P. P., Moyane, S. P., Khumalo, M. M., & Luthuli, M. (2024). Exploring the adoption of robotics in teaching and learning in higher education institutions. Informatics, 11(4), 91.
  62. Pudasaini, S., Miralles-Pechuan, L., Lillis, D., & Llorens Salvador, M. (2024). Survey on AI-generated plagiarism detection: The impact of large language models on academic integrity. Journal of Academic Ethics, 23, 1137–1170.
  63. Qadhi, S. M., Alduais, A., Chaaban, Y., & Khraisheh, M. (2024). Generative AI, research ethics, and higher education research: Insights from a scientometric analysis. Information, 15(6), 325.
  64. Retscher, G. (2025). Exploring the intersection of AI and higher education: Opportunities and challenges in the context of geomatics education. Applied Geomatics, 17(1), 49–61.
  65. Robinson, J. R., Stey, A., Schneider, D. F., Kothari, A. N., Lindeman, B., Kaafarani, H. M., & Haines, K. L. (2025). Generative AI in academic surgery: Ethical implications and transformative potential. Journal of Surgical Research, 307, 212–220.
  66. Roxas, R. E. O., & Recario, R. N. C. (2024). Scientific landscape on opportunities and challenges of large language models and natural language processing. Indonesian Journal of Electrical Engineering and Computer Science, 36(1), 252–263.
  67. Salas-Pilco, S. Z., & Yang, Y. (2022). AI applications in Latin American higher education: A systematic review. International Journal of Educational Technology in Higher Education, 19(1), 21.
  68. Sargiotis, D. (2024). Ethical AI in information technology: Navigating bias, privacy, transparency, and accountability. Advances in Machine Learning & Artificial Intelligence, 5(3), 1–14.
  69. Sembey, R., Hoda, R., & Grundy, J. (2024). Emerging technologies in higher education assessment and feedback practices: A systematic literature review. Journal of Systems and Software, 211, 111988.
  70. Sengul, C., Neykova, R., & Destefanis, G. (2024). Software engineering education in the era of conversational AI: Current trends and future directions. Frontiers in AI, 7, 1436350.
  71. Shakib Kotamjani, S., Shirinova, S., & Fahimirad, M. (2023). Lecturers’ perceptions of using AI in tertiary education in Uzbekistan. In Proceedings of the 2023 International Conference on Innovation and Technology in Education (pp. 570–578). ACM.
  72. Sharadgah, T. A., & Sa’di, R. A. (2022). A systematic review of research on the use of AI in English language teaching and learning (2015–2021): What are the current effects? Journal of Information Technology Education: Research, 21, 337–377.
  73. Shorey, S., Mattar, C., Pereira, T. L.-B., & Choolani, M. (2024). A scoping review of ChatGPT’s role in healthcare education and research. Nurse Education Today, 135, 106121.
  74. Shukla, S. (2024). Principles governing ethical development and deployment of AI. International Journal of Engineering, Business and Management, 8(2), 26–46.
  75. Sobaih, A. E. E. (2024). Ethical concerns for using AI chatbots in research and publication: Evidences from Saudi Arabia. Journal of Applied Learning and Teaching, 7(1), 17.
  76. Soodan, V., Rana, A., Jain, A., & Sharma, D. (2024). AI chatbot adoption in academia: Task fit, usefulness, and collegial ties. Journal of Information Technology Education: Innovations in Practice, 23, 1.
  77. Tapalova, O., & Zhiyenbayeva, N. (2022). AI in education: AIEd for personalised learning pathways. Electronic Journal of e-Learning, 20(5), 639–653.
  78. Tapullima-Mori, C., Mamani-Benito, O., Turpo-Chaparro, J. E., Olivas-Ugarte, L. O., & Carranza-Esteban, R. F. (2024). AI in university education: Bibliometric review in Scopus and Web of Science. Revista Electrónica Educare, 28(S), 18489.
  79. Tong, A., Flemming, K., McInnes, E., Oliver, S., & Craig, J. (2012). Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Medical Research Methodology, 12, 181.
  80. Ulla, M. B., Advincula, M. J. C., Mombay, C. D. S., Mercullo, H. M. A., Nacionales, J. P., & Entino-Señorita, A. D. (2024). How can GenAI foster an inclusive language classroom? A critical language pedagogy perspective from Philippine university teachers. Computers and Education: AI, 7, 100314.
  81. van den Berg, G., & du Plessis, E. (2023). ChatGPT and generative AI: Possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Education Sciences, 13(10), 998.
  82. Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press.
  83. Wilkinson, C., Oppert, M., & Owen, M. (2024). Investigating academics’ attitudes towards ChatGPT: A qualitative study. Australasian Journal of Educational Technology, 40(4), 104–119.
  84. Williams, R. T. (2024). The ethical implications of using generative chatbots in higher education. Frontiers in Education, 8, 1331607.
  85. Yaroshenko, T. O., & Iaroshenko, O. I. (2023). Artificial Intelligence (AI) for research lifecycle: Challenges and opportunities. University Library at a New Stage of Social Communications Development. Conference Proceedings, 2023(8), 194–201.
  86. Ye, J., Wang, H., Wu, Y., Wang, X., Wang, J., Liu, S., & Qu, H. (2024). A survey of generative AI for visualization. arXiv, arXiv:2404.18144.
Table 1. Identified records.
Activity | Sub-Activity | Web of Science | Scopus
Teaching | Preparation of study materials | 46 | 360
Teaching | Conducting lectures, seminars, and practical classes | 51 | 362
Teaching | Student assessment | 71 | 451
Teaching | Providing student consultations | 45 | 331
Teaching | Supervising final theses | 57 | 431
Teaching | Preparation of opponent reviews | 54 | 411
Teaching | Coordinating internships, collaboration with professional practice, and field trips | 46 | 335
Scientific research | Conducting research and development activities | 50 | 344
Scientific research | Publishing research findings | 54 | 374
Scientific research | Submitting and managing research scientific project proposals | 38 | 224
Scientific research | Cooperation with industry and practice | 71 | 261
Scientific research | Organizing research events | 40 | 243
Other activities | Academic management | 42 | 316
Other activities | Professional development | 52 | 357
Table 2. Screening of records.
Activity | Sub-Activity | Web of Science | Scopus
Teaching | Preparation of study materials | 2 | 15
Teaching | Conducting lectures, seminars, and practical classes | 9 | 6
Teaching | Student assessment | 2 | 6
Teaching | Providing student consultations | 4 | 2
Teaching | Supervising final theses | 0 | 1
Teaching | Preparation of opponent reviews | 6 | 2
Teaching | Coordinating internships, collaboration with professional practice, and field trips | 0 | 3
Scientific research | Conducting research and development activities | 5 | 10
Scientific research | Publishing research findings | 1 | 13
Scientific research | Submitting and managing research scientific project proposals | 0 | 2
Scientific research | Cooperation with industry and practice | 0 | 2
Scientific research | Organizing research events | 0 | 1
Other activities | Academic management | 0 | 4
Other activities | Professional development | 2 | 5
Table 3. Inclusion of records.
Activity | Sub-Activity | Records
Teaching | Preparation of study materials | 6
Teaching | Conducting lectures, seminars, and practical classes | 2
Teaching | Student assessment | 6
Teaching | Providing student consultations | 0
Teaching | Supervising final theses | 1
Teaching | Preparation of opponent reviews | 2
Teaching | Coordinating internships, collaboration with professional practice, and field trips | 0
Scientific research | Conducting research and development activities | 8
Scientific research | Publishing research findings | 9
Scientific research | Submitting and managing research scientific project proposals | 0
Scientific research | Cooperation with industry and practice | 0
Scientific research | Organizing research events | 0
Other activities | Academic management | 3
Other activities | Professional development | 5
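
As a cross-check on the sample size, the inclusion counts in Table 3 sum to the 42 reviewed articles reported in the abstract. The following minimal Python tally (illustrative only; sub-activity labels are abbreviated from Table 3) reproduces the figure.

```python
# Inclusion counts transcribed from Table 3; the sum reproduces the 42-article sample.
included = {
    "Preparation of study materials": 6,
    "Conducting lectures, seminars, and practical classes": 2,
    "Student assessment": 6,
    "Providing student consultations": 0,
    "Supervising final theses": 1,
    "Preparation of opponent reviews": 2,
    "Coordinating internships and field trips": 0,
    "Conducting research and development activities": 8,
    "Publishing research findings": 9,
    "Submitting and managing research project proposals": 0,
    "Cooperation with industry and practice": 0,
    "Organizing research events": 0,
    "Academic management": 3,
    "Professional development": 5,
}
assert sum(included.values()) == 42  # matches the sample size reported in the abstract
```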
Table 4. Summarisation for activity: teaching, sub-activity: preparation of study materials.
Ethical Problems | AI Applications | References
Responsibility for the quality and accuracy of AI-generated educational content. | ChatGPT | Alzakwani et al. (2025)
Ambiguity around copyright and unclear data licensing can lead to unauthorized use of protected material. | ChatGPT and other generative AI applications | Alzakwani et al. (2025)
Absence of transparency in disclosing AI-assisted content may raise concerns about academic integrity. | AI applications in general 1 | Qadhi et al. (2024)
Unequal access to premium AI applications may lower personalization and content quality in less-resourced settings. | ChatGPT (premium vs. free), basic alternatives | Shakib Kotamjani et al. (2023)
Overuse of AI might reduce educators’ motivation in content creation. | ChatGPT and other generative AI applications | Haroud and Saqri (2025)
1 The study by Qadhi et al. (2024) does not state any specific type of AI application. Therefore, by AI applications in general, we mean all AI applications available on the market.
Table 5. Summarisation for activity: teaching, sub-activity: conducting lectures, seminars, and practical classes.
Ethical Problems | AI Applications | References
Students with better query skills may receive more helpful responses, which raises fairness concerns and blurs responsibility for incorrect outputs. | ChatGPT and other chatbots | Retscher (2025); Kazimova et al. (2025)
Automated grading may introduce algorithmic bias and penalize creative or culturally diverse responses; unclear scoring may undermine trust. | Automated grading systems | Retscher (2025); Kazimova et al. (2025)
Learning analytics may compromise privacy and lead to profiling if anonymization is insufficient. | Learning analytics platforms | Retscher (2025); Kazimova et al. (2025)
Adaptive systems may misjudge skill levels or disadvantage students with limited access or technical familiarity. | Intelligent tutoring systems, adaptive learning platforms | Kazimova et al. (2025)
Automated writing applications may cause overreliance and reduce originality, raising concerns about learning outcomes and intellectual property. | Grammarly, WriteLab | Kazimova et al. (2025)
Table 6. Summarisation for activity: teaching, sub-activity: student assessments.
Ethical Problems | AI Applications | References
Students may submit AI-generated texts as their own, increasing the risk of cheating and plagiarism. | ChatGPT and other generative models | Ulla et al. (2024); Williams (2024)
Use of AI cloud services creates sensitive digital records; full data deletion is practically unachievable. | Cloud-based AI writing applications, feedback generators | Ulla et al. (2024); T. Y. Pang et al. (2024)
Language models may reproduce bias in feedback, leading to unequal treatment of students. | Large language models | T. Y. Pang et al. (2024)
Educators remain legally responsible for flawed AI-generated feedback; AI must not replace human judgment. | AI feedback applications, rubric-based generators | T. Y. Pang et al. (2024); Cowling et al. (2023)
Students must be informed about how and why AI is used in assessment to ensure transparency and trust. | Any AI applications used in assessment workflows | Williams (2024)
Table 7. Summarisation for activity: teaching, sub-activity: supervising final theses.
Ethical Problems | AI Applications | References
Inability of the application to understand the specific context of an individual research project. | ChatGPT | Cowling et al. (2023)
Reproduction of historical biases and stereotypes, e.g., related to gender or culture. | ChatGPT | Cowling et al. (2023)
Lack of grounding in research ethics; generated suggestions may contradict principles of research integrity. | ChatGPT | Cowling et al. (2023)
Table 8. Summarisation for activity: teaching, sub-activity: preparation of opponent reviews.
Ethical Problems | AI Applications | References
Transfer of bias from training data may disadvantage underrepresented topics or authors. | Claude-3 | Farber (2025)
Lack of transparency in how the model generates evaluations makes it difficult to justify recommendations. | Claude-3 | Farber (2025)
Leniency and idealism in reviews can lead to inconsistency with human assessments and risk undermining review quality. | Claude-3 | Farber (2025)
Overlooked key studies and irrelevant literature reduce the reliability of scholarly evaluation. | Claude-3 | Farber (2025)
Over-reliance on AI applications may reduce the reviewer’s critical engagement. | Claude-3 | Farber (2025)
Hallucinations (fabricated or inaccurate content) undermine the credibility of the review. | ChatGPT, Gemini | Francis et al. (2025)
Generative models may reproduce gender, cultural, or ethnic stereotypes. | ChatGPT, Gemini | Francis et al. (2025)
Processing unpublished manuscripts in AI applications may breach data protection laws or intellectual property rights. | ChatGPT, Gemini | Francis et al. (2025)
Reviewers may offload decision-making to AI, weakening professional responsibility and evaluative integrity. | ChatGPT, Gemini | Francis et al. (2025)
Table 9. Summarisation for activity: scientific research, sub-activity: research and development activities.
Ethical Problems | AI Applications | References
AI applications may produce hallucinated content, exhibit algorithmic bias, rely on outdated data, and make decisions without transparency or clear accountability. | ChatGPT, Llama-2, Jasper Chat, Google Bard, Microsoft Bing | Yaroshenko and Iaroshenko (2023)
AI applications can generate misleading outputs, plagiarized content, or lack contextual understanding, while posing risks related to system opacity and data privacy. | ChatGPT | Alqahtani et al. (2023)
The use of biased datasets and opaque algorithms can compromise the reliability of results, expose sensitive data, limit access, and lead to overreliance on automation. | Google Assistant, Amazon Alexa | Dzogovic et al. (2024)
Generative AI may lead to privacy violations, confusion over authorship, plagiarism, and misinformation, while also diminishing collaboration and contributing to mental fatigue. | ChatGPT, Bard, Bing Chat, Ernie | Sobaih (2024)
AI platforms may operate opaquely, leak data, produce falsified results, reduce researcher autonomy, and enable unethical research practices. | Claude, Gemini, ScopusAI, Elicit, ResearchRabbit | Acosta-Enríquez et al. (2025)
AI applications may generate inaccurate content, blur authorship, weaken critical thinking, violate privacy, and raise unresolved intellectual property questions. | ChatGPT, Midjourney, Copilot, Gemini | Kurtz et al. (2024)
AI models may rely on hidden processes, embed biased assumptions, obscure the origin of content, and pose legal challenges in data protection. | AI applications in general 1 | Al-Zahrani (2024)
The use of AI may undermine clear authorship, hide decision processes, marginalize qualitative research, complicate data consent, and reduce originality in peer review. | Rayyan, Scite, Elicit, Covidence, AskYourPDF, Papers | Butson and Spronken-Smith (2024)
Without institutional guidance, AI applications may threaten academic integrity, generate hallucinations, distort scholarly content, reduce creativity, and be used without ethical training. | AI applications in general 1 | Nartey (2024)
1 The studies by Al-Zahrani (2024) and Nartey (2024) do not state any specific type of AI application. Therefore, by AI applications in general, we mean all AI applications available on the market.
Table 10. Summarisation for activity: scientific research, sub-activity: publication of research results.
Ethical Problems | AI Applications | References
AI-generated text may reproduce source-based plagiarism patterns from training data without proper attribution. | ChatGPT, Jasper Chat, Gemini, LLaMA-2, WordAI, CopyAI, Wordtune, QuillBot | Robinson et al. (2025); Giray (2024); Mahrishi et al. (2024)
Plagiarism detection systems may fail to detect AI-paraphrased text, enabling unethical manuscript practices. | QuillBot, CopyAI | Robinson et al. (2025)
AI applications may fabricate data or citations (hallucinations), presenting unverifiable information as fact. | ChatGPT, Bard | Yaroshenko and Iaroshenko (2023); Shorey et al. (2024)
The lack of transparency in algorithmic processes prevents users from understanding how outputs are generated, complicating attribution and academic responsibility. | Jasper Chat, Gemini, ChatGPT, LLaMA-2 | Mahrishi et al. (2024); Roxas and Recario (2024)
It remains unclear who bears academic, legal, or ethical responsibility for AI-generated content, raising authorship and accountability concerns. | Generative AI applications | Wilkinson et al. (2024); Giray (2024)
Unequal access to paid AI applications may deepen global inequalities in research productivity and publishing capacity. | ChatGPT | Sobaih (2024); Roxas and Recario (2024)
Failure to disclose AI assistance can mislead readers about the human contribution to the work. | Generative AI applications | Giray (2024); Wilkinson et al. (2024)
AI applications may be used to bypass plagiarism detection through automated paraphrasing. | QuillBot, WordAI | Robinson et al. (2025)
AI-generated manuscripts submitted to predatory journals may contribute to the spread of unverifiable or low-quality academic content. | Generative AI applications | Giray (2024)
Excessive dependence on AI content generation may weaken researchers’ critical thinking, reasoning, and collaboration. | ChatGPT, Bard, Grammarly | Mahrishi et al. (2024); Giray (2024)
Intellectual property ownership of AI-generated content is unclear, raising legal concerns about who may claim authorship and publication rights. | ChatGPT, Ernie, Bard | Shorey et al. (2024); Sobaih (2024)
Table 11. Summarisation for activity: other activities, sub-activity: academic management.
Ethical Problem | AI Applications | References
Lack of transparency in AI models complicates auditing and verification processes. | Predictive analytics (e.g., Epic Systems AI) | Nong et al. (2024)
Decisions about AI deployment are made by narrow expert teams, excluding broader governance structures. | Predictive analytics systems | Nong et al. (2024)
Institutions lacking resources adopt pre-packaged systems without local validation, reinforcing inequalities. | Predictive analytics systems | Nong et al. (2024)
Absence of “equity literacy” hinders recognition and correction of unfair AI-driven decisions. | Predictive analytics systems | Nong et al. (2024)
AI software presents risks of data leakage, hacking, and unauthorized data processing. | Predictive models, performance monitoring applications | Alzakwani et al. (2025)
Overdependence on AI undermines academic autonomy and weakens accountability structures. | Chatbots, scheduling algorithms, performance monitoring applications | Alzakwani et al. (2025)
Table 12. Summarisation for activity: other activities, sub-activity: professional development and self-learning.
Ethical Problem | AI Applications | References
AI-generated content may be factually incorrect, leading to internalization of false knowledge. | ChatGPT | Luckin et al. (2024); Kamali et al. (2024)
Overreliance on AI may weaken educators’ critical thinking and pedagogical autonomy. | ChatGPT | Nikoçeviq-Kurti and Bërdynaj-Syla (2024); van den Berg and du Plessis (2023)
Absence of institutional guidelines results in ethically problematic individual decision-making. | ChatGPT | Kamali et al. (2024); Nikoçeviq-Kurti and Bërdynaj-Syla (2024); van den Berg and du Plessis (2023)
Lack of transparency in AI systems reduces trust and hinders adoption for professional development. | ChatGPT | Al-Zahrani (2024)
Table 13. Summarisation of findings—AI applications.
Category of AI | Teaching 1
Generative AI models and language models | ChatGPT, Claude-3, Gemini, Other generative AI models.
Text generation and editing applications | Grammarly, WriteLab, Other cloud-based AI writing applications, feedback generators.
Educational and assessment platforms | Gradescope, Knewton Alta, Knowji, Duolingo, Smart Sparrow, Automated grading systems, Learning analytics platforms, Intelligent tutoring systems, AI feedback applications, rubric-based generators, Academic Performance Prediction systems.
Category of AI | Scientific Research 2
Generative AI models and language models | ChatGPT, LLaMA-2, Jasper Chat, Gemini (Google Bard), Bing Chat (Microsoft Bing), Ernie, Claude, Copilot.
Text generation and editing applications | WordAI, CopyAI, Wordtune, QuillBot, Grammarly, Microsoft Office Dictation.
Research support and source management | Semantic Scholar, SciFact, Consensus, Research Rabbit, Semantic Reader, ChatPDF, Elicit, ScopusAI, AskYourPDF, Papers, Rayyan, Scite, Covidence.
Visualization and design applications | Canva AI, Designs.ai, DesignerBot, Midjourney.
Category of AI | Other Activities 3
Generative models and language models | ChatGPT, Gemini, Claude-3
Analytical and managerial AI applications | Predictive analytics systems, Performance monitoring applications, Scheduling algorithms, Administrative chatbots
1 Activities include: preparation of study materials; conducting lectures, seminars, and practical classes; student assessments; supervising final theses; preparation of opponent reviews. 2 Activities include: research and development activities; publication of research results. 3 Activities include: academic management; self-learning.
Table 14. Summarisation of findings—Ethical problems, activity: Teaching.
Ethical Problem | Ethical Category
Learning analytics may compromise privacy and lead to profiling if anonymization is insufficient. | Privacy and data protection
Use of AI cloud services creates sensitive digital records; full data deletion is practically unachievable. | Privacy and data protection
Students with better query skills may receive more helpful responses, which raises fairness concerns and blurs responsibility for incorrect outputs. | Bias and fairness, Transparency and accountability
Automated grading may introduce algorithmic bias and penalize creative or culturally diverse responses; unclear scoring may undermine trust. | Bias and fairness
Language models may reproduce bias in feedback, leading to unequal treatment of students. | Bias and fairness
Reproduction of historical biases and stereotypes, e.g., related to gender or culture. | Bias and fairness
Adaptive systems may misjudge skill levels or disadvantage students with limited access or technical familiarity. | Bias and fairness
Unequal access to premium AI applications may lower personalization and content quality in less-resourced settings. | Bias and fairness
Overuse of AI might reduce educators’ motivation in content creation. | Autonomy and oversight
Overreliance on AI may weaken educators’ critical thinking and pedagogical autonomy. | Autonomy and oversight
Responsibility for the quality and accuracy of AI-generated educational content. | Transparency and accountability
Absence of transparency in disclosing AI-assisted content may raise concerns about academic integrity. | Transparency and accountability, Integrity and plagiarism
Students must be informed about how and why AI is used in assessment to ensure transparency and trust. | Transparency and accountability
Ambiguity around copyright and unclear data licensing can lead to unauthorized use of protected material. | Integrity and plagiarism
Educators remain legally responsible for flawed AI-generated feedback; AI must not replace human judgment. | Autonomy and oversight
Lack of grounding in research ethics; generated suggestions may contradict principles of research integrity. | Integrity and plagiarism
Inability of the application to understand the specific context of an individual research project. | Transparency and accountability
Absence of institutional guidelines results in ethically problematic individual decision-making. | Governance gaps
Table 15. Summarisation of findings—Ethical problems, activity: Scientific Research.
Ethical Problem | Ethical Category
Transfer of bias from training data may disadvantage underrepresented topics or authors. | Bias and fairness
Generative AI models may reproduce gender, cultural, or ethnic stereotypes. | Bias and fairness
Excessive dependence on AI content generation may weaken researchers’ critical thinking, reasoning, and collaboration. | Autonomy and oversight
Overuse of AI applications may reduce collaboration among researchers and contribute to mental fatigue. | Autonomy and oversight
AI applications may fabricate data or citations (hallucinations), presenting unverifiable information as fact. | Integrity and plagiarism
AI applications may produce hallucinated content, exhibit algorithmic bias, rely on outdated data, and make decisions without transparency or clear accountability. | Bias and fairness, Transparency and accountability
AI applications can generate misleading outputs, plagiarized content, or lack contextual understanding, while posing risks related to system opacity and data privacy. | Privacy and data protection
AI platforms may operate opaquely, leak data, produce falsified results, reduce researcher autonomy, and enable unethical research practices. | Transparency and accountability, Autonomy and oversight
AI applications may generate inaccurate content, blur authorship, weaken critical thinking, violate privacy, and raise unresolved intellectual property questions. | Privacy and data protection, Integrity and plagiarism
The lack of transparency in algorithmic processes prevents users from understanding how outputs are generated, complicating attribution and academic responsibility. | Transparency and accountability
Failure to disclose AI assistance can mislead readers about the human contribution to the work. | Transparency and accountability
It remains unclear who bears academic, legal, or ethical responsibility for AI-generated content, raising authorship and accountability concerns. | Transparency and accountability, Integrity and plagiarism
Intellectual property ownership of AI-generated content is unclear, raising legal concerns about who may claim authorship and publication rights. | Integrity and plagiarism
AI-rewritten content may evade plagiarism detection, enabling unethical manuscript practices. | Integrity and plagiarism
AI-generated text may reproduce source-based plagiarism patterns from training data without proper attribution. | Integrity and plagiarism
Plagiarism detection systems may fail to detect AI-paraphrased text, enabling unethical manuscript practices. | Integrity and plagiarism
AI-generated manuscripts submitted to predatory journals may contribute to the spread of unverifiable or low-quality academic content. | Integrity and plagiarism
The use of AI may undermine clear authorship, hide decision processes, marginalize qualitative research, complicate data consent, and reduce originality in peer review. | Integrity and plagiarism
Important works may be omitted; irrelevant literature reduces the reliability of scholarly evaluation. | Transparency and accountability
Without institutional guidance, AI applications may threaten academic integrity, generate hallucinations, distort scholarly content, reduce creativity, and be used without ethical training. | Integrity and plagiarism
Table 16. Summarisation of findings—Ethical problems, activity: Other activities.
Ethical Problem | Ethical Category
AI systems present risks of data leakage, hacking, and unauthorized data processing. | Privacy and data protection
Processing unpublished manuscripts in AI applications may breach data protection laws or intellectual property rights. | Privacy and data protection
Institutions lacking resources adopt pre-packaged systems without local validation, reinforcing inequalities. | Bias and fairness
Absence of “equity literacy” hinders recognition and correction of unfair AI-driven decisions. | Bias and fairness
Unequal access to paid AI applications may deepen global inequalities in research productivity and publishing capacity. | Bias and fairness
Reviewers may offload decision-making to AI, weakening professional responsibility and evaluative integrity. | Transparency and accountability, Integrity and plagiarism
Hallucinations (fabricated or inaccurate content) undermine the credibility of the review. | Integrity and plagiarism
Lack of transparency in AI models complicates auditing and verification processes. | Transparency and accountability
Lack of transparency in how the model generates evaluations makes it difficult to justify recommendations. | Transparency and accountability
Lack of transparency in AI systems reduces trust and hinders adoption for professional development. | Transparency and accountability
Decisions about AI deployment are made by narrow expert teams, excluding broader governance structures. | Governance gaps
Overdependence on AI undermines academic autonomy and weakens accountability structures. | Transparency and accountability, Autonomy and oversight