Ethical Problems in the Use of Artificial Intelligence by University Educators
Abstract
1. Introduction
- A lack of pedagogical and subject-centred thought about the use of AI in teaching within higher education (Garlinska et al., 2023; Lopez-Regalado et al., 2024; Demiröz & Tıkız-Ertürk, 2025; Khairullah et al., 2025)
- Inability to categorize AI applications, and vague guidelines on their pedagogical use in teaching at the higher education level (Celik et al., 2022; Dempere et al., 2023; Ma et al., 2025; Drugova et al., 2024)
- Overlooking the pedagogical decision-making role of teachers in AI-integrated education (Bozkurt, 2023; Chee et al., 2024; Khairullah et al., 2025)
- Lack of support for teachers in working with AI and absence of pedagogically focused professional development (Celik et al., 2022; Ocen et al., 2025; Ansari et al., 2024; Chee et al., 2024; Khairullah et al., 2025)
- Limited research on students’ interpretation, trust, and the impact of AI-generated feedback on learning and self-regulation (Lee & Moore, 2024; Sembey et al., 2024; Sharadgah & Sa’di, 2022; Aljuaid, 2024)
- Lack of methodological and ethical requirements, and low accountability, in AI-based assessment of complex academic work (Allam et al., 2025; Celik et al., 2022; Llurba & Palau, 2024)
- Low effectiveness and transparency of AI-based plagiarism detection applications (Pudasaini et al., 2024)
- Insufficient work on AI education technologies’ inclusivity and accessibility considering student diversity and global inequities (Chee et al., 2024; Deep et al., 2025; W. Pang & Wei, 2025; Salas-Pilco & Yang, 2022; Phokoye et al., 2024; Tapullima-Mori et al., 2024)
- Limited systematic review of the ethical problems associated with AI integration in higher education (Aljuaid, 2024; Ocen et al., 2025; Bozkurt, 2023; Farrelly & Baker, 2023; Dempere et al., 2023; Sharadgah & Sa’di, 2022; Sengul et al., 2024)
- Limited exploration of the influence of AI on institutional governance and decision-making processes in higher education (Khairullah et al., 2025; Ocen et al., 2025; Tapullima-Mori et al., 2024)
- Absence of systematic long-term research evaluating the influence of AI applications on teaching, learning, and students’ performance (Bozkurt, 2023; Llurba & Palau, 2024)
- Preparation of study materials—Developing syllabi, presentations, worksheets, and supporting materials used in lectures, seminars, and practical sessions.
- Conducting lectures, seminars, and practical classes—Delivering in-person or distance-based lectures, seminars, and exercises, during which the educator explains course content and assigns tasks for students to complete.
- Student assessment—Evaluating tests, term papers, oral and written exams, and recording grades in electronic gradebooks.
- Providing student consultations—Offering support at designated times to explain course content, assignment instructions, and requirements for term and final theses.
- Supervising final theses—Designing topics and annotations for final theses, announcing available topics, and guiding students through regular consultations, supervision, and academic support during the writing process.
- Writing opponent reviewer reports for final theses—Preparing assessments of thesis quality from a scientific and professional perspective based on in-depth reading of students’ work. The reviewer provides a written evaluation along with a final grade.
- Coordinating internships, collaboration with professional practice, and field trips—Organizing and facilitating internships, partnerships with professional environments, and excursions as supporting components of lectures, seminars, and practical training.
- Conducting research and development activities—Engaging in basic and applied research based on the educator’s area of expertise, including the preparation of studies and analyses, as well as participation in team-based research projects.
- Publishing research findings—Presenting research outputs through peer-reviewed journals, conference proceedings, monographs, academic books, and teaching texts.
- Submitting and managing scientific research project proposals—Applying for research grants and managing related administrative agendas.
- Cooperation with industry and practice—Conducting research activities in collaboration with external partners and facilitating knowledge transfer between academia and practice.
- Organizing research events—Planning and coordinating activities aimed at supporting scientific research, such as conferences, workshops, and symposia.
- Academic management—Serving on academic senates, scientific boards, faculty or university leadership bodies, and participating in departmental, faculty, or institutional management.
- Professional development—Engaging in self-education and training in teaching and research, both in-person and online, including attending courses and workshops.
2. Materials and Methods
3. Results
3.1. Teaching
3.1.1. Preparation of Study Materials
3.1.2. Conducting Lectures, Seminars, and Practical Classes
3.1.3. Student Assessments
3.1.4. Supervising Final Theses
3.1.5. Preparation of Opponent Reviews
3.2. Scientific Research
3.2.1. Research and Development Activities
3.2.2. Publication of Research Results
3.3. Other Activities
3.3.1. Academic Management
- Various commercial AI systems used within university environments may suffer from a lack of transparency, which complicates auditing and verification.
- Decisions about deploying AI models are often made by narrow expert teams without input from broader academic governance structures.
- Institutions with limited resources often rely on externally developed systems that lack local validation, deepening inequality across the sector.
- The authors highlight the importance of “equity literacy” among academic managers, defined as the capacity to identify and address structural injustices amplified by AI-supported decision-making.
- The use of AI software in higher education carries a substantial risk of data privacy breaches, unauthorised data use, leakage, and hacking.
- Overdependence on AI has the potential to undermine academic autonomy by replacing human judgment and blurring institutional responsibilities (Table 11).
3.3.2. Professional Development and Self-Learning
4. Discussion
4.1. Summarisation of Findings
4.2. Policy Recommendations and Agenda for Future Research
4.2.1. Teaching
- Bias and fairness: Higher education institutions should (1) integrate AI literacy modules across curricula to empower all students, regardless of their digital proficiency; (2) require, where feasible, AI application providers to undergo independent audits to assess algorithmic fairness, including cultural, gender, and linguistic biases; (3) mandate regular updates to classroom AI systems to prevent the accumulation of static bias; (4) ensure that AI-generated outputs used for grading are reviewed by humans, particularly for subjective or open-ended tasks; and (5) establish institutional policies that support equitable access to educational AI applications, including open-source alternatives for under-resourced settings. In terms of future research, educators should focus on (1) studying the effects of digital literacy inequality on AI-assisted learning outcomes; (2) examining how students from diverse backgrounds interpret and respond to biased feedback; and (3) exploring the long-term impacts of adaptive learning tools on student performance across a range of disciplines.
- Transparency and accountability: Higher education institutions should (1) require clear disclosure when AI is used in grading, feedback, or the creation of instructional content; (2) implement institutional policies that ensure AI-driven decisions affecting student outcomes are properly documented; (3) offer dashboards that allow students to see how their personal data is used in learning analytics; and (4) establish ethics review boards or committees responsible for overseeing the use of AI in teaching environments. Future research should focus on (1) studying how different disciplines approach the issue of AI accountability; and (2) mapping institutional practices in documenting AI decisions and the extent to which they align with students’ expectations.
- Autonomy and oversight: Higher education institutions’ policies should (1) integrate AI literacy into professional development programs to equip educators with evaluative and oversight capabilities; (2) promote collaborative content creation workflows where educators co-edit AI-generated materials; (3) develop governance frameworks that specify liability in the case of AI errors or content misuse; and (4) encourage reflective teaching practices to counterbalance AI-driven decision-making. Future research should (1) assess how reliance on AI affects educators’ self-efficacy and motivation over time; and (2) examine how AI recommendations alter curriculum planning and educator-student interactions.
- Integrity and plagiarism: Policy recommendations for higher education institutions are to (1) create clear guidelines for permissible and impermissible use of generative AI in educational content; (2) provide training on authorship attribution when incorporating AI-generated outputs; (3) require citation or acknowledgment of AI assistance in educational and scholarly deliverables; and (4) establish protocols for verifying the originality of AI-influenced materials. Future research should (1) evaluate educator understanding of copyright implications when using AI applications; (2) investigate best practices in plagiarism detection adapted to the generative AI environment; and (3) assess institutional readiness to respond to violations involving AI-assisted plagiarism.
- Privacy and data protection: Policy recommendations for higher education institutions are to (1) mandate that third-party AI vendors comply with institutional and regional data protection policies; (2) prohibit long-term storage of sensitive educational data beyond necessary use cases; (3) require anonymization and encryption by default in AI learning systems; and (4) educate both students and educators on the risks of data sharing and surveillance. Future research should focus on (1) analysing institutional compliance with GDPR or equivalent national or institutional frameworks in the context of educational AI; (2) exploring students’ awareness of and attitudes toward data usage in AI-assisted learning; and (3) tracking incidents of privacy breaches associated with educational AI in order to identify systemic vulnerabilities.
4.2.2. Scientific Research
- Autonomy and oversight: University educators performing tasks related to scientific research should (1) establish training programs focused on developing researchers’ critical thinking and independent reasoning in AI-rich environments; (2) encourage team-based research formats that emphasize interpersonal collaboration to counter AI-driven isolation; (3) create institutional guidance for balanced AI use, including norms for acceptable dependency levels; (4) incorporate burnout prevention strategies and monitor cognitive overload among research staff; and (5) promote shared authorship protocols to maintain equitable contribution in AI-assisted writing. Future research should (1) investigate the long-term effects of AI-reliant scholarship on interdisciplinary research culture; and (2) assess strategies that protect human autonomy in AI-augmented academic work.
- Bias and fairness: Policy at universities and higher education institutions should (1) design and enforce institutional standards for detecting and mitigating representational bias in AI outputs; (2) ensure diverse training data inputs for institutional AI applications to reduce cultural and topical bias; (3) implement transparent review protocols for AI-generated content affecting publication and citation equity; and (4) include training for educators on recognizing and challenging stereotypical outputs from AI systems. Future research should (1) examine trust in AI systems when known biases are disclosed versus hidden; and (2) design tools to evaluate whether AI systems recommend resources and topics in a way that treats less prominent or marginal research fields fairly.
- Transparency and accountability: Higher education policies should be updated to (1) require metadata tagging of AI-generated content and disclosure in all research outputs; (2) create transparent tools that allow human reviewers to see and understand how AI was used in a given academic process; (3) establish independent oversight bodies to review AI-influenced decisions in academic workflows; and (4) create protocols for disclosing AI involvement in all stages of manuscript development, peer review, and revision. Future research should (1) examine how metadata-driven transparency affects scholarly norms of authorship and peer feedback; and (2) study how AI disclosure practices influence reviewer confidence and editorial acceptance rates.
- Integrity and plagiarism: Policy for higher education institutions should (1) enforce institutional guidelines on the verification of citations produced or recommended by AI applications; (2) clearly define who can be considered an author when AI contributes substantively to content generation; (3) revise anti-plagiarism protocols to include AI-sourced paraphrasing, reuse, and disguised authorship; (4) regulate AI use in environments vulnerable to predatory publication schemes; (5) develop a taxonomy of academic misconduct scenarios involving AI manipulation; and (6) provide institutional support for educators and researchers encountering ambiguous or borderline use cases. Future research should focus on (1) tracking how forged citations spread across academic disciplines and uncovering their sources; (2) investigating how plagiarism tactics involving AI vary between fields and how effective detection methods are; (3) examining the influence of AI on editorial standards, particularly regarding originality and rigor in peer-reviewed journals; and (4) identifying key intervention points to help uphold academic integrity in research areas with high exposure to AI applications.
- Privacy and data protection: Policy recommendations for higher education institutions should (1) establish clear rules for the ethical handling of student and researcher data by AI systems; (2) implement consent mechanisms and provide opt-out options for AI-driven profiling or usage tracking; (3) create anonymization and encryption protocols specifically designed for academic settings; and (4) ensure that third-party tools comply with institutional data governance standards. Future research should (1) explore how students and faculty perceive transparency in the way AI applications handle their data.
4.2.3. Other Activities
- Transparency and accountability: Policy should focus on (1) requiring mandatory AI disclosure statements for all academic outputs assisted by AI; (2) developing institutional guidelines for acceptable and unacceptable AI uses in academic management; and (3) promoting human verification for all content produced or augmented by AI applications. Research should focus on (1) studying the effectiveness of human oversight models in mitigating misinformation introduced by AI.
- Bias and fairness: Policy should focus on (1) implementing bias testing protocols before institutional adoption of generative AI applications; and (2) ensuring that procurement criteria for AI applications include fairness audits and equity assessments. Research should focus on (1) developing methods for measuring the distributive impacts of AI adoption in under-resourced institutions; and (2) analysing disparities in access to high-quality AI applications across disciplines and institutions.
- Integrity and plagiarism: Policy should focus on (1) introducing integrity training that addresses AI authorship, citation, and originality; and (2) clarifying copyright and data licensing rules for AI-assisted outputs at the institutional level. Future research should be dedicated to (1) investigating educator perceptions of academic misconduct linked to AI use.
- Privacy and data protection: Policy should focus on (1) including AI-related data risks in institutional ethics review protocols; (2) requiring institutional approval and risk assessment before cloud-based AI applications are used for sensitive projects; and (3) educating researchers on anonymization, secure data entry, and the limits of data deletion in cloud-based AI services. Future research should focus on (1) studying how long-term storage of data by AI systems may affect the protection of intellectual property and the confidentiality of sensitive research; and (2) analysing how the use of AI in academic settings is shaped by GDPR requirements and institutional data protection policies, and where gaps may arise.
- Autonomy and oversight: Policy should focus on (1) reinforcing that AI should serve as a support tool, not a replacement for scholarly decision-making; (2) creating institutional guidelines on healthy AI usage levels and potential cognitive risks; and (3) providing psychological and organizational support for researchers navigating AI-induced changes to their work. Future research should focus on (1) analysing how AI affects educators’ autonomy and cognitive engagement; (2) exploring the psychological impacts of prolonged reliance on generative AI applications; and (3) investigating the role of institutional governance in preserving academic autonomy in the age of AI.
4.3. Theoretical Implications
4.4. Limitations of the Study
- The analysis focused solely on open-access publications indexed in the Web of Science and Scopus databases and published between 2022 and 2025. This scope may have excluded region-specific insights, grey literature, and relevant research published in non-indexed or non-English sources. The same applies to the background analysis conducted in the Introduction. The study may therefore be affected by language bias (inclusion of predominantly English-language publications) and publication bias (inclusion of open-access publications only).
- A further methodological limitation lies in the timeframe. The study analyses publications from 2022 onwards, coinciding with the public release of ChatGPT and the point at which the use of AI became a dominant topic of interest within academia. By excluding research papers published before 2022, the review does not capture earlier trends in AI use within higher education institutions. While ethical concerns were likely similar before 2022, reviews covering wider timeframes could more accurately depict the evolution of ethical concerns about AI use in higher education.
- The study relied on title and abstract screening supported by ChatGPT to identify records that explicitly or implicitly addressed ethical concerns. While this approach enhanced scalability, it may have overlooked subtler forms of ethical discussion embedded in full texts.
- We acknowledge the role of subjective judgment during the screening and inclusion phases, particularly in determining whether individual articles were relevant to the defined research questions. Although the process followed a structured methodology, our interpretation may have influenced decisions about which records entered the final sample of analysed articles.
- The policy recommendations and directions for future research presented in Section 4.2, like the screening decisions discussed above, represent a subjective construct developed by the authors of this article, and we present this as a limitation. These recommendations are not empirically substantiated and should be regarded as proposals for future research. They could be examined through empirical methods such as expert panel evaluations, focus group discussions with professionals specializing in education ethics and law, or triangulation with fieldwork approaches including interviews with university educators and institutional case studies.
5. Conclusions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
Activity | Sub-Activity | WoS Query |
Teaching | Preparation of study materials | KP=(“AI” OR “generative AI” OR “AI applications in education” OR “educational technology” OR “intelligent tutoring systems” OR “teaching materials” OR “instructional materials” OR “syllabus design” OR “curriculum development” OR “course planning” OR “lecture slides” OR “lesson planning” OR worksheets OR “learning resources” OR “content creation in education”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR “bias” OR “fairness” OR “responsible AI” OR “accountability” OR “transparency”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) |
Conducting lectures, seminars, and practical classes | KP=(“AI” OR “generative AI” OR “AI in teaching” OR “AI-powered instruction” OR “university lectures” OR “classroom teaching” OR “seminar facilitation” OR “in-person teaching” OR “face-to-face instruction” OR “online teaching” OR “remote instruction” OR “virtual classrooms” OR “synchronous teaching” OR “hybrid learning” OR “lecture delivery” OR “student engagement” OR “digital pedagogy”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR “bias” OR “fairness” OR “responsible AI” OR “accountability” OR “transparency”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Student assessment | KP=(“AI” OR “generative AI” OR “AI-assisted assessment” OR “automated grading” OR “AI in student evaluation” OR “digital assessment applications” OR “exam scoring” OR “essay grading” OR “test correction” OR “academic assessment” OR “formative assessment” OR “summative assessment” OR “e-assessment” OR “online exams” OR “electronic grading” OR “feedback automation” OR “university assessment”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR “fairness” OR “bias” OR “responsible AI” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Providing student consultations | KP=(“AI” OR “generative AI” OR “AI tutoring” OR “academic advising” OR “student consultations” OR “AI-assisted feedback” OR “support for academic writing” OR “thesis guidance” OR “assignment help” OR “digital tutoring” OR “intelligent tutoring systems” OR “personalized support” OR “academic mentoring” OR “learning support” OR “one-on-one teaching”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “student privacy” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Supervising final theses | KP=(“AI” OR “generative AI” OR “thesis supervision” OR “academic advising” OR “AI-assisted thesis writing” OR “research project mentoring” OR “student supervision” OR “academic writing support” OR “dissertation guidance” OR “thesis topic generation” OR “AI applications in academic writing” OR “supervisor-student interaction” OR “guidance for final projects” OR “digital support in thesis writing” OR “AI in research training”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Writing reviewer reports for final theses | KP=(“AI” OR “generative AI” OR “thesis evaluation” OR “academic review” OR “AI-assisted assessment” OR “reviewing student theses” OR “peer evaluation in higher education” OR “dissertation feedback” OR “AI applications for academic assessment” OR “academic critique” OR “quality assessment of final projects” OR “AI in academic writing evaluation” OR “academic judgment” OR “expert review of theses”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “academic integrity” OR “responsible AI” OR “fairness” OR “bias” OR “accountability” OR “transparency”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Coordinating internships, collaboration with professional practice, and field trips | KP=(“AI” OR “generative AI” OR “work-based learning” OR “internships in higher education” OR “professional practice coordination” OR “AI in experiential learning” OR “university-industry collaboration” OR “field trips in education” OR “AI-supported student placement” OR “practical training in university” OR “vocational training” OR “practice-based learning” OR “AI in curriculum integration” OR “academic-industry partnership”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Scientific research | Conducting research and development activities | KP=(“AI” OR “generative AI” OR “AI in research” OR “AI-supported scientific research” OR “AI applications for data analysis” OR “academic research with AI” OR “applied research” OR “basic research” OR “research methodology” OR “AI in study design” OR “AI-driven analysis” OR “scholarly writing” OR “AI in academic publishing” OR “scientific development” OR “university research activities”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR “bias” OR “fairness” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) |
Publishing research findings | KP=(“AI” OR “generative AI” OR “AI-assisted academic writing” OR “AI in scientific publishing” OR “AI in manuscript preparation” OR “AI applications for literature review” OR “scholarly communication” OR “academic publishing” OR “scientific writing with AI” OR “AI-supported paper writing” OR “AI in research dissemination” OR “AI in monograph writing” OR “publication process in higher education” OR “research communication applications” OR “university research outputs”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “plagiarism” OR “academic integrity” OR “responsible AI” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Submitting and managing research project proposals | KP=(“AI” OR “generative AI” OR “AI-assisted grant writing” OR “AI in research proposal development” OR “research funding applications” OR “AI applications for project writing” OR “grant proposal preparation” OR “academic funding support” OR “digital applications for research planning” OR “university research funding” OR “AI in academic project design”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR “transparency” OR “accountability” OR “intellectual property”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Cooperation with industry and practice | KP=(“AI” OR “generative AI” OR “university-industry collaboration” OR “knowledge transfer” OR “AI in practice-based research” OR “applied research partnerships” OR “AI-supported technology transfer” OR “collaboration between academia and industry” OR “AI in research-practice integration” OR “research impact” OR “academic-industry cooperation” OR “translational research with AI” OR “real-world applications of academic research”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR “fairness” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Organizing research events | KP=(“AI” OR “generative AI” OR “academic event organization” OR “scientific event planning” OR “AI-supported conference management” OR “research seminars” OR “academic workshops” OR “digital applications for academic event coordination” OR “AI in event logistics” OR “university research events” OR “organizing academic symposia” OR “technology-enhanced academic events” OR “higher education research dissemination”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR “fairness” OR “transparency” OR “accountability”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) | |
Other activities | Academic management | KP=(“AI” OR “generative AI” OR “academic governance” OR “university management” OR “AI in academic leadership” OR “decision-making in higher education” OR “faculty administration” OR “academic councils” OR “academic senate” OR “scientific boards” OR “AI-supported institutional management” OR “higher education administration” OR “strategic planning in academia” OR “digital applications in university governance”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR “transparency” OR “accountability” OR “fairness” OR “bias”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) |
Professional development | KP=(“AI” OR “generative AI” OR “professional development” OR “academic upskilling” OR “AI in teacher training” OR “self-directed learning” OR “lifelong learning in academia” OR “faculty development programs” OR “online courses for educators” OR “AI-supported professional learning” OR “higher education staff training” OR “digital competencies” OR “university teacher education” OR “technology-enhanced learning”) AND TS=(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR “fairness” OR “bias” OR “transparency” OR “accountability” OR “data protection”) AND TS=(“higher education” OR “university” OR “college” OR “tertiary education”) |
Appendix B
Activity | Sub-Activity | Scopus Query |
Teaching | Preparation of study materials | (KEY(“AI” OR “generative AI” OR “AI applications in education” OR “educational technology” OR “intelligent tutoring systems” OR “teaching materials” OR “instructional materials” OR “syllabus design” OR “curriculum development” OR “course planning” OR “lecture slides” OR “lesson planning” OR worksheets OR “learning resources” OR “content creation in education”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR bias OR fairness OR “responsible AI” OR accountability OR transparency)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) |
Conducting lectures, seminars, and practical classes | (KEY(“AI” OR “generative AI” OR “AI in teaching” OR “AI-powered instruction” OR “university lectures” OR “classroom teaching” OR “seminar facilitation” OR “in-person teaching” OR “face-to-face instruction” OR “online teaching” OR “remote instruction” OR “virtual classrooms” OR “synchronous teaching” OR “hybrid learning” OR “lecture delivery” OR “student engagement” OR “digital pedagogy”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “societal implications” OR bias OR fairness OR “responsible AI” OR accountability OR transparency)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Student assessment | (KEY(“AI” OR “generative AI” OR “AI-assisted assessment” OR “automated grading” OR “AI in student evaluation” OR “digital assessment applications” OR “exam scoring” OR “essay grading” OR “test correction” OR “academic assessment” OR “formative assessment” OR “summative assessment” OR “e-assessment” OR “online exams” OR “electronic grading” OR “feedback automation” OR “university assessment”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR fairness OR bias OR “responsible AI” OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Providing student consultations | (KEY(“AI” OR “generative AI” OR “AI tutoring” OR “academic advising” OR “student consultations” OR “AI-assisted feedback” OR “support for academic writing” OR “thesis guidance” OR “assignment help” OR “digital tutoring” OR “intelligent tutoring systems” OR “personalized support” OR “academic mentoring” OR “learning support” OR “one-on-one teaching”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “student privacy” OR “responsible AI” OR bias OR fairness OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Supervising final theses | (KEY(“AI” OR “generative AI” OR “thesis supervision” OR “academic advising” OR “AI-assisted thesis writing” OR “research project mentoring” OR “student supervision” OR “academic writing support” OR “dissertation guidance” OR “thesis topic generation” OR “AI applications in academic writing” OR “supervisor-student interaction” OR “guidance for final projects” OR “digital support in thesis writing” OR “AI in research training”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “academic integrity” OR “responsible AI” OR bias OR fairness OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Writing reviewer reports for final theses | (KEY(“AI” OR “generative AI” OR “thesis evaluation” OR “academic review” OR “AI-assisted assessment” OR “reviewing student theses” OR “peer evaluation in higher education” OR “dissertation feedback” OR “AI applications for academic assessment” OR “academic critique” OR “quality assessment of final projects” OR “AI in academic writing evaluation” OR “academic judgment” OR “expert review of theses”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “academic integrity” OR “responsible AI” OR fairness OR bias OR accountability OR transparency)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Coordinating internships, collaboration with professional practice, and field trips | (KEY(“AI” OR “generative AI” OR “work-based learning” OR “internships in higher education” OR “professional practice coordination” OR “AI in experiential learning” OR “university-industry collaboration” OR “field trips in education” OR “AI-supported student placement” OR “practical training in university” OR “vocational training” OR “practice-based learning” OR “AI in curriculum integration” OR “academic-industry partnership”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR bias OR fairness OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Scientific research | Conducting research and development activities | (KEY(“AI” OR “generative AI” OR “AI in research” OR “AI-supported scientific research” OR “AI applications for data analysis” OR “academic research with AI” OR “applied research” OR “basic research” OR “research methodology” OR “AI in study design” OR “AI-driven analysis” OR “scholarly writing” OR “AI in academic publishing” OR “scientific development” OR “university research activities”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR bias OR fairness OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) |
Publishing research findings | (KEY(“AI” OR “generative AI” OR “AI-assisted academic writing” OR “AI in scientific publishing” OR “AI in manuscript preparation” OR “AI applications for literature review” OR “scholarly communication” OR “academic publishing” OR “scientific writing with AI” OR “AI-supported paper writing” OR “AI in research dissemination” OR “AI in monograph writing” OR “publication process in higher education” OR “research communication applications” OR “university research outputs”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR plagiarism OR “academic integrity” OR “responsible AI” OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Submitting and managing research project proposals | (KEY(“AI” OR “generative AI” OR “AI-assisted grant writing” OR “AI in research proposal development” OR “research funding applications” OR “AI applications for project writing” OR “grant proposal preparation” OR “academic funding support” OR “digital applications for research planning” OR “university research funding” OR “AI in academic project design”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR transparency OR accountability OR “intellectual property”)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Cooperation with industry and practice | (KEY(“AI” OR “generative AI” OR “university-industry collaboration” OR “knowledge transfer” OR “AI in practice-based research” OR “applied research partnerships” OR “AI-supported technology transfer” OR “collaboration between academia and industry” OR “AI in research-practice integration” OR “research impact” OR “academic-industry cooperation” OR “translational research with AI” OR “real-world applications of academic research”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “intellectual property” OR “responsible AI” OR fairness OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Organizing research events | (KEY(“AI” OR “generative AI” OR “academic event organization” OR “scientific event planning” OR “AI-supported conference management” OR “research seminars” OR “academic workshops” OR “digital applications for academic event coordination” OR “AI in event logistics” OR “university research events” OR “organizing academic symposia” OR “technology-enhanced academic events” OR “higher education research dissemination”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “data protection” OR “responsible AI” OR fairness OR transparency OR accountability)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) | |
Other activities | Academic management | (KEY(“AI” OR “generative AI” OR “academic governance” OR “university management” OR “AI in academic leadership” OR “decision-making in higher education” OR “faculty administration” OR “academic councils” OR “academic senate” OR “scientific boards” OR “AI-supported institutional management” OR “higher education administration” OR “strategic planning in academia” OR “digital applications in university governance”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR transparency OR accountability OR fairness OR bias)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) |
Professional development | (KEY(“AI” OR “generative AI” OR “professional development” OR “academic upskilling” OR “AI in teacher training” OR “self-directed learning” OR “lifelong learning in academia” OR “faculty development programs” OR “online courses for educators” OR “AI-supported professional learning” OR “higher education staff training” OR “digital competencies” OR “university teacher education” OR “technology-enhanced learning”)) AND (TITLE-ABS-KEY(“ethical implications” OR “AI ethics” OR “ethical concerns” OR “legal implications” OR “AI regulation” OR “responsible AI” OR fairness OR bias OR transparency OR accountability OR “data protection”)) AND (TITLE-ABS-KEY(“higher education” OR university OR college OR “tertiary education”)) AND PUBYEAR > 2021 AND PUBYEAR < 2026 AND (LIMIT-TO(OA, “all”)) AND (LIMIT-TO(PUBSTAGE, “final”)) |
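Each Scopus query above follows the same three-block pattern: an activity-specific keyword group (KEY), an ethics-term group, and a higher-education context group (both TITLE-ABS-KEY), joined by AND and followed by the publication-year, open-access, and publication-stage filters. As an illustrative sketch only (the helper below is hypothetical and was not part of the review protocol), such strings can be assembled programmatically, which helps keep the fourteen sub-activity queries structurally consistent:

```python
def build_scopus_query(activity_terms, ethics_terms, context_terms,
                       year_from=2021, year_to=2026):
    """Assemble a Scopus Advanced Search string from the three term
    blocks used throughout this appendix (illustrative helper only)."""
    def group(field, terms):
        # Quote every term and join with OR inside the field restriction.
        quoted = " OR ".join(f'"{t}"' for t in terms)
        return f"({field}({quoted}))"

    parts = [
        group("KEY", activity_terms),
        group("TITLE-ABS-KEY", ethics_terms),
        group("TITLE-ABS-KEY", context_terms),
        f"PUBYEAR > {year_from}",
        f"PUBYEAR < {year_to}",
        '(LIMIT-TO(OA, "all"))',
        '(LIMIT-TO(PUBSTAGE, "final"))',
    ]
    return " AND ".join(parts)

# Abbreviated example mirroring the "Student assessment" row above.
q = build_scopus_query(
    ["AI", "generative AI", "automated grading"],
    ["AI ethics", "academic integrity", "bias"],
    ["higher education", "university"],
)
```

The same pattern, with the KP= and TS= field tags substituted for KEY and TITLE-ABS-KEY, reproduces the Web of Science queries in Appendix A.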
References
- Acosta-Enríquez, B. G., Arbulu Ballesteros, M., Vilcapoma Pérez, C. R., Huamaní Jordan, O., Martín Vergara, J. A., Martel Acosta, R., Arbulú Pérez Vargas, C. G., & Arbulú Castillo, J. C. (2025). AI in academia: How do social influence, self-efficacy, and integrity influence researchers’ use of AI models. Social Sciences and Humanities Open, 1(1), 100579. [Google Scholar] [CrossRef]
- Agrawal, T. S. (2024). Ethical implications of AI in decision-making: Exploring bias, accountability, and transparency in autonomous systems. International Journal of Science and Research (IJSR), 13, 20–21. [Google Scholar] [CrossRef]
- Aljuaid, H. (2024). The impact of AI applications on academic writing instruction in higher education: A systematic review. Arab World English Journal, 26–55. [Google Scholar] [CrossRef]
- Allam, H. M., Gyamfi, B., & Al Omar, B. (2025). Sustainable innovation: Harnessing AI and living intelligence to transform higher education. Education Sciences, 15(4), 398. [Google Scholar] [CrossRef]
- Alqahtani, T., Badreldin, H. A., Alrashed, M., Alshaya, A. I., Alghamdi, S. S., bin Saleh, K., Alowais, S. A., Alshaya, O. A., Rahman, I., Al Yami, M. S., & Albekairy, A. M. (2023). The emergent role of AI, natural language processing, and large language models in higher education and research. Research in Social and Administrative Pharmacy, 19(8), 1236–1242. [Google Scholar] [CrossRef]
- Alrayes, A., Henari, T. F., & Ahmed, D. A. (2024). ChatGPT in education—Understanding the Bahraini academics’ perspective. The Electronic Journal of e-Learning, 22(2), 112–134. [Google Scholar] [CrossRef]
- Al-Zahrani, A. M. (2024). Unveiling the shadows: Beyond the hype of AI in education. Heliyon, 10(9), e30696. [Google Scholar] [CrossRef]
- Alzakwani, M. H. H., Zabri, S. M., & Ali, R. R. (2025). Enhancing university teaching and learning through integration of AI in information and communication technology. Edelweiss Applied Science and Technology, 9(1), 1345–1357. [Google Scholar] [CrossRef]
- Ansari, A. N., Ahmad, S., & Bhutta, S. M. (2024). Mapping the global evidence around the use of ChatGPT in higher education: A systematic scoping review. Education and Information Technologies, 29(9), 11281–11321. [Google Scholar] [CrossRef]
- Beijaard, D., Meijer, P. C., & Verloop, N. (2004). Reconsidering research on teachers’ professional identity. Teaching and Teacher Education, 20(2), 107–128. [Google Scholar] [CrossRef]
- Bozkurt, A. (2023). Unleashing the potential of generative AI, conversational agents and Chatbots in educational praxis: A systematic review and bibliometric analysis of GenAI in education. Open Praxis, 15(4), 261–270. [Google Scholar] [CrossRef]
- Butson, R., & Spronken-Smith, R. (2024). AI and its implications for research in higher education: A critical dialogue. Higher Education Research and Development, 43(3), 563–577. [Google Scholar] [CrossRef]
- Celik, I., Dindar, M., Muukkonen, H., & Jarvela, S. (2022). The promises and challenges of AI for teachers: A systematic review of research. TechTrends, 66(4), 616–630. [Google Scholar] [CrossRef]
- Chee, H., Ahn, S., & Lee, J. (2024). A competency framework for AI literacy: Variations by different learner groups and an implied learning pathway. British Journal of Educational Technology, 56(5), 2146–2182. [Google Scholar] [CrossRef]
- Chintoh, G. A., Segun-Falade, O. D., Odionu, C. S., & Ekeh, A. H. (2024). Legal and ethical challenges in AI governance: A conceptual approach to developing ethical compliance models in the U.S. International Journal of Social Science Exceptional Research, 3(1), 103–109. [Google Scholar] [CrossRef]
- Chopra, P. (2024). Ethical implications of AI in financial services: Bias, transparency, and accountability. International Journal of Scientific Research in Computer Science, Engineering and Information Technology, 10(5), 306–314. [Google Scholar] [CrossRef]
- Colnerud, G. (2013). Brief report: Ethical problems in research practice. Journal of Empirical Research on Human Research Ethics, 8(4), 37–41. [Google Scholar] [CrossRef]
- Cowling, M., Crawford, J., Allen, K.-A., & Wehmeyer, M. (2023). Using leadership to leverage ChatGPT and AI for undergraduate and postgraduate research supervision. Australasian Journal of Educational Technology, 39(4), 89–103. [Google Scholar] [CrossRef]
- Deep, P. D., Martirosyan, N., Ghosh, N., & Rahaman, M. S. (2025). ChatGPT in ESL higher education: Enhancing writing, engagement, and learning outcomes. Information, 16(4), 316. [Google Scholar] [CrossRef]
- Demiröz, H., & Tıkız-Ertürk, G. (2025). A review on conversational AI as an application in academic writing. Eskiyeni, 56, 469–496. [Google Scholar] [CrossRef]
- Dempere, J., Modugu, K., Allam, H., & Ramasamy, L. K. (2023). The impact of ChatGPT on higher education. Frontiers in Education, 8, 1206936. [Google Scholar] [CrossRef]
- Drugova, E., Zhuravleva, I., Zakharova, U., & Latipov, A. (2024). Learning analytics driven improvements in learning design in higher education: A systematic literature review. Journal of Computer Assisted Learning, 40(2), 510–524. [Google Scholar] [CrossRef]
- Dzogovic, S. A., Zdravkovska-Adamova, B., & Serpil, H. (2024). From theory to practice: A holistic study of the application of AI methods and techniques in higher education and science. Human Research in Rehabilitation, 14(2), 293–311. [Google Scholar] [CrossRef]
- Ekundayo, T., Khan, Z., & Ali Chaudhry, S. (2024). ChatGPT’s integration in GCC higher education: Bibliometric analysis of trends. Educational Process: International Journal, 13(3), 69–84. [Google Scholar] [CrossRef]
- Farber, S. (2025). Comparing human and AI expertise in the academic peer review process: Towards a hybrid approach. Higher Education Research and Development, 44(4), 871–885. [Google Scholar] [CrossRef]
- Farrelly, T., & Baker, N. (2023). Generative AI: Implications and considerations for higher education practice. Education Sciences, 13(11), 1109. [Google Scholar] [CrossRef]
- Floridi, L. (2024). The ethics of artificial intelligence: Exacerbated problems, renewed problems, unprecedented problems—Introduction to the special issue of the American Philosophical Quarterly dedicated to the ethics of AI. SSRN Electronic Journal. [Google Scholar] [CrossRef]
- Francis, N. J., Jones, S., & Smith, D. P. (2025). Generative AI in higher education: Balancing innovation and integrity. British Journal of Biomedical Science, 81(1), 152. [Google Scholar] [CrossRef]
- Gama, F., & Magistretti, P. (2023). A review of innovation capabilities and a taxonomy of AI applications. Journal of Product Innovation Management, 42(1), 76–111. [Google Scholar] [CrossRef]
- Garlinska, M., Osial, M., Proniewska, K., & Pregowska, A. (2023). The influence of emerging technologies on distance education. Electronics, 12(7), 1550. [Google Scholar] [CrossRef]
- Giray, L. (2024). Negative effects of generative AI on researchers: Publishing addiction, Dunning-Kruger effect and skill erosion. Journal of Applied Learning and Teaching, 7(2), 398–405. [Google Scholar] [CrossRef]
- Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2023). A taxonomy of generative AI applications. arXiv, arXiv:2306.02781. [Google Scholar] [CrossRef]
- Haroud, S., & Saqri, N. (2025). Generative AI in higher education: Teachers’ and students’ perspectives on support, replacement, and digital literacy. Education Sciences, 15(4), 396. [Google Scholar] [CrossRef]
- Isiaku, L., Muhammad, A. S., Kefas, H. I., & Ukaegbu, F. C. (2024). Enhancing technological sustainability in academia: Leveraging ChatGPT for teaching, learning and evaluation. Quality Education for All, 1(1), 385–416. [Google Scholar] [CrossRef]
- Kamali, J., Alpat, M. F., & Bozkurt, A. (2024). AI ethics as a complex and multifaceted challenge: Decoding educators’ AI ethics alignment through the lens of activity theory. International Journal of Educational Technology in Higher Education, 21(1), 62. [Google Scholar] [CrossRef]
- Kazimova, D., Tazhigulova, G., Shraimanova, G., Zatyneyko, A., & Sharzadin, A. (2025). Transforming university education with AI: A systematic review of technologies, applications, and implications. International Journal of Engineering Pedagogy, 15(1), 4–24. [Google Scholar] [CrossRef]
- Kelchtermans, G. (2009). Who I am in how I teach is the message: Self-understanding, vulnerability and reflection. Teachers and Teaching, 15(2), 257–272. [Google Scholar] [CrossRef]
- Khairullah, S. A., Harris, S., Hadi, H. J., Sandhu, R. A., Ahmad, N., & Alshara, M. A. (2025). Implementing AI in academic and administrative processes through responsible strategic leadership in the higher education institutions. Frontiers in Education, 10, 1548104. [Google Scholar] [CrossRef]
- Kovac, J. (2018). Ethical problem solving. In The ethical chemist. Oxford University Press. [Google Scholar] [CrossRef]
- Kumar, R. (2024). Ethics of artificial intelligence and automation: Balancing innovation and responsibility. Journal of Computer, Signal, and System Research, 1, 1–8. [Google Scholar] [CrossRef]
- Kurtz, G., Amzalag, M., Shaked, N., Zaguri, Y., Kohen-Vacs, D., Gal, E., Zailer, G., & Barak-Medina, E. (2024). Strategies for integrating generative AI into higher education: Navigating challenges and leveraging opportunities. Education Sciences, 14(5), 503. [Google Scholar] [CrossRef]
- Lee, S. S., & Moore, R. L. (2024). Harnessing Generative AI (GenAI) for automated feedback in higher education: A systematic review. Online Learning, 28(3), 82–106. [Google Scholar] [CrossRef]
- Llurba, C., & Palau, R. (2024). Real-time emotion recognition for improving the teaching–learning process: A scoping review. Journal of Imaging, 10(12), 313. [Google Scholar] [CrossRef] [PubMed]
- Lopez-Regalado, O., Nunez-Rojas, N., Lopez-Gil, O. R., & Sanchez-Rodriguez, J. (2024). Analysis of the use of AI in university education: A systematic review. Pixel-Bit: Revista de Medios y Educación, 70, 97–122. [Google Scholar] [CrossRef]
- Luckin, R., Rudolph, J., Grünert, M., & Tan, S. (2024). Exploring the future of learning and the relationship between human intelligence and AI: An interview with professor Rose Luckin. Journal of Applied Learning and Teaching, 7(1), 346–363. [Google Scholar] [CrossRef]
- Lünich, M., Keller, B., & Marcinkowski, F. (2024). Fairness of academic performance prediction for the distribution of support measures for students: Differences in perceived fairness of distributive justice norms. Technology, Knowledge and Learning, 29(2), 1079–1107. [Google Scholar] [CrossRef]
- Ma, J., Wen, J., Qiu, Y., Wang, Y., Xiao, Q., Liu, T., Zhang, D., Zhao, Y., Lu, Z., & Sun, Z. (2025). The Role of AI in shaping nursing education: A comprehensive systematic review. Nurse Education in Practice, 84, 104345. [Google Scholar] [CrossRef] [PubMed]
- Maeda, Y., Caskurlu, S., Kenney, R. H., Kozan, K., & Richardson, J. C. (2022). Moving qualitative synthesis research forward in education: A methodological systematic review. Educational Research Review, 35, 100424. [Google Scholar] [CrossRef]
- Mahrishi, M., Abbas, A., Radovanović, D., & Hosseini, S. (2024). Emerging dynamics of ChatGPT in academia: A scoping review. Journal of University Teaching and Learning Practice, 21(1), 8. [Google Scholar] [CrossRef]
- Mezirow, J. (1991). Transformative dimensions of adult learning. Adult Education Quarterly, 42(3), 195–197. [Google Scholar] [CrossRef]
- Mishra, P., & Koehler, M. J. (2006). Technological pedagogical content knowledge: A framework for teacher knowledge. Teachers College Record, 108(6), 1017–1054. [Google Scholar] [CrossRef]
- Nartey, E. K. (2024). Guiding principles of generative AI for employability and learning in UK universities. Cogent Education, 11(1), 2357898. [Google Scholar] [CrossRef]
- National Council of the Slovak Republic. (2002). Act No. 131/2002 Coll. on higher education institutions and on amendments and supplements to certain Acts. Ministry of Justice of the Slovak Republic. Available online: https://www.slov-lex.sk/pravne-predpisy/SK/ZZ/2002/131/ (accessed on 2 July 2025).
- Neyigapula, B. S. (2024). Ethical considerations in AI development: Balancing autonomy and accountability. Journal of Advances in Artificial Intelligence, 2, 138–148. [Google Scholar] [CrossRef]
- Nikoçeviq-Kurti, E., & Bërdynaj-Syla, L. (2024). ChatGPT integration in higher education: Impacts on teaching and professional development of university professors. Educational Process: International Journal, 13(3), 22–39. [Google Scholar] [CrossRef]
- Nong, P., Hamasha, R., & Platt, J. (2024). Equity and AI governance at academic medical centers. American Journal of Managed Care, 30, 468–472. [Google Scholar] [CrossRef]
- Ocen, S., Elasu, J., Aarakit, S. M., & Olupot, C. (2025). AI in higher education institutions: Review of innovations, opportunities and challenges. Frontiers in Education, 10, 1530247. [Google Scholar] [CrossRef]
- Pai, R. Y., Shetty, A., Dinesh, T. K., Shetty, A. D., & Pillai, N. (2024). Effectiveness of social robots as a tutoring and learning companion: A bibliometric analysis. Cogent Business & Management, 11(1). [Google Scholar] [CrossRef]
- Pang, T. Y., Kootsookos, A., & Cheng, C.-T. (2024). AI use in feedback: A qualitative analysis. Journal of University Teaching and Learning Practice, 21(6), 108–125. [Google Scholar] [CrossRef]
- Pang, W., & Wei, Z. (2025). Shaping the future of higher education: A technology usage study on generative AI innovations. Information, 16(2), 95. [Google Scholar] [CrossRef]
- Phokoye, S. P., Epizitone, A., Nkomo, N., Mthalane, P. P., Moyane, S. P., Khumalo, M. M., & Luthuli, M. (2024). Exploring the adoption of robotics in teaching and learning in higher education institutions. Informatics, 11(4), 91. [Google Scholar] [CrossRef]
- Pudasaini, S., Miralles-Pechuan, L., Lillis, D., & Llorens Salvador, M. (2024). Survey on AI-generated plagiarism detection: The impact of large language models on academic integrity. Journal of Academic Ethics, 23, 1137–1170. [Google Scholar] [CrossRef]
- Qadhi, S. M., Alduais, A., Chaaban, Y., & Khraisheh, M. (2024). Generative AI, research ethics, and higher education research: Insights from a scientometric analysis. Information, 15(6), 325. [Google Scholar] [CrossRef]
- Retscher, G. (2025). Exploring the intersection of AI and higher education: Opportunities and challenges in the context of geomatics education. Applied Geomatics, 17(1), 49–61. [Google Scholar] [CrossRef]
- Robinson, J. R., Stey, A., Schneider, D. F., Kothari, A. N., Lindeman, B., Kaafarani, H. M., & Haines, K. L. (2025). Generative AI in academic surgery: Ethical implications and transformative potential. Journal of Surgical Research, 307, 212–220. [Google Scholar] [CrossRef] [PubMed]
- Roxas, R. E. O., & Recario, R. N. C. (2024). Scientific landscape on opportunities and challenges of large language models and natural language processing. Indonesian Journal of Electrical Engineering and Computer Science, 36(1), 252–263. [Google Scholar] [CrossRef]
- Salas-Pilco, S. Z., & Yang, Y. (2022). AI applications in Latin American higher education: A systematic review. International Journal of Educational Technology in Higher Education, 19(1), 21. [Google Scholar] [CrossRef]
- Sargiotis, D. (2024). Ethical AI in information technology: Navigating bias, privacy, transparency, and accountability. Advances in Machine Learning & Artificial Intelligence, 5(3), 1–14. [Google Scholar] [CrossRef]
- Sembey, R., Hoda, R., & Grundy, J. (2024). Emerging technologies in higher education assessment and feedback practices: A systematic literature review. Journal of Systems and Software, 211, 111988. [Google Scholar] [CrossRef]
- Sengul, C., Neykova, R., & Destefanis, G. (2024). Software engineering education in the era of conversational AI: Current trends and future directions. Frontiers in AI, 7, 1436350. [Google Scholar] [CrossRef]
- Shakib Kotamjani, S., Shirinova, S., & Fahimirad, M. (2023). Lecturers’ perceptions of using AI in tertiary education in Uzbekistan. In Proceedings of the 2023 International Conference on Innovation and Technology in Education (pp. 570–578). ACM. [Google Scholar] [CrossRef]
- Sharadgah, T. A., & Sa’di, R. A. (2022). A systematic review of research on the use of AI in English language teaching and learning (2015–2021): What are the current effects? Journal of Information Technology Education: Research, 21, 337–377. [Google Scholar] [CrossRef]
- Shorey, S., Mattar, C., Pereira, T. L.-B., & Choolani, M. (2024). A scoping review of ChatGPT’s role in healthcare education and research. Nurse Education Today, 135, 106121. [Google Scholar] [CrossRef]
- Shukla, S. (2024). Principles governing ethical development and deployment of AI. International Journal of Engineering, Business and Management, 8(2), 26–46. [Google Scholar] [CrossRef]
- Sobaih, A. E. E. (2024). Ethical concerns for using AI chatbots in research and publication: Evidences from Saudi Arabia. Journal of Applied Learning and Teaching, 7(1), 17. [Google Scholar] [CrossRef]
- Soodan, V., Rana, A., Jain, A., & Sharma, D. (2024). AI chatbot adoption in academia: Task fit, usefulness, and collegial ties. Journal of Information Technology Education: Innovations in Practice, 23, 1. [Google Scholar] [CrossRef] [PubMed]
- Tapalova, O., & Zhiyenbayeva, N. (2022). AI in education: AIEd for personalised learning pathways. Electronic Journal of e-Learning, 20(5), 639–653. [Google Scholar] [CrossRef]
- Tapullima-Mori, C., Mamani-Benito, O., Turpo-Chaparro, J. E., Olivas-Ugarte, L. O., & Carranza-Esteban, R. F. (2024). AI in university education: Bibliometric review in Scopus and Web of Science. Revista Electrónica Educare, 28(S), 18489. [Google Scholar] [CrossRef]
- Tong, A., Flemming, K., McInnes, E., Oliver, S., & Craig, J. (2012). Enhancing transparency in reporting the synthesis of qualitative research: ENTREQ. BMC Medical Research Methodology, 12(181), 181. [Google Scholar] [CrossRef]
- Ulla, M. B., Advincula, M. J. C., Mombay, C. D. S., Mercullo, H. M. A., Nacionales, J. P., & Entino-Señorita, A. D. (2024). How can GenAI foster an inclusive language classroom? A critical language pedagogy perspective from Philippine university teachers. Computers and Education: AI, 7, 100314. [Google Scholar] [CrossRef]
- van den Berg, G., & du Plessis, E. (2023). ChatGPT and generative AI: Possibilities for its contribution to lesson planning, critical thinking and openness in teacher education. Education Sciences, 13(10), 998. [Google Scholar] [CrossRef]
- Vygotsky, L. S. (1978). Mind in society: The development of higher psychological processes. Harvard University Press. [Google Scholar]
- Wilkinson, C., Oppert, M., & Owen, M. (2024). Investigating academics’ attitudes towards ChatGPT: A qualitative study. Australasian Journal of Educational Technology, 40(4), 104–119. [Google Scholar] [CrossRef]
- Williams, R. T. (2024). The ethical implications of using generative chatbots in higher education. Frontiers in Education, 8, 1331607. [Google Scholar] [CrossRef]
- Yaroshenko, T. O., & Iaroshenko, O. I. (2023). Artificial Intelligence (AI) for research lifecycle: Challenges and opportunities. University Library at a New Stage of Social Communications Development. Conference Proceedings, 2023(8), 194–201. [Google Scholar] [CrossRef]
- Ye, J., Wang, H., Wu, Y., Wang, X., Wang, J., Liu, S., & Qu, H. (2024). A survey of generative AI for visualization. arXiv, arXiv:2404.18144. [Google Scholar] [CrossRef]
Activity | Sub-Activity | Web of Science | Scopus |
---|---|---|---|
Teaching | Preparation of study materials | 46 | 360 |
Conducting lectures, seminars, and practical classes | 51 | 362 | |
Student assessment | 71 | 451 | |
Providing student consultations | 45 | 331 | |
Supervising final theses | 57 | 431 | |
Writing reviewer reports for final theses | 54 | 411 | |
Coordinating internships, collaboration with professional practice, and field trips | 46 | 335 | |
Scientific research | Conducting research and development activities | 50 | 344 |
Publishing research findings | 54 | 374 | |
Submitting and managing research project proposals | 38 | 224 | |
Cooperation with industry and practice | 71 | 261 | |
Organizing research events | 40 | 243 | |
Other activities | Academic management | 42 | 316 |
Professional development | 52 | 357 |
Activity | Sub-Activity | Web of Science | Scopus |
---|---|---|---|
Teaching | Preparation of study materials | 2 | 15
Teaching | Conducting lectures, seminars, and practical classes | 9 | 6
Teaching | Student assessment | 2 | 6
Teaching | Providing student consultations | 4 | 2
Teaching | Supervising final theses | 0 | 1
Teaching | Preparation of opponent reviews | 6 | 2
Teaching | Coordinating internships, collaboration with professional practice, and field trips | 0 | 3
Scientific research | Conducting research and development activities | 5 | 10
Scientific research | Publishing research findings | 1 | 13
Scientific research | Submitting and managing scientific research project proposals | 0 | 2
Scientific research | Cooperation with industry and practice | 0 | 2
Scientific research | Organizing research events | 0 | 1
Other activities | Academic management | 0 | 4
Other activities | Professional development | 2 | 5
Activity | Sub-Activity | Records |
---|---|---|
Teaching | Preparation of study materials | 6
Teaching | Conducting lectures, seminars, and practical classes | 2
Teaching | Student assessment | 6
Teaching | Providing student consultations | 0
Teaching | Supervising final theses | 1
Teaching | Preparation of opponent reviews | 2
Teaching | Coordinating internships, collaboration with professional practice, and field trips | 0
Scientific research | Conducting research and development activities | 8
Scientific research | Publishing research findings | 9
Scientific research | Submitting and managing scientific research project proposals | 0
Scientific research | Cooperation with industry and practice | 0
Scientific research | Organizing research events | 0
Other activities | Academic management | 3
Other activities | Professional development | 5
Ethical Problems | AI Applications | References |
---|---|---|
Responsibility for the quality and accuracy of AI-generated educational content. | ChatGPT | Alzakwani et al. (2025) |
Ambiguity around copyright and unclear data licensing can lead to unauthorized use of protected material. | ChatGPT and other generative AI applications | Alzakwani et al. (2025) |
Absence of transparency in disclosing AI-assisted content may raise concerns about academic integrity. | AI applications in general 1 | Qadhi et al. (2024) |
Unequal access to premium AI applications may lower personalization and content quality in less-resourced settings. | ChatGPT (premium vs. free), basic alternatives | Shakib Kotamjani et al. (2023) |
Overuse of AI might reduce educators’ motivation in content creation. | ChatGPT and other generative AI applications | Haroud and Saqri (2025) |
Ethical Problems | AI Applications | References |
---|---|---|
Students with better query skills may receive more helpful responses, which raises fairness concerns and blurs responsibility for incorrect outputs. | ChatGPT and other chatbots | Retscher (2025); Kazimova et al. (2025) |
Automated grading may introduce algorithmic bias and penalize creative or culturally diverse responses; unclear scoring may undermine trust. | Automated grading systems | Retscher (2025); Kazimova et al. (2025) |
Learning analytics may compromise privacy and lead to profiling if anonymization is insufficient. | Learning analytics platforms | Retscher (2025); Kazimova et al. (2025) |
Adaptive systems may misjudge skill levels or disadvantage students with limited access or technical familiarity. | Intelligent tutoring systems, adaptive learning platforms | Kazimova et al. (2025) |
Automated writing applications may cause overreliance and reduce originality, raising concerns about learning outcomes and intellectual property. | Grammarly, WriteLab | Kazimova et al. (2025) |
Ethical Problems | AI Applications | References |
---|---|---|
Students may submit AI-generated texts as their own, increasing the risk of cheating and plagiarism. | ChatGPT and other generative models | Ulla et al. (2024); Williams (2024) |
Use of AI cloud services creates sensitive digital records; full data deletion is practically unachievable. | Cloud-based AI writing applications, feedback generators | Ulla et al. (2024); T. Y. Pang et al. (2024) |
Language models may reproduce bias in feedback, leading to unequal treatment of students. | Large language models | T. Y. Pang et al. (2024) |
Educators remain legally responsible for flawed AI-generated feedback; AI must not replace human judgment. | AI feedback applications, rubric-based generators | T. Y. Pang et al. (2024); Cowling et al. (2023) |
Students must be informed about how and why AI is used in assessment to ensure transparency and trust. | Any AI applications used in assessment workflows | Williams (2024) |
Ethical Problems | AI Applications | References |
---|---|---|
Inability of the application to understand the specific context of an individual research project. | ChatGPT | Cowling et al. (2023)
Reproduction of historical biases and stereotypes, e.g., related to gender or culture. | ChatGPT | Cowling et al. (2023)
Lack of grounding in research ethics; generated suggestions may contradict principles of research integrity. | ChatGPT | Cowling et al. (2023)
Ethical Problems | AI Applications | References |
---|---|---|
Transfer of bias from training data may disadvantage underrepresented topics or authors. | Claude-3 | Farber (2025) |
Lack of transparency in how the model generates evaluations makes it difficult to justify recommendations. | Claude-3 | Farber (2025) |
Leniency and idealism in reviews can lead to inconsistency with human assessments and risk undermining review quality. | Claude-3 | Farber (2025) |
Overlooked key studies and irrelevant literature reduce the reliability of scholarly evaluation. | Claude-3 | Farber (2025) |
Over-reliance on AI applications may reduce the reviewer’s critical engagement. | Claude-3 | Farber (2025) |
Hallucinations (fabricated or inaccurate content) undermine the credibility of the review. | ChatGPT, Gemini | Francis et al. (2025) |
Generative models may reproduce gender, cultural, or ethnic stereotypes. | ChatGPT, Gemini | Francis et al. (2025) |
Processing unpublished manuscripts in AI applications may breach data protection laws or intellectual property rights. | ChatGPT, Gemini | Francis et al. (2025) |
Reviewers may offload decision-making to AI, weakening professional responsibility and evaluative integrity. | ChatGPT, Gemini | Francis et al. (2025) |
Ethical Problems | AI Applications | References |
---|---|---|
AI applications may produce hallucinated content, exhibit algorithmic bias, rely on outdated data, and make decisions without transparency or clear accountability. | ChatGPT, Llama-2, Jasper Chat, Google Bard, Microsoft Bing | Yaroshenko and Iaroshenko (2023) |
AI applications can generate misleading outputs, plagiarized content, or lack contextual understanding, while posing risks related to system opacity and data privacy. | ChatGPT | Alqahtani et al. (2023) |
The use of biased datasets and opaque algorithms can compromise the reliability of results, expose sensitive data, limit access, and lead to overreliance on automation. | Google Assistant, Amazon Alexa | Dzogovic et al. (2024) |
Generative AI may lead to privacy violations, confusion over authorship, plagiarism, and misinformation, while also diminishing collaboration and contributing to mental fatigue. | ChatGPT, Bard, Bing Chat, Ernie | Sobaih (2024) |
AI platforms may operate opaquely, leak data, produce falsified results, reduce researcher autonomy, and enable unethical research practices. | Claude, Gemini, ScopusAI, Elicit, ResearchRabbit | Acosta-Enriquez et al. (2025) |
AI applications may generate inaccurate content, blur authorship, weaken critical thinking, violate privacy, and raise unresolved intellectual property questions. | ChatGPT, Midjourney, Copilot, Gemini | Kurtz et al. (2024) |
AI models may rely on hidden processes, embed biased assumptions, obscure the origin of content, and pose legal challenges in data protection. | AI applications in general 1 | Al-Zahrani (2024) |
The use of AI may undermine clear authorship, hide decision processes, marginalize qualitative research, complicate data consent, and reduce originality in peer review. | Rayyan, Scite, Elicit, Covidence, AskYourPDF, Papers | Butson and Spronken-Smith (2024) |
Without institutional guidance, AI applications may threaten academic integrity, generate hallucinations, distort scholarly content, reduce creativity, and be used without ethical training. | AI applications in general 1 | Nartey (2024) |
Ethical Problems | AI Applications | References |
---|---|---|
AI-generated text may reproduce source-based plagiarism patterns from training data without proper attribution. | ChatGPT, Jasper Chat, Gemini, LLaMA-2, WordAI, CopyAI, Wordtune, QuillBot | Robinson et al. (2025); Giray (2024); Mahrishi et al. (2024) |
Plagiarism detection systems may fail to detect AI-paraphrased text, enabling unethical manuscript practices. | QuillBot, CopyAI | Robinson et al. (2025) |
AI applications may fabricate data or citations (hallucinations), presenting unverifiable information as fact. | ChatGPT, Bard | Yaroshenko and Iaroshenko (2023); Shorey et al. (2024) |
The lack of algorithmic processes prevents users from understanding how outputs are generated, complicating attribution and academic responsibility. | Jasper Chat, Gemini, ChatGPT, LLaMA-2 | Mahrishi et al. (2024); Roxas and Recario (2024) |
It remains unclear who bears academic, legal, or ethical responsibility for AI-generated content, raising authorship and accountability concerns. | Generative AI applications | Wilkinson et al. (2024); Giray (2024) |
Unequal access to paid AI applications may deepen global inequalities in research productivity and publishing capacity. | ChatGPT | Sobaih (2024); Roxas and Recario (2024) |
Failure to disclose AI assistance can mislead readers about the human contribution to the work. | Generative AI applications | Giray (2024); Wilkinson et al. (2024) |
AI applications may be used to bypass plagiarism detection through automated paraphrasing. | QuillBot, WordAI | Robinson et al. (2025) |
AI-generated manuscripts submitted to predatory journals may contribute to the spread of unverifiable or low-quality academic content. | Generative AI applications | Giray (2024) |
Excessive dependence on AI content generation may weaken researchers’ critical thinking, reasoning, and collaboration. | ChatGPT, Bard, Grammarly | Mahrishi et al. (2024); Giray (2024) |
Intellectual property ownership of AI-generated content is unclear, raising legal concerns about who may claim authorship and publication rights. | ChatGPT, Ernie, Bard | Shorey et al. (2024); Sobaih (2024) |
Ethical Problem | AI Applications | References |
---|---|---|
Lack of transparency in AI models complicates auditing and verification processes. | Predictive analytics (e.g., Epic Systems AI) | Nong et al. (2024) |
Decisions about AI deployment are made by narrow expert teams, excluding broader governance structures. | Predictive analytics systems | Nong et al. (2024) |
Institutions lacking resources adopt pre-packaged systems without local validation, reinforcing inequalities. | Predictive analytics systems | Nong et al. (2024) |
Absence of “equity literacy” hinders recognition and correction of unfair AI-driven decisions. | Predictive analytics systems | Nong et al. (2024) |
AI software presents risks of data leakage, hacking, and unauthorized data processing. | Predictive models, performance monitoring applications | Alzakwani et al. (2025)
Overdependence on AI undermines academic autonomy and weakens accountability structures. | Chatbots, scheduling algorithms, performance monitoring applications | Alzakwani et al. (2025) |
Ethical Problem | AI Applications | References |
---|---|---|
AI-generated content may be factually incorrect, leading to internalization of false knowledge. | ChatGPT | Luckin et al. (2024); Kamali et al. (2024) |
Overreliance on AI may weaken educators’ critical thinking and pedagogical autonomy. | ChatGPT | Nikoçeviq-Kurti and Bërdynaj-Syla (2024); van den Berg and du Plessis (2023)
Absence of institutional guidelines results in ethically problematic individual decision-making. | ChatGPT | Kamali et al. (2024); Nikoçeviq-Kurti and Bërdynaj-Syla (2024); van den Berg and du Plessis (2023) |
Lack of transparency in AI systems reduces trust and hinders adoption for professional development. | ChatGPT | Al-Zahrani (2024) |
Category of AI | Teaching 1 |
---|---|
Generative AI models and language models | ChatGPT, Claude-3, Gemini, Other generative AI models. |
Text generation and editing applications | Grammarly, WriteLab, Other cloud-based AI writing applications, feedback generators. |
Educational and assessment platforms | Gradescope, Knewton Alta, Knowji, Duolingo, Smart Sparrow, Automated grading systems, Learning analytics platforms, Intelligent tutoring systems, AI feedback applications, rubric-based generators, Academic Performance Prediction systems. |
Category of AI | Scientific Research 2 |
---|---|
Generative AI models and language models | ChatGPT, LLaMA-2, Jasper Chat, Gemini (Google Bard), Bing Chat (Microsoft Bing), Ernie, Claude, Copilot.
Text generation and editing applications | WordAI, CopyAI, Wordtune, QuillBot, Grammarly, Microsoft Office Dictation.
Research support and source management | Semantic Scholar, SciFact, Consensus, Research Rabbit, Semantic Reader, ChatPDF, Elicit, ScopusAI, AskYourPDF, Papers, Rayyan, Scite, Covidence.
Visualization and design applications | Canva AI, Designs.ai, DesignerBot, Midjourney.

Category of AI | Other Activities 3 |
---|---|
Generative AI models and language models | ChatGPT, Gemini, Claude-3
Analytical and managerial AI applications | Predictive analytics systems, Performance monitoring applications, Scheduling algorithms, Administrative chatbots
Ethical Problem | Ethical Category |
---|---|
Learning analytics may compromise privacy and lead to profiling if anonymization is insufficient. | Privacy and data protection |
Use of AI cloud services creates sensitive digital records; full data deletion is practically unachievable. | Privacy and data protection |
Students with better query skills may receive more helpful responses, which raises fairness concerns and blurs responsibility for incorrect outputs. | Bias and fairness, Transparency and accountability |
Automated grading may introduce algorithmic bias and penalize creative or culturally diverse responses; unclear scoring may undermine trust. | Bias and fairness |
Language models may reproduce bias in feedback, leading to unequal treatment of students. | Bias and fairness |
Reproduction of historical biases and stereotypes, e.g., related to gender or culture. | Bias and fairness |
Adaptive systems may misjudge skill levels or disadvantage students with limited access or technical familiarity. | Bias and fairness |
Unequal access to premium AI applications may lower personalization and content quality in less-resourced settings. | Bias and fairness |
Overuse of AI might reduce educators’ motivation in content creation. | Autonomy and oversight |
Overreliance on AI may weaken educators’ critical thinking and pedagogical autonomy. | Autonomy and oversight |
Responsibility for the quality and accuracy of AI-generated educational content. | Transparency and accountability |
Absence of transparency in disclosing AI-assisted content may raise concerns about academic integrity. | Transparency and accountability, Integrity and plagiarism |
Students must be informed about how and why AI is used in assessment to ensure transparency and trust. | Transparency and accountability |
Ambiguity around copyright and unclear data licensing can lead to unauthorized use of protected material. | Integrity and plagiarism |
Educators remain legally responsible for flawed AI-generated feedback; AI must not replace human judgment. | Autonomy and oversight |
Lack of grounding in research ethics; generated suggestions may contradict principles of research integrity. | Integrity and plagiarism |
Inability of the application to understand the specific context of an individual research project. | Transparency and accountability |
Absence of institutional guidelines results in ethically problematic individual decision-making. | Governance gaps |
Ethical Problem | Ethical Category |
---|---|
Transfer of bias from training data may disadvantage underrepresented topics or authors. | Bias and fairness |
Generative AI models may reproduce gender, cultural, or ethnic stereotypes. | Bias and fairness |
Excessive dependence on AI content generation may weaken researchers’ critical thinking, reasoning, and collaboration. | Autonomy and oversight |
Overuse of AI applications may reduce collaboration among researchers and contribute to mental fatigue. | Autonomy and oversight |
AI applications may fabricate data or citations (hallucinations), presenting unverifiable information as fact. | Integrity and plagiarism |
AI applications may produce hallucinated content, exhibit algorithmic bias, rely on outdated data, and make decisions without transparency or clear accountability. | Bias and fairness, Transparency and accountability |
AI applications can generate misleading outputs, plagiarized content, or lack contextual understanding, while posing risks related to system opacity and data privacy. | Privacy and data protection |
AI platforms may operate opaquely, leak data, produce falsified results, reduce researcher autonomy, and enable unethical research practices. | Transparency and accountability, Autonomy and oversight |
AI applications may generate inaccurate content, blur authorship, weaken critical thinking, violate privacy, and raise unresolved intellectual property questions. | Privacy and data protection, Integrity and plagiarism |
The lack of algorithmic processes prevents users from understanding how outputs are generated, complicating attribution and academic responsibility. | Transparency and accountability |
Failure to disclose AI assistance can mislead readers about the human contribution to the work. | Transparency and accountability |
It remains unclear who bears academic, legal, or ethical responsibility for AI-generated content, raising authorship and accountability concerns. | Transparency and accountability, Integrity and plagiarism |
Intellectual property ownership of AI-generated content is unclear, raising legal concerns about who may claim authorship and publication rights. | Integrity and plagiarism |
AI-rewritten content may evade plagiarism detection, enabling unethical manuscript practices. | Integrity and plagiarism |
AI-generated text may reproduce source-based plagiarism patterns from training data without proper attribution. | Integrity and plagiarism |
Plagiarism detection systems may fail to detect AI-paraphrased text, enabling unethical manuscript practices. | Integrity and plagiarism |
AI-generated manuscripts submitted to predatory journals may contribute to the spread of unverifiable or low-quality academic content. | Integrity and plagiarism |
The use of AI may undermine clear authorship, hide decision processes, marginalize qualitative research, complicate data consent, and reduce originality in peer review. | Integrity and plagiarism |
Important works may be omitted; irrelevant literature reduces the reliability of scholarly evaluation. | Transparency and accountability
Without institutional guidance, AI applications may threaten academic integrity, generate hallucinations, distort scholarly content, reduce creativity, and be used without ethical training. | Integrity and plagiarism |
Ethical Problem | Ethical Category |
---|---|
AI systems present risks of data leakage, hacking, and unauthorized data processing. | Privacy and data protection
Processing unpublished manuscripts in AI applications may breach data protection laws or intellectual property rights | Privacy and data protection |
Institutions lacking resources adopt pre-packaged systems without local validation, reinforcing inequalities | Bias and fairness |
Absence of “equity literacy” hinders recognition and correction of unfair AI-driven decisions | Bias and fairness |
Unequal access to paid AI applications may deepen global inequalities in research productivity and publishing capacity | Bias and fairness |
Reviewers may offload decision-making to AI, weakening professional responsibility and evaluative integrity | Transparency and accountability, Integrity and plagiarism |
Hallucinations (fabricated or inaccurate content) undermine the credibility of the review | Integrity and plagiarism |
Lack of transparency in AI models complicates auditing and verification processes | Transparency and accountability |
Lack of transparency in how the model generates evaluations makes it difficult to justify recommendations | Transparency and accountability |
Lack of transparency in AI systems reduces trust and hinders adoption for professional development | Transparency and accountability |
Decisions about AI deployment are made by narrow expert teams, excluding broader governance structures | Governance gaps |
Overdependence on AI undermines academic autonomy and weakens accountability structures | Transparency and accountability, Autonomy and oversight |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Chinoracky, R.; Stalmasekova, N. Ethical Problems in the Use of Artificial Intelligence by University Educators. Educ. Sci. 2025, 15, 1322. https://doi.org/10.3390/educsci15101322