From Theoretical Navigation to Intelligent Prevention: Constructing a Full-Cycle AI Ethics Education System in Higher Education
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
The paper presents a strong and timely contribution, but there are critical areas that require improvement before it can reach publication quality. The following points should be addressed:
- Abstract – Overly technical and dense with statistics. It should be rewritten for clarity, focusing first on the significance of the study and then presenting the empirical results more concisely. Readers should immediately understand the novelty and contribution without being lost in data.
- Introduction – The policy framing is useful, but the true research gap emerges too late. The introduction must emphasize earlier that current AI ethics education is fragmented and practice-light. At the same time, it would be strengthened by drawing on recent literature that links AI to pedagogical change and leadership in higher education. For example, Al Nabhani, Hamzah, and Abuhassna [1] highlight how AI reshapes personalized learning and the teacher’s role, while Li, Yusof, Abuhassna, and Pan [3] map AI’s impact on higher education trends and horizons. These would anchor the study in more current scholarly debates.
- Literature Review – The review is repetitive, often restating the same problem of “theory-practice disconnect.” It should be streamlined and refocused to highlight what is missing in the literature. At the same time, the authors should integrate broader perspectives on leadership and values, which are currently underrepresented. For instance, Alsubeia, Yusof, and Murshed [2] explore AI’s role in values-based leadership and its implications for engagement, which is directly relevant to discussions of ethics governance and institutional adoption. Including such works would help the literature review move beyond technical frameworks and into socio-ethical dimensions.
- Methodology – The section is innovative but overloaded with jargon (“transparency gradient model,” “dual-loop system,” “three-level mechanism”). The narrative risks alienating non-specialist readers. The visual (Figure 1) is particularly dense and needs simplification—possibly breaking into smaller sub-figures. The models themselves should be explained with clearer practical classroom examples to ensure accessibility.
- Results – The quantitative results are clear, but the interpretation is shallow. The authors should elaborate on why algorithmic fairness showed the highest gains and what it implies for AI ethics curriculum design. The disciplinary differences (computer science vs. social science) are interesting but only briefly mentioned; these should be unpacked in terms of implications for interdisciplinary curriculum development.
- Discussion – While contributions are acknowledged, the discussion is too self-validating. The paper needs a deeper critical reflection on feasibility and risks. For example, the proposed intelligent monitoring system raises serious questions about surveillance and student autonomy, yet this is glossed over. A more nuanced discussion is needed here, recognizing that such monitoring could undermine the very ethical culture the paper aims to promote.
- Limitations – Currently underdeveloped. In addition to noting short study durations, the authors should also recognize cultural bias (single-site study), possible self-reporting bias in user feedback, and contradictions between ethical teaching and intrusive monitoring systems. These limitations must be openly addressed.
- Conclusion – The conclusion repeats too much of the abstract. Instead, it should highlight forward-looking recommendations: how universities might adopt the system in phases, how policymakers could integrate it into accreditation, and what practical steps institutions should take for scaling.
- Language and Style – At times the paper reads more like a technical manual than an educational research study. The authors should soften the jargon and emphasize the human dimension of ethics education—highlighting students, teachers, and learning outcomes.
- References – The reference list is strong but very framework-heavy. The authors should expand with more diverse and contemporary studies, especially from non-Western contexts. In particular, the following should be cited:
[1] F. Al Nabhani, M. B. Hamzah, and H. Abuhassna, “The role of artificial intelligence in personalizing educational content: Enhancing the learning experience and developing the teacher’s role in an integrated educational environment,” Contemporary Educational Technology, vol. 17, no. 2, p. ep573, 2025, doi: 10.30935/cedtech/16089.
[2] T. Li, S. B. Yusof, H. Abuhassna, and Q. Pan, “The Impact of AI on Higher Education Trends and Educational Horizons,” in AI in Education, Governance, and Leadership: Adoption, Impact, and Ethics, B. Edwards, H. Abuhassna, D. Olugbade, O. Ojo, and W. Jaafar Wan Yahaya, Eds. Hershey, PA: IGI Global Scientific Publishing, 2026, pp. 1–32, doi: 10.4018/979-8-3373-5550-4.ch001.
Author Response
Responses to Reviewer 1
Manuscript ID: education-3829123
Title of the Paper: From Theoretical Navigation to Intelligent Prevention: Constructing a Full-Cycle AI Ethics Education System in Higher Education
Dear Editors and Reviewers,
We thank the editors and the two reviewers for their valuable time and for the insightful, constructive comments on our manuscript. We carefully considered each piece of feedback; these comprehensive reviews significantly enhanced the rigor, clarity, and depth of our research. We have thoroughly and systematically revised the manuscript in response to all suggestions.
We believe that this substantial revision has significantly improved the quality of our manuscript. Below is our response to each of the reviewers' comments.
Responses for Reviewer 1
Comment 1:
Abstract – Overly technical and dense with statistics. It should be rewritten for clarity, focusing first on the significance of the study and then presenting the empirical results more concisely. Readers should immediately understand the novelty and contribution without being lost in data.
Response 1:
We thank the reviewer for this valuable feedback and completely agree with their assessment. The original abstract did not adequately convey the core contribution of our research. We have rewritten the entire abstract. It now begins by highlighting the urgent problem of the theory-practice disconnect in AI ethics education, then briefly introduces the full-cycle system we constructed, and concludes by concisely summarizing the key empirical findings. This revised structure is designed to allow readers to quickly grasp the novelty and significance of our study.
Location of revision: Abstract, Lines 10-28.
Comment 2:
Introduction – The policy framing is useful, but the true research gap emerges too late. The introduction must emphasize earlier that current AI ethics education is fragmented and practice-light. At the same time, it would be strengthened by drawing on recent literature that links AI to pedagogical change and leadership in higher education. For example, Al Nabhani, Hamzah, and Abuhassna [1] highlight how AI reshapes personalized learning and the teacher’s role, while Li, Yusof, Abuhassna, and Pan [3] map AI’s impact on higher education trends and horizons. These would anchor the study in more current scholarly debates.
Response 2:
We are grateful for the reviewer's valuable suggestion. We have restructured the introduction to front-load the core research gap—the fragmented and practice-light nature of current AI ethics education. Additionally, we have incorporated the suggested literature by Al Nabhani et al. and Li et al., positioning our work within the broader scholarly discussion on how AI is reshaping personalized learning and future trends in higher education. This has significantly strengthened the context and timeliness of our research.
Location of revision: Introduction, Lines 43-47, 52-57.
Comment 3:
Literature Review – The review is repetitive, often restating the same problem of “theory-practice disconnect.” It should be streamlined and refocused to highlight what is missing in the literature. At the same time, the authors should integrate broader perspectives on leadership and values, which are currently underrepresented. For instance, Alsubeia, Yusof, and Murshed [2] explore AI’s role in values-based leadership and its implications for engagement, which is directly relevant to discussions of ethics governance and institutional adoption. Including such works would help the literature review move beyond technical frameworks and into socio-ethical dimensions.
Response 3:
We thank the reviewer for keenly identifying the issue of repetition in the literature review. We have streamlined and reorganized Section 2 (Related Work), removing redundant statements. More importantly, following the reviewer's advice, we have added a new paragraph that explores the role of AI in values-based leadership and its implications for ethical governance and institutional adoption, citing the work of Alsubeia et al. This revision allows our literature review to move beyond a purely technical framework and engage with broader socio-ethical dimensions.
Location of revision: Related Work section, Lines 94-102, 116-127.
Comment 4:
Methodology – The section is innovative but overloaded with jargon (“transparency gradient model,” “dual-loop system,” “three-level mechanism”). The narrative risks alienating non-specialist readers. The visual (Figure 1) is particularly dense and needs simplification—possibly breaking into smaller sub-figures. The models themselves should be explained with clearer practical classroom examples to ensure accessibility.
Response 4:
We are very grateful to both reviewers for pointing out the clarity issues with Figure 1. We carefully considered the suggestion to break the figure into sub-figures but wished to preserve its integrity, so that it visually conveys the integrated, "full-cycle" nature of our system. To address the readability concern thoroughly, we have added a detailed "textual guide" paragraph immediately preceding Figure 1; it provides a step-by-step walkthrough of the figure's structure, clarifying the implicit logical flow and the connections between the modules.
Location of revision: Methodology section, Lines 165-174, 177-187.
Comment 5:
Results – The quantitative results are clear, but the interpretation is shallow. The authors should elaborate on why algorithmic fairness showed the highest gains and what it implies for AI ethics curriculum design. The disciplinary differences (computer science vs. social science) are interesting but only briefly mentioned; these should be unpacked in terms of implications for interdisciplinary curriculum development.
Response 5:
This is a very insightful observation. We acknowledge that our initial interpretation did not sufficiently explore the implications of our findings. We have now added a deeper interpretation of the results. Specifically, we elaborate on the likely reasons for the significant improvement in the 'algorithmic fairness' dimension (e.g., the novelty of the topic for students and the effectiveness of hands-on tools such as IBM's AI Fairness 360 toolkit). Furthermore, we delve into the implications of the performance differences between computer science and social science students, proposing concrete suggestions for interdisciplinary curriculum design.
Location of revision: Discussion section, Lines 469-476, 484-490.
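For concreteness, the sketch below illustrates the kind of hands-on fairness exercise referred to above. It is a minimal, hypothetical Python example, not drawn from our actual course materials: the Adult census dataset and 'sex' as the protected attribute are illustrative assumptions.

# Minimal sketch of a classroom fairness exercise using IBM's AI Fairness 360.
# Illustrative only: dataset and protected attribute are assumed for this demo,
# not taken from the manuscript's course materials.
# Requires: pip install aif360 (the Adult dataset files must be downloaded
# separately into aif360's data directory, as its error message explains).
from aif360.datasets import AdultDataset
from aif360.metrics import BinaryLabelDatasetMetric

# Load the UCI Adult census dataset with 'sex' as the protected attribute;
# 'Male' is encoded as the privileged class (mapped internally to 1).
dataset = AdultDataset(protected_attribute_names=['sex'],
                       privileged_classes=[['Male']])

# Measure group fairness in the raw labels, before any model is trained.
metric = BinaryLabelDatasetMetric(
    dataset,
    privileged_groups=[{'sex': 1}],
    unprivileged_groups=[{'sex': 0}])

# Disparate impact: ratio of favorable-outcome rates (1.0 means parity).
print('Disparate impact:', metric.disparate_impact())
# Statistical parity difference: gap in favorable-outcome rates (0.0 = parity).
print('Statistical parity difference:', metric.statistical_parity_difference())

In a classroom setting, students can inspect these two numbers, discuss why the raw data already encodes disparity, and then experiment with the toolkit's mitigation algorithms to see how the metrics change.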
Comment 6:
Discussion – While contributions are acknowledged, the discussion is too self-validating. The paper needs a deeper critical reflection on feasibility and risks. For example, the proposed intelligent monitoring system raises serious questions about surveillance and student autonomy, yet this is glossed over. A more nuanced discussion is needed here, recognizing that such monitoring could undermine the very ethical culture the paper aims to promote.
Response 6:
This is a crucial critique, and we thank the reviewer for pointing out this significant omission. We have added a new subsection to the Discussion titled "Ethical Considerations and Feasibility Challenges." In this section, we directly address the serious ethical concerns that the 'intelligent monitoring system' could raise regarding student privacy, academic freedom, and excessive surveillance. We discuss the potential conflict between such monitoring and the ethical culture we aim to foster, and we propose several mitigation strategies, such as ensuring informed consent and using data for formative feedback rather than punitive measures. This addition has greatly deepened the critical reflection in our study.
Location of revision: New subsection 5.4.
Comment 7:
Limitations – Currently underdeveloped. In addition to noting short study durations, the authors should also recognize cultural bias (single-site study), possible self-reporting bias in user feedback, and contradictions between ethical teaching and intrusive monitoring systems. These limitations must be openly addressed.
Response 7:
We fully accept this criticism; the original limitations section was indeed too brief. We have completely rewritten and expanded this section. It now explicitly and systematically acknowledges the following limitations: (1) the short duration of the study; (2) the single-site nature of the research, which may introduce cultural or institutional bias; (3) the potential for self-reporting bias in user feedback; and (4) the inherent ethical tension between our educational goals and the monitoring methods used, which echoes the new critical discussion.
Location of revision: New subsection 5.5.
Comment 8:
Conclusion – The conclusion repeats too much of the abstract. Instead, it should highlight forward-looking recommendations: how universities might adopt the system in phases, how policymakers could integrate it into accreditation, and what practical steps institutions should take for scaling.
Response 8:
Thank you for this guidance. We have rewritten the conclusion, removing the repetitive summary and instead providing more forward-looking and practical recommendations. The new conclusion offers concrete, actionable suggestions for higher education institutions, policymakers, and accreditation bodies, such as how to implement the system in phases and integrate it with existing academic assessment frameworks to better promote responsible AI governance.
Location of revision: Conclusion section.
Comment 9:
Language and Style – At times the paper reads more like a technical manual than an educational research study. The authors should soften the jargon and emphasize the human dimension of ethics education—highlighting students, teachers, and learning outcomes.
Response 9:
We thank the reviewer for this reminder about our writing style. We have polished the language throughout the manuscript, replacing technical jargon with more accessible, education-focused terminology where possible. We have also placed a greater emphasis on the roles and experiences of students and teachers in ethics education to enhance the paper's humanistic dimension and readability.
Comment 10:
References – The reference list is strong but very framework-heavy. The authors should expand with more diverse and contemporary studies, especially from non-Western contexts. In particular, the following should be cited:
[1] F. Al Nabhani, M. B. Hamzah, and H. Abuhassna, “The role of artificial intelligence in personalizing educational content: Enhancing the learning experience and developing the teacher’s role in an integrated educational environment,” Contemporary Educational Technology, vol. 17, no. 2, p. ep573, 2025, doi: 10.30935/cedtech/16089.
[2] T. Li, S. B. Yusof, H. Abuhassna, and Q. Pan, “The Impact of AI on Higher Education Trends and Educational Horizons,” in AI in Education, Governance, and Leadership: Adoption, Impact, and Ethics, B. Edwards, H. Abuhassna, D. Olugbade, O. Ojo, and W. Jaafar Wan Yahaya, Eds. Hershey, PA: IGI Global Scientific Publishing, 2026, pp. 1–32, doi: 10.4018/979-8-3373-5550-4.ch001.
Response 10:
Thank you for these valuable literature suggestions. We have added all the recommended references (Al Nabhani et al., Li et al., Alsubeia et al.) to our reference list and have cited them in the appropriate places in the text. The inclusion of these works has indeed broadened the theoretical scope of our research.
Location of revision: New citations 38, 39.
Once again, we thank all reviewers for their diligent work and insightful guidance. We hope that with these extensive revisions, the manuscript is now suitable for publication.
Sincerely,
Reviewer 2 Report
Comments and Suggestions for Authors
Overview
The paper addresses an exceptionally timely and critical topic in contemporary higher education: AI ethics. The authors propose a full-cycle AI ethics education system and support its effectiveness with quantitative data on students’ ethical competencies. While the work is highly relevant and conceptually ambitious, it requires significant revisions to improve clarity in defining the system’s core components and to ensure methodological transparency.
Comments
- The paper positions itself as addressing AI ethics in general, yet the main focus is on issues related to generative AI (Gen-AI), particularly in the context of academic integrity and the use of tools such as ChatGPT. The authors do not make a clear conceptual distinction between classical AI systems and Gen-AI. The authors should clarify which type of AI the system is targeting and possibly adjust the terminology used in the paper.
- The theoretical foundation of the system (Section 3.1) relies predominantly on sources from 2014–2019, which raises doubts about the relevance of the proposed framework in the context of the rapidly developing field of generative AI ethics. At the same time, key statements in Sections 2.1 and 2.2.2 regarding teaching practices and faculty preparation remain insufficiently supported by references to recent empirical studies.
- The paper lacks clear definitions of central concepts — “AI ethics”, “AI technologies/tools”, and “full-cycle education”. Although their meaning is partially revealed through context and system structure, the absence of explicit definitions reduces conceptual clarity and terminological transparency.
- The figure “Overall architecture of our methods” contains a large number of terms that appear for the first time in the diagram but are not explained until later in the text. In addition, the absence of connections between blocks makes it difficult to understand the logic of the system. This approach reduces the clarity and accessibility of the material. The presentation of the diagram in the paper needs substantial revision. It is recommended to start with a simplified, concise architecture showing the key components of the system, and then insert more detailed diagrams as each module is introduced in the text.
- The results section contains significant data but suffers from insufficient methodological transparency. Descriptions of key instruments are missing: tests, ethical dilemma scenarios, questionnaires, evaluation criteria, and procedures for qualitative data analysis. To increase scientific credibility, it is required to provide a detailed description of all measurement tools and procedures.
- The experimental design for evaluating the "cognition-behavior" dual-loop mechanism in Section 4.2 appears unbalanced. The control group receives only traditional lectures, while the experimental group receives both theoretical and practical training. Since both groups are assessed on practical decision-making skills, the advantage of the experimental group may be due not to the structure of the "dual-loop" itself, but simply to the presence of practice, rather than its integration with theory.
- Lines 351–353 remain from the manuscript template. The abbreviation AI is introduced multiple times throughout the text. The subsection numbering in Section 4 is incorrect.
Author Response
Responses to Reviewer 2
Manuscript ID: education-3829123
Title of the Paper: From Theoretical Navigation to Intelligent Prevention: Constructing a Full-Cycle AI Ethics Education System in Higher Education
Dear Editors and Reviewers,
We thank the editors and the two reviewers for their valuable time and for the insightful, constructive comments on our manuscript. We carefully considered each piece of feedback; these comprehensive reviews significantly enhanced the rigor, clarity, and depth of our research. We have thoroughly and systematically revised the manuscript in response to all suggestions.
We believe that this substantial revision has significantly improved the quality of our manuscript. Below is our response to each of the reviewers' comments.
Responses for Reviewer 2
Comment 1:
The paper positions itself as addressing AI ethics in general, yet the main focus is on issues related to generative AI (Gen-AI), particularly in the context of academic integrity and the use of tools such as ChatGPT. The authors do not make a clear conceptual distinction between classical AI systems and Gen-AI. The authors should clarify which type of AI the system is targeting and possibly adjust the terminology used in the paper.
Response 1:
We are very grateful to the reviewer for raising this crucial point about precision. We have revised the introduction and added a new section on core concept definitions to explicitly state that the primary focus of our research is on addressing the novel ethical challenges posed by generative AI (Gen-AI). The term "Gen-AI" has been appropriately integrated throughout the manuscript to ensure this focus is clear.
Location of revision: Page 1 Introduction, and new Section 3.1.
Comment 2:
The theoretical foundation of the system (Section 3.1) relies predominantly on sources from 2014–2019, which raises doubts about the relevance of the proposed framework in the context of the rapidly developing field of generative AI ethics. At the same time, key statements in Sections 2.1 and 2.2.2 regarding teaching practices and faculty preparation remain insufficiently supported by references to recent empirical studies.
Response 2:
We completely agree that the timeliness of the theoretical foundation is paramount. We have undertaken a major revision of the literature review and theoretical framework sections (2.1, 2.2, 3.2), incorporating several key publications from 2020 onwards on Gen-AI ethics, educational applications, and related empirical research. This ensures that our framework is current and relevant.
Location of revision: Sections 2.1, 2.2, 3.2.
Comment 3:
The paper lacks clear definitions of central concepts — “AI ethics”, “AI technologies/tools”, and “full-cycle education”. Although their meaning is partially revealed through context and system structure, the absence of explicit definitions reduces conceptual clarity and terminological transparency.
Response 3:
This is a very pertinent critique. To enhance conceptual clarity, we have added a new subsection at the beginning of the methodology section titled "3.1 Core Concept Definitions" to provide clear, operational definitions for these terms.
Location of revision: New subsection 3.1.
Comment 4:
The figure “Overall architecture of our methods” contains a large number of terms that appear for the first time in the diagram but are not explained until later in the text. In addition, the absence of connections between blocks makes it difficult to understand the logic of the system. This approach reduces the clarity and accessibility of the material. The presentation of the diagram in the paper needs substantial revision. It is recommended to start with a simplified, concise architecture showing the key components of the system, and then insert more detailed diagrams as each module is introduced in the text.
Response 4:
We are very grateful to both reviewers for pointing out the clarity issues with Figure 1. To address readability while preserving the figure's integrity, we have added a detailed "textual guide" paragraph immediately preceding Figure 1. This paragraph provides a step-by-step walkthrough of the figure's structure, clarifying the implicit logical flow and connections between the modules to help readers understand its systemic logic.
Comment 5:
The results section contains significant data but suffers from insufficient methodological transparency. Descriptions of key instruments are missing: tests, ethical dilemma scenarios, questionnaires, evaluation criteria, and procedures for qualitative data analysis. To increase scientific credibility, it is required to provide a detailed description of all measurement tools and procedures.
Response 5:
To significantly improve the methodological transparency and replicability of our research, we have added a new "Appendix A: Research Instruments and Procedures." This appendix provides detailed descriptions of all measurement tools, including: the structure and sample items of the ethics knowledge test, a sample ethical dilemma scenario, the full Likert scale for user experience, and the procedure for qualitative data analysis.
Location of revision: New Appendix A.
Comment 6:
The experimental design for evaluating the "cognition-behavior" dual-loop mechanism in Section 4.2 appears unbalanced. The control group receives only traditional lectures, while the experimental group receives both theoretical and practical training. Since both groups are assessed on practical decision-making skills, the advantage of the experimental group may be due not to the structure of the "dual-loop" itself, but simply to the presence of practice, rather than its integration with theory.
Response 6:
This is a deeply insightful and precise methodological critique. We candidly acknowledge this limitation in our current experimental design. We have added a dedicated discussion in the "Limitations" section to frankly admit this design flaw. We state clearly that our study cannot disentangle the effect of "practice itself" from the effect of the "effective integration of theory and practice." We have listed this as a key direction for future research.
Location of revision: New subsection 5.5.
Comment 7:
Lines 351–353 remain from the manuscript template. The abbreviation AI is introduced multiple times throughout the text. The subsection numbering in Section 4 is incorrect.
Response 7:
We sincerely apologize for these editorial errors. We have carefully proofread the entire manuscript and made the following corrections: (1) we have removed all remaining template text; (2) we have ensured that the abbreviation "AI" is defined only at its first appearance; and (3) we have corrected all section numbering. Thank you for your meticulous review.
Once again, we thank all reviewers for their diligent work and insightful guidance. We hope that with these extensive revisions, the manuscript is now suitable for publication.
Sincerely,
Round 2
Reviewer 2 Report
Comments and Suggestions for Authors
The authors responded to the comments in sufficient detail. That said, I still have two residual concerns. First, while the textual explanation preceding Figure 1 improves its interpretability, the figure itself remains visually dense and complex. Its current design may hinder comprehension, especially for readers unfamiliar with the system’s architecture. Second, although the literature review has been updated, it would gain further depth by incorporating more recent studies specifically focused on Gen-AI in educational contexts (e.g., on academic integrity, authorship, and pedagogical integration).
Author Response
Response to Reviewer
Dear Reviewer,
Thank you very much for taking the time to review our revised manuscript again and for providing further valuable comments. We sincerely appreciate your rigorous scholarly approach and your continued efforts to help us improve the quality of our paper. We fully agree with the two concerns you have raised and have made final, targeted revisions to the manuscript accordingly.
Below are our responses to your two comments and the corresponding revisions.
Reviewer's Minor Comment 1
First, while the textual explanation preceding Figure 1 improves its interpretability, the figure itself remains visually dense and complex. Its current design may hinder comprehension, especially for readers unfamiliar with the system’s architecture.
Response 1
We are very grateful that you have once again highlighted the fundamental issue with the visual presentation of Figure 1. Upon further reflection, we completely agree that the textual explanation alone cannot fully compensate for the complexity of the figure itself. We sincerely apologize for not having addressed this more thoroughly in the previous revision.
To resolve this issue completely and to maximize the readability of the methodology section, we have fully adopted your insightful suggestion and have completely removed the original, dense Figure 1.
In its place, we have decomposed the original figure into three simpler, more logical new figures (now Figure 1, Figure 2, and Figure 3) and have embedded them individually at the beginning of the three most relevant subsections within the methodology chapter. Specifically:
- Figure 1 (Theoretical Framework Construction) is now placed at the beginning of Section 3.2, serving as a high-level flowchart for that part.
- Figure 2 (Practical Tools and Pedagogical Implementation) is now placed at the beginning of Section 3.4, serving as the overall blueprint for that part.
- Figure 3 (Validation, Optimization, and Quality Control Loop) is now placed at the beginning of the new Section 3.5, summarizing and illustrating the dynamic loop of the entire system.
We believe that this "decompose and embed" strategy not only resolves the visual density issue of the original figure but also greatly optimizes the narrative structure and logical flow of the methodology section, allowing readers to more easily understand the layered architecture of our system.
Location of revision: In the Methodology section, the original Figure 1 has been replaced by three new, dispersed figures (Figure 1, 2, 3) and their corresponding introductory text.
Reviewer's Minor Comment 2
Second, although the literature review has been updated, it would gain further depth by incorporating more recent studies specifically focused on Gen-AI in educational contexts (e.g., on academic integrity, authorship, and pedagogical integration).
Response 2
Thank you for pointing us in a specific direction to further deepen the literature review. We completely agree that incorporating more recent studies on the specific applications of Gen-AI in education will significantly enhance the academic novelty and depth of our paper.
To this end, we have conducted a more targeted literature search and have added a new paragraph in the "Related Work" section that specifically discusses recent core ethical and practical issues concerning Gen-AI in higher education, citing Nazarovets and Teixeira da Silva (2024), Zaim et al. (2024), and Balalle and Pannilage (2025):
- Nazarovets, S., & Teixeira da Silva, J. A. (2024). ChatGPT as an “author”: Bibliometric analysis to assess the validity of authorship. Accountability in Research, 1–11. https://doi.org/10.1080/08989621.2024.2345713
- Zaim, M., Arsyad, S., Waluyo, B., Ardi, H., Al Hafizh, M., Zakiyah, M., Syafitri, W., Nusi, A., & Hardiah, M. (2024). AI-powered EFL pedagogy: Integrating generative AI into university teaching preparation through UTAUT and activity theory. Computers and Education: Artificial Intelligence, 7, 100335. https://doi.org/10.1016/j.caeai.2024.100335
- Balalle, H., & Pannilage, S. (2025). Reassessing academic integrity in the age of AI: A systematic literature review on AI and academic integrity. Social Sciences & Humanities Open, 11, 101299. https://doi.org/10.1016/j.ssaho.2025.101299
Once again, we would like to express our sincere gratitude for your valuable feedback. We are confident that, with these final critical revisions, the quality and clarity of the manuscript have been decisively improved, and we sincerely hope that it now fully meets the publication requirements of the journal.
Sincerely,
