Review
Peer-Review Record

A Comprehensive Review of ChatGPT in Teaching and Learning Within Higher Education

Informatics 2025, 12(3), 74; https://doi.org/10.3390/informatics12030074
by Samkelisiwe Purity Phokoye *, Siphokazi Dlamini, Peggy Pinky Mthalane, Mthokozisi Luthuli and Smangele Pretty Moyane
Reviewer 1:
Reviewer 2:
Reviewer 3: Anonymous
Submission received: 11 April 2025 / Revised: 5 June 2025 / Accepted: 12 June 2025 / Published: 21 July 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

The abstract states that a "PRISMA diagram was used", but it is not clear which categories the records were classified into. Methodological detail should be added.

Literature remains at a descriptive level. Conceptual mapping and critical analysis are missing.

Certain key concepts (e.g. "learning autonomy", "instructional design", "data literacy") are not discussed in detail.

Literature should be linked to the theoretical context (e.g. postdigital learning, AI literacy, technology acceptance models) on the pedagogical implications of AI.

VOSviewer outputs are presented but the metrics used (e.g. centrality, density, cluster linkage) are not detailed.

That the bibliometric analysis is based only on word co-occurrence, rather than thematic or content analysis, is an important deficiency.

"In-depth" analysis is missing when explaining the findings; the paper settles for general statements such as "its use in higher education is increasing".

The implications of the findings for educational policies or institutional strategies are not discussed.

A "critical pedagogical perspective" is missing. Issues such as the epistemological effects of ChatGPT and cognitive dominance in the learning process are not addressed.

The impact on structural components such as student autonomy, assessment systems, and instructional design is not analyzed.

There should be more nuanced analysis instead of simple word clouds.

Current studies on "ChatGPT integration in education" and "what ChatGPT means for universities" can also be consulted.

Comments on the Quality of English Language

There are grammatical errors in some places (especially statements such as "it's also plays a valuable role").

The text is lengthened with repetitive phrases.

Author Response

Reviewer 1’s Comments

Authors' Response

The abstract states that a "PRISMA diagram was used", but it is not clear which categories the records were classified into. Methodological detail should be added.

Added

Literature remains at a descriptive level. Conceptual mapping and critical analysis are missing.

Added

Certain key concepts (e.g. "learning autonomy", "instructional design", "data literacy") are not discussed in detail.

Added from page 117 to 130

Literature should be linked to the theoretical context (e.g. postdigital learning, AI literacy, technology acceptance models) on the pedagogical implications of AI.

I appreciate the suggestion to incorporate a theoretical context. However, this manuscript is designed as a comprehensive literature review rather than an empirical study, and as such, it is not structured around a formal theoretical framework. The intention is to synthesise existing literature on the impact of ChatGPT in higher education broadly and inclusively, without limiting the analysis to a specific theoretical lens.

VOSviewer outputs are presented but the metrics used (e.g. centrality, density, cluster linkage) are not detailed.

 

Added from line 276.

That the bibliometric analysis is based only on word co-occurrence, rather than thematic or content analysis, is an important deficiency.

Thank you for this observation. This study is designed as a bibliometric mapping review focused on identifying structural trends and keyword linkages within the literature on ChatGPT in higher education. While thematic or content analysis would certainly provide deeper insights, the scope of this review was limited to co-occurrence analysis to provide an overview of research density, conceptual clusters, and emerging trends. Future work may incorporate thematic coding to complement the bibliometric findings.
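For readers unfamiliar with what a keyword co-occurrence analysis of the kind the authors describe actually computes, a minimal sketch is shown below. The records here are invented examples, not data from this study; real input would come from a Scopus or Web of Science keyword export, which tools such as VOSviewer process in essentially this way before clustering.

```python
from itertools import combinations
from collections import Counter

# Hypothetical author-keyword lists for a handful of records
# (illustrative only; not drawn from the reviewed manuscript).
records = [
    ["chatgpt", "higher education", "assessment"],
    ["chatgpt", "ai literacy", "higher education"],
    ["chatgpt", "academic integrity", "assessment"],
]

cooccurrence = Counter()
for keywords in records:
    # Each unordered pair of distinct keywords appearing in the same
    # record counts as one co-occurrence; pairwise counts like these
    # are the link strengths that co-occurrence maps visualize.
    for a, b in combinations(sorted(set(keywords)), 2):
        cooccurrence[(a, b)] += 1

for pair, count in cooccurrence.most_common(3):
    print(pair, count)
```

Link strengths such as these, aggregated over thousands of records, are what the clustering step groups into the "conceptual clusters" the response refers to.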

"In-depth" analysis is missing when explaining the findings; the paper settles for general statements such as "its use in higher education is increasing".

Findings were revised.

The implications of the findings for educational policies or institutional strategies are not discussed.

I have now included a paragraph in the discussion section that highlights the implications of the findings for educational policy and institutional strategy, particularly in relation to AI governance, training, and pedagogical integration.

A "critical pedagogical perspective" is missing. Issues such as the epistemological effects of ChatGPT and cognitive dominance in the learning process are not addressed.

Added on page 293

The impact on structural components such as student autonomy, assessment systems, and instructional design is not analyzed.

Added under discussion page 471

There should be more nuanced analysis instead of simple word clouds.

 

Current studies on "ChatGPT integration in education" and "what ChatGPT means for universities" can also be consulted.

Added 

 

Reviewer 2 Report

Comments and Suggestions for Authors

Dear Authors,

I read your work titled "Exploring the Impact of ChatGPT in Higher Education: A Comprehensive Review" with great interest. It is commendable that you address such a current and important topic on the place and effects of ChatGPT in higher education, and particularly that you carefully and meticulously explain the contributions of this technology to students and academics in the introduction section.

However, in the introduction section, elements such as the problem statement in the literature, common emphases or differences in existing studies, and which gaps are being addressed do not appear to be presented clearly enough. If you could more explicitly explain why the research was conducted, what was intended to be understood through the bibliometric analysis method used, and what contribution would be made in this context, readers would both better grasp the purpose of the study and be more motivated to continue reading the text. Also, concluding the introduction section with a clear and explicit purpose of the study could be important for coherence.

In the literature section, there are frequent references to ChatGPT as a tool that enriches education and increases students' motivation and achievements. At this point, including concrete examples of how and why these contributions occur would provide readers with a better understanding. Similarly, explanations under the heading "Factors influencing the adoption and acceptance of ChatGPT in higher education" would be more explanatory and impressive if supported with examples.

Throughout the text, headings such as ethics, individuals with special needs, and technological divide are repeated in different places. If these repetitions, even with the same sources, are diversified in terms of content and handled originally in the context of relevant headings, the integrity and academic value of the study would increase. Particularly under the heading "Ethical issues arise with the adoption of ChatGPT in Teaching and Learning," there seems to be a need for concrete examples in a similar way.

Some expressions used in the methodology section (for example: "AI in education, the application of ChatGPT in higher education, and the future of writing with AI within educational contexts") have given the impression of being ambiguous. In this context, more clearly defining the processes, especially focusing on ChatGPT and the higher education context in the selected studies, would increase the reliability of the method.

In the results section, although expressions such as "Emerging patterns underscore concerns regarding AI ethics, transparency, and understanding user viewpoints" are included, it is not possible to directly derive these conclusions from the diagram in Figure 1. It would be appropriate to base such results on more explicit foundations.

In the discussion section, it is noticeable that the findings are mostly summarized, but there is not enough comparison with similar studies or in-depth interpretation of the results obtained. Moreover, although the bibliometric analysis method used is successful in describing general trends, it appears limited in producing new or original information on some comprehensive themes that are elaborately addressed in the literature section. This situation may create inconsistency between the literature and findings. Alternatively, applying content analysis could have better served the main purpose of the research.

Author Response

Reviewer 2's Comments

Authors' Response

I read your work titled "Exploring the Impact of ChatGPT in Higher Education: A Comprehensive Review" with great interest. It is commendable that you address such a current and important topic on the place and effects of ChatGPT in higher education, and particularly that you carefully and meticulously explain the contributions of this technology to students and academics in the introduction section.

Noted with thanks

However, in the introduction section, elements such as the problem statement in the literature, common emphases or differences in existing studies, and which gaps are being addressed do not appear to be presented clearly enough. If you could more explicitly explain why the research was conducted, what was intended to be understood through the bibliometric analysis method used, and what contribution would be made in this context, readers would both better grasp the purpose of the study and be more motivated to continue reading the text. Also, concluding the introduction section with a clear and explicit purpose of the study could be important for coherence.

Suggestions were included.

In the literature section, there are frequent references to ChatGPT as a tool that enriches education and increases students' motivation and achievements. At this point, including concrete examples of how and why these contributions occur would provide readers with a better understanding. Similarly, explanations under the heading "Factors influencing the adoption and acceptance of ChatGPT in higher education" would be more explanatory and impressive if supported with examples.

Literature review section was revised.

Throughout the text, headings such as ethics, individuals with special needs, and technological divide are repeated in different places. If these repetitions, even with the same sources, are diversified in terms of content and handled originally in the context of relevant headings, the integrity and academic value of the study would increase. Particularly under the heading "Ethical issues arise with the adoption of ChatGPT in Teaching and Learning," there seems to be a need for concrete examples in a similar way.

Ethical section was revised

Some expressions used in the methodology section (for example: "AI in education, the application of ChatGPT in higher education, and the future of writing with AI within educational contexts") have given the impression of being ambiguous. In this context, more clearly defining the processes, especially focusing on ChatGPT and the higher education context in the selected studies, would increase the reliability of the method.

Methodology was revised

In the results section, although expressions such as "Emerging patterns underscore concerns regarding AI ethics, transparency, and understanding user viewpoints" are included, it is not possible to directly derive these conclusions from the diagram in Figure 1. It would be appropriate to base such results on more explicit foundations.

Revised

In the discussion section, it is noticeable that the findings are mostly summarized, but there is not enough comparison with similar studies or in-depth interpretation of the results obtained. Moreover, although the bibliometric analysis method used is successful in describing general trends, it appears limited in producing new or original information on some comprehensive themes that are elaborately addressed in the literature section. This situation may create inconsistency between the literature and findings. Alternatively, applying content analysis could have better served the main purpose of the research.

Revised

I read your work titled "Exploring the Impact of ChatGPT in Higher Education: A Comprehensive Review" with great interest. It is commendable that you address such a current and important topic on the place and effects of ChatGPT in higher education, and particularly that you carefully and meticulously explain the contributions of this technology to students and academics in the introduction section.

Introduction was revised

However, in the introduction section, elements such as the problem statement in the literature, common emphases or differences in existing studies, and which gaps are being addressed do not appear to be presented clearly enough. If you could more explicitly explain why the research was conducted, what was intended to be understood through the bibliometric analysis method used, and what contribution would be made in this context, readers would both better grasp the purpose of the study and be more motivated to continue reading the text. Also, concluding the introduction section with a clear and explicit purpose of the study could be important for coherence.

Revised

In the literature section, there are frequent references to ChatGPT as a tool that enriches education and increases students' motivation and achievements. At this point, including concrete examples of how and why these contributions occur would provide readers with a better understanding. Similarly, explanations under the heading "Factors influencing the adoption and acceptance of ChatGPT in higher education" would be more explanatory and impressive if supported with examples.

Examples were provided.

Author Response File: Author Response.docx

Reviewer 3 Report

Comments and Suggestions for Authors

Thank you for the opportunity to review your manuscript. The topic is timely and relevant. Overall, the paper is clearly written and contributes to the growing body of literature on AI in education. However, I have several concerns that require attention.

  • One key concern refers to the misalignment between the research goals and the methodology used. The manuscript aims to conduct a “comprehensive exploration of the impact of ChatGPT in teaching and learning within higher education,” with two specific objectives: to identify key factors influencing the adoption and acceptance of ChatGPT, and to investigate the role of institutional policies and support systems in this process. However, the study employs a bibliometric analysis methodology guided by the PRISMA framework. While bibliometric analysis can provide valuable insights into publication trends, citation networks, and thematic developments, it is not inherently equipped to assess impact - particularly the kind of impact implied by the manuscript’s title and goals. This mismatch raises questions such as: how do the authors define “impact,” and what kind of impact is being measured or inferred through the bibliometric approach? Given that the study is not empirical in the traditional sense and does not involve primary data collection, I would suggest that the authors revise the manuscript’s title and research objectives to more accurately reflect the scope and nature of their analysis.
  • Another key concern refers to the misalignment between the research objectives and the actual results presented. The findings reported in the manuscript primarily focus on bibliometric indicators, including keyword co-occurrence, country-level publication trends, annual publication volume, and the top research areas associated with ChatGPT publications. These analyses do not clearly address the study’s reported objectives. For example, one research objective refers to the factors influencing adoption, but the analysis does not provide insight into pedagogical, psychological, or institutional factors influencing adoption and acceptance. The aspect of institutional policies and support systems is also not addressed in the findings. No systematic coding, synthesis, or thematic analysis of policy-related content appears to be included.
  • Introduction - The introduction effectively highlights the potential benefits of integrating ChatGPT into higher education. However, it presents an overly optimistic view, with little to no discussion of the challenges or controversies associated with ChatGPT's use in educational contexts. For instance, on page 2, lines 69–70, the authors state: “While integrating ChatGPT into education brings both opportunities and challenges, it also plays a valuable role in tasks like...” - yet the text proceeds to elaborate only on the opportunities, without addressing the challenges mentioned at all.
  • While the manuscript includes two research objectives, it does not clearly articulate specific research questions. I recommend that the authors explicitly formulate research questions aligned with their stated objectives.
  • Literature review - The literature review presents several important claims regarding the integration of ChatGPT and AI tools into higher education, but in multiple instances, these claims are not supported by references. For example: on page 3, lines 99-101 and 103-105 contain significant assertions that require proper citation.
  • On line 102, the full term “artificial intelligence” is used, whereas “AI” is consistently used throughout the rest of the section.
  • The sentence in lines 89-93, which discusses the relevance and capabilities of ChatGPT, is supported by a reference to Siemens & Long (2011), which pertains to Learning Analytics. This citation does not appear to substantiate the specific claims made about ChatGPT and should be revised accordingly.
  • On lines 128-130, an additional key statement is made without any reference.
  • The literature review would also benefit from incorporating more recent and directly relevant studies on the integration of GenAI tools into higher education. The authors may consider referencing the following recent publications that explore both opportunities and concerns surrounding AI in higher education settings:
  • Bond, M., Khosravi, H., De Laat, M. et al. (2024). A meta systematic review of artificial intelligence in higher education: a call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(4). https://doi.org/10.1186/s41239-023-00436-z
  • Usher, M. & Amzalag, M. (2025). From prompt to polished: Exploring student-chatbot interactions for academic writing assistance. Education Sciences, 15(3), 329. https://doi.org/10.3390/educsci15030329
  • Usher, M. (2025). Generative AI vs. instructor vs. peer assessments: A comparison of grading and feedback in higher education. Assessment and Evaluation in Higher Education, 1–16. https://doi.org/10.1080/02602938.2025.2487495
  • Subsection 2.1 – the title of this subsection is somewhat misleading. It is titled “Factors Influencing the Adoption and Acceptance of ChatGPT in Higher Education” but its content primarily focuses on ethical considerations rather than broader adoption factors. Moreover, the authors appear to refer to ethical issues again in subsection 2.3, leading to unnecessary repetition.
  • If the intent in subsection 2.1 was indeed to discuss adoption and acceptance, then the current section omits several key aspects, such as institutional readiness, digital infrastructure, faculty and student attitudes, etc. See the paper of – Faraon, M., Rönkkö, K., Milrad, M. et al. (2025). International perspectives on artificial intelligence in higher education: An explorative study of students’ intention to use ChatGPT across the Nordic countries and the USA. Educ Inf Technology. https://doi.org/10.1007/s10639-025-13492-x
  • On page 4, lines 154-156 are vague and unsupported by references. So here, again, more recent literature addressing ethical concerns around AI and chatbots in higher education should be incorporated.
  • Subsection 2.2 – On page 5, lines 187–189, the discussion of “policies” lacks clarity. It is unclear whether the policies referenced pertain to the integration of AI in education, digital learning technologies more broadly, or some other domain. The source cited in support of this sentence does not appear to be directly related to AI.
  • As previously noted, subsection 2.3, which refers to ethical issues, is redundant. Much of this content overlaps with subsection 2.1, and its inclusion in both sections fragments the discussion unnecessarily. The authors should consolidate these points into a single, well-titled subsection to avoid repetition.
  • In subsection 2.3, there is a citation labeled simply as “(Reference 2024)” – this appears to be an oversight.
  • In the Methodology section - the authors state that a total of 3,874 articles were retrieved across three databases, yet there is no information about the time range – the publication years from which these records were drawn.
  • The methodology section is still missing several important details, particularly in the initial “identification” phase. Specifically: (1) Search strings – the authors do not provide the actual search strings or combinations of keywords used; this is a critical omission, as readers need to know which specific terms were used and how they were combined across the databases. (2) Time range – as mentioned above, there is no mention of the time period covered by the search. (3) Inclusion criteria – these are vaguely stated; for example, it is unclear how the authors operationalized “focusing on the influence of AI technologies (ChatGPT)” – was this determined based on titles, abstracts, keywords, or full-text screening? (4) Screening – were any automated tools or inter-rater reliability measures used?
  • At the end of the methodology section, the authors include an illustration depicting the article identification and filtering process. However, it is not labeled or referenced in the text. This figure should be designated as Figure 1 and explicitly referenced in the body of the methodology section. As a result, the subsequent diagram on the following page should be renumbered accordingly (e.g., Figure 2).
  • Results section – one comment refers to the presentation of visual materials throughout this section. In multiple instances, such as on page 8, line 286 ("The diagram illustrates…") and page 9, line 322 (“the graph showcased..”) - the text refers to visual content without formally identifying them as figures/images. To adhere to academic conventions, each figure should be sequentially numbered and clearly referenced in the text.
  • On page 7, the beginning of the Results section (lines 276-279) reiterates information that has already been presented in the methodology section. These sentences could be removed or revised to avoid redundancy.
  • On the same page, lines 279-283 require some polishing for improved clarity and readability. Moreover, the citation for Van Eck and Waltman (2010), referenced in these lines, does not appear in the reference list.
  • The keyword analysis presented in the Results section lacks the depth and precision expected in a bibliometric study. While the authors discuss common terms such as “ChatGPT,” “higher education,” and “university students,” these results are unsurprising given the study’s search scope. It goes back to my comment that the methodology section does not include the specific search terms used, which further undermines the value of reporting that these exact terms appear frequently.
  • Additionally, the analysis relies heavily on visualizations (like word clouds) without adequate explanation or quantification. Many questions remain unanswered, for example: what percentage of the publications included each of the top keywords? How often did terms related to ethics or pedagogy appear? Among the clusters identified, which was the most dominant in terms of frequency or centrality (in numbers/percentages)?
  • Statements like “emerging patterns underscore concerns regarding AI ethics” (page 9, lines 317-318) are too vague - how many publications addressed ethical concerns? What proportion does this represent within the overall corpus? Word clouds can offer visual interest, but readers should not be expected to interpret them without detailed narrative support. A more data-driven presentation (such as frequency tables or percentages) would allow for a more rigorous and informative interpretation of thematic patterns across the dataset.
  • On page 9, line 323, the authors mention that Australia leads in publications related to ChatGPT, followed by Malaysia, Singapore, Japan, and Romania. This ranking appears counterintuitive, especially in light of recent large-scale reviews such as Bond et al. (2024), which found the United States and Canada to be the leading countries in publications on AI in higher education. The authors should address this discrepancy.
  • I was wondering why the authors exclusively focus on ChatGPT as a generative AI tool. While ChatGPT is prominent, the rationale for limiting the scope to only one tool is not discussed. A more comprehensive approach might have included other popular GenAI tools, so the authors need to include a justification for why ChatGPT alone was selected for analysis.
  • Phrases such as “relatively low” or “a notable increase” (page 10, line 343, 345) are used without providing supporting numerical data. The results section should report concrete values rather than general impressions.
  • There is a recurring tendency to interpret findings within the Results section (see, for example, lines 346–349), where the authors begin speculating on causes or implications. This blurs the line between results and discussion. If the authors wish to engage in interpretation alongside reporting, it would be more appropriate to retitle this section to “Findings and Discussion.” Otherwise, interpretive commentary should be reserved for the subsequent discussion section.
  • On page 11, the authors claim to present the “top 10 research areas for ChatGPT publications,” yet the accompanying graph actually displays the titles of journals with the highest number of relevant publications. This is not equivalent to a classification by research areas, unless the journals have been explicitly categorized into thematic domains, which is neither described in the text nor shown in the figure. Several of these journals belong to the same subfield or are interdisciplinary, so mentioning their titles does not clearly indicate distinct research areas.
  • The Discussion section introduces several important themes - such as concerns around AI ethics, transparency, and user perspectives (see page 12, lines 390–395) - but these themes are not clearly supported by the data presented in the Results section. This weakens the credibility of the argument and raises questions about whether these interpretations are based on the current analysis or extrapolated from the broader literature. Actually, much of the discussion appears to draw heavily on prior literature on GenAI in higher education, rather than on the study’s actual findings. To improve coherence and academic rigor, I recommend that the authors revise the discussion to ensure all interpretive claims are explicitly tied to specific findings, and avoid introducing entirely new themes in the discussion that were not present at all in the results.
  • Minor comment - there appears to be a duplication in the reference list with references 8 and 9 citing the same source.

Good luck and I look forward to reading your revised manuscript!

Author Response

Thank you for the opportunity to review your manuscript. The topic is timely and relevant. Overall, the paper is clearly written and contributes to the growing body of literature on AI in education. However, I have several concerns that require attention.

  • One key concern refers to the misalignment between the research goals and the methodology used. The manuscript aims to conduct a “comprehensive exploration of the impact of ChatGPT in teaching and learning within higher education,” with two specific objectives: to identify key factors influencing the adoption and acceptance of ChatGPT, and to investigate the role of institutional policies and support systems in this process. However, the study employs a bibliometric analysis methodology guided by the PRISMA framework. While bibliometric analysis can provide valuable insights into publication trends, citation networks, and thematic developments, it is not inherently equipped to assess impact - particularly the kind of impact implied by the manuscript’s title and goals. This mismatch raises questions such as: how do the authors define “impact,” and what kind of impact is being measured or inferred through the bibliometric approach? Given that the study is not empirical in the traditional sense and does not involve primary data collection, I would suggest that the authors revise the manuscript’s title and research objectives to more accurately reflect the scope and nature of their analysis. Title and objectives were revised: “A Comprehensive Review of ChatGPT in Teaching and Learning within Higher Education”.

 

  • Another key concern refers to the misalignment between the research objectives and the actual results presented. The findings reported in the manuscript primarily focus on bibliometric indicators, including keyword co-occurrence, country-level publication trends, annual publication volume, and the top research areas associated with ChatGPT publications. These analyses do not clearly address the study’s reported objectives. For example, one research objective refers to the factors influencing adoption, but the analysis does not provide insight into pedagogical, psychological, or institutional factors influencing adoption and acceptance. The aspect of institutional policies and support systems is also not addressed in the findings. No systematic coding, synthesis, or thematic analysis of policy-related content appears to be included. Revised and added from page 529 to 554
  • Introduction - The introduction effectively highlights the potential benefits of integrating ChatGPT into higher education. However, it presents an overly optimistic view, with little to no discussion of the challenges or controversies associated with ChatGPT's use in educational contexts. For instance, on page 2, lines 69–70, the authors state: “While integrating ChatGPT into education brings both opportunities and challenges, it also plays a valuable role in tasks like...” - yet the text proceeds to elaborate only on the opportunities, without addressing the challenges mentioned at all. Revised and added a paragraph from page 77.
  • While the manuscript includes two research objectives, it does not clearly articulate specific research questions. I recommend that the authors explicitly formulate research questions aligned with their stated objectives. Included on page 114.
  • Literature review - The literature review presents several important claims regarding the integration of ChatGPT and AI tools into higher education, but in multiple instances, these claims are not supported by references. For example: on page 3, lines 99-101 and 103-105 contain significant assertions that require proper citation.
  • On line 102, the full term “artificial intelligence” is used, whereas “AI” is consistently used throughout the rest of the section.
  • The sentence in lines 89-93, which discusses the relevance and capabilities of ChatGPT, is supported by a reference to Siemens & Long (2011), which pertains to Learning Analytics. This citation does not appear to substantiate the specific claims made about ChatGPT and should be revised accordingly. Corrected
  • On lines 128-130, an additional key statement is made without any reference.
  • The literature review would also benefit from incorporating more recent and directly relevant studies on the integration of GenAI tools into higher education. The authors may consider referencing the following recent publications that explore both opportunities and concerns surrounding AI in higher education settings. Included
  • Bond, M., Khosravi, H., De Laat, M. et al. (2024). A meta systematic review of artificial intelligence in higher education: A call for increased ethics, collaboration, and rigour. International Journal of Educational Technology in Higher Education, 21(4). https://doi.org/10.1186/s41239-023-00436-z
  • Usher, M., & Amzalag, M. (2025). From prompt to polished: Exploring student-chatbot interactions for academic writing assistance. Education Sciences, 15(3), 329. https://doi.org/10.3390/educsci15030329
  • Usher, M. (2025). Generative AI vs. instructor vs. peer assessments: A comparison of grading and feedback in higher education. Assessment and Evaluation in Higher Education, 1–16. https://doi.org/10.1080/02602938.2025.2487495
  • Subsection 2.1 – the title of this subsection is somewhat misleading. It is titled “Factors Influencing the Adoption and Acceptance of ChatGPT in Higher Education” but its content primarily focuses on ethical considerations rather than broader adoption factors. Moreover, the authors appear to refer to ethical issues again in subsection 2.3, leading to unnecessary repetition. Revised
  • If the intent in subsection 2.1 was indeed to discuss adoption and acceptance, then the current section omits several key aspects, such as institutional readiness, digital infrastructure, faculty and student attitudes, etc. See the paper by Faraon, M., Rönkkö, K., Milrad, M. et al. (2025). International perspectives on artificial intelligence in higher education: An explorative study of students’ intention to use ChatGPT across the Nordic countries and the USA. Educ Inf Technology. https://doi.org/10.1007/s10639-025-13492-x. Subsection 2.1 was revised to include the suggestion
  • On page 4, lines 154-156 are vague and unsupported by references. So here, again, more recent literature addressing ethical concerns around AI and chatbots in higher education should be incorporated. Revised and cited.
  • Subsection 2.2 – On page 5, lines 187–189, the discussion of “policies” lacks clarity. It is unclear whether the policies referenced pertain to the integration of AI in education, digital learning technologies more broadly, or some other domain. The source cited in support of this sentence does not appear to be directly related to AI. Rephrased and added reference
  • As previously noted, subsection 2.3, which refers to ethical issues, is redundant. Much of this content overlaps with subsection 2.1, and its inclusion in both sections fragments the discussion unnecessarily. The authors should consolidate these points into a single, well-titled subsection to avoid repetition.
  • In subsection 2.3, there is a citation labeled simply as “(Reference 2024)” – this appears to be an oversight. Corrected
  • In the Methodology section, the authors state that a total of 3,874 articles were retrieved across three databases, yet there is no information about the time range – the publication years from which these records were drawn. Sorry about that, I have added a year range
  • The methodology section is still missing several important details, particularly in the initial “identification” phase. Specifically: - Search strings – the authors did not provide elaboration about the actual search strings or combinations of keywords used; this is a critical omission. Readers need to know which specific terms were used and how they were combined across the databases. - Time range – as I mentioned above, there is no mention of the time period covered in the search. - Inclusion criteria – these are vaguely stated. For example, it is unclear how the authors operationalized “focusing on the influence of AI technologies (ChatGPT)” – was this determined based on titles, abstracts, keywords, or full-text screening? - Were any automated tools or inter-rater reliability measures used during screening? Addressed
  • At the end of the methodology section, the authors include an illustration depicting the article identification and filtering process. However, it is not labeled or referenced in the text. This figure should be designated as Figure 1 and explicitly referenced in the body of the methodology section. As a result, the subsequent diagram on the following page should be renumbered accordingly (e.g., Figure 2). The PRISMA flow diagram has now been labeled as Figure 1 and is explicitly referenced in the methodology section. All subsequent figures have been renumbered accordingly.
  • Results section – one comment refers to the presentation of visual materials throughout this section. In multiple instances, such as on page 8, line 286 ("The diagram illustrates…") and page 9, line 322 (“the graph showcased…”), the text refers to visual content without formally identifying it as figures/images. To adhere to academic conventions, each figure should be sequentially numbered and clearly referenced in the text.
  • On page 7, the beginning of the Results section (lines 276-279) reiterates information that has already been presented in the methodology section. These sentences could be removed or revised to avoid redundancy.
  • On the same page, lines 279-283 require some polishing for improved clarity and readability. Moreover, the citation for Van Eck and Waltman (2010), referenced in these lines, does not appear in the reference list. Polished
  • The keyword analysis presented in the Results section lacks the depth and precision expected in a bibliometric study. While the authors discuss common terms such as “ChatGPT,” “higher education,” and “university students,” these results are unsurprising given the study’s search scope. It goes back to my comment that the methodology section does not include the specific search terms used, which further undermines the value of reporting that these exact terms appear frequently. Addressed
  • Additionally, the analysis relies heavily on visualizations (like word clouds) but without adequate explanation or quantification. Many questions remain unanswered, for example: - What percentage of the publications included each of the top keywords? - How often did terms related to ethics or pedagogy appear? - Among the clusters identified, which was the most dominant in terms of frequency or centrality (in numbers/percentages)? Revised
  • Statements like “emerging patterns underscore concerns regarding AI ethics” (page 9, lines 317-318) are too vague - how many publications addressed ethical concerns? What proportion does this represent within the overall corpus? Word clouds can offer visual interest, but readers should not be expected to interpret them without detailed narrative support. A more data-driven presentation (such as frequency tables or percentages) would allow for a more rigorous and informative interpretation of thematic patterns across the dataset. Revised
  • On page 9, line 323, the authors mention that Australia leads in publications related to ChatGPT, followed by Malaysia, Singapore, Japan, and Romania. This ranking appears counterintuitive, especially in light of recent large-scale reviews such as Bond et al. (2024), which found the United States and Canada to be the leading countries in publications on AI in higher education. The authors should address this discrepancy. Revised
  • I was wondering why the authors exclusively focus on ChatGPT as a generative AI tool. While ChatGPT is prominent, the rationale for limiting the scope to only one tool is not discussed. A more comprehensive approach might have included other popular GenAI tools, so the authors need to include a justification for why ChatGPT alone was selected for analysis. A justification has now been added to the manuscript to clarify this choice.
  • Phrases such as “relatively low” or “a notable increase” (page 10, lines 343 and 345) are used without providing supporting numerical data. The results section should report concrete values rather than general impressions. Revised
  • There is a recurring tendency to interpret findings within the Results section (see, for example, lines 346–349), where the authors begin speculating on causes or implications. This blurs the line between results and discussion. If the authors wish to engage in interpretation alongside reporting, it would be more appropriate to retitle this section to “Findings and Discussion.” Otherwise, interpretive commentary should be reserved for the subsequent discussion section. Thank you for that, revised
  • On page 11, the authors claim to present the “top 10 research areas for ChatGPT publications,” yet the accompanying graph actually displays the titles of journals with the highest number of relevant publications. This is not equivalent to a classification by research areas, unless the journals have been explicitly categorized into thematic domains, which is neither described in the text nor shown in the figure. Several of these journals belong to the same subfield or are interdisciplinary, so mentioning their titles does not clearly indicate distinct research areas. Revised
  • The Discussion section introduces several important themes - such as concerns around AI ethics, transparency, and user perspectives (see page 12, lines 390–395) - but these themes are not clearly supported by the data presented in the Results section. This weakens the credibility of the argument and raises questions about whether these interpretations are based on the current analysis or extrapolated from the broader literature. In fact, much of the discussion appears to draw heavily on prior literature on GenAI in higher education, rather than on the study’s actual findings. To improve coherence and academic rigor, I recommend that the authors revise the discussion to ensure all interpretive claims are explicitly tied to specific findings, and avoid introducing entirely new themes in the discussion that were not present in the results at all. Revised

Author Response File: Author Response.docx

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

The author(s) responded to most of the points raised by the reviewers. The applied revisions are appropriate and contribute to the development of the article. However, the quality, originality and extent of the article remain unchanged.

Comments on the Quality of English Language

A final proofreading by native English language editor will help the readability of the paper.

Author Response

Reviewer's comment: A final proofreading by native English language editor will help the readability of the paper.

Author's response: Thank you once again. The paper will be sent to the language editor to improve its readability.

Reviewer 2 Report

Comments and Suggestions for Authors

It is gratifying to see that you have carefully considered and diligently implemented the suggestions I provided. Your openness to feedback and your effort to enhance the scientific quality of your work are truly commendable. I congratulate you on this meticulous approach and wish you continued success in your academic endeavors.

Author Response

Comment: It is gratifying to see that you have carefully considered and diligently implemented the suggestions I provided. Your openness to feedback and your effort to enhance the scientific quality of your work are truly commendable. I congratulate you on this meticulous approach and wish you continued success in your academic endeavors.

Response: Thank you so much for your kind words and encouragement. I truly appreciate your guidance and constructive feedback; it has played a significant role in helping me refine and strengthen my work. Your support means a great deal, and I remain committed to maintaining a high standard of academic rigor moving forward.

Reviewer 3 Report

Comments and Suggestions for Authors

See the attached file.

Comments for author File: Comments.pdf

Author Response

Please see the attached file.

Author Response File: Author Response.pdf

Round 3

Reviewer 3 Report

Comments and Suggestions for Authors

For your future submissions, please ensure that you respond to each reviewer comment thoroughly and respectfully. Avoid superficial replies, and do not state that an issue has been resolved if it was not actually addressed.

Author Response

Comment 1: The revised title better reflects the actual scope of the manuscript. As a minor suggestion, I would recommend changing the preposition in the title to “A Comprehensive Review of ChatGPT” (instead of “on ChatGPT”) as written in the manuscript.

Response: Revised to “A Comprehensive Review of ChatGPT in Teaching and Learning within Higher Education”

Comment 2: This is written once again in lines 72-73: "Hence, this research aims to conduct a Comprehensive Review on ChatGPT in Teaching and Learning within Higher Education” – why are the words capitalized mid-sentence, and why “review on”?

Response: Revised to “, this research aims to conduct a Comprehensive review of chatgpt in teaching and learning within higher education”

Comment 3: There seems to be a lack of careful proofreading. For example, the revised sentence from the abstract is “This paper aims to conduct a comprehensive on ChatGPT in Teaching and Learning within Higher Education,” which is grammatically incorrect and missing a critical word within it.

Response: Revised to "this paper aims to conduct a comprehensive of ChatGPT in Teaching and Learning within Higher Education"

Comment 4: Regarding the research objectives – the authors did not change their two objectives; hence, my prior concern still remains. The authors responded to me, writing that “Revision was done and added as per your suggestion see line 529 to 554” – well, these lines are the reference list; how does that correspond to my comment?

Response: Thank you for your feedback and suggestions regarding the revision of the paper’s title and objectives.
I have implemented the recommended change to the title, now revised as "A Comprehensive Review of ChatGPT in Teaching and Learning within Higher Education."
Regarding the study’s objectives, I appreciate your input; however, after careful reflection, I believe the original objectives remain well-aligned with the revised title and the scope of the paper. Specifically, the focus on:
1. Identifying key factors influencing the adoption and acceptance of ChatGPT in higher education, and
2. Investigating the role of institutional policies and support systems in this process
directly supports the intent to provide a comprehensive understanding of ChatGPT’s role within teaching and learning environments.

These objectives guide the review's structure and thematic content, ensuring both practical relevance and academic contribution. For these reasons, I have retained the original objectives. I hope this explanation clarifies the rationale behind this decision, and I remain open to any further suggestions you may have.

Comment 5: I will attach my prior comment about the research objectives: Another key concern refers to the misalignment between the research objectives and the actual results presented. The findings reported in the manuscript primarily focus on bibliometric indicators, including keyword co-occurrence, country-level publication trends, annual publication volume, and the top research areas associated with ChatGPT publications. These analyses do not clearly address the study’s reported objectives. For example, one research objective refers to the factors influencing adoption, but the analysis does not provide insight into pedagogical, psychological, or institutional factors influencing adoption and acceptance. The aspect of institutional policies and support systems is also not addressed in the findings. No systematic coding, synthesis, or thematic analysis of policy-related content appears to be included.

Response: Thank you for your feedback. I would like to clarify that the study was intentionally designed as a bibliometric review. The stated objectives – identifying key adoption factors and institutional support – are addressed through bibliometric indicators such as keyword co-occurrence, publication trends, and research areas.
While the study does not include thematic or policy analysis, these aspects are reflected indirectly in the patterns and focus of the existing literature. I believe the current scope aligns with the study’s aim and methodology, and I appreciate your suggestions for future research development.

Comment 6: In response to my previous comment regarding the lack of a balanced perspective in the Introduction, the authors state that they “revised and added a paragraph from page 77” (presumably referring to line 77). However, upon review of the revised manuscript, it is unclear what changes were actually made in this section. The only visible change appears to be the numerical references in yellow. No new sentences or substantive additions are evident in the surrounding text. This raises two possibilities, both of which are problematic: If the highlighted numbers are newly added references, it is unclear how so many citations could have been added without any corresponding expansion or revision of the text itself. Or - if the highlighted material was not intended as a revision, then it is unclear what change, if any, was made in response to my previous concern.

Response: Thank you for your observation. I would like to clarify that the paragraph in question was revised to address your comment about balance, and new references were indeed added. The highlighted numbers reflect a formatting update, from in-text author-date style to numbered referencing as per the journal’s referencing guidelines.

Comment 7: In my initial review, I noted that the manuscript lacks clearly formulated research questions. I recommended that the authors explicitly state research questions aligned with these objectives to improve the focus and structure of the paper. In their response, the authors indicate that “Research question was formulated see line 114". Well, once again, upon reviewing this line, I find no new content that presents a research question. The section remains part of the literature review and does not contain any explicit formulation.

Response: I truly appreciate your recommendation regarding the inclusion of a clearly formulated research question. After careful reflection, I have decided to retain the current structure of the paper, focusing on the stated objectives to guide the bibliometric review. I fully acknowledge the importance of research questions in enhancing clarity and direction, and I will certainly take this valuable suggestion forward in future related studies.

Comment 8: In my previous review, I recommended that the authors expand the literature review and add more recent publications about the integration of GenAI tools into higher education, and also suggested four recent publications; however, I do not see that they have added any of these to the manuscript. In response to my previous comment, they merely responded “Included” – well, included where exactly? How? In what manner?

Response: We have now incorporated a comprehensive paragraph discussing recent and directly relevant studies on the integration of generative AI tools into higher education; see lines 125-136.

Comment 9: In my previous review, I commented about the title of subsection 2.1, and the authors wrote in response that it was revised. Well, it was not; it still has the same title.

Response: Thank you for that. The title of subsection 2.1 remains unchanged. However, we would like to clarify that while the heading was retained, the content within the subsection was revised to better align with the theme of factors influencing the adoption and acceptance of ChatGPT in higher education, as per your earlier suggestion. The updated content now more directly addresses these factors, incorporating relevant literature and discussion.

Comment 10: In my previous review, I commented that in the Methodology section there is no information about the time range – the publication years from which these records were drawn. The authors responded that they have added it, yet I do not see it. If it has been added please state in which lines and highlight it in the revised manuscript.

Response: Sorry about that, it is now included; see line 261

 

Comment 11: In my previous review I commented about the methodology section – “The methodology section is still missing several important details, particularly in the initial “identification” phase. Specifically:
- Search strings - the authors did not provide elaboration about the actual search strings or combinations of keywords used - this is a critical omission. Readers need to know which specific terms were used and how they were combined across the databases. - Time range – as I mentioned above, there is no mention of the time period covered in the search. – The inclusion criteria are vaguely stated. For example, it is unclear how the authors operationalized “focusing on the influence of AI technologies (ChatGPT)”- was this determined based on titles, abstracts, keywords, or full-text screening? - Were any automated tools or inter-rater reliability measures used during screening?” – I do not see anything highlighted in the revised methodology section. If something was added, please highlight it and mention it in the point-by-point response.

Response: Thank you for your feedback. We have now addressed your comments by including a PRISMA diagram to clearly show the article selection process, including the number of duplicates, irrelevant studies removed, and the final number of articles included. We also clarified the time range (2018–2025), described the use of Boolean operators, and explained that article selection was based on title, abstract, and keyword screening. The screening was conducted manually by the lead author. We hope these additions now meet the expectations for transparency in the methodology section.

Comment 12: In my initial review, I noted that the figures included in the manuscript lacked proper labeling, titles, and in-text references. The authors responded that this issue has been addressed. However, upon reviewing the revised manuscript, I still observe figures without any labels, captions, or consistent references within the text. This is not a minor formatting detail - it significantly affects the clarity, readability, and academic quality of the paper.

Response: Figures were revised and labelled accordingly.

 
