Article
Peer-Review Record

Use and Effectiveness of Chatbots as Support Tools in GIS Programming Course Assignments

ISPRS Int. J. Geo-Inf. 2025, 14(4), 156; https://doi.org/10.3390/ijgi14040156
by Hartwig H. Hochmair
Reviewer 1: Anonymous
Reviewer 2: Anonymous
Reviewer 3:
Reviewer 4: Anonymous
Submission received: 6 January 2025 / Revised: 16 March 2025 / Accepted: 30 March 2025 / Published: 2 April 2025

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Dear Author,

Thank you for the interesting manuscript on a highly emerging topic. In my opinion, it is worthy of publication after some small improvements.

Overall, in my opinion, the research design is very well constructed, and I personally appreciate discussions such as that on overreliance, for which the findings in RQ3 were a clear sign, as well as the students' evaluations of the code modifications.

Nevertheless, I find a lack of discussion of the actual outputs of the assignments (in the end, were they submitted correctly or not?) and of whether the students' self-perceived helpfulness actually translated into better final marks. In fact, most of your discussion presents the students' side, evaluating the use of chatbots; in my opinion, it is missing the part on how you, as an educator, evaluate the use of chatbots.

A few smaller comments follow:

- In the spirit of openness and reproducibility as a researcher, are you willing to publish your quizzes and questionnaires? Eventually, if many scholars do the same, some definitive set could perhaps be adopted as a standard.

- In my opinion, a bit more explanation is needed of why you chose specifically assignments 2, 3, and 7; it is not really clear.

- l156-157: it probably needs to be clarified that this was useful at the time of the experiment.

Please adjust the formatting accordingly.

Regards.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

Article ID : ijgi-3438214

Article Name : Use and Effectiveness of Chatbots as Support Tools in GIS Programming Course Assignments
The present article makes many strong technical contributions.

Advancements in large language models have profoundly altered higher education through the incorporation of AI chatbots in course design, instruction, administration, and student support. This study assesses the utilization trends, efficacy, and perceptions of chatbots in a graduate-level GIS Programming course using Python at a U.S. university. Students reported perceived skill improvements and their use of assistance resources across three assignments of differing complexity and spatial context. During group discussions, students shared their experiences, strategies, and anticipated future applications of chatbots in GIS programming and other areas. The findings provide new perspectives on the role of generative AI in GIS programming education, showing that prior programming knowledge increases the perceived utility of assignments. The research indicates that chatbots partially supplant conventional resources (e.g., websites) for assignment assistance. Students viewed the chatbots as effective, especially for intricate spatial problems, and were optimistic about their prospective role in enhancing learning; yet apprehensions emerged that excessive dependence on AI may impede the cultivation of autonomous problem-solving and programming abilities.
This study offers significant insights for enhancing the incorporation of chatbots in GIS teaching.

Therefore, it is interesting and attractive. However, major revision is needed to enhance the quality of the present manuscript, as follows:

The article should open with three subsections in the introduction: a section on motivation, a section on contributions, and a section on organization.

The literature review does not fulfill the established criteria. I would appreciate it if you could add a table that provides a general overview of current developments on this particular topic.

The contributions of the study are either absent or not stated clearly. Please include them in an appropriate contributions subsection.

Please add a table comparing the existing work with the proposed work.

The methodology should be presented as a flow chart for a clear understanding of the present work.

Figure 1 should be clearly described, together with its sub-parts.

Section 2.2.2 is not clearly explained.

Comments on the Quality of English Language

The quality of presentation is acceptable.

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 3 Report

Comments and Suggestions for Authors

1. The quiz results suggest chatbot exposure lowers comprehension, but could motivation or test design be influencing this outcome? 

2. Should AI use be phased in for beginners to prevent over-reliance?

3. Do you think the small sample size affects the generalizability of the findings, particularly for different GIS education contexts?

4. Were students from different academic backgrounds, or did most have prior GIS or programming experience? Could differences in prior knowledge influence chatbot reliance and perceived learning outcomes?

Author Response

Please see the attachment

Author Response File: Author Response.pdf

Reviewer 4 Report

Comments and Suggestions for Authors

The manuscript reports the results of an interesting study. I would like to mention the following points for improvement:

1) Many of the findings rely on student self-assessments (e.g., perceived skill improvement, chatbot helpfulness). These subjective measures may not accurately reflect actual learning outcomes. Moreover, some survey questions could introduce bias if they guide students towards expected responses rather than capturing genuine perspectives.

2) Another point concerns effect size reporting: while effect sizes (Cohen's d, Cramer's V, η²) are included, more context is needed regarding practical significance. Even if statistical significance is achieved in this empirical study, the question arises: is the effect large enough to be meaningful in practice?

3) The study compares GPT-4 Turbo and Gemini 1.5 Pro but does not deeply analyze the differences in their responses. More detailed evaluation (e.g., accuracy, relevance, specificity) would enhance this aspect.

4) The study focuses on students' experiences but does not consider how instructors perceive chatbot use in programming assignments. In addition, while the study discusses chatbots replacing websites, it does not compare chatbots to human-based support (e.g., tutors, office hours). How would you evaluate the study results against this background?

Author Response

Please see the attachment.

Author Response File: Author Response.pdf

Round 2

Reviewer 2 Report

Comments and Suggestions for Authors

The authors addressed all queries raised in the reviewer comments.

The manuscript may be accepted for publication.


Reviewer 4 Report

Comments and Suggestions for Authors

The author addressed all points mentioned in the first review round, both in the manuscript and in the response letter. The argumentation is sound. Therefore, I would like to recommend this manuscript for publication.
