Use of Generative Artificial Intelligence by Final Degree Project Students: Is It Useful in All Steps of Their Work?
Round 1
Reviewer 1 Report
Comments and Suggestions for Authors
Thank you for the paper. It reads well but I am not sure about the contribution. The conclusions are not novel (see e.g., "These findings collectively demonstrate that individual experiences with GenAI in academic contexts vary substantially, even within disciplinary groups, highlighting the critical importance of personalized approaches to AI integration in educational settings."). I can't imagine any practical implication based on those conclusions.
Also, the novelty of the paper is not explained in the introduction. Moreover, the paper ignores the research published by the community of writing studies scholars.
I have the biggest issue with the Methods section, which should provide all the details of how the study was performed. From the description provided, I wouldn't be able to replicate the study. I don't understand:
- what happened during the sessions
- who observed what and how
- what happened during the moderated discussions
- how were the answers coded/ analyzed
Author Response
Comments 1:
"Thank you for the paper. It reads well but I am not sure about the contribution. The conclusions are not novel (see e.g., "These findings collectively demonstrate that individual experiences with GenAI in academic contexts vary substantially, even within disciplinary groups, highlighting the critical importance of personalized approaches to AI integration in educational settings."). I can't imagine any practical implication based on those conclusions.
Also, the novelty of the paper is not explained in the introduction. Moreover, the paper ignores the research published by the community of writing studies scholars."
Response 1:
The present version of the manuscript has been carefully revised to better explain the usefulness of our findings, and it now includes references to additional works, as the reviewer suggested (the reference list grew from 16 to 26 entries, all of them relevant and recent). We have modified the conclusions to make them clearer, and we have also introduced the main goals of our work at the end of the first section (Introduction), where the previous version of the manuscript already proposed the two key points of the paper. Readers can thus understand the objectives of our study from the very beginning.
Comments 2:
"I have the biggest issue with the Methods section, which should provide all the details of how the study was performed. From the description provided, I wouldn't be able to replicate the study. I don't understand:
what happened during the sessions
who observed what and how
what happened during the moderated discussions
how were the answers coded/ analyzed"
Response 2:
We apologize for not explaining the methodology properly in the first version of the manuscript. After carefully re-reading the manuscript, we found that the main problem was that the procedure of the three discussion sessions was not completely described. Without an adequate and complete description, the problem extended to the following section (Tools), where the reader could not understand why and how the different tools were used.
We have corrected this problem in the present version of the manuscript, including detailed information about the four points raised by the reviewer.
We would like to close our answer by saying that we really appreciate all the reviewer's comments, which were extremely relevant and useful for improving the quality of the manuscript.
Reviewer 2 Report
Comments and Suggestions for Authors
The manuscript addresses a timely and important topic, examining students’ perceptions of generative AI in final degree projects across multiple disciplines. The design is well thought out and the integration of linguistic analysis adds methodological rigor. The study is clearly relevant for higher education and will be of interest to both educators and researchers.
That said, a few areas could be improved. The English expression, while generally clear, would benefit from some editing to improve readability and flow, especially in the introduction and discussion sections where sentences are sometimes dense. The figures are informative but captions and labels could be expanded to make them more self-explanatory for readers unfamiliar with the study. Finally, the relatively small sample size should be acknowledged more explicitly as a limitation in the conclusions, to avoid overgeneralization of the findings.
Overall, this is a valuable contribution and, with minor refinements, the paper has the potential to be a strong addition to the literature.
Author Response
Comments 1:
"The manuscript addresses a timely and important topic, examining students’ perceptions of generative AI in final degree projects across multiple disciplines. The design is well thought out and the integration of linguistic analysis adds methodological rigor. The study is clearly relevant for higher education and will be of interest to both educators and researchers."
Response 1:
We really appreciate the reviewer's positive opinion about the potential interest of our article. We agree that the article will be of interest to both educators and researchers, involving different areas such as Computer Science, Education, and Psychology.
Comments 2:
"That said, a few areas could be improved. The English expression, while generally clear, would benefit from some editing to improve readability and flow, especially in the introduction and discussion sections where sentences are sometimes dense."
Response 2:
We are really interested in making the article easy to read, so we have read it again, trying to make the sentences clearer in the sections the reviewer suggested. Indeed, we have re-read the whole paper, but the Methodology and Results sections do not seem difficult to read in the present version. Those sections are the most closely tied to the scientific results and include less discussion, so their sentences cannot be extremely long or complicated.
Comments 3:
" The figures are informative but captions and labels could be expanded to make them more self-explanatory for readers unfamiliar with the study."
Response 3:
We do not completely agree with the reviewer here, but we understand his/her concerns. When reaching the Results section, we noticed that the objective of the linguistic analysis is hard to grasp, which in turn makes it difficult for readers to appreciate the importance of the figure and the results. We could not find a clear way to expand the captions and labels as the reviewer suggested, but we have expanded the explanation of the results, as well as the Tools section, to clarify the analysis performed in this first step of the experiment.
However, regarding Figure 2, we agree with the reviewer: the caption could be improved, and it has been revised in the present version of the manuscript.
Comments 4:
"Finally, the relatively small sample size should be acknowledged more explicitly as a limitation in the conclusions, to avoid overgeneralization of the findings."
Response 4:
We have included a sentence at the end of the conclusions addressing this point.
Comments 5:
"Overall, this is a valuable contribution and, with minor refinements, the paper has the potential to be a strong addition to the literature."
Response 5:
Again, we appreciate the reviewer's good opinion of our article, and we strongly believe that his/her comments have helped to improve the quality of the document.
Reviewer 3 Report
Comments and Suggestions for Authors
The aim of the study was to address a gap in the existing literature on the role of GenAI in tackling challenging and complex tasks such as the preparation of final degree projects across various university programs. The study involved 11 final-year students from the following fields of study: Tourism, Business Administration and Management, Law, Computer Engineering, Chemical Engineering, Chemistry, History, Musicology, Philosophy, Medicine, and Psychology.
To ensure participant homogeneity, which the authors regard as an important methodological prerequisite for the comparison and interpretation of results, three indicators were defined: Type Token Ratio (TTR), Flesch-Kincaid Grade (FKG), and Academic Maturity, the latter predicted using AI. While TTR and Academic Maturity indicated a high degree of homogeneity, the FKG results revealed considerable variability.
For the purposes of the study, students used GenAI in three different phases of their final project preparation. In total, each student engaged in these activities for three hours, supervised by at least three supervisors. After a 60-minute GenAI-supported activity, the students spent the following 60 minutes responding to the researchers’ questions.
The paper presents the results of a descriptive statistical analysis concerning students’ assessments of the effectiveness of AI in various aspects of final project preparation. Most of the results focus on individual differences among students from different majors. Students perceived GenAI as most useful for developing theoretical framework and defining research objectives, and least useful for visualizing results. Differences in responses were minimal for the use of GenAI in relation to the theoretical framework and objectives, whereas results for other aspects of project preparation varied considerably. The authors attribute the pronounced variability in individual results to disciplinary factors rather than individual differences between participants.
Strengths of the paper
The authors provided a clear explanation of the results of AI tool use in recent studies.
They identified a lack of research on the use of GenAI in tackling complex and demanding tasks, particularly at the university level of education. The paper therefore addresses an important and under-researched topic.
Some particularly interesting conclusions are presented, for example:
“GenAI as a cognitive amplifier that enhances higher order thinking by offloading routine tasks [6,14]. On the contrary, students demonstrated considerable skepticism regarding GenAI reliability for bibliographic research and source validation, with none expressing complete trust in GenAI-generated bibliographic recommendations—echoing concerns about AI’s epistemological limitations and the need for domain-specific training [5,3]…
Students expressed mixed opinions regarding GenAI's ability to synthesize conclusions with theoretical frameworks, acknowledging its potential for novel perspectives while critiquing its tendency toward oversimplification and false positivity bias [3,11].”
Weaknesses of the paper
The abstract highlights the use of a mixed-methods design based on the collection and analysis of both quantitative and qualitative data. From the description of the 60-minute discussions following students’ use of GenAI in preparing their final projects, it is evident that both types of data were collected. However, the results are dominated by quantitative findings. Furthermore, the study does not specify which type of mixed-methods design was employed or provide a rationale for its use.
In subsection 2.2., the term experimental setup is mentioned, and the study is referred to as an experiment in several places. However, it is never explained what type of experiment was conducted. Creswell (2019) distinguishes the following types of experimental designs used in educational research:
“Between-group designs
- True experiments (pre- and posttest, posttest only)
- Quasi-experiments (pre- and posttest, posttest only)
- Causal-comparative research
- Factorial designs
Within-group or individual designs
- Time-series experiments (interrupted, equivalent)
- Repeated-measures experiments
- Single-subject experiments” (p. 309)
This study is difficult to categorize within any of the above-mentioned experimental designs.
The results primarily concern the comparison of responses from the 11 participants, each representing a different university program. While they were found to have similar levels of academic maturity and lexical diversity/readability (as measured by TTR), it is rather bold to suggest that “GenAI utility perceptions could be attributed to disciplinary factors rather than individual academic preparedness” (lines 332–333). It is quite likely that the differences in participants’ responses were the result of various factors that could not be controlled, since each discipline was represented by only one student. Even if the study had been conducted with a larger sample, it would still have been difficult to control for 11 different groups of participants.
Based on descriptive statistical analysis alone, it is not possible to generalize the findings or claim with reasonable certainty that the conclusions apply beyond the specific context of this study.
Participants used GenAI under controlled conditions to prepare final projects on three occasions, for a total of three hours. Based on such limited experience with AI tools, it is unlikely that students could draw substantial conclusions. Their responses were likely influenced by previous experience with GenAI.
The discussion could be further elaborated and more clearly linked to the findings of similar studies.
Conclusion
Since most of the results and conclusions relate to comparisons of different disciplines represented by a single student, the results and conclusions obtained cannot be generalized and are therefore not sufficiently scientifically relevant. Given the importance and timeliness of the topic, it would be advisable to conduct a new study based on a clearly defined methodology and a larger sample if an experimental design is to be used. As the authors have already collected qualitative data (screen recordings and guided discussions after the use of GenAI), it might be worthwhile to develop a case study that could contribute to the understanding of the use of GenAI in preparing final degree projects. In this case, the small sample size would no longer be a limitation.
Reference
Creswell, J. W. (2019). Educational research: Planning, conducting, and evaluating quantitative and qualitative research. Pearson.
Author Response
Comments 1:
The aim of the study was to address a gap in the existing literature on the role of GenAI in tackling challenging and complex tasks such as the preparation of final degree projects across various university programs. The study involved 11 final-year students from the following fields of study: Tourism, Business Administration and Management, Law, Computer Engineering, Chemical Engineering, Chemistry, History, Musicology, Philosophy, Medicine, and Psychology.
To ensure participant homogeneity, which the authors regard as an important methodological prerequisite for the comparison and interpretation of results, three indicators were defined: Type Token Ratio (TTR), Flesch-Kincaid Grade (FKG), and Academic Maturity, the latter predicted using AI. While TTR and Academic Maturity indicated a high degree of homogeneity, the FKG results revealed considerable variability.
For the purposes of the study, students used GenAI in three different phases of their final project preparation. In total, each student engaged in these activities for three hours, supervised by at least three supervisors. After a 60-minute GenAI-supported activity, the students spent the following 60 minutes responding to the researchers’ questions.
The paper presents the results of a descriptive statistical analysis concerning students’ assessments of the effectiveness of AI in various aspects of final project preparation. Most of the results focus on individual differences among students from different majors. Students perceived GenAI as most useful for developing theoretical framework and defining research objectives, and least useful for visualizing results. Differences in responses were minimal for the use of GenAI in relation to the theoretical framework and objectives, whereas results for other aspects of project preparation varied considerably. The authors attribute the pronounced variability in individual results to disciplinary factors rather than individual differences between participants.
Strengths of the paper
The authors provided a clear explanation of the results of AI tool use in recent studies.
They identified a lack of research on the use of GenAI in tackling complex and demanding tasks, particularly at the university level of education. The paper therefore addresses an important and under-researched topic.
Some particularly interesting conclusions are presented, for example:
“GenAI as a cognitive amplifier that enhances higher order thinking by offloading routine tasks [6,14]. On the contrary, students demonstrated considerable skepticism regarding GenAI reliability for bibliographic research and source validation, with none expressing complete trust in GenAI-generated bibliographic recommendations—echoing concerns about AI’s epistemological limitations and the need for domain-specific training [5,3]…
Students expressed mixed opinions regarding GenAI's ability to synthesize conclusions with theoretical frameworks, acknowledging its potential for novel perspectives while critiquing its tendency toward oversimplification and false positivity bias [3,11].”
Answer 1:
We really appreciate the complete analysis made by the reviewer, and we also appreciate his/her good opinion of our work and the detailed account of what he/she considers the strengths of our manuscript.
Comments 2:
Weaknesses of the paper
The abstract highlights the use of a mixed-methods design based on the collection and analysis of both quantitative and qualitative data. From the description of the 60-minute discussions following students’ use of GenAI in preparing their final projects, it is evident that both types of data were collected. However, the results are dominated by quantitative findings. Furthermore, the study does not specify which type of mixed-methods design was employed or provide a rationale for its use.
In subsection 2.2., the term experimental setup is mentioned, and the study is referred to as an experiment in several places. However, it is never explained what type of experiment was conducted. Creswell (2019) distinguishes the following types of experimental designs used in educational research:
“Between-group designs
- True experiments (pre- and posttest, posttest only)
- Quasi-experiments (pre- and posttest, posttest only)
- Causal-comparative research
- Factorial designs
Within-group or individual designs
- Time-series experiments (interrupted, equivalent)
- Repeated-measures experiments
- Single-subject experiments” (p. 309)
This study is difficult to categorize within any of the above-mentioned experimental designs.
Answer 2:
We appreciate the reviewer’s careful feedback and have revised the Methods to clarify design and terminology. Specifically, we now state that the study employs a convergent mixed-methods design with concurrent collection of quantitative data (closed-ended items during the debates) and qualitative data (full recordings/transcripts of 60-minute discussions). Each strand was analyzed separately and integrated at interpretation to enable triangulation. The rationale for mixing is pragmatic and complementary: numerical indicators of perceived GenAI utility are paired with participants’ articulated reasoning to provide a fuller account. Although results are presented with quantitative summaries first, inferential weight is assigned at the integration stage where convergences and divergences are interpreted. We also clarify that the study is descriptive and observational—there is no manipulation, random assignment, or control group—and we have replaced “experimental setup” with “study procedures.”
Comments 3:
The results primarily concern the comparison of responses from the 11 participants, each representing a different university program. While they were found to have similar levels of academic maturity and lexical diversity/readability (as measured by TTR), it is rather bold to suggest that “GenAI utility perceptions could be attributed to disciplinary factors rather than individual academic preparedness” (lines 332–333). It is quite likely that the differences in participants’ responses were the result of various factors that could not be controlled, since each discipline was represented by only one student. Even if the study had been conducted with a larger sample, it would still have been difficult to control for 11 different groups of participants.
Based on descriptive statistical analysis alone, it is not possible to generalize the findings or claim with reasonable certainty that the conclusions apply beyond the specific context of this study.
Participants used GenAI under controlled conditions to prepare final projects on three occasions, for a total of three hours. Based on such limited experience with AI tools, it is unlikely that students could draw substantial conclusions. Their responses were likely influenced by previous experience with GenAI.
Answer 3:
We appreciate these important cautions. In response, we have tempered all disciplinary attributions, characterizing differences as exploratory and hypothesis-generating given the single participant per discipline and the descriptive nature of our analyses; noted that the limited exposure (~3 hours) constrains ecological validity; and strengthened mixed-methods integration by aligning descriptive trends with explanatory qualitative excerpts in the results. In addition, we have conducted an inductive–deductive thematic analysis of the verbatim transcripts: two researchers independently coded an initial subset to develop a shared codebook, reconciled discrepancies by consensus, and applied the finalized codebook to the full corpus using constant comparison; we maintained an audit trail and performed cross-checks on a random sample. Themes were then mapped to item-level descriptive results to integrate qualitative explanations with the quantitative strand.
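The cross-checks on coding agreement described above were resolved by consensus rather than by reporting a coefficient, but such agreement checks are often quantified with Cohen's kappa. The sketch below is purely illustrative (the function name and labels are our own, not the manuscript's tooling): it computes kappa for two coders' label sequences over the same set of transcript segments.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders labeling the same items."""
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    # Observed proportion of items on which the coders agree.
    po = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label frequencies.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    pe = sum(freq_a[label] * freq_b.get(label, 0) for label in freq_a) / (n * n)
    return 1.0 if pe == 1 else (po - pe) / (1 - pe)
```

Values near 1 indicate near-perfect agreement; values near 0 indicate agreement no better than chance, which is the usual threshold prompting codebook revision before a second coding pass.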
Comments 4:
The discussion could be further elaborated and more clearly linked to the findings of similar studies.
Answer 4:
We have revised Section 4 (renamed "Discussion and Conclusions") and provided more information at the end of the section. We believe these additions enhance the theoretical depth and interpretative richness of the discussion, aligning our results with ongoing debates in the field.
Comments 5:
Conclusion
Since most of the results and conclusions relate to comparisons of different disciplines represented by a single student, the results and conclusions obtained cannot be generalized and are therefore not sufficiently scientifically relevant. Given the importance and timeliness of the topic, it would be advisable to conduct a new study based on a clearly defined methodology and a larger sample if an experimental design is to be used. As the authors have already collected qualitative data (screen recordings and guided discussions after the use of GenAI), it might be worthwhile to develop a case study that could contribute to the understanding of the use of GenAI in preparing final degree projects. In this case, the small sample size would no longer be a limitation.
Answer 5:
We appreciate the reviewer’s point regarding generalizability from single-student disciplinary representations. In response, we have reframed the study as an exploratory, mixed-methods case study using a convergent core design: each student constitutes a bounded case (three structured sessions; screen recordings; moderated debates; closed-ended items), followed by within-case profiling and cross-case comparison to identify tentative patterns (analytic generalization) rather than statistical generalization. We have tempered all disciplinary attributions and now present them as hypothesis-generating insights grounded in convergences between descriptive trends and explanatory themes from the transcripts and screen captures. We also clarify that particularity, not broad generalization, is the hallmark of a case study's contribution, and we position our findings as building blocks for a subsequent, larger-N confirmatory design (if an experimental approach is warranted). Methodologically, we now specify the study boundaries (time/activity), the data sources, and the integration logic to enhance trustworthiness. This reframing leverages the qualitative corpus already collected and responds to the reviewer's observation that a small N ceases to be a limitation in a properly bounded case study aimed at understanding GenAI use in final degree project preparation.
Reviewer 4 Report
Comments and Suggestions for Authors
Dear Authors,
First, I would like to say that I found this manuscript very interesting, and it sheds light on important issues within the subject of study.
The methodology is generally appropriate and well-structured. The references are solid, and the topic, as noted, is highly relevant. The conclusions in relation to the initial hypotheses are coherent. It is noteworthy that, despite the small sample size, the authors have carefully nuanced the results to draw valid and easily understandable conclusions. Moreover, the impact of using AI in final degree projects is of high relevance, since it is something educators increasingly face in their daily practice.
That said, I believe the manuscript would benefit from several improvements:
- The manuscript would benefit from a thorough language revision, especially in the conclusions section. There are redundancies at the beginning of sentences and paragraphs, and in some words and expressions literal Spanish-to-English translations affect clarity.
- The introduction effectively reflects the advances and positive outcomes associated with AI use. However, the potential drawbacks are presented less clearly throughout the paper. It would strengthen the manuscript to expand on the possible negative consequences of irresponsible AI use—such as issues with information retention, immediacy in pedagogy, lack of attention, or conceptual understanding.
- While the methodology is generally clear, some aspects remain somewhat diffuse. On the one hand, the study evaluates AI linguistically, and on the other, it assesses students’ perceptions of the outcomes. Moreover, the procedures for calculating TTR and FKG are not described in sufficient detail. I recommend clarifying these points explicitly in the methodology section.
- Finally, I strongly encourage adding a critical discussion of the results. Such a section would provide valuable perspectives, highlight limitations, and suggest future directions within the field of didactics.
I recommend major revisions before the manuscript can be considered for publication.
That is all from my side.
Thank you very much
Author Response
Comments 1:
First, I would like to say that I found this manuscript very interesting, and it sheds light on important issues within the subject of study.
The methodology is generally appropriate and well-structured. The references are solid, and the topic, as noted, is highly relevant. The conclusions in relation to the initial hypotheses are coherent. It is noteworthy that, despite the small sample size, the authors have carefully nuanced the results to draw valid and easily understandable conclusions. Moreover, the impact of using AI in final degree projects is of high relevance, since it is something educators increasingly face in their daily practice.
That said, I believe the manuscript would benefit from several improvements:
The manuscript would benefit from a thorough language revision, especially in the conclusions section. There are redundancies at the beginning of sentences and paragraphs, and in some words and expressions literal Spanish-to-English translations affect clarity.
Response 1:
Thank you for this comment and the recommendations on language style. We have thoroughly revised the entire manuscript to improve readability and have rewritten the conclusions, together with the newly added discussion content, to ensure linguistic accuracy and clarity while avoiding redundancies.
Comments 2:
The introduction effectively reflects the advances and positive outcomes associated with AI use. However, the potential drawbacks are presented less clearly throughout the paper. It would strengthen the manuscript to expand on the possible negative consequences of irresponsible AI use—such as issues with information retention, immediacy in pedagogy, lack of attention, or conceptual understanding.
Response 2:
We greatly appreciate this suggestion for improvement. More qualitative analysis has been added to examine these types of negative consequences of GenAI use, as reflected in the students' perceptions.
Comments 3:
While the methodology is generally clear, some aspects remain somewhat diffuse. On the one hand, the study evaluates AI linguistically, and on the other, it assesses students’ perceptions of the outcomes. Moreover, the procedures for calculating TTR and FKG are not described in sufficient detail. I recommend clarifying these points explicitly in the methodology section.
Response 3:
We appreciate the reviewer’s careful feedback and have revised the Methods to clarify design and terminology. Firstly, details regarding the linguistic feature analysis tools and their use have been added. Additionally, we now explain that the study employs a convergent mixed-methods design with concurrent collection of quantitative data (closed-ended items during the debates) and qualitative data (full recordings/transcripts of the 60-minute discussions). Each strand was analyzed separately and integrated at interpretation to enable triangulation. The rationale for mixing is pragmatic and complementary: numerical indicators of perceived GenAI utility are paired with participants’ articulated reasoning to provide a fuller account.
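For readers of this response, the two readability indicators discussed (TTR and FKG) can be sketched as follows. This is an illustrative sketch only: the tokenizer, the syllable-counting heuristic, and the function names are assumptions made here for demonstration, not the exact linguistic analysis tooling described in the manuscript.

```python
import re

def type_token_ratio(text):
    # TTR = number of distinct word types / total word tokens
    tokens = re.findall(r"[a-zA-Z']+", text.lower())
    return len(set(tokens)) / len(tokens) if tokens else 0.0

def count_syllables(word):
    # Rough heuristic (assumption): one syllable per group of vowels,
    # with a minimum of one syllable per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text):
    # Standard formula: 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[a-zA-Z']+", text)
    if not words:
        return 0.0
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / sentences + 11.8 * syllables / len(words) - 15.59
```

In practice a dedicated readability library would give more robust syllable counts than this heuristic; the FKG equation itself is the standard Flesch-Kincaid Grade formula.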
Comments 4:
Finally, I strongly encourage adding a critical discussion of the results. Such a section would provide valuable perspectives, highlight limitations, and suggest future directions within the field of didactics.
Response 4:
We have revised Section 4 (“Discussion and Conclusions”) by adding two new paragraphs at the end of the section. We believe these additions enhance the theoretical depth and interpretative richness of the discussion, aligning our results with ongoing debates in the field.
Round 2
Reviewer 1 Report
Comments and Suggestions for Authors
The authors definitely tried to improve the paper, but the research design is insufficient and I don't think this can be improved. I don't think students' interactions with AI can be studied within one 60-minute session + 60-minute group discussion.
Author Response
Comments 1:
The authors definitely tried to improve the paper, but the research design is insufficient and I don't think this can be improved. I don't think students' interactions with AI can be studied within one 60-minute session + 60-minute group discussion.
Response 1:
We apologize for not finding a way to make the methodology clear in previous versions of the manuscript. Although the reviewer stated that we analyzed students' interaction with AI within one 60-minute session + 60-minute group discussion, our study actually includes three sessions of that kind. We assume complete responsibility for this misunderstanding, and we have made a great effort in the present version of the manuscript to clarify the methodology and give all the details of how we performed the experiment, in order to avoid any confusion or lack of information. In detail:
- We included Creswell's core typology for mixed-methods research in the methodology.
- We now state that the study employs a convergent mixed-methods design with concurrent collection of quantitative data (closed-ended items during the debates) and qualitative data (full recordings/transcripts of the 60-minute discussions).
- In the present version of the manuscript, although results are presented with quantitative summaries first, inferential weight is assigned at the integration stage, where convergences and divergences are interpreted. We have also clarified that the study is descriptive and observational (i.e., there is no manipulation, random assignment, or control group), and we have replaced “experimental setup” with “study procedures.”
Regarding the reviewer's concerns about the limited duration of the experiment, we must insist that it is actually three times longer than stated. Nevertheless, to make sure that the experiment is adequate and robust, we have also improved the manuscript in the following ways:
- We have tempered all disciplinary attributions, characterizing differences as exploratory and hypothesis-generating given the single participant per discipline and the descriptive nature of our analyses; noted that the limited exposure (~3 hours) constrains ecological validity; and strengthened mixed-methods integration by aligning descriptive trends with explanatory qualitative excerpts in the results.
- We have conducted an inductive–deductive thematic analysis of the verbatim transcripts: two researchers independently coded an initial subset to develop a shared codebook, reconciled discrepancies by consensus, and applied the finalized codebook to the full corpus using constant comparison; we maintained an audit trail and performed cross-checks on a random sample. Themes were then mapped to item-level descriptive results to integrate qualitative explanations with the quantitative strand.
- We have reframed the study as an exploratory, mixed-methods case study using a convergent core design: each student constitutes a bounded case (three structured sessions; screen recordings; moderated debates; closed-ended items), followed by within-case profiling and cross-case comparison to identify tentative patterns (analytic generalization), rather than statistical generalization.
- We have tempered all disciplinary attributions and now present them as hypothesis-generating insights grounded in convergences between descriptive trends and explanatory themes from transcripts/screen captures.
- We also clarify that particularity (not broad generalization) is the hallmark of case study contribution, and we position our findings as building blocks for a subsequent, larger-N confirmatory design (if an experimental approach is warranted).
We do believe that the present version of the manuscript fully answers all the reviewer's concerns and makes the article suitable for publication.
We really appreciate the reviewer's comments and the time spent analyzing our manuscript.
Reviewer 3 Report
Comments and Suggestions for Authors
The paper has been systematically revised in accordance with the suggestions. The methodology is now more soundly structured. I thank the authors for their effort and look forward to the publication of the paper.
Reviewer 4 Report
Comments and Suggestions for Authors
Dear authors,
Thank you very much for your kind response. In my view, with the revisions made, the paper is suitable for publication in its current form.
Thank you.
