Article
Peer-Review Record

Supporting Reflective AI Use in Education: A Fuzzy-Explainable Model for Identifying Cognitive Risk Profiles

Educ. Sci. 2025, 15(7), 923; https://doi.org/10.3390/educsci15070923
by Gabriel Marín Díaz
Submission received: 4 June 2025 / Revised: 3 July 2025 / Accepted: 15 July 2025 / Published: 18 July 2025
(This article belongs to the Special Issue Generative AI in Education: Current Trends and Future Directions)

Round 1

Reviewer 1 Report

Comments and Suggestions for Authors

Dear Authors,

It was a pleasure reading your work. It is thought-provoking and promising. It is an undeniable fact that GenAI must be integrated into our pedagogical approaches and practices. However, I would like to see some clarification about the data collection process, since you have a large number of participants and a large amount of data. Although you have stated the context and procedures of data collection, collecting and managing this much data requires delicate policies, and it is worth describing them in detail.

In addition, the significance of the study is repeated in five or six different places. The significance of all appropriately executed research deserves highlighting; however, limiting this to one or two instances sounds more appropriate. Although this type of study is novel, I would like to be assured that the relevant work has been reviewed thoroughly. Is there prior work of this kind to compare and contrast with?

More importantly, although the discussion and conclusion address the importance of the research question ("How can differentiated user profiles be identified based on their AI-related behaviors and cognitive dispositions, and how can this knowledge be used to design strategic pedagogical interventions?"), the practical implications are not visible (I am aware of the fact that you mentioned practical implications). As a researcher, I would like to give full credit to all the hard work, but the practical uses of the work require more tangible contexts.

One last question: why is this model for the tertiary level? Is there a specific reason for excluding K12? This preference could also be grounded.

Best regards,

 

Author Response

Thank you for your thoughtful and constructive comments. I greatly appreciate your recognition of the relevance of the study presented and your suggestions for improvement. I will now respond to each point raised, highlighting the corresponding changes made to the manuscript.

Point 1: Clarification about the data collection process, given the large number of participants.

Response 1:

Your concern regarding the data collection process is appreciated. To clarify this point, the dataset consists of 1,273 anonymized user profiles, each completed voluntarily through a structured online questionnaire. Data were collected in accordance with national ethical guidelines for non-interventional studies that do not involve personal or sensitive information. No identifiable data were collected, and participants were not exposed to any risk or intervention. Therefore, neither informed consent nor Institutional Review Board (IRB) approval was required under Spanish standards for educational research.

To reflect this more clearly in the manuscript, we have added the following information to Section 3.1 (“Survey and Data Design”):

“Data collection took place in real-world educational settings under controlled conditions, involving undergraduate students from various academic disciplines at institutions where the researcher holds teaching responsibilities. A total of 1,273 anonymized user profiles were gathered through a structured online questionnaire designed to capture behavioral and cognitive dimensions related to AI use. No personal identifiers were collected, and participation was entirely voluntary. In accordance with national regulations on anonymous, non-interventional research, ethical approval and informed consent were not required. The study complies with the ethical principles outlined in the Declaration of Helsinki (2013 revision).”

The “Institutional Review Board Statement” and the “Informed Consent Statement” have also been updated in the back matter.

Point 2: The significance of the study is repeated in five or six different places. It is worth highlighting but should be limited to one or two instances.

Response 2:

The manuscript has been carefully revised to reduce redundant mentions of the importance of the study, especially in cases where it was previously reiterated in several sections. Now, the relevance and contributions of the paper are highlighted more concisely in the Introduction and Discussion sections, avoiding repetition while maintaining emphasis on its pedagogical and methodological value.

Manuscript modifications:

  • Repetitive references to the novelty of the study in Sections 2.6 (Research Gap) and 4.1 (Context) have been eliminated.
  • The statement of importance in the Introduction and the concluding remarks in the Discussion have been maintained and refined.

Point 3: Although the study is novel, a stronger review of prior work is needed for comparison.

Response 3:

Thank you for this important observation. The literature review in Section 2.6 (“Research gaps and theoretical positioning”) has been expanded and strengthened to provide a more comprehensive comparison with previous work. The revised section now includes recent contributions on AI literacy, cognitive profiling, and explainable AI in education (Chimatapu et al., 2021; Karpouzis, 2024; Casalino et al., 2024), as well as foundational models using fuzzy logic and AHP in pedagogical settings.

Also incorporated is our previous study on AI adoption in education using the Technology Acceptance Model (TAM) framework (Marín Díaz et al., 2024), which analyzed student and teacher perceptions of AI integration in learning environments. While that study provided an overview of perceived usefulness and ease of use, the present work goes further by developing differentiated user profiles based on cognitive dispositions and behavioral patterns, with explainable scoring and grouping strategies.

These additions clarify the contribution of the study relative to existing models and reinforce its novelty and methodological continuity within the field.

Point 4: Practical implications are not sufficiently tangible, despite being mentioned.

Response 4:

Thank you for this thoughtful observation. We agree that practical implications must be articulated in a more concrete and actionable manner. To address this, we have expanded Section 4.4 (“Cluster-Based Strategic Design”) and Section 5.1 (“Pedagogical Implications”), incorporating examples of how the proposed methodology can inform differentiated teaching strategies.

It is important to clarify that our contribution does not aim to define a universal model applicable across all educational contexts. Instead, we propose a transferable methodology that combines fuzzy logic, multi-criteria decision analysis (AHP), clustering, and explainable AI techniques (SHAP, LIME) to identify context-specific profiles and action areas. The profiles and strategic actions described in this study are based on the sample analyzed and should be understood as illustrative and exploratory.
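
As an illustration of the clustering step only (the sketch is not drawn from the manuscript), the following minimal Python example shows how fuzzy c-means could be applied to survey data of this shape using the scikit-fuzzy library; the data, cluster count, and fuzzifier value are placeholder assumptions.

```python
import numpy as np
import skfuzzy as fuzz

# Placeholder matrix: 1,273 respondents x 6 cognitive-behavioral
# variables scaled to [0, 1]; the real survey data are not reproduced here.
rng = np.random.default_rng(0)
X = rng.random((1273, 6))

# scikit-fuzzy's c-means expects features on rows, observations on columns
cntr, u, _, _, _, _, fpc = fuzz.cluster.cmeans(
    X.T, c=3, m=2.0, error=1e-5, maxiter=1000, seed=0)

# u[k, i] is respondent i's degree of membership in cluster k;
# a hard label for downstream analysis is the argmax over clusters
labels = u.argmax(axis=0)
print(f"Fuzzy partition coefficient: {fpc:.3f}")
```

Unlike hard k-means, the membership matrix u preserves graded profile information, which is what makes borderline users visible for targeted intervention.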

For instance, the current application includes actions such as:

  • “AI Detox” sessions for digitally overloaded but low-reflective users,
  • Assigning peer-mentor roles to highly mature users,
  • Designing micro-reflection tasks to scaffold ethical engagement with AI,
  • Prioritizing interventions based on AHP-derived scoring.

These are intended as context-sensitive illustrations of how the methodology may be used to derive pedagogical decisions. The framework allows for adaptation to other educational environments, where different clusters and variables may emerge. This adaptability is inherent in the AHP model, which supports decision-making under multiple criteria and expert judgment.
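
To make the AHP scoring step more concrete, here is a minimal sketch of how criterion weights can be derived from a pairwise comparison matrix; the judgments below are hypothetical and do not reproduce the expert matrix used in the study.

```python
import numpy as np

# Hypothetical expert judgments comparing three intervention criteria
# on Saaty's 1-9 scale (e.g., Reflection vs. Verification vs. Confidence)
A = np.array([
    [1.0, 3.0, 5.0],
    [1/3, 1.0, 3.0],
    [1/5, 1/3, 1.0],
])

# Geometric-mean approximation of the principal eigenvector
weights = np.prod(A, axis=1) ** (1.0 / A.shape[0])
weights /= weights.sum()

# Consistency ratio: values below 0.10 are conventionally acceptable
n = A.shape[0]
lam = np.linalg.eigvals(A).real.max()
ci = (lam - n) / (n - 1)
cr = ci / 0.58  # Saaty's random index for n = 3
print(f"weights = {np.round(weights, 3)}, CR = {cr:.3f}")
```

The resulting weight vector is what allows interventions to be prioritized under multiple criteria, and it can be re-elicited from local experts whenever the framework is transferred to a new setting.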

We have explicitly emphasized this methodological positioning and the non-generalizable nature of the results in the revised manuscript, particularly in Sections 4.4, 5.1, 5.2, and in the Conclusions (Section 6), where a final paragraph reiterates that the study offers a methodological approach adaptable to diverse educational settings.

Point 5: Why is the model designed for tertiary level? Is there a specific reason for excluding K12?

Response 5:

Thank you for this insightful question. We have clarified in the revised manuscript that the present study was intentionally conducted in a tertiary education context due to the nature of the cognitive and metacognitive constructs evaluated. The questionnaire items and the profiling methodology rely on users’ ability to self-report complex behaviors such as epistemic reflection, verification strategies, and ethical reasoning related to AI usage. These constructs presuppose a level of cognitive maturity and technological autonomy typically found in adult learners at the university level.

We recognize that the challenges of responsible AI engagement and critical thinking development are equally relevant in K12 settings. However, adapting this methodology to younger populations would require significant adjustments, particularly in terms of instrument design, item formulation, and ethical safeguards. For example, the use of fuzzy logic and AHP scoring mechanisms would need to be mediated through age-appropriate scaffolding and possibly supported by teacher-assessed inputs rather than self-reports.

We have made this distinction explicit in the revised manuscript, specifically in Section 5.2 (“Future Directions and Limitations”), where we state that future adaptations of the framework may explore its application in K12 environments, with careful methodological tailoring to developmental and contextual constraints.

Additional References

Casalino, G., Castellano, G., Di Mitri, D., Kaczmarek-Majer, K., & Zaza, G. (2024). A Human-centric Approach to Explain Evolving Data: A Case Study on Education. In J. A. I. Martinez, R. D. Baruah, D. Kangin, & P. V. D. Souza (Eds.), IEEE Conference on Evolving and Adaptive Intelligent Systems (IEEE EAIS 2024) (pp. 208–215). https://doi.org/10.1109/EAIS58494.2024.10569098

Chimatapu, R., Hagras, H., Kern, M., & Owusu, G. (2021). Enhanced Deep Type-2 Fuzzy Logic System For Global Interpretability. 2021 IEEE International Conference on Fuzzy Systems (FUZZ-IEEE), 1–8. https://doi.org/10.1109/FUZZ45933.2021.9494569

Karpouzis, K. (2024). Explainable AI for Intelligent Tutoring Systems (pp. 59–70). https://doi.org/10.1007/978-981-99-9836-4_6

Marín Díaz, G., Galán Hernández, J. J., Gómez Medina, R., & Aijón Jiménez, J. A. (2024). Understanding AI Adoption in Education: A TAM Perspective on Students’ and Teachers’ Perceptions. Dykinson, S.L. https://doi.org/10.14679/3416

Author Response File: Author Response.pdf

Reviewer 2 Report

Comments and Suggestions for Authors

The manuscript addresses an increasingly critical issue, how AI tools influence students’ reflective and cognitive behavior, by proposing a novel and well-integrated framework (CRITIC-AI) combining fuzzy logic, AHP, and explainable AI techniques (SHAP, LIME). The use of 1,273 structured user profiles offers strong empirical grounding.

However, a few substantive issues should be addressed:

  1. Ethical Approval: The study appears to involve human participants, yet there is no clear statement about ethical approval by an institutional review board. Please clarify and cite the approval process.

  2. Sampling Bias and Generalizability: The sample is drawn from institutions where the author teaches (p. 6), potentially introducing bias. Please discuss the implications of this on generalizability.

  3. Construct Validity: While the cognitive dimensions are compelling, justification for the constructs (e.g., “AI_Confidence”, “Cognitive_effort”) should be better grounded in psychometric literature. This would strengthen the internal validity of the segmentation.

  4. Explainability Tools: SHAP and LIME are well-applied, but readers may benefit from greater interpretation of these visualizations and clearer explanation of how they directly influence recommendations.

  5. Structure and Flow: The manuscript is information-rich but can benefit from additional sub-headings or paragraph breaks in dense sections (e.g., Sections 3 and 4) to improve readability.

Author Response

Thank you for your thoughtful and constructive comments. I greatly appreciate your recognition of the relevance of the study presented and your suggestions for improvement. I will now respond to each point raised, highlighting the corresponding changes made to the manuscript.

Point 1: Ethical Approval and Compliance with Human Subjects Protocols.

Response 1:

Your concern regarding the data collection process is appreciated. To clarify this point, the dataset consists of 1,273 anonymized user profiles, each completed voluntarily through a structured online questionnaire. Data were collected in accordance with national ethical guidelines for non-interventional studies that do not involve personal or sensitive information. No identifiable data were collected, and participants were not exposed to any risk or intervention. Therefore, neither informed consent nor Institutional Review Board (IRB) approval was required under Spanish standards for educational research.

To reflect this more clearly in the manuscript, we have added the following information to Section 3.1 (“Survey and Data Design”):

“Data collection took place in real-world educational settings under controlled conditions, involving undergraduate students from various academic disciplines at institutions where the researcher holds teaching responsibilities. A total of 1,273 anonymized user profiles were gathered through a structured online questionnaire designed to capture behavioral and cognitive dimensions related to AI use. No personal identifiers were collected, and participation was entirely voluntary. In accordance with national regulations on anonymous, non-interventional research, ethical approval and informed consent were not required. The study complies with the ethical principles outlined in the Declaration of Helsinki (2013 revision).”

The “Institutional Review Board Statement” and the “Informed Consent Statement” have also been updated in the back matter.

Point 2: Sampling Bias and Generalizability.

Response 2:

Thank you for this observation. We recognize that the sample used in this study (participants from higher education institutions where the author teaches) represents a form of convenience sampling. However, it is important to clarify that this research does not aim to generalize behavioral profiles across populations, but rather to demonstrate how the proposed framework can operate in a real educational context.

The model introduced here is not designed to predict specific outcomes for all learners, but to serve as a flexible methodological tool that can be adapted to a variety of institutional or professional settings. Depending on the data available and the goals of each context, different profiles and intervention strategies may emerge. In this regard, the present sample allows us to illustrate the applicability of the methodology using real-world conditions and authentic responses.

To address this point explicitly, we have added the following clarification at the end of Section 5.2 (Future Directions and Limitations):

“Although the participants in this study were drawn from a convenience sample of higher education institutions known to the author, this choice served the purpose of illustrating the applicability of the methodology using real, context-bound data. No claim is made regarding the representativeness of the resulting profiles, which are intended solely to demonstrate the framework's capacity to produce actionable insights under defined conditions.”

This addition reinforces the methodological scope of the study and clarifies that any generalization of profiles is beyond its intended contribution.

Point 3: Construct Validity.

Response 3:

Thank you for this important suggestion. We agree that anchoring the core constructs in relevant psychometric and educational literature enhances the internal validity of the segmentation process. To address this, we have expanded Section 3.2 (“Variables and Cognitive Dimensions”) by introducing a new paragraph that explicitly links each of the six variables to established theoretical foundations.

Specifically:

  • AI_Confidence is grounded in research on trust in automation, AI literacy, and perceived technological reliability, as seen in recent studies by Marín Díaz et al. (2024) and Rajki et al. (2025).
  • Cognitive_effort draws from the literature on metacognitive engagement and self-regulated learning, referencing foundational work by Flavell (1979) and Kember et al. (2007).
  • AI_Reflection and Argumentation are supported by frameworks in critical thinking and epistemic cognition, as established by Chinn et al. (2011) and Facione (2011).
  • Verification behavior connects with the concepts of multi-source validation and epistemic vigilance (Barzilai & Zohar, 2012).

We have incorporated these references into the introductory paragraph of Section 3.2, immediately before Table 2, to make the theoretical grounding of each dimension clear to readers. The updated text now explains that these constructs are not arbitrary labels but derive from validated models of cognitive engagement and AI-related behavior.

Additionally, we reiterate that the purpose of this model is not to establish a definitive psychometric scale, but to provide a pragmatic and interpretable segmentation method using theoretically informed variables.

Point 4: Explainability Tools

Response 4:

Thank you for this thoughtful observation. We agree that the pedagogical relevance of the explainability tools used in the study should be as clear as their technical validity. To this end, we have reviewed Section 4.3.3 and verified that the connection between SHAP/LIME outputs and pedagogical decisions is already clearly articulated, both globally (via SHAP) and at the level of individual user profiles (via LIME).

This section explains how (an illustrative sketch follows the list below):

  • The XGBoost classifier was trained using the fuzzy cluster labels to predict user membership based on six cognitive-behavioral variables.
  • SHAP was applied to identify the most influential features, with AI_Confidence, AI_Verifies, and Argumentation emerging as key differentiators.
  • LIME was used to analyze representative instances (Figures 8–10), confirming the interpretability and internal consistency of the cluster assignments.
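
As a purely illustrative companion to the pipeline above (this code is not from the manuscript), the sketch below shows how such a surrogate-model workflow can be assembled with the xgboost, shap, and lime libraries; the data are synthetic placeholders and the variable names only approximate those in the paper.

```python
import numpy as np
import pandas as pd
import xgboost as xgb
import shap
from lime.lime_tabular import LimeTabularExplainer

# Synthetic stand-in for the 1,273 profiles; feature names approximate
# the six cognitive-behavioral variables described in the manuscript.
features = ["AI_Confidence", "AI_Frequency", "AI_Reflection",
            "AI_Verifies", "Argumentation", "Cognitive_effort"]
rng = np.random.default_rng(0)
X = pd.DataFrame(rng.random((1273, 6)), columns=features)
y = rng.integers(0, 3, 1273)  # fuzzy cluster labels (Clusters 0-2)

# Surrogate classifier trained to reproduce cluster membership
model = xgb.XGBClassifier(n_estimators=200, max_depth=4)
model.fit(X, y)

# Global interpretability: which variables separate the clusters
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X)
shap.summary_plot(shap_values, X)

# Local interpretability: explain one representative profile
lime_explainer = LimeTabularExplainer(
    X.values, feature_names=features,
    class_names=["Cluster 0", "Cluster 1", "Cluster 2"],
    mode="classification")
exp = lime_explainer.explain_instance(
    X.values[0], model.predict_proba, num_features=6)
print(exp.as_list())
```

The same pattern generalizes to any clustering output: the surrogate classifier only needs the feature matrix and the cluster labels, after which SHAP summarizes the global drivers and LIME justifies individual assignments.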

To make this alignment more explicit, we added a short clarifying paragraph at the end of Section 4.3.3, summarizing how the explainability outputs reinforce the strategic actions proposed in Sections 4.4 and 5.1. For instance:

  • Cluster 1 (high AI_Confidence and Frequency) aligns with interventions like “AI Detox” and cognitive disruption tasks.
  • Cluster 2 (high Reflection and Verification) supports mentoring and epistemic scaffolding strategies.
  • Cluster 0 (balanced but unstructured use) suggests low-barrier prompts and awareness-building exercises.

As the figures were already interpreted in detail in the main text, no additional changes were made to their captions. We appreciate the reviewer’s suggestion to ensure these connections were made fully visible and pedagogically meaningful.

Point 5: Structure and Flow

Response 5:

We appreciate the reviewer’s suggestion to improve the readability of dense sections such as Sections 3 and 4. Upon review, we confirm that Section 3 already follows a logically segmented structure (from 3.1 to 3.7), reflecting each methodological phase in a clear, sequential order. This structure is also visually summarized in Figure 1, which outlines the CRITIC-AI framework from data design to cluster-based interpretation.

To enhance clarity in Section 4, we have implemented a minor structural adjustment: we now explicitly distinguish between global interpretability using SHAP and local interpretability using LIME, in line with the methodological description in Section 3.6. These refinements aim to support smoother navigation through the explainability results without altering the content.

We thank the reviewer for encouraging us to consider these improvements, which we believe contribute to a more accessible and pedagogically coherent presentation.

Additional References

Chinn, C. A., Buckland, L. A., & Samarapungavan, A. (2011). Expanding the Dimensions of Epistemic Cognition: Arguments From Philosophy and Psychology. Educational Psychologist, 46(3), 141–167. https://doi.org/10.1080/00461520.2011.587722

Facione, P. A. (2011). Critical Thinking: What It Is and Why It Counts. Insight Assessment, 1–28. ISBN-13: 978-1-891557-07-1.

Flavell, J. H. (1979). Metacognition and cognitive monitoring: A new area of cognitive–developmental inquiry. American Psychologist, 34(10), 906–911. https://doi.org/10.1037/0003-066X.34.10.906

Kember, D., Leung, D. Y. P., & Ma, R. S. F. (2007). Characterizing learning environments capable of nurturing generic capabilities in higher education. Research in Higher Education, 48(5), 609–632. https://doi.org/10.1007/s11162-006-9037-0

Marín Díaz, G., Galán Hernández, J. J., Gómez Medina, R., & Aijón Jiménez, J. A. (2024). Understanding AI Adoption in Education: A TAM Perspective on Students’ and Teachers’ Perceptions. Dykinson, S.L. https://doi.org/10.14679/3416

Rajki, Z., Dringó-Horváth, I., & Nagy, J. T. (2025). Artificial Intelligence in Higher Education: Students’ Artificial Intelligence Use and its Influencing Factors. Journal of University Teaching and Learning Practice, 22(2). https://doi.org/10.53761/j0rebh67

Author Response File: Author Response.pdf

Round 2

Reviewer 1 Report

Comments and Suggestions for Authors

Dear Authors,

Thank you for the amendments in the text. The text is more coherent and comprehensive in this form. I do not have any further comments or suggestions for the updated version.

Best.
