IPA 2.0: Validation of an Interpretable Emotion-Attention Index for Neuro-Adaptive Learning with AI
Abstract
1. Introduction
2. Theoretical Framework and Related Work
2.1. Academic Emotions and Learning: From Theory to Operationalisation
2.2. Attention, Self-Regulation and Engagement in Technology-Mediated Learning Environments
2.3. Multimodal Recognition of Emotion and Attention: Recent Advances and Persistent Limitations
2.4. Educational AI and Neuro-Adaptation: From Prediction to Traceable Decision
2.5. Critical Synthesis and Design Requirements Derived from the State of the Art
- R1. Interpretable integration: emotion and attention must be combined in a common metric that preserves psychological meaning and allows for pedagogical interpretation, avoiding fragmented or purely classificatory outputs.
- R2. Robust pipeline: multimodal inference must be based on reproducible protocols, with explicit controls for temporal asynchrony and information leakage.
- R3. Two-level validation: it is necessary to distinguish between the structural coherence of the model and ecological empirical evidence, avoiding undue extrapolations from only one of these levels.
2.6. From State of the Art to System: NAILF as an Operational Framework
3. Materials and Methods
3.1. Study Objectives and Research Questions
- RQ1. How can multimodal emotional and attentional indicators be integrated into a single, interpretable signal that is operational in near real time?
- RQ2. Does the IPA 2.0 exhibit internal consistency and numerical stability when exploring the complete space of emotion–attention states under plausible constraints (Study A)?
- RQ3. What evidence of convergent validity does IPA 2.0 show against an external criterion of engagement in an ecological empirical validation with human subjects (Study B)?
- RQ4. What methodological decisions (windows, temporal calibration, leakage controls) are necessary for rigorous validation in “in the wild” contexts?
3.2. Study Design and Validation Logic
- Study A (pre-empirical): structural validation through biologically informed simulation, aimed at examining the internal consistency, numerical stability and limit behaviour of the IPA 2.0.
- Study B (empirical): external validation with human subjects using the DIPSEER dataset, collected in real classroom conditions, aimed at estimating convergent validity under a robust protocol.
3.3. Formalisation of the Learning Improvement Index (IPA 2.0)
3.3.1. General Definition
3.3.2. Emotional Component
3.3.3. Attention Component
3.3.4. Adjustment Terms: Discordance and Neuro-Alignment
3.4. Study A: Structural Simulation of the Emotion–Attention Space
3.5. Study B: External Empirical Validation with the DIPSEER Dataset
3.5.1. Dataset and Ecological Context
3.5.2. Operationalisation and Mapping to IPA 2.0
3.5.3. Time Windows, Calibration and Leakage Prevention
3.6. Metrics and Statistical Analysis
3.7. Ethical and Data Governance Considerations
4. Results
4.1. Study A: Structural Validation Through Biologically Informed Simulation
4.1.1. Coverage of the State Space and Range of the Index
4.1.2. Numerical Stability and Relative Contribution of Emotion and Attention
4.2. Study B: External Empirical Validation with DIPSEER
4.2.1. Cohort, Eligibility, and Anti-Leakage Controls
4.2.2. Convergent Validity of the IPA 2.0
4.2.3. Temporal Calibration and Parameter Stability
5. Discussion
5.1. Summary of Main Findings
5.2. Interpretation of Effect Size in “In the Wild” Contexts
5.3. Methodological Robustness and Interpretation of Temporal Calibration
5.4. Implications for the Design of Neuro-Adaptive Systems
5.5. Summary of Results
- 1. Exhibits structural consistency and numerical stability (Study A).
- 2. Exhibits positive convergent validity under ecological conditions (Study B).
- 3. Shows interindividual heterogeneity with a strong-signal subset.
5.6. Response to Research Questions
- RQ1. The IPA 2.0 demonstrates that emotion and attention can be integrated into a continuous, interpretable signal that is operational over time windows and suitable for traceable decisions.
- RQ2. Structural simulation (Study A) confirms the internal consistency and numerical stability of the index across the entire space of emotion–attention states.
- RQ3. Empirical validation (Study B) provides evidence of positive convergent validity against an external engagement criterion under ecological conditions and a robust protocol.
- RQ4. The results underscore the importance of temporal calibration, source separation, and hold-out evaluation as necessary conditions for credible validation “in the wild.”
5.7. Limitations and Future Directions
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Appendix A. Appendix Tables
| Subject ID | r | Best Lag | Best Window (s) | Smoothing | |
|---|---|---|---|---|---|
| group_03_exp_05_subject_05 | 0.969 | | 5 | 3 | 93,924 |
| group_02_experiment_07_subject_04 | 0.941 | 0 | 1 | 3 | 94,179 |
| group_02_experiment_07_subject_21 | 0.917 | 0 | 1 | 3 | 94,778 |
| group_02_experiment_01_subject_11 | 0.881 | 0 | 5 | 3 | 93,659 |
| group_02_experiment_08_subject_08 | 0.860 | 0 | 5 | None | 93,469 |
| group_03_exp_08_subject_08 | 0.804 | 1 | 5 | 3 | 93,634 |
| group_02_experiment_03_subject_03 | 0.773 | 0 | 5 | 3 | 93,963 |
| group_02_experiment_09_subject_09 | 0.771 | 0 | 5 | 3 | 93,866 |
| group_01_experiment_07_subject_14 | 0.763 | | 5 | 3 | 92,137 |
| group_03_exp_06_subject_06 | 0.745 | | 5 | 3 | 93,651 |
| group_03_exp_07_subject_04 | 0.740 | 2 | 5 | 3 | 93,837 |
| group_02_experiment_05_subject_18 | 0.720 | 0 | 1 | None | 74,696 |
| group_02_experiment_07_subject_08 | 0.710 | 0 | 1 | None | 93,936 |
| group_02_experiment_03_subject_12 | 0.686 | 1 | 5 | 3 | 94,060 |
| group_01_experiment_07_subject_01 | 0.649 | 0 | 1 | 3 | 55,858 |
| group_02_experiment_02_subject_13 | 0.608 | 0 | 5 | None | 93,888 |
| group_1_experiment_2_subject_13 | 0.608 | 0 | 5 | None | 93,888 |
| group_02_experiment_07_subject_01 | 0.599 | 1 | 5 | 3 | 2399 |
| group_1_experiment_6_subject_13 | 0.590 | | 5 | 3 | 93,444 |
| group_02_experiment_07_subject_02 | 0.571 | 0 | 5 | None | 93,825 |
| group_03_exp_02_subject_11 | 0.558 | | 5 | 3 | 94,764 |
| group_01_experiment_08_subject_20 | 0.532 | | 5 | None | 2364 |
| group_02_experiment_08_subject_01 | 0.530 | 0 | 1 | 3 | 2429 |
| group_02_experiment_02_subject_17 | 0.526 | 0 | 1 | None | 93,819 |
| group_1_experiment_2_subject_17 | 0.526 | 0 | 1 | None | 93,819 |
| Attention Type | wa | Discordance (δ) | FNP | IPA |
|---|---|---|---|---|
| Focused | 2.0 | 0.30 | 0.50 | 20.20 |
| Sustained | 1.6 | 0.30 | 0.50 | 18.20 |
| Selective | 1.4 | 0.30 | 0.50 | 17.20 |
| Alternating | 1.2 | 0.30 | 0.50 | 16.20 |
| Divided | 0.8 | 0.30 | 0.50 | 14.20 |
| Reactive | 0.4 | 0.30 | 0.50 | 12.20 |
| Active | 2.0 | 0.30 | 0.50 | 20.20 |
| Constructive | 1.8 | 0.30 | 0.50 | 19.20 |
| Interactive | 1.2 | 0.30 | 0.50 | 16.20 |
| External | −1.0 | 0.30 | 0.50 | 5.20 |
| Internal | −0.5 | 0.30 | 0.50 | 7.70 |
| Shared | 0.0 | 0.30 | 0.50 | 10.20 |
| Attention Type | wa | Discordance (δ) | FNP | IPA |
|---|---|---|---|---|
| Focused | 2.0 | 0.30 | 0.50 | 17.20 |
| Sustained | 1.6 | 0.30 | 0.50 | 15.20 |
| Selective | 1.4 | 0.30 | 0.50 | 14.20 |
| Alternating | 1.2 | 0.30 | 0.50 | 13.20 |
| Divided | 0.8 | 0.30 | 0.50 | 11.20 |
| Reactive | 0.4 | 0.30 | 0.50 | 9.20 |
| Active | 2.0 | 0.30 | 0.50 | 17.20 |
| Constructive | 1.8 | 0.30 | 0.50 | 16.20 |
| Interactive | 1.2 | 0.30 | 0.50 | 13.20 |
| External | −1.0 | 0.30 | 0.50 | 2.20 |
| Internal | −0.5 | 0.30 | 0.50 | 4.70 |
| Shared | 0.0 | 0.30 | 0.50 | 7.20 |
| Attention Type | wa | Discordance (δ) | FNP | IPA |
|---|---|---|---|---|
| Focused | 2.0 | 0.30 | 0.50 | 5.20 |
| Sustained | 1.6 | 0.30 | 0.50 | 3.20 |
| Selective | 1.4 | 0.30 | 0.50 | 2.20 |
| Alternating | 1.2 | 0.30 | 0.50 | 1.20 |
| Divided | 0.8 | 0.30 | 0.50 | −0.80 |
| Reactive | 0.4 | 0.30 | 0.50 | −2.80 |
| Active | 2.0 | 0.30 | 0.50 | 5.20 |
| Constructive | 1.8 | 0.30 | 0.50 | 4.20 |
| Interactive | 1.2 | 0.30 | 0.50 | 1.20 |
| External | −1.0 | 0.30 | 0.50 | −9.80 |
| Internal | −0.5 | 0.30 | 0.50 | −7.30 |
| Shared | 0.0 | 0.30 | 0.50 | −4.80 |
| Attention Type | wa | Discordance (δ) | FNP | IPA |
|---|---|---|---|---|
| Focused | 2.0 | 0.30 | 0.50 | 2.20 |
| Sustained | 1.6 | 0.30 | 0.50 | 0.20 |
| Selective | 1.4 | 0.30 | 0.50 | −0.80 |
| Alternating | 1.2 | 0.30 | 0.50 | −1.80 |
| Divided | 0.8 | 0.30 | 0.50 | −3.80 |
| Reactive | 0.4 | 0.30 | 0.50 | −5.80 |
| Active | 2.0 | 0.30 | 0.50 | 2.20 |
| Constructive | 1.8 | 0.30 | 0.50 | 1.20 |
| Interactive | 1.2 | 0.30 | 0.50 | −1.80 |
| External | −1.0 | 0.30 | 0.50 | −12.80 |
| Internal | −0.5 | 0.30 | 0.50 | −10.30 |
| Shared | 0.0 | 0.30 | 0.50 | −7.80 |
| Attention Type | wa | Discordance (δ) | FNP | IPA |
|---|---|---|---|---|
| Focused | 2.0 | 0.30 | 0.50 | 0.20 |
| Sustained | 1.6 | 0.30 | 0.50 | −1.80 |
| Selective | 1.4 | 0.30 | 0.50 | −2.80 |
| Alternating | 1.2 | 0.30 | 0.50 | −3.80 |
| Divided | 0.8 | 0.30 | 0.50 | −5.80 |
| Reactive | 0.4 | 0.30 | 0.50 | −7.80 |
| Active | 2.0 | 0.30 | 0.50 | 0.20 |
| Constructive | 1.8 | 0.30 | 0.50 | −0.80 |
| Interactive | 1.2 | 0.30 | 0.50 | −3.80 |
| External | −1.0 | 0.30 | 0.50 | −14.80 |
| Internal | −0.5 | 0.30 | 0.50 | −12.30 |
| Shared | 0.0 | 0.30 | 0.50 | −9.80 |
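All of the illustrative tables above are consistent, row by row, with a single linear rule: IPA = 10·we + 5·wa − δ + FNP, where we is the emotional weight of the category each table instantiates and wa the attention weight. The sketch below is an inference from the tabulated values, not the paper's authoritative IPA 2.0 definition (given in Section 3.3), and the multiplier constants 10 and 5 are assumptions derived from the tables:

```python
# Hypothetical linear reconstruction of the illustrative appendix values:
#   IPA = 10 * w_e + 5 * w_a - delta + FNP
# Consistent with every tabulated row above; NOT the paper's full definition.

def ipa(w_e: float, w_a: float, delta: float = 0.30, fnp: float = 0.50) -> float:
    """Reproduce the illustrative IPA values from the appendix tables."""
    return 10.0 * w_e + 5.0 * w_a - delta + fnp

# Joy/Hope (w_e = +1.0), Focused attention (w_a = +2.0): table value 20.20
print(round(ipa(1.0, 2.0), 2))
# Boredom (w_e = -1.0), External attention (w_a = -1.0): table value -14.80
print(round(ipa(-1.0, -1.0), 2))
```

Checking a few more cells (e.g., Pride/Relief with Sustained attention gives 15.20) confirms the same rule reproduces every table.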
Appendix A.1. Study A Robustness and Sensitivity (Monte Carlo)
| Check | What It Validates | Configuration | Metric | Result |
|---|---|---|---|---|
| Sx-1 Axioms | Structural coherence (not psychological construct validity) | 108-grid; weights from Table 1, Table 2 and Table 3; δ = 0.30; FNP = +0.50 | Constraint compliance | 100% (108/108) |
| Sx-2 Weight perturbation ±20% | Robustness to parameterisation | Monte Carlo n = 10,000; factors U[0.8, 1.2] applied per weight | Spearman’s ρ (median; P5–P95; min) | 0.9912; 0.9863–0.9950; min = 0.9781 |
| Sx-3 Noise (near-ceiling regime) | Sensitivity to measurement noise | Monte Carlo n = 5000; Gaussian noise per state; clipping to valid ranges | Spearman’s ρ (median; P5–P95; min) | 0.9967; 0.9951–0.9978; min = 0.9928 |
| Sx-3b Noise (mid-range regime) | Sensitivity under non-ceiling scenarios | Monte Carlo n = 5000; mid-scale means (≈6 on the 1–10 emotion scale, ≈3 on the 1–5 attention scale); higher variance; clipping | Spearman’s ρ (median; P5–P95; min) | 0.9411; 0.9247–0.9553; min = 0.9011 |
| Sx-4 Conditional δ/FNP | Sensitivity of adjustment terms | Auditable sign-alignment rule (see script) | Spearman’s ρ vs. baseline | 0.99994 |
| Sx-4b δ = 0 | Impact of removing the penalty term | δ = 0 constant; FNP = +0.50 constant | Spearman’s ρ vs. baseline | 1.0000 (identical ranking) |
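The Sx-2 weight-perturbation check can be sketched end to end. The sketch below is illustrative, not the authors' script: it assumes the linear form of the index that is consistent with the appendix tables, uses a 50-state weight grid (the reported run uses the full 108-grid and n = 10,000 draws), perturbs every weight by an independent U[0.8, 1.2] factor, and reports the median Spearman ρ against the unperturbed ranking:

```python
import random

# Weights transcribed from the paper's Tables 1-2; the linear index below is an
# illustrative reconstruction, not the authoritative IPA 2.0 definition.
EMOTION_W = [1.0, 0.7, -0.5, -0.8, -1.0]
ATTENTION_W = [2.0, 1.8, 1.6, 1.4, 1.2, 0.8, 0.4, 0.0, -0.5, -1.0]

def ipa(we, wa, delta=0.30, fnp=0.50):
    return 10.0 * we + 5.0 * wa - delta + fnp

def spearman(xs, ys):
    """Spearman's rho with average ranks for ties (stdlib only)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            for k in range(i, j + 1):       # average rank over the tie group
                r[order[k]] = (i + j) / 2.0 + 1.0
            i = j + 1
        return r
    rx, ry = ranks(xs), ranks(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

grid = [(we, wa) for we in EMOTION_W for wa in ATTENTION_W]
baseline = [ipa(we, wa) for we, wa in grid]

rng = random.Random(0)
rhos = []
for _ in range(2000):                        # 10,000 draws in the reported run
    fe = {w: rng.uniform(0.8, 1.2) for w in EMOTION_W}      # per-weight factors
    fa = {w: rng.uniform(0.8, 1.2) for w in ATTENTION_W}
    perturbed = [ipa(we * fe[we], wa * fa[wa]) for we, wa in grid]
    rhos.append(spearman(baseline, perturbed))

rhos.sort()
print("median rho:", round(rhos[len(rhos) // 2], 4))
```

Because the ranking, not the raw value, drives any downstream adaptation decision, a near-1 median ρ is the relevant robustness evidence here.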
Appendix A.2. Strong-Signal Subset Descriptive Breakdown (Study B)
| Parameter | Value | n (Evaluable, n = 172) | n (Strong Subset, n = 25) |
|---|---|---|---|
| Window | 1 s | 46 | 8 |
| Window | 5 s | 126 | 17 |
| Smoothing | None | 57 | 9 |
| Smoothing | 3 | 115 | 16 |
| Lag | −2 | 36 | 3 |
| Lag | −1 | 21 | 3 |
| Lag | 0 | 45 | 15 |
| Lag | +1 | 26 | 3 |
| Lag | +2 | 44 | 1 |
| std_target (target variability) | median [P25, P75] | 0.470 [0.399, 0.542] | 0.468 [0.414, 0.575] |
| unique_levels | median [P25, P75] | 3 [3, 5] | 4 [3, 5] |
| n_samples_eval | median [P25, P75] | 43 [42, 198] | 43 [42, 206] |
Appendix A.3. Declarative Mapping DIPSEER → IPA 2.0 (Reproducibility)
| Emotion Label (DIPSEER/AEQ Schema) | IPA Category (5-Group) | Valence | Weight | Traceability Note |
|---|---|---|---|---|
| Joy | Joy/Hope | + | +1.0 | Operative mapping used to compute emotion_weight; see Table 1 |
| Hope | Joy/Hope | + | +1.0 | |
| Pride | Pride/Relief | + | +0.7 | |
| Relief | Pride/Relief | + | +0.7 | |
| Anxiety | Anxiety/Anger | − | −0.5 | |
| Anger | Anxiety/Anger | − | −0.5 | |
| Shame | Shame/Sadness | − | −0.8 | |
| Sadness | Shame/Sadness | − | −0.8 | |
| Boredom | Boredom | − | −1.0 | Anchor negative-deactivating extreme. |
| Variable in DIPSEER/Artifact | IPA 2.0 Input | Transformation | Traceability Comment |
|---|---|---|---|
| Emotional intensity (self-report/annotation) | (1–10) | Identity if already 1–10; otherwise linear rescaling | Original range explicitly reported. |
| Attention/Engagement (self-report) | (1–5) | Identity (1–5) | Subject-wise temporal calibration (lag/window/smoothing); strong-signal subset (Phase 0C). |
| External engagement criterion (4 labelers) | Target | Median of labelers | Split-source: the criterion is not used in IPA computation; only for validation. |
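The declarative mapping above translates directly into code as data rather than logic, which keeps the label → weight decision auditable. A sketch (function and dictionary names are illustrative, not from the authors' artifact; the weights are transcribed from the table):

```python
# Declarative DIPSEER/AEQ label -> (IPA 5-group category, valence weight)
# mapping, transcribed from the appendix table. Keeping it as plain data makes
# the operationalisation auditable and reproducible.
EMOTION_MAP = {
    "Joy":     ("Joy/Hope",      +1.0),
    "Hope":    ("Joy/Hope",      +1.0),
    "Pride":   ("Pride/Relief",  +0.7),
    "Relief":  ("Pride/Relief",  +0.7),
    "Anxiety": ("Anxiety/Anger", -0.5),
    "Anger":   ("Anxiety/Anger", -0.5),
    "Shame":   ("Shame/Sadness", -0.8),
    "Sadness": ("Shame/Sadness", -0.8),
    "Boredom": ("Boredom",       -1.0),  # negative-deactivating anchor
}

def emotion_weight(label: str) -> float:
    """Return the valence weight for a DIPSEER/AEQ emotion label."""
    try:
        return EMOTION_MAP[label][1]
    except KeyError:
        raise ValueError(f"Unmapped emotion label: {label!r}") from None

print(emotion_weight("Pride"))
```

Unmapped labels fail loudly rather than defaulting to a neutral weight, which is the traceability property the appendix emphasises.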
Appendix A.4. Phase 0C TensorFlow Baseline (Sensor-Only vs. Sensor + IPA)
| Metric | Value |
|---|---|
| TF subjects with metrics (n) | 139 |
| Accuracy baseline (mean; median) | 0.4981; 0.5238 |
| Accuracy sensor+IPA (mean; median) | 0.4905; 0.4706 |
| Δaccuracy (mean; median) | −0.0076; 0.0000 |
| Δaccuracy (Q1; Q3) | −0.1820; 0.1693 |
| Improves/Stays the same/Worsens | 56/17/66 |
| Range Δaccuracy (min; max) | −0.7188; 0.7188 |





| Emotional Category | Emotion Weight |
|---|---|
| Joy/Hope | +1.0 |
| Pride/Relief | +0.7 |
| Anxiety/Anger | −0.5 |
| Shame/Sadness | −0.8 |
| Boredom | −1.0 |
| Attention Type | Attention Weight (wa) |
|---|---|
| Focused/Active | +2.0 |
| Constructive | +1.8 |
| Executive/Sustained | +1.6 |
| Selective | +1.4 |
| Alternating/Interactive | +1.2 |
| Fragmented/Divided | +0.8 |
| Reactive | +0.4 |
| Shared | 0.0 |
| Internal | −0.5 |
| External | −1.0 |
| Type of Attention | Attention Activation (1–5) |
|---|---|
| Focused attention | 5.0 |
| Sustained attention | 4.47 |
| Selective attention | 4.20 |
| Alternating attention | 3.93 |
| Divided attention | 3.40 |
| Reactive attention | 2.87 |
| Active attention | 5.00 |
| Constructive attention | 4.73 |
| Interactive attention | 3.93 |
| Shared attention | 2.33 |
| Internal attention | 1.67 |
| External attention | 1.00 |
| Attention Type | wa | Discordance (δ) | FNP | IPA |
|---|---|---|---|---|
| Focused/Active | 2.0 | 0.30 | 0.50 | 20.20 |
| Sustained | 1.6 | 0.30 | 0.50 | 18.20 |
| Selective | 1.4 | 0.30 | 0.50 | 17.20 |
| Alternating | 1.2 | 0.30 | 0.50 | 16.20 |
| Divided | 0.8 | 0.30 | 0.50 | 14.20 |
| Reactive | 0.4 | 0.30 | 0.50 | 12.20 |
| Constructive | 1.8 | 0.30 | 0.50 | 19.20 |
| Interactive | 1.2 | 0.30 | 0.50 | 16.20 |
| Shared | 0.0 | 0.30 | 0.50 | 10.20 |
| Internal | −0.5 | 0.30 | 0.50 | 7.70 |
| External | −1.0 | 0.30 | 0.50 | 5.20 |
| Emotional Category (we) | Variable | Mean | SD |
|---|---|---|---|
| Joy/Hope (+1.0) | IPA | 14.74 | 4.95 |
| Pride/Relief (+0.7) | IPA | 11.74 | 4.95 |
| Anxiety/Anger (−0.5) | IPA | −0.26 | 4.95 |
| Shame/Sadness (−0.8) | IPA | −3.26 | 4.95 |
| Boredom (−1.0) | IPA | −4.74 | 4.95 |
| Variable | Emotional Weight | Attention Value | IPA |
|---|---|---|---|
| Emotional weight | 1.000 | 0.000 | 0.857 |
| Attention value | 0.000 | 1.000 | 0.515 ** |
| IPA | 0.857 | 0.515 | 1.000 |
| Metric | Value |
|---|---|
| n processed | 405 |
| n (target available) | 405 |
| n (variance filter) | 179 |
| n (Fisher aggregation) | 172 |
| Strong-signal subset | 25 |
| Statistic | Value |
|---|---|
| Mean | 0.131 |
| Median | 0.101 |
| SD | 0.342 |
| Min | −0.605 |
| Max | 0.969 |
| Mean r (Fisher-z) | 0.166 |
| 95% CI | [0.017, 0.308] |
| n (effective) | 172 |
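The aggregation reported above follows the standard Fisher-z recipe: transform each per-subject correlation with atanh, average in z-space, then back-transform with tanh. A stdlib sketch; the confidence interval here uses the standard error of the mean of the z-values, which is an assumption, since the paper's exact CI construction is not spelled out in this excerpt:

```python
import math

def fisher_aggregate(rs, conf_z=1.96):
    """Aggregate per-subject correlations via Fisher's z transform.

    Returns (mean_r, ci_low, ci_high). The CI is built from the standard
    error of the mean z-value (an illustrative choice).
    """
    zs = [math.atanh(r) for r in rs]          # r -> z (variance-stabilising)
    n = len(zs)
    mean_z = sum(zs) / n
    sd_z = (sum((z - mean_z) ** 2 for z in zs) / (n - 1)) ** 0.5
    se = sd_z / math.sqrt(n)
    return (math.tanh(mean_z),                # back-transform to r scale
            math.tanh(mean_z - conf_z * se),
            math.tanh(mean_z + conf_z * se))

mean_r, lo, hi = fisher_aggregate([0.60, 0.10, -0.20, 0.35, 0.05])
print(round(mean_r, 3), round(lo, 3), round(hi, 3))
```

Averaging in z-space rather than on raw r values avoids the downward bias of averaging bounded correlations, which is why the Fisher-z mean (0.166) exceeds the raw mean (0.131) reported above.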
| Parameter | Value | n |
|---|---|---|
| Window | 1 s | 46 |
| Window | 5 s | 126 |
| Smoothing | None | 57 |
| Smoothing | 3 | 115 |
| Lag | −2 | 36 |
| Lag | −1 | 21 |
| Lag | 0 | 45 |
| Lag | +1 | 26 |
| Lag | +2 | 44 |
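The per-subject temporal calibration summarised above (lag ∈ {−2..+2}, smoothing ∈ {None, 3}, plus a window choice omitted here for brevity) amounts to a small grid search maximising the correlation between the index series and the external criterion. A sketch of that loop; function names are illustrative, and the hold-out and anti-leakage details described in Section 3.5.3 are not reproduced:

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation; returns 0.0 for degenerate (constant) input."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    num = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    den = (sum((a - mx) ** 2 for a in xs) * sum((b - my) ** 2 for b in ys)) ** 0.5
    return num / den if den else 0.0

def smooth(xs, k):
    """Centred moving average of width k; k=None disables smoothing."""
    if k is None:
        return list(xs)
    h = k // 2
    return [statistics.fmean(xs[max(0, i - h):i + h + 1]) for i in range(len(xs))]

def calibrate(index_series, criterion, lags=(-2, -1, 0, 1, 2), smoothings=(None, 3)):
    """Grid search for the (lag, smoothing) pair maximising |r|."""
    best = (None, None, 0.0)
    for lag in lags:
        for k in smoothings:
            s = smooth(index_series, k)
            if lag >= 0:    # index at t+lag paired with criterion at t
                a, b = s[lag:], criterion[:len(criterion) - lag]
            else:
                a, b = s[:lag], criterion[-lag:]
            r = pearson(a, b)
            if abs(r) > abs(best[2]):
                best = (lag, k, r)
    return best
```

Selecting the lag on a calibration split and evaluating on held-out data, as the paper's protocol requires, is what keeps this search from becoming a leakage channel.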
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Arranz-Romero, J.; Roig-Vila, R.; Cazorla, M. IPA 2.0: Validation of an Interpretable Emotion-Attention Index for Neuro-Adaptive Learning with AI. Appl. Sci. 2026, 16, 2515. https://doi.org/10.3390/app16052515

