Systematic Review

Education, Neuroscience, and Technology: A Review of Applied Models

by Elena Granado De la Cruz 1, Francisco Javier Gago-Valiente 2, Óscar Gavín-Chocano 3 and Eufrasio Pérez-Navío 3,*

1 Department of Pedagogy, University of Huelva, 21071 Huelva, Spain
2 Department of Nursing, University of Huelva, 21071 Huelva, Spain
3 Department of Pedagogy, University of Jaén, 23071 Jaén, Spain
* Author to whom correspondence should be addressed.
Information 2025, 16(8), 664; https://doi.org/10.3390/info16080664
Submission received: 24 May 2025 / Revised: 29 July 2025 / Accepted: 31 July 2025 / Published: 4 August 2025

Abstract

Advances in neuroscience have improved the understanding of cognitive, emotional, and social processes involved in learning. Simultaneously, technologies such as artificial intelligence, augmented reality, and gamification are transforming educational practices. However, their integration into formal education remains limited and often misapplied. This study aims to evaluate the impact of technology-supported neuroeducational models on student learning and well-being. A systematic review was conducted using PubMed, the Web of Science, ScienceDirect, and LILACS, including open-access studies published between 2020 and 2025. Selection and methodological assessment followed PRISMA 2020 guidelines. Out of 386 identified articles, 22 met the inclusion criteria. Most studies showed that neuroeducational interventions incorporating interactive and adaptive technologies enhanced academic performance, intrinsic motivation, emotional self-regulation, and psychological well-being in various educational contexts. Technology-supported neuroeducational models are effective in fostering both cognitive and emotional development. The findings support integrating neuroscience and educational technology into teaching practices and teacher training, promoting personalized, inclusive, and evidence-based education.

1. Introduction

Learning is a complex and dynamic process involving the interaction of multiple cognitive, emotional, social, and technological factors. Over the past decades, advances in neuroscience have enabled a deeper understanding of the brain mechanisms underlying knowledge acquisition, offering new opportunities to optimize teaching and learning [1]. In this context, neuroeducation has emerged as an interdisciplinary field that integrates neuroscience, cognitive psychology, pedagogy, and, increasingly, emerging technologies, with the aim of developing teaching strategies grounded in empirical evidence about brain functioning [2].
One of the core principles of neuroeducation is brain plasticity, which refers to the brain’s ability to reorganize itself structurally and functionally in response to experience and learning [3]. Studies in cognitive neuroscience have shown that exposure to enriched learning environments facilitates synaptic consolidation and improves knowledge retention [4]. Advanced technologies such as functional magnetic resonance imaging (fMRI) and functional near-infrared spectroscopy (fNIRS) have made it possible to map neural networks involved in information processing, leading to teaching methods better aligned with brain functioning [5].
At the same time, technological tools such as artificial intelligence (AI), augmented reality (AR), large language models (LLMs), and virtual learning environments are increasingly being integrated into education. These tools not only enrich the pedagogical environment but also allow for personalized learning, real-time feedback, and active student engagement [6,7]. For instance, using LLMs in clinical simulations has been shown to enhance decision making in medical students by triggering deeper and more structured reasoning processes [8]. Similarly, incorporating AR and 3D models in anatomy instruction has significantly increased student motivation and academic performance [9].
Despite these advances, the systematic application of neuroeducation in formal settings faces several challenges. Teacher training in neuroscientific principles and critical use of emerging technologies remains limited, which hinders their effective integration into pedagogical practice [10]. Furthermore, the persistence of neuromyths has generated confusion and unrealistic expectations about the applicability of neuroscience in education [11]. Overcoming these barriers requires coordinated efforts among researchers, educators, and policymakers, as well as the development of ethical and scientific standards for the use of brain-based technologies [12]. Various studies have compared the effectiveness of neuroeducational models versus traditional methods, concluding that approaches such as multisensory learning, flipped classrooms, simulations with immediate feedback, and gamification enhance conceptual understanding, intrinsic motivation, and metacognitive skills [13,14,15]. However, the impact of these methodologies may vary depending on students’ cognitive development level, sociocultural context, and the quality of teaching implementation [16].
Therefore, this systematic review aims not only to analyze the application of neuroeducational models in teaching but also to explore how emerging technologies can amplify their benefits, offering deeper insights into their impact on learning, motivation, and students’ emotional well-being. The objective of this review is to provide a solid foundation for the informed and critical implementation of technology-supported neuroeducational approaches, offering relevant evidence for teaching practice and the development of educational policies grounded in brain knowledge. Through the analysis of recent studies, this review seeks to contribute to the design of pedagogical strategies backed by neuroscientific evidence and serve as a key reference for researchers, educators, and policymakers interested in transforming education through new technologies and brain-based knowledge.
In this work, we use the term neuroeducational models in a broad sense to refer to educational approaches that incorporate principles, findings, or contributions from neuroscience with the aim of enriching teaching and learning. We do not intend to imply that all cases represent formal, structured, and validated models, but instead wish to highlight the application of neuroscience-based foundations in various educational contexts.

2. Materials and Methods

This study conducted a systematic review of the scientific literature with the aim of analyzing the implementation and effectiveness of neuroeducational models in formal educational settings. It corresponds to a pilot systematic review that served as a preliminary situational diagnosis. Therefore, only open-access articles published between 2020 and 2025 were included to ensure free and immediate availability of data. This limitation is acknowledged in the discussion, and future studies will expand the search to include closed-access articles and the gray literature.
A registration request for this review has been submitted to PROSPERO (International Prospective Register of Systematic Reviews), with ID 1048004.
The methodology followed the PRISMA 2020 guidelines [15], ensuring transparency and rigor in the selection and analysis of included studies. The PICO framework was used to define the core elements of the review:
- P (Participants): students and teachers in formal education (primary, secondary, higher education, and teacher training);
- I (Intervention): implementation of neuroeducational models in teaching;
- C (Comparison): traditional teaching methods vs. neuroeducation-based approaches in different educational populations;
- O (Outcomes): impact on learning, conceptual understanding, academic performance, and teacher training and perception.
A final research question guided this review: What is the impact of neuroeducational models on teaching and learning compared to traditional methods in formal educational settings?

2.1. Study Selection Criteria

The systematic search was conducted between January and April 2025 using the databases PubMed, the Web of Science (WoS), LILACS, and ScienceDirect. The review focused on the scientific literature published in the last five years (2020–2025), prioritizing open-access publications. Empirical studies addressing the application of neuroeducational models in education and their impact on learning were included.

2.2. Search Strategy

A standardized search strategy was used across all databases, combining Boolean operators and relevant keywords.

2.2.1. PubMed

Here, the keywords were as follows: neuroeducation[All Fields] AND (brain-based[All Fields] AND (“learning”[MeSH Terms] OR “learning”[All Fields])) AND ((“teaching”[MeSH Terms] OR “teaching”[All Fields] OR (“teaching”[All Fields] AND “methods”[All Fields]) OR “teaching methods”[All Fields]) AND (“methods”[MeSH Terms] OR “methods”[All Fields] OR “intervention”[All Fields]) AND (“education”[Subheading] OR “education”[All Fields] OR “educational status”[MeSH Terms] OR (“educational”[All Fields] AND “status”[All Fields]) OR “educational status”[All Fields] OR “education”[MeSH Terms])) AND (“2020/04/26”[PubDate]: “2025/04/24”[PubDate]).

2.2.2. Web of Science

The keywords were as follows: neuroeducation AND brain-based learning OR teaching methods AND intervention AND education (Topic) AND 2021–2025 (Publication Years) AND All Open Access AND Clinical Trial.

2.2.3. LILACS

The following keywords were used: neuroeducation OR brain-based learning AND teaching methods AND intervention AND education AND db:("LILACS") AND type_of_study:("clinical_trials") AND (year_cluster:[2020 TO 2025]) AND instance:"lilacsplus".

2.2.4. ScienceDirect

We used the following keywords: neuroeducation AND brain-based learning AND teaching methods AND intervention AND education (with filters applied for publication years 2020–2025, full open access).
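The strategies above share a common Boolean core that was adapted to each database's syntax. As a purely illustrative sketch of how that core can be composed (the helper function is hypothetical; only the terms and operators come from the queries above):

```python
# Illustrative sketch: composing the shared Boolean core of the search
# strategy. The build_query helper is hypothetical; the terms and the
# AND/OR operators are taken from the database queries described above.

def build_query(required_terms, optional_terms=None):
    """Join terms with AND, quoting multi-word phrases; wrap any
    optional alternatives in a single OR group."""
    parts = [f'"{t}"' if " " in t else t for t in required_terms]
    if optional_terms:
        parts.append("(" + " OR ".join(optional_terms) + ")")
    return " AND ".join(parts)

query = build_query(
    ["neuroeducation", "brain-based learning", "teaching methods",
     "intervention", "education"]
)
print(query)
# neuroeducation AND "brain-based learning" AND "teaching methods" AND intervention AND education
```

Each database then adds its own filters (publication years, open access, study type) on top of this shared core.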

2.3. Inclusion and Exclusion Criteria

To ensure reliability in study selection, two independent reviewers assessed each study’s relevance according to inclusion and exclusion criteria (see Table 1). In case of disagreement, a third reviewer resolved the conflict. The selection process and reasons for exclusion are illustrated in the PRISMA flowchart (Figure 1), ensuring methodological rigor in accordance with PRISMA standards [15].
Study selection was carried out by two independent reviewers through consensus. Although no κ concordance coefficient was calculated, all discrepancies were resolved through discussion. Given the heterogeneity of the included studies and the exploratory nature of this pilot review, no meta-analysis or heterogeneity estimation was conducted.

2.4. Data Extraction

The data extraction process was carried out systematically after the search was completed. It began with a meticulous review of each article’s title, abstract, methodology, results, and conclusions. The data were extracted as presented in their respective studies at the time of the review and are summarized in Table 2.
In this systematic review, the selection and extraction of variables were based on the PICO framework, which considers participants, interventions, comparisons, and outcomes. This strategy allowed for the establishment of clear inclusion criteria and, from them, the qualitative analysis of the selected studies.
In addition to the main variables, other relevant characteristics were included, such as the authors, year of publication, country of origin, study design, research objectives, participant details, measured variables, and the scales used.

2.5. Presentation of Results: Adherence to the PRISMA Quality Initiative

The results of the primary studies, obtained through a systematic and reproducible methodology, were presented both qualitatively and quantitatively (Figure 1).

2.6. Quality Assessment

When selecting articles for this review, a quality analysis was conducted using the EPHPP tool [37]. This instrument provides an overall quality rating for each study based on the assessment of six key components. Studies are rated as “strong” if they have no weak components and at least four strong ones. Those with fewer than four strong components and one weak component are considered “moderate.” Studies receiving two or more weak component ratings are categorized as “weak” [37].
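The overall rating rule stated above can be made explicit. The following is a minimal sketch of that rule as described in this section (the function name is ours; the three-level component ratings and thresholds follow the text):

```python
# Sketch of the overall EPHPP rating rule as described above. Each of the
# six components is rated "strong", "moderate", or "weak"; the overall
# rating is derived from the counts of weak and strong components.

def ephpp_overall(component_ratings):
    assert len(component_ratings) == 6  # EPHPP assesses six components
    weak = component_ratings.count("weak")
    strong = component_ratings.count("strong")
    if weak == 0 and strong >= 4:
        return "strong"
    if weak >= 2:
        return "weak"
    return "moderate"  # one weak component, or too few strong ones

print(ephpp_overall(["strong"] * 5 + ["moderate"]))  # strong
print(ephpp_overall(["strong", "strong", "moderate",
                     "moderate", "moderate", "weak"]))  # moderate
print(ephpp_overall(["weak", "weak", "strong",
                     "strong", "strong", "strong"]))  # weak
```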
The results of this analysis are presented in Table 3. Of the articles analyzed, 4.5% received an overall strong rating [18], 86.4% a moderate rating [19,20,21,22,23,24,25,26,27,28,29,30,31,32,33,34,35,36], and 9.1% a weak rating [16,17].
Although the percentage of studies with a strong overall rating was the lowest (4.5%), all evaluated articles presented solid internal components, especially regarding the use of data collection instruments and risk of bias management. These internal strengths are particularly relevant to the aims of this systematic review and were prioritized when deciding on study inclusion. Despite the presence of moderate or weak components in some areas, studies with a moderate overall rating (86.4%) presented only one weak component among the six assessed, as in the cases of Wang et al. (2024) [19], Dehghani et al. (2024) [20], and Syväoja et al. (2024) [21], among others. Meanwhile, the studies with a weak overall rating (9.1%), Dhungel et al. (2023) [16] and Ballesta Claver et al. (2024) [17], presented only two weak components. This suggests that, although they did not meet the criteria for a moderate rating, they still maintained certain methodological strengths that justified their inclusion in the analysis.

3. Results

3.1. Study Selection and Data Extraction Process

A systematic search was carried out in the Web of Science, PubMed, LILACS, and ScienceDirect databases using controlled descriptors (DeCS and MeSH) and Boolean operators. A total of 386 records were identified, distributed as follows: the Web of Science (n = 311), PubMed (n = 25), ScienceDirect (n = 33), and LILACS (n = 17).
Before screening, 31 duplicate records were removed, and 198 were excluded for various reasons (such as failing to meet basic inclusion criteria or being unrelated to educational research). No eliminations were recorded by automated tools. As a result, 157 studies advanced to the title and abstract screening phase. The full text of these 157 studies was reviewed. After applying inclusion and exclusion criteria, 135 studies were eliminated for the following reasons: the intervention design did not align with the review’s objectives (n = 102), they did not specifically address neuroeducational models (n = 30), or they were review articles without experimental data (n = 3).
Finally, 22 studies met the established quality and relevance criteria. Study selection was performed independently by two reviewers, and a third reviewer was consulted in case of discrepancies. Relevant information was extracted using a PICO-based matrix, which recorded methodological aspects, population characteristics, interventions, and key findings of each study.
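The record counts reported above form a closed arithmetic chain, which can be verified directly:

```python
# Reproducing the PRISMA flow arithmetic reported in Section 3.1.
identified = 311 + 25 + 33 + 17   # WoS + PubMed + ScienceDirect + LILACS
assert identified == 386

# 31 duplicates and 198 pre-screening exclusions were removed
screened = identified - 31 - 198
assert screened == 157

# Full-text exclusions: misaligned design (102), not specifically
# neuroeducational (30), reviews without experimental data (3)
excluded_full_text = 102 + 30 + 3
included = screened - excluded_full_text
print(included)  # 22
```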

3.2. Study Characteristics: Summary of Results

Table 2 provides a comprehensive summary of the main characteristics of the studies included in this systematic review. The extracted information includes authors, year of publication, country of origin, study design, comparisons made, research objectives, participant demographics, measured variables, instruments used, implemented interventions, and main results.
Of the 22 included studies, 14 (63.6%) were randomized clinical trials [18,20,21,23,24,25,26,27,28,29,30,32,35]; 5 (22.7%) were quasi-experimental studies [16,19,27,33,36]; 2 (9.1%) were pre-experimental studies [17,22]; and 1 (4.5%) was a longitudinal experimental study [31].
Regarding geographical origin, seven studies (31.8%) were conducted in China [18,19,31,32,34,36]; three in Brazil [24,25]; two in Switzerland [22,33]; two in Australia [22,23]; and one each in Spain [17], Finland [21], Nepal [16], Iran [20], the USA [26], Italy [28], Germany [29], Kenya [30], and India [35].
Regarding study topics, seven studies (31.8%) explored the use of educational technologies such as augmented reality, artificial intelligence, or educational robotics [18,24,25,26,28,34]; six studies (27.3%) analyzed active teaching strategies like flipped classrooms, teamwork, or problem-based learning [17,19,20,32,35,36]; five studies (22.7%) focused on the effects of integrating physical activity into learning [16,21,22,23,27]; and four studies (18.2%) investigated the development of teaching competencies through neuroeducation training programs [17,22,29,33].
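The topic-distribution percentages reported above follow directly from the counts over the 22 included studies:

```python
# Checking the reported topic-distribution percentages against n = 22.
counts = {
    "educational technologies": 7,
    "active teaching strategies": 6,
    "physical activity integration": 5,
    "teacher training in neuroeducation": 4,
}
for topic, n in counts.items():
    print(topic, round(100 * n / 22, 1))
# educational technologies 31.8
# active teaching strategies 27.3
# physical activity integration 22.7
# teacher training in neuroeducation 18.2
```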
Overall, the results consistently showed improvements in academic performance, intrinsic motivation, conceptual understanding, and students’ cognitive skills following the implementation of neuroeducational models.
For example, Ballesta-Claver et al. (2024) [17] reported a 27% increase in teaching knowledge among future teachers after a university-level neuroeducation intervention; Dehghani et al. (2024) [20] reported a 21% improvement in self-efficacy among multiple sclerosis patients through teamwork-based instruction; and Syväoja et al. (2024) [21] observed significant gains in mathematics performance (+14%) and intrinsic motivation in elementary school students through physically active math lessons.
Zheng et al. (2024) [18] showed that AI-driven scenario-based simulation improved diagnostic accuracy by 15% in medical students; Brügge et al. (2024) [26] found that large language models improved the quality of clinical decision making among future doctors.
From a neuroscientific perspective, the results confirmed that interventions stimulating brain plasticity—through physical activity, social interaction, emotional regulation, and multisensory experiences—enhanced knowledge consolidation and higher-order cognitive skill development [1,2,3].
Several studies also reported common limitations, such as small sample sizes [16,17,22,27,33,36]; lack of formal psychometric validation of instruments [16,17,22,36]; and the need for longitudinal studies to assess long-term effects [18,19,29,30,31].

3.3. Relationship Between Neuroeducational Interventions and Learning Outcomes

The implementation of neuroeducational models showed a positive impact on both conceptual learning and students’ emotional and motivational development. Strategies such as active learning, multisensory instruction, teamwork, and the integration of educational technology enhanced higher-order cognitive skills and optimized knowledge acquisition [17,21]. In several studies, interventions involving physical activity integrated into teaching—especially in mathematics—improved academic performance and significantly increased students’ intrinsic motivation and self-efficacy [21,23]. These practices also fostered emotional regulation and reduced academic anxiety, contributing to more positive and stimulating learning environments [27].
Moreover, technologies like augmented reality, AI-powered clinical simulation, and educational robotics improved conceptual understanding, critical thinking, and students’ practical skills by offering immediate feedback and dynamic learning settings [18,26,28]. Another important contribution of neuroeducational models was the improvement in self-regulated learning. Teacher training programs that promoted self-regulated learning strategies strengthened key competencies such as strategic planning, learning monitoring, and the emotional management of students [22,33].
Finally, these interventions also had positive effects on students’ ability to manage academic uncertainty and respond to challenges with resilience. Emotionally supportive strategies, the development of metacognition, and active engagement in the learning process contributed to higher self-efficacy, reduced stress, and increased learning satisfaction [29,30,31]. In summary, the analyzed studies suggest that neuroeducational models, by integrating emotional, cognitive, and social factors, enhance not only academic performance but also students’ emotional well-being and motivation, establishing themselves as highly effective and holistic educational strategies.

4. Discussion

The objective of this systematic review was to identify research studies that analyzed the impact of neuroeducational models on teaching and learning in formal educational settings, with special emphasis on the role of emerging technologies as facilitators of these approaches. Today’s education system faces increasing complexity due to technological, social, and cognitive changes that influence how students learn [2]. This review highlights multiple findings that underscore the essential contribution of neuroeducational models to improving academic performance, developing higher-order cognitive skills, and enhancing students’ intrinsic motivation. By integrating knowledge from neuroscience, psychology, pedagogy, and educational technology, neuroeducational models enable the design of more personalized, multisensory, and adaptive interventions that promote deep learning, emotional regulation, and active participation [1,4]. In this regard, active learning, multisensory teaching, interactive simulations, and gamified learning environments have shown a significant impact on improving academic performance [17,18].
One key contribution of these models is their effect on emotional regulation and the management of academic anxiety, especially among primary and secondary students [21,27]. Evidence shows that combining physical activity with cognitive tasks not only improves math performance but also enhances emotional well-being and student self-efficacy.
Importantly, this review demonstrates that the integration of emerging technologies—such as augmented reality, artificial intelligence in clinical simulations, large language models (LLMs), and educational robotics—has notably enhanced the effectiveness of neuroeducational models. These technologies allow content to be tailored to individual neurocognitive profiles, provide immediate feedback, and increase students’ immersion in the learning process [26,28]. Together, they transform the classroom into a dynamic, interactive environment aligned with brain functioning, thus facilitating meaningful learning. The success of these methodologies also depends on teacher preparedness. Training educators in applied neuroscience and the use of technological tools is a key factor in ensuring the effective implementation of evidence-based strategies that stimulate brain plasticity and executive function development [22,33]. Conversely, interventions that focus solely on theoretical content delivery—without considering emotional or active learning aspects—are insufficient to foster deep, lasting learning. The findings support the need to integrate emotional, social, physical, and technological factors in the design of educational experiences [9,13].
Another important finding is the role of emotional support in the learning process. Students who receive personalized guidance, socio-emotional support, and access to interactive technologies that promote self-regulation tend to experience lower stress levels and greater academic resilience [29,31]. The overall perception of students, teachers, and families toward neuroeducational models was positive. Participants particularly valued their capacity to promote autonomous, motivating, and inclusive learning [19,25]. Educational technologies were not seen merely as support tools but as active agents in enhancing the learning experience and adapting it to individual needs [30]. Despite these encouraging results, some studies reported methodological limitations, including small sample sizes, heterogeneous designs, and a lack of formal validation of certain instruments. These issues may limit the generalizability of results and highlight the need for greater methodological rigor in future research.
It is important to emphasize that although we have used the term neuroeducational models, many of the included studies focus on the application of strategies or interventions inspired by neuroscience, without constituting a systematic and formal model. Therefore, our conclusions should be interpreted with this broader conceptual scope in mind, and future studies should aim for greater precision regarding defined models.
It is worth noting that, although the use of technology was not defined as an inclusion criterion in this systematic review, it emerged during the analysis as a relevant element in several studies, serving as a complementary support in the implementation of neuroeducational approaches. This highlights the need for future research to specifically assess the impact of technology-supported neuroeducational models on student learning and well-being.
One important limitation of this review is the restriction to open-access articles published between 2020 and 2025, which, while facilitating immediate data availability, may have introduced selection bias and limited the comprehensiveness of our findings. This was inherent to the pilot and diagnostic nature of the study. Future reviews will address this by including closed-access studies and the gray literature to ensure a more exhaustive and balanced synthesis.
While neuroeducational models supported by technology show promising results, the current evidence base, largely composed of studies of moderate to low methodological quality, is insufficient to support broad implementation. Further high-quality research is needed to validate these findings before large-scale adoption can be recommended.
Additionally, it is recommended that pedagogical strategies be complemented with practices that promote students’ overall well-being, such as self-care, regular physical activity, and training in social–emotional skills [2,10]. These factors, combined with educational technology, can maximize the impact of neuroeducational interventions. Regarding the limitations of this review, the diversity of study designs, contexts, and populations may have affected the consistency of findings. Nonetheless, the PRISMA 2020 protocol was strictly followed [15], ensuring transparency and methodological quality throughout the process. Finally, it is recommended that future research conduct randomized clinical trials with large samples and longitudinal designs, as well as meta-analyses that quantitatively synthesize the impact of technology-supported neuroeducational models across different educational levels. It will be especially relevant to assess the long-term effects of these strategies on students’ cognitive, emotional, and social development and their applicability in post-pandemic contexts.

5. Conclusions

Conducting systematic reviews like the one presented in this study represents a challenge due to the methodological diversity of the studies and the recent consolidation of neuroeducational and technological models in the academic field. Nonetheless, the evidence gathered offers solid conclusions both for educational practice and for research in applied neuroscience and educational technology. Neuroeducational models, especially when integrated with emerging technologies, play a fundamental role in enhancing the teaching-learning process. Their application promotes the development of higher cognitive skills, intrinsic motivation, emotional self-regulation, and the psychological well-being of students. This review has highlighted the positive impact of interventions that combine neuroscientific principles with technological tools such as artificial intelligence, augmented reality, simulation, and educational robotics.
Likewise, specialized teacher training—not only in neuroscience but also in the critical and pedagogical use of digital technologies—emerges as an essential component to ensure the effectiveness of these methodologies. The learning environments resulting from this synergy are more inclusive, adaptive, and evidence-based, enabling more personalized and effective teaching. In summary, the findings of this review strongly support the incorporation of neuroeducational models supported by technology as a comprehensive strategy to improve educational quality and the full development of students. It is essential to value the contribution of technological neuroeducation to pedagogical transformation and to advance educational policies that promote the continuous training of teachers in these areas, as well as the design of longitudinal research to evaluate its sustained impact. Working with neuroeducational models enriched with technology not only improves students’ learning experience but also strengthens a scientific teaching approach focused on well-being, equity, and the academic success of future generations.
Although the results of this systematic review suggest potential benefits of technology-supported neuroeducational models, it is important to acknowledge that most included studies do not make direct comparisons with traditional methods, thus limiting the ability to draw conclusive claims about their superiority. Additionally, it should be noted that not all analyzed studies apply a formal and systematic neuroeducational model; in many cases, they involve strategies or interventions inspired by neuroscientific principles, which broadens the conceptual scope but also requires greater precision in future research.
Furthermore, while the integration of emerging technologies was a relevant finding of this review, their use was not an initial inclusion criterion, and the specific impact of these tools on student learning and well-being needs to be evaluated more rigorously with standardized criteria. This pilot review was limited to open-access articles published between 2020 and 2025, which may have introduced selection bias and affected the representativeness of the results. Due to the heterogeneity of the included studies, it was not possible to conduct a meta-analysis or a quantitative assessment of the impact, highlighting the need for future research with rigorous designs, large samples, and quantitative syntheses to validate the observed effects. Consequently, although the findings are promising, the current evidence is insufficient to recommend broad and definitive implementation of these models, emphasizing the importance of continuing to develop this line of research with greater methodological rigor.
Therefore, although neuroeducational models show promising results, further rigorous comparative research is needed to establish their impact relative to traditional methods.

Author Contributions

Conceptualization, E.G.D.l.C. and E.P.-N.; methodology, E.G.D.l.C., Ó.G.-C. and F.J.G.-V.; software, E.P.-N.; validation, Ó.G.-C. and E.P.-N.; formal analysis, Ó.G.-C. and E.P.-N.; investigation, E.G.D.l.C.; resources, F.J.G.-V. and E.P.-N.; data analysis, Ó.G.-C.; writing—original draft, E.G.D.l.C.; writing—review and editing, F.J.G.-V.; supervision, E.G.D.l.C. and F.J.G.-V.; project administration, E.P.-N.; funding acquisition, Ó.G.-C. and E.G.D.l.C. All authors have read and agreed to the published version of the manuscript.

Funding

This study was funded by the Vice-Rector for Continuing Training, Educational Technologies and Teaching Innovation of the University of Jaén through the Teacher Innovation and Improvement Project, code PID2024_036, called “Innovative Methodologies in Primary Education”.

Informed Consent Statement

Compliance with the ethical principles of research set out in the Declaration of Helsinki (World Medical Association, 2013) was ensured at all times, and the study was approved by the Ethics Committee of the University of Jaén under code OCT.20/1.TES.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Immordino-Yang, M.H.; Damasio, A. We feel, therefore we learn: The relevance of affective and social neuroscience to education. Mind Brain Educ. 2007, 1, 3–10.
  2. Tokuhama-Espinosa, T. Neuroeducación: Solo se Puede Aprender Aquello que se Ama, 3rd ed.; Editorial Kairós: Barcelona, Spain, 2021.
  3. Doidge, N. The Brain’s Way of Healing: Remarkable Discoveries and Recoveries from the Frontiers of Neuroplasticity; Penguin Life: New York, NY, USA, 2015.
  4. Gazzaniga, M.S. The Cognitive Neuroscience of Mind: A Tribute to Michael S. Gazzaniga; MIT Press: Cambridge, MA, USA, 2020.
  5. Maguire, E.A. Neuroeducation: The bridge between neuroscience and education. In Neuroscience in Education; Della Sala, S., Anderson, M., Eds.; Oxford University Press: Oxford, UK, 2018; pp. 21–36.
  6. Howard-Jones, P.A. Neuroscience and education: Myths and messages. Nat. Rev. Neurosci. 2014, 15, 817–824.
  7. Dekker, S.; Lee, N.C.; Howard-Jones, P.; Jolles, J. Neuromyths in education: Prevalence and predictors of misconceptions among teachers. Front. Psychol. 2012, 3, 429.
  8. Ansari, D.; De Smedt, B.; Grabner, R.H. Neuroeducation—A critical overview of an emerging field. Neuroethics 2017, 10, 105–117.
  9. Sousa, D.A. How the Brain Learns; Corwin Press: Thousand Oaks, CA, USA, 2017.
  10. Immordino-Yang, M.H. Emotions, Learning, and the Brain: Exploring the Educational Implications of Affective Neuroscience; W. W. Norton & Company: New York, NY, USA, 2016.
  11. Hattie, J.; Yates, G. Visible Learning and the Science of How We Learn; Routledge: New York, NY, USA, 2014.
  12. Thomas, M.S.C.; Ansari, D.; Knowland, V.C.P. Annual research review: Educational neuroscience—Progress and prospects. J. Child Psychol. Psychiatry 2019, 60, 477–492.
  13. Mayer, R.E. The Cambridge Handbook of Multimedia Learning, 3rd ed.; Cambridge University Press: Cambridge, UK, 2020.
  14. Luckin, R.; Holmes, W.; Griffiths, M.; Forcier, L.B. Artificial Intelligence and Education: Promise and Implications for Teaching and Learning; UNESCO: Paris, France, 2021; Available online: https://unesdoc.unesco.org/ark:/48223/pf0000377071 (accessed on 10 July 2020).
  15. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Moher, D. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71.
  16. Dhungel, S.; Mahat, B.; Limbu, P.; Thapa, S.; Awasthi, J.R.; Thapaliya, S.; Jha, M.K.; Kunwar, A.J. Advantage of neuroeducation in managing mass psychogenic illness among rural school children in Nepal. IBRO Neurosci. Rep. 2023, 14, 435–440.
  17. Ballesta-Claver, J.; Sosa Medrano, I.; Gómez Pérez, I.A.; Ayllón Blanco, M.F. Propuesta neuroeducativa para un aprendizaje tecno-activo de la enseñanza de las ciencias: Un cambio universitario necesario. Rev. Electrón. Interuniv. Form. Profr. 2024, 27, 35–50.
  18. Zheng, K.; Shen, Z.; Chen, Z.; Che, C.; Zhu, H. Application of AI-empowered scenario-based simulation teaching mode in cardiovascular disease education. BMC Med. Educ. 2024, 24, 1003.
  19. Wang, H.; Zhang, W.; Kong, W.; Zhang, G.; Pu, H.; Wang, Y.; Ye, L.W.; Shang, L. The effects of ‘small private online course + flipped classroom’ teaching on job competency of nuclear medicine training trainees. BMC Med. Educ. 2024, 24, 1542.
  20. Dehghani, A.; Fakhravari, F.; Hojat, M. The effect of the team members teaching design vs. regular lectures method on the self-efficacy of multiple sclerosis patients in Iran. Randomised Controlled Trial. Investig. Educ. Enferm. 2024, 42, e13.
  21. Syväoja, H.J.; Sneck, S.; Kukko, T.; Asunta, P.; Räsänen, P.; Viholainen, H.; Kulmala, J.; Hakonen, H.; Tammelin, T.H. Effects of physically active maths lessons on children’s maths performance and maths-related affective factors: Multi-arm cluster randomized controlled trial. Br. J. Educ. Psychol. 2024, 94, 839–861.
  22. Drollinger-Vetter, B.; Buff, A.; Lipowsky, F.; Philipp, K.; Vogel, S. Fostering pedagogical content knowledge on “probability” in preservice primary teachers within formal teacher education: A longitudinal experimental field study. Teach. Teach. Educ. 2025, 105, 104953.
  23. Vetter, M.; O’Connor, H.T.; O’Dwyer, N.; Chau, J.; Orr, R. ‘Maths on the move’: Effectiveness of physically-active lessons for learning maths and increasing physical activity in primary school students. J. Sci. Med. Sport 2020, 23, 735–739.
  24. Rodríguez-López, E.S.; Calvo-Moreno, S.O.; Cimadevilla Fernández-Pola, E.; Fernández-Rodríguez, T.; Guodemar-Pérez, J.; Ruiz-López, M. Aprendizaje de la anatomía musculoesquelética a través de las nuevas tecnologías: Ensayo clínico aleatorizado. Rev. Lat. Am. Enferm. 2020, 28, e3237.
  25. Coelho, M.D.M.F.; Miranda, K.C.L.; Melo, R.C.D.O.; Gomes, L.F.D.S.; Monteiro, A.R.M.; Moreira, T.M.M. Use of a therapeutic communication application in the nursing undergraduate program: Randomized clinical trial. Rev. Lat. Am. Enferm. 2021, 29, e3456.
  26. Brügge, E.; Ricchizzi, S.; Arenbeck, M.; Keller, M.N.; Schur, L.; Stummer, W.; Holling, M.; Lu, M.H.; Darici, D. Large language models improve clinical decision making of medical students through patient simulation and structured feedback: A randomized controlled trial. BMC Med. Educ. 2024, 24, 1391.
  27. Kliziene, I.; Cizauskas, G.; Sipaviciene, S.; Aleksandraviciene, R.; Zaicenkoviene, K. Effects of a physical education program on physical activity and emotional well-being among primary school children. Int. J. Environ. Res. Public Health 2021, 18, 7536.
  28. Di Lieto, M.C.; Pecini, C.; Castro, E.; Inguaggiato, E.; Cecchi, F.; Dario, P.; Cioni, G.; Sgandurra, G. Empowering executive functions in 5- and 6-year-old typically developing children through educational robotics: An RCT study. Front. Psychol. 2020, 10, 3084.
  29. Faure, T.; Weyers, I.; Voltmer, J.B.; Westermann, J.; Voltmer, E. Test-reduced teaching for stimulation of intrinsic motivation (TRUST): A randomized controlled intervention study. BMC Med. Educ. 2024, 24, 718.
  30. Mugo, A.M.; Nyaga, M.N.; Ndwiga, Z.N.; Atitwa, E.B. Evaluating learning outcomes of Christian religious education learners: A comparison of constructive simulation and conventional method. Heliyon 2024, 10, e32632.
  31. Zhang, Y.; Yang, X.; Sun, X. The reciprocal relationship among Chinese senior secondary students’ intrinsic and extrinsic motivation and cognitive engagement in learning mathematics: A three-wave longitudinal study. ZDM Math. Educ. 2023, 55, 399–412.
  32. Zhao, W.; Cao, Y.; Hu, L.; Lu, C.; Liu, G.; Gong, M.; He, J. A randomized controlled trial comparison of PTEBL and traditional teaching methods in “Stop the Bleed” training. BMC Med. Educ. 2024, 24, 462.
  33. Hirt, C.N.; Eberli, T.D.; Jud, J.T.; Rosenthal, A.; Karlen, Y. One step ahead: Effects of a professional development program on teachers’ professional competencies in self-regulated learning. Teach. Teach. Educ. 2025, 159, 104977.
  34. Wang, L.; Zhao, Y.; Wang, P.; Qian, A.; Hong, H.; Xu, S. Application of clinical thinking training system based on entrustable professional activities in emergency teaching. BMC Med. Educ. 2024, 24, 1294.
  35. Channegowda, N.Y.; Pai, D.R.; Manivasakan, S. Simulation-based teaching versus traditional small group teaching for first-year medical students among high and low scorers in respiratory physiology: A randomized controlled trial. J. Educ. Eval. Health Prof. 2025, 22, 8.
  36. Chen, C.-Y.; Shi, T.-L.; Wang, R.-Y.; Li, N.; Hao, Y.-H.; Zhang, J.-L.; Tang, M.; Liu, S.; Qin, G.-M.; Mi, W. Implementation and evaluation of the three action teaching model with learning plan guidance in preventive medicine course. Front. Psychol. 2024, 15, 1508432.
  37. Tomás, B.H.; Ciliska, D.; Dobbins, M.; Micucci, S. Un proceso para revisar sistemáticamente la literatura: Proporcionar evidencia de investigación para intervenciones de enfermería en salud pública. Cosmovisiones Evid. Enfermería Basada 2004, 1, 176–184.
Figure 1. Flow diagram of the systematic review process according to the PRISMA protocol statements.
Table 1. Inclusion and exclusion criteria.
Inclusion Criteria | Exclusion Criteria
Studies published between 2020 and 2025. | Studies published prior to 2020.
Open-access or freely accessible full-text studies. | Articles without full-text access.
Experimental studies, case studies, or meta-analyses. | Research focused solely on clinical contexts without educational application.
Focus on neuroeducational models applied in formal educational settings (schools, universities, and teacher training). | Studies discussing neuromyths unrelated to applied neuroeducation.
Methodologies including neuroscientific measures (e.g., neuroimaging and cognitive assessments) or practical applications of neuroscience-based models in education. | Studies lacking empirical evidence on neuroeducational model implementation.
Studies written in English, Spanish, or Portuguese (languages in which the review team is proficient). | Studies published in other languages due to linguistic limitations of the review team.
Source: own elaboration.
Table 2. Quality assessment components and EPHPP instrument ratings.
Author (Year) | Country | Study Design | Comparisons and Intervention Characteristics | Study Objectives | Participants | Measured Variables and Scales | Intervention | Results
Dhungel et al. (2023) [16]. Country: Nepal.
Study design: Quasi-experimental pre-test–post-test design.
Comparison: Students from schools affected and not affected by mass hysteria; 4-week intervention including educational theater, anatomical models, and classes on neurology and stress; data collected pre- and post-intervention using questionnaires.
Objective: To evaluate the impact of neuroeducational tools on awareness of mass hysteria.
Participants: N = 234 (female students, grades 6–10).
Measures: Structured questionnaire developed by the research team to assess knowledge of the nervous system and stress. Content validated by a panel of neuroscience and psychology experts; no formal psychometric testing (e.g., reliability or statistical validity) was conducted.
Intervention: 4-week intervention (1 session/week); included educational theater, anatomical models, and classes on the nervous system and stress; data collected pre- and post-intervention.
Results: Improved awareness of stress, but no significant change in understanding of the nervous system.
Ballesta Claver et al. (2024) [17]. Country: Spain.
Study design: Pre-experimental pre-test–post-test design.
Comparison: Intervention applied to a single group without a control group; the same participants were assessed before and after the implementation of a neuroeducational intervention in a university teacher training setting.
Objective: To evaluate the impact of neuroeducation in higher education teaching.
Participants: N = 77 (prospective primary school teachers).
Measures: Two ad hoc content questionnaires developed by the research team, along with a validated Likert-type neuroeducational scale (Cronbach’s α = 0.968; KMO = 0.934) measuring dimensions related to learning, attention, emotions, and the development of executive functions. The scale was specifically designed for this study and has no commercial or previously published version.
Intervention: Neuroeducational intervention with a constructivist approach, focused on active learning and the use of neuroeducational resources (technology, visual aids, debates, etc.). Duration: 6 weeks. Data collected before and after the intervention.
Results: A significant effect on learning was observed, with an average 27% increase in acquired knowledge.
Zheng et al. (2024) [18]. Country: China.
Study design: Experimental.
Comparison: Scenario-based simulation teaching (experimental group) vs. traditional cardiology instruction (control group).
Objective: To evaluate the impact of simulation-based teaching on learning about cardiovascular diseases.
Participants: N = 66 medical students: control group (N = 32); experimental group (N = 34).
Measures: Performance evaluations in cardiovascular diagnosis, focusing on the accuracy of diagnoses made by medical students and their ability to apply knowledge in practical scenarios; specifically, post-tests, Mini-CEX, a clinical critical thinking scale, satisfaction surveys (experimental group), and semi-structured interviews. Rubrics are not specified, but standardized and qualitative mixed instruments were used.
Intervention: Scenario-based simulation teaching using AI. Experimental group: AI-supported simulation in cardiovascular diagnostics. Control group: traditional instruction (no simulation). Duration: 5 weeks.
Results: Simulation significantly improved diagnostic performance, with a 15% increase in diagnostic accuracy.
Wang et al. (2024) [19]. Country: China.
Study design: Experimental.
Comparison: Small private online course (SPOC) + flipped classroom (experimental group) vs. traditional teaching (control group).
Objective: To evaluate the impact of the “small private online course + flipped classroom” model on professional competencies in nuclear medicine.
Participants: N = 103 first-year residents: experimental group (N = 52); control group (N = 51).
Measures: Questionnaires (pre- and post-class, 20 questions, maximum score: 100); final exam (theoretical and practical, maximum score: 100). Course satisfaction: 5-domain questionnaire. Course effectiveness: assessment of 6 competency and skill areas. Knowledge assessment: objective questionnaires before each class (20 questions), theoretical and practical tests after each session, and a final exam (50% theoretical/50% practical) on equipment handling. Performance was assessed using standardized faculty criteria. Satisfaction and perceived course effectiveness were measured in areas such as professional skills, patient care, communication, teamwork, teaching, and learning. Questionnaire validation is not reported, though detailed content tables (S3 and S4) are referenced.
Intervention: Experimental group: blended learning combining a small private online course (SPOC) with a flipped classroom approach, designed to improve workplace competencies among nuclear medicine residents; participants accessed online content before attending in-person sessions to apply their knowledge. Control group: traditional lecture-based instruction. Duration: 10 weeks.
Results: The blended learning model (SPOC + flipped classroom) significantly improved residents’ workplace competencies, with a 22% increase in performance scores. Results also showed higher overall satisfaction and perceived educational effectiveness among residents.
Dehghani et al. (2024) [20]. Country: Iran.
Study design: Randomized controlled trial (RCT).
Comparison: Team-based teaching vs. lecture-based teaching vs. a control group in patients with multiple sclerosis.
Objective: To evaluate the impact of team-based teaching on patients’ self-efficacy.
Participants: N = 48 patients with multiple sclerosis, randomly assigned to three groups: TMTD (n = 16), lecture-based (n = 16), and control (n = 16).
Measures: Rigby et al.’s validated Self-Efficacy Scale. Measured variables: self-efficacy related to daily activities, motor skills, participation in health-related decision making, and emotional regulation.
Intervention: All three groups received six training sessions over a period of 12 weeks (two sessions per week). TMTD group (Teaching Method Through Teamwork): intervention based on teamwork, including group dynamics, interactive discussions, and collaborative problem-solving, aligned with neuroeducational models that promote social collaboration and activate brain areas associated with memory and decision making. Lecture-based group: traditional lectures focused on individual, passive learning, with no significant interaction between participants. Control group: received no educational intervention. Duration: 12 weeks.
Results: Team-based teaching improved patient self-efficacy by 21%. Significant improvements were observed across all self-efficacy dimensions in the intervention groups, including health decision making, motor skills, and emotional regulation.
Syväoja et al. (2024) [21]. Country: Finland.
Study design: Randomized controlled trial (RCT).
Comparison: Physically active math instruction (experimental group) vs. traditional teaching methods (control group).
Objective: To evaluate the impact of physical activity during math instruction.
Participants: Students with signed parental consent (N = 397, mean age: 9.3 years): experimental group: n = 265; control group: n = 132.
Measures: Curriculum-based mathematics test: an adapted test battery including tasks on multiplication, division, geometry, time, column methods, and problem-solving; results measured by the total number of correct answers. Self-reported questionnaire on affective factors related to mathematics: a modified version of the Fennema–Sherman Mathematics Attitude Scale, adapted for Finnish third-grade students, on a 5-point Likert scale; the Modified Abbreviated Math Anxiety Scale (mAMAS) assessed math anxiety in both learning and testing situations. Motor skills assessment: validated batteries, including the Körperkoordinationstest für Kinder (KTK), the Movement Assessment Battery for Children, Second Edition (MABC-2), and the Eurofit protocol, measured baseline motor skills. Educational support needs questionnaire: teachers assessed students’ need for educational support (intensified or special), included as a confounding covariate. Physical activity (PA) monitoring: an accelerometer measured the amount and intensity of physical activity during math lessons in a subsample of 172 children. Data imputation and statistical analysis: linear mixed-effects models (LMEs), adjusted for gender, educational support needs, and arithmetic fluency, were used; multiple imputation (MI) models handled missing data. All instruments were validated and widely applied in previous educational research; no self-developed instruments were reported, as the scales and tests used were adaptations or established tools.
Intervention: Experimental group: physically active math lessons based on neuroeducational models. Control group: traditional teaching without physical activity. Duration: 12 weeks.
Results: The active teaching group showed a significant improvement in math performance (+14%) and motivation (+0.8 on the Likert scale). No improvements were observed in the control group.
Drollinger-Vetter et al. (2025) [22]. Country: Switzerland.
Study design: Pre-test–post-test experimental study.
Comparison: Evaluation of changes in pre-service teachers’ pedagogical content knowledge (PCK) in probability; single experimental group.
Objective: To assess the impact of formal education on the development of pedagogical content knowledge in probability.
Participants: N = 512 (pre-service primary school teachers).
Measures: Two main instruments assessed pedagogical content knowledge (PCK) in probability. Mathematical knowledge questionnaire: assessed conceptual mastery of probability; designed specifically for the study, with no detailed information provided about its validation process. Pedagogical content knowledge scale (PCK scale): measured teachers’ ability to effectively teach probabilistic concepts; adapted from previously validated tools used in similar contexts, though reliability or validity metrics for this study were not specified.
Intervention: Mathematics education training program with a focus on probability over the course of one academic year. The intervention is considered neuroeducational, as it actively stimulates cognition and neural plasticity through problem solving, promotes meaningful learning by linking new knowledge to prior experience, fosters social interaction that activates brain regions associated with motivation and cognitive processing, and uses spaced repetition to support long-term memory consolidation. Duration: one academic year.
Results: Significant improvement in pedagogical content knowledge, with better performance in probabilistic problem-solving tasks.
Vetter et al. (2020) [23]. Country: Australia.
Study design: Randomized controlled trial (RCT); pre-test–post-test design.
Comparison: Two groups: physically active math lessons (playground) vs. traditional classroom-based math lessons (classroom).
Objective: To evaluate the effectiveness of physically active lessons for learning mathematics and increasing physical activity among primary school students.
Participants: N = 172 primary school students: experimental group: n = 86; control group: n = 86.
Measures: Multiplication test (designed by the authors); general mathematics test (standardized); physical activity measured using accelerometers.
Intervention: 3 × 30-min lessons per week for 6 weeks, delivered either in a physically active environment (playground) or a traditional classroom setting. Data were collected before and after the intervention. The intervention leveraged the principle of neuroplasticity, whereby physical activity may foster brain connectivity that facilitates learning; additionally, integrating movement with academic content (in this case, mathematics) is known to enhance intrinsic motivation and attention, key factors in the neuroscience of learning.
Results: Significant improvement in multiplication scores in the playground group. No significant differences were found in general math performance. Total physical activity and moderate-to-vigorous physical activity levels were significantly higher in the playground group.
Rodríguez-López et al. (2020) [24]. Country: Brazil.
Study design: Randomized clinical trial.
Comparison: Traditional teaching (control group) vs. teaching using interactive technologies (augmented reality and 3D models) (experimental group).
Objective: To evaluate the effectiveness of interactive technologies in anatomy education.
Participants: N = 62 physiotherapy students: control group: N = 43; experimental group: N = 19.
Measures: Theoretical test (knowledge), practical test (application), and perception questionnaire (satisfaction and motivation). Variables measured: academic performance and learning perception. All assessments were developed by the authors; no formal statistical validation was reported.
Intervention: The experimental group used augmented reality applications and digital 3D models to study musculoskeletal anatomy during lessons, while the control group followed traditional lecture-based instruction. Duration: 8 weeks.
Results: The experimental group achieved higher scores in both theoretical and practical tests, and reported greater satisfaction and motivation toward learning.
Coelho et al. (2021) [25]. Country: Brazil.
Study design: Randomized clinical trial.
Comparison: Experimental group using an educational mobile app on therapeutic communication vs. a control group receiving traditional instruction without the app.
Objective: To evaluate the effect of using a mobile app on communication skills in nursing education.
Participants: N = 68 nursing students (randomized); analyzed: N = 60 (30 in the experimental group and 30 in the control group).
Measures: Questionário de Conhecimento sobre Comunicação Terapêutica (QCCoT) and the Therapeutic Communication Skills Self-Assessment Scale. Measured variables: theoretical knowledge and perceived communication skills.
Intervention: An interactive mobile application specifically developed to improve therapeutic communication, including clinical simulations, practical exercises, real-case analysis, immediate feedback, and self-assessment modules; used over a 6-week period as a complement to theoretical instruction.
Results: The experimental group showed a significant improvement in theoretical knowledge and self-assessed communication skills compared to the control group.
Brügge et al. (2024) [26]. Country: USA.
Study design: Randomized clinical trial.
Comparison: Experimental group: use of large language models (LLMs) during clinical simulations; control group: clinical simulations without LLM assistance.
Objective: To evaluate the impact of large language models (LLMs) on clinical decision making in medical students using simulated patient interviews and structured feedback.
Participants: N = 21 participants who completed the study: control group: 11; feedback group (experimental): 10.
Measures: Clinical Reasoning Indicator—History-Taking Inventory (CRI-HTI): assessed clinical decision making based on simulated history-taking interactions.
Intervention: Control group: simulated patient interviews without feedback. Experimental group: simulated history-taking exercises with AI-generated performance feedback. Duration: 3 months, 4 sessions.
Results: The feedback group showed significantly greater improvement in scores compared to the control group, particularly in contextualization and information gathering during simulated clinical interviews.
Kliziene et al. (2021) [27]. Country: Lithuania.
Study design: Quasi-experimental pre-test–post-test design.
Comparison: Intra-group comparison before and after the implementation of a 20-week structured physical education program conducted during school hours for primary school children.
Objective: To evaluate the effect of structured physical education on emotional well-being and physical activity levels in children.
Participants: N = 162 primary school students aged 10–12 years (85 boys and 77 girls).
Measures: Emotional well-being: assessed using the Strengths and Difficulties Questionnaire (SDQ). Physical activity level: measured with the Physical Activity Questionnaire for Older Children (PAQ-C), validated for pediatric populations.
Intervention: Structured physical education program: 45-min sessions twice per week for 20 weeks, focused on physical and psychosocial development. The intervention is considered neuroeducational as it targets emotional and physical well-being, two key factors in optimizing learning from a neuroscience perspective; emotional well-being and physical activity are fundamental components in brain stimulation and enhancement, directly impacting students’ learning capacity. Data were collected pre- and post-intervention.
Results: Significant improvements in emotional well-being and physical activity levels were observed, with positive effects noted in both boys and girls.
Di Lieto et al. (2020) [28]. Country: Italy.
Study design: Randomized controlled trial (RCT).
Comparison: Experimental group: educational robotics intervention to enhance executive functions; control group: no intervention.
Objective: To evaluate the impact of educational robotics on the development of executive functions in typically developing 5- to 6-year-old children.
Participants: N = 128 children (ages 5–6): experimental group (educational robotics): N = 64; control group (no intervention): N = 64.
Measures: Executive function assessment: a task battery evaluating planning, inhibition, and working memory. Perception questionnaires: scales measuring children’s motivation and enjoyment of the activities, completed by children and parents. All instruments were validated and showed high reliability (α > 0.75).
Intervention: Duration: 8 weeks. Experimental group: educational robotics program consisting of two 40-min sessions per week, with activities designed to promote planning, problem-solving, and decision-making skills. Control group: traditional pedagogical activities with no robotics component. Data were collected before and after the intervention.
Results: The experimental group showed significant improvements in all executive function tasks, especially in working memory and inhibitory control.
Faure et al. (2024) [29]. Country: Germany.
Study design: Randomized controlled trial (RCT).
Comparison: Two intervention groups, the Stress Management Intervention (IVSM) and the Friendly Feedback Intervention (IVFF), vs. a control group (CG) during an anatomy dissection course.
Objective: To evaluate the impact of friendly feedback and stress management on intrinsic motivation and stress reduction during an anatomy dissection course in medical students.
Participants: N = 166 medical students (85% of those enrolled in the course): Group 1 (IVFF): N = 55; Group 2 (IVSM): N = 55; control group: N = 56.
Measures: Perceived Stress Scale (PSS): standardized instrument for measuring stress levels. Anxiety scale: questionnaire to assess anxiety (e.g., Beck Anxiety Inventory). Intrinsic and extrinsic motivation scale: questionnaire such as the Academic Motivation Scale (AMS). Self-efficacy scale: measure of confidence in one’s abilities, similar to the General Self-Efficacy Scale. Positive and negative affect scale (e.g., PANAS): used to assess emotional states.
Intervention: Group IVFF: formal assessments replaced with frequent, friendly feedback. Group IVSM: stress management intervention. Duration: two academic semesters with measurements at nine time points.
Results: The friendly feedback group (IVFF) showed significant reductions in stress, anxiety, and negative affect, as well as improvements in intrinsic motivation, positive affect, and self-efficacy. Perceived academic performance was not affected.
Mugo et al. (2024) [30]. Country: Kenya.
Study design: Experimental, randomized group design.
Comparison: Group using constructive simulation (experimental group) vs. group using conventional methods in Christian Religious Education (control group).
Objective: To compare learning outcomes in Christian Religious Education using constructive simulation versus conventional teaching methods.
Participants: N = 90 secondary school students: constructive simulation group: N = 50; conventional method group: N = 40.
Measures: Academic performance exams: written and oral tests validated by subject-matter experts. Intrinsic Motivation Scale for Religious Learning: validated Likert-type scale (α > 0.80).
Intervention: Experimental group (constructive simulation): instruction through virtual environments and hands-on activities. Control group (conventional method): content-centered doctrinal classes. Duration: 8 weeks. Pre- and post-intervention data collection.
Results: The constructive simulation group outperformed the conventional group in academic achievement, intrinsic motivation, and understanding of religious content.
Zhang et al. (2023) [31]. Country: China.
Study design: Moderated mediation model.
Comparison: Different teaching strategies (interactive strategies, experimental group, vs. traditional strategies, control group) and their impact on students’ learning engagement.
Objective: To analyze the impact of teachers’ teaching strategies on students’ learning engagement, considering mediating factors such as motivation and emotional support.
Participants: N = 300 secondary school students: experimental group: N = 150; control group: N = 150.
Measures: Academic Engagement Scale (Student Engagement Scale): measures students’ participation and interest in academic activities. Intrinsic and extrinsic motivation scale: assesses students’ levels of motivation to learn, both internal and external.
Intervention: Experimental group: interactive teaching strategies that promote active participation and student motivation. Control group: traditional strategies with more passive teaching. The study was conducted over the course of one school semester.
Results: Interactive teaching strategies significantly increased students’ learning engagement compared to traditional strategies. Intrinsic motivation and emotional support mediated the relationship between teaching strategies and student engagement.
Zhao et al. (2024) [32]
China
Randomized controlled trial
Comparison between the experimental group using PTEBL teaching (Problems, Teamwork, and Evidence-Based Learning) and the control group using traditional teaching in the “Stop the Bleed” (STB) training course
To evaluate the effectiveness of PTEBL on hemostasis skills, emergency preparedness, and teamwork
N = 153 third-year medical students:
PTEBL group: N = 77; traditional group: N = 76
Ad hoc questionnaire with items on mastery of STB techniques, emergency preparedness, and teamwork. Likert-type response scale (psychometric validation not specified).
Four-hour STB course using PTEBL methodology vs. traditional method. Pre-test–post-test evaluation through questionnaires.
PTEBL was equally effective in STB skills, significantly improved teamwork (94.8% vs. 81.6%), and correlated with clinical reasoning; no significant differences were found in overall preparedness or practical skills.
Hirt et al. (2025) [33]
Switzerland
Quasi-experimental study with control group (pre-test–post-test)
Comparison between the experimental group (SRL training) and control group with no intervention
To examine the impact of a professional development program on teachers’ competencies in SRL as promoters and self-regulated learners
N = 54 lower secondary school teachers:
experimental group: N = 31; control group: N = 23
Instrument: SRL-QuTA Questionnaire (Self-Regulated Learning Questionnaire for Teachers’ Agency). Variables:
teachers’ self-efficacy in SRL,
knowledge about SRL,
promotion of SRL practices, and
application of SRL in teaching;
5-point Likert scales (1 = strongly disagree, 5 = strongly agree).
Intervention in the experimental group consisted of a multi-day professional development program including theoretical sessions, practical exercises, self-assessment, and reflection on strategies to implement SRL in the classroom. Duration: 5 sessions, each lasting 5.5 to 6 h.
Significant positive effects on competencies as SRL promoters. No significant changes observed as self-regulated learners. Initial competencies did not influence development.
Wang et al. (2024) [34]
China
Experimental (quasi-experimental) study
Comparison between a training system based on Entrustable Professional Activities (EPAs) (experimental group) and traditional emergency medicine teaching (control group)
To evaluate the impact of an EPA-based training system on students’ clinical thinking
N = 210 medical students (106 in the experimental group and 104 in the control group)
Clinical skills scale adapted for emergency medicine. Variables included:

clinical reasoning,
decision making,
clinical judgment, and
integrated clinical skills.
Evaluation was conducted using specific rubrics and structured practical exams (OSCEs). Psychometric validation was not specified.
Experimental group: training based on EPAs, clinical simulations, and performance-oriented assessments.
Control group: traditional theoretical teaching without the use of EPAs.
Duration: 6 months.
The EPA group showed statistically significant improvements in clinical reasoning and decision making, with a higher level of overall clinical competence compared to the control group.
Channegowda et al. (2025) [35]
India
Randomized controlled trial (RCT)
Comparison between traditional small-group teaching (control group) and simulation-based teaching for first-year medical students (experimental group), further differentiated by high and low academic performance
To evaluate the effectiveness of simulation-based teaching in learning respiratory physiology compared to traditional methods
N = 250 first-year medical students
Some were excluded for not completing the intervention; final sample: N = 107:
control group: N = 52;
experimental group: N = 55
Validated multiple-choice knowledge test (MCQ) reviewed by experts; academic performance assessed before and after the intervention.
Simulation-based teaching using interactive clinical case scenarios with simulators vs. traditional small-group teaching.
Duration: 4 weeks.
Pre- and post-intervention data collection to assess learning outcomes.
Low-performing students who received simulation-based teaching showed significant improvement in knowledge compared to those receiving traditional teaching. No significant differences were observed among high-performing students.
Chen et al. (2024) [36]
China
Quasi-experimental pre-test–post-test design
Comparison between an experimental group (Three-Actions Model + learning plan guide) and a control group (traditional teaching)
To evaluate the effect of the Three-Actions Model on student satisfaction, academic performance, and engagement
N = 95 medical students:
experimental group: N = 47;
control group: N = 48
Subjective Evaluation System (SES): measures learning, emotions, engagement, and achievement (validated).
Biggs’ Study Process Questionnaire: measures deep and surface learning approaches.
Course exams: objective knowledge tests.
The experimental group received an intervention that included
personalized study planning,
continuous reflection throughout the course, and
periodic evaluations with feedback.
The control group received traditional teaching without these elements.
The experimental group achieved higher exam scores (mean of 79.44 vs. 70.00).
The study is considered neuroeducational because it promotes self-regulation, metacognition, and active student engagement, stimulating executive functions such as planning and attentional control, and because it integrates the emotional and motivational factors that are essential to learning from a brain-based perspective.
The experimental group, which used the Three-Actions Teaching Model with a learning plan guide, showed significantly higher academic performance than the control group and reported a better overall experience in terms of learning methods, emotions, engagement, and achievement.
Note: Despite its large format, the comprehensive table presents detailed data in order to provide transparency regarding all included studies.
Table 3. Quality assessment components and EPHPP instrument ratings.
Articles | 1 | 2 | 3 | 4 | 5 | 6 | Overall Score
Dhungel et al. (2023) [16] | M | M | L | L | H | H | L
Ballesta Claver et al. (2024) [17] | M | M | L | L | H | H | L
Zheng et al. (2024) [18] | H | H | H | M | H | H | H
Wang et al. (2024) [19] | M | M | M | L | H | H | M
Dehghani et al. (2024) [20] | H | H | M | L | H | H | M
Syväoja et al. (2024) [21] | M | H | M | L | H | M | M
Drollinger-Vetter et al. (2025) [22] | M | H | M | L | H | M | M
Vetter et al. (2025) [23] | M | H | M | L | H | M | M
Rodríguez-López et al. (2020) [24] | M | H | L | L | H | M | M
Coelho et al. (2021) [25] | M | H | M | L | H | M | M
Brügge et al. (2024) [26] | H | H | M | L | H | M | M
Kliziene et al. (2021) [27] | M | H | M | L | H | M | M
Di Lieto et al. (2020) [28] | M | H | M | L | H | M | M
Faure et al. (2024) [29] | M | H | M | L | H | M | M
Mugo et al. (2024) [30] | M | H | M | L | H | M | M
Zhang et al. (2023) [31] | M | H | M | L | H | M | M
Zhao et al. (2024) [32] | M | H | M | L | H | M | M
Hirt et al. (2025) [33] | M | H | M | L | H | M | M
Wang et al. (2024) [34] | M | H | M | L | H | M | M
Channegowda et al. (2025) [35] | H | H | M | M | H | M | M
Chen et al. (2024) [36] | M | H | M | L | H | M | M
Note: (1) H = high risk/low quality; M = moderate risk; L = low risk/high quality. (2) 1 = risk of bias; 2 = study design; 3 = confounding factors; 4 = blinding; 5 = data collection; 6 = withdrawals and dropouts.