Article

A Multimodal Neurophysiological Approach to Evaluate Educational Contents in Terms of Cognitive Processes and Engagement

Vincenzo Ronca, Pietro Aricò, Luca Tamborra, Antonia Biagi and Gianluca Di Flumeri
1 Department of Computer, Control, and Management Engineering, Sapienza University of Rome, 00185 Rome, Italy
2 Department of Anatomical, Histological, Forensic and Orthopaedic Sciences, Sapienza University of Rome, 00185 Rome, Italy
3 Department of Technological Innovations and Safety of Plants, Products and Anthropic Settlements, Istituto Nazionale per L’Assicurazione Contro Gli Infortuni Sul Lavoro (INAIL), 00144 Rome, Italy
4 Department of Molecular Medicine, Sapienza University of Rome, 00185 Rome, Italy
* Author to whom correspondence should be addressed.
Bioengineering 2025, 12(6), 597; https://doi.org/10.3390/bioengineering12060597
Submission received: 10 March 2025 / Revised: 24 April 2025 / Accepted: 28 May 2025 / Published: 31 May 2025
(This article belongs to the Section Biosignal Processing)

Abstract

Background: Understanding the impact of different learning materials in terms of comprehension and engagement is essential for optimizing educational strategies. While digital learning tools are increasingly used and continually multiply the available educational solutions, their effects on learners’ mental workload, attention, and engagement remain underexplored. This study investigates how different types of learning content—educational videos, academic videos, and text reading—affect cognitive processing and engagement. Methods: Neurophysiological signals, including electroencephalography (EEG), electrodermal activity (EDA), and photoplethysmography (PPG), were recorded from experimental participants while they engaged with each learning content. Subjective assessments of cognitive effort and engagement, together with a quiz assessing knowledge acquisition, were collected through questionnaires for each tested content. Key neurophysiological metrics, such as the Engagement index and the Human Distraction Index (HDI), were computed and compared across conditions. Results: Our findings indicate that video-based learning materials, particularly educational videos with visual enhancements, elicited higher engagement and lower cognitive load than text-based learning. The text reading condition was associated with increased mental workload and a higher distraction index, suggesting greater cognitive demands. Correlation analyses confirmed strong associations between neurophysiological indicators and subjective evaluations. Conclusions: The results highlight the potential of neurophysiological measures to objectively assess learning experiences, paving the way for designing more effective and engaging learning platforms.

1. Introduction

The quest to understand and optimize the learning process is a cornerstone of educational research. Unraveling the cognitive mechanisms underlying knowledge acquisition not only enhances theoretical understanding but also paves the way for more effective, personalized, and inclusive teaching methodologies. By integrating insights from neuroscience, psychology, and educational technology, researchers can develop personalized learning strategies that cater to diverse learner needs, ultimately improving educational outcomes. While traditional assessment methods like tests and questionnaires provide valuable information, they offer a limited window into the dynamic, internal cognitive, and affective states that shape learning. To gain a deeper understanding, researchers are increasingly turning to neurophysiological measures, capturing the subtle physiological changes that accompany relevant cognitive processes of the learner, such as mental effort, attention, engagement, and distraction. This approach, often termed educational neuroscience, promises to provide objective, real-time insights into the learner’s experience, moving beyond subjective self-reports and behavioral observations [1,2].
The application of approaches based on neurophysiological signals, such as electroencephalography (EEG), electrodermal activity (EDA), and photoplethysmography (PPG), in real-world educational settings is gaining constant interest among the scientific community [3,4]. Unlike fMRI or other neuroimaging techniques that require constrained laboratory environments, EEG, EDA, and PPG are relatively portable and less sensitive to movement artifacts, making them suitable for studying learning in more naturalistic contexts. This is crucial for capturing the complexities of real-world learning experiences, which often involve dynamic interactions and a variety of stimuli. Researchers have begun to use these tools to investigate learning in classrooms, online environments, and even during outdoor activities [5,6,7,8]. The existing literature demonstrates the potential of these neurophysiological signals to reveal subtle differences in cognitive and affective states during various learning tasks. Studies have shown that EEG can differentiate between levels of mental workload during problem-solving [9,10,11], identify periods of inattention during lectures [12,13,14], and even predict learning outcomes based on neural activity patterns [15,16,17]. EDA and PPG have been used to assess stress levels during exams, monitor engagement during interactive simulations [15,18], and detect emotional responses to different types of learning materials [16,19,20,21].
However, a comprehensive investigation of how different types of learning materials, commonly used in real-world educational settings, impact cognitive responses from a neurophysiological point of view remains an area to be explored [22,23,24,25]. Specifically, the comparative effects of visually rich educational videos, interactive training videos designed to promote skill acquisition, and traditional text-based learning materials on students’ cognitive and affective states need to be systematically examined using a unified neurophysiological framework. Many existing studies focus on a single modality (e.g., only EEG) or compare very dissimilar learning activities, making it difficult to draw direct comparisons. Additionally, these previous approaches rely on neurophysiological signal collection equipment that is not fully compatible with real-life contexts. This study aims to address these gaps by investigating the neurophysiological impact of three common learning contents: an educational learning video with advanced graphic solutions and a simpler communicative language style, a more traditional academic content based on a PowerPoint presentation with a voice-over, and classical text reading. The overarching objective was to objectively characterize the distinct neurophysiological reactions associated with each learning content and, more importantly, to provide objective insights into which one could be the most engaging and efficient learning environment, through a multimodal neurophysiological approach compatible with real-life contexts. This investigation was designed to neurophysiologically characterize distraction and engagement, two of the most crucial aspects in learning and education. More specifically, the present research assessed the reliability of a Human Distraction Index (HDI), previously validated in a different context [26], and an Engagement index [5,27,28]. By simultaneously recording brain activity and electrodermal and cardiovascular responses, the aim was to capture the interplay between cognitive workload, attentional focus, and engagement elicited by each type of material.

2. Materials and Methods

2.1. Participants

Ten volunteers from Sapienza University, including Master’s thesis students, Ph.D. candidates, and staff members, ranging in age from 24 to 37 years (M = 28.6, SD = 4.56) and sharing a similarly strong technical background, were involved in the presented study. They participated in the study without any reward. The experimental task consisted of using the three selected educational contents, related to the same topic, while maintaining focus and striving to absorb as much information as possible, since participants were informed that a questionnaire would follow. Before starting, each participant received a clear explanation of the study’s procedures and provided written informed consent. While the overall procedure was explained, the specific focus on comparing cognitive engagement across different material types was not explicitly highlighted to avoid influencing their natural viewing behavior. Following the observation period, a comprehensive debriefing session clarified the study’s full objectives. Permission to use any visual recordings from the session was also secured. This research was conducted in accordance with the Declaration of Helsinki (1975, revised in 2008) and received ethical approval from the Sapienza University of Rome ethics committee (protocol code 2024/03-002 approved on 21 March 2024).

2.2. Experimental Protocol

This study presented each participant with three distinct educational contents concerning a single topic: Bluetooth wireless technology. This topic was chosen for its broad relevance in order to avoid any bias due to individual interests. Limiting the study to a single topic was deemed appropriate for this initial investigation. The three educational contents were as follows:
  • An educational video, employing advanced graphic solutions and a simpler, more communicative language style;
  • An academic video, consisting of a more traditional PowerPoint presentation with a voice-over;
  • A classical text reading.
Participants performed each task individually, seated comfortably in front of a computer monitor with audio delivered via external speakers, inside a standard-like classroom in order to simulate an educational environment (Figure 1). The order of presentation of the three contents was randomized across participants to mitigate potential order effects, such as habituation or expectation. Neurophysiological data were recorded continuously from each participant during each task. Prior to the start of each experimental session, a 60 s resting-state baseline was recorded for each participant at their workstation. A five-minute break was provided between the learning materials.

2.3. Neurophysiological Data Collection and Processing

2.3.1. Electroencephalography (EEG)

During the experimental tasks, the participants’ cerebral activity (EEG) was captured using the Mindtooth Touch EEG wearable system (Brain Products GmbH, Gilching, Germany & BrainSigns srl, Rome, Italy, [29]). The device embeds eight recording EEG channels, placed over the prefrontal and parietal regions, specifically at the AFz, AF3, AF4, AF7, AF8, Pz, P3, and P4 locations in accordance with the 10-10 International System, with the reference and ground electrodes placed on the right and left mastoids, respectively, and a sampling rate of 125 Hz. All the electrodes’ impedances were kept below 100 kΩ, and the quality of the EEG signals was checked before and during the experimental protocol, while the electrodes’ positioning was initially confirmed through scalp distance measurements.
Initially, a pre-processing phase was carried out to identify and correct both physiological and non-physiological artifacts unrelated to the cerebral activity of interest, such as ocular, muscular, and movement-induced signals. In this regard, the EEG signal was band-pass filtered with a 5th-order Butterworth filter in the 2–30 Hz interval, in addition to a 50 Hz notch filter. The eye blink artifacts were detected and corrected by employing the o-CLEAN method [30], a novel method combining regression and multi-channel adaptive filtering to accurately identify and correct ocular artifacts. For further sources of artifacts, such as those derived from muscular activity and movements, ad hoc algorithms based on the EEGLAB toolbox [31] were applied. More specifically, a statistical criterion was applied to the pre-processed EEG signal divided into 1 s-long epochs: epochs with a signal amplitude exceeding ±80 μV were marked as “artifacts” and removed from the EEG dataset, so that the subsequent EEG processing considered exclusively the cleaned data. All the EEG channels were preserved, with an average artifact percentage below 8%, as shown in Table 1.
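For illustration only, the following sketch reproduces this pre-processing chain under the assumption that the raw EEG is available as a NumPy array of shape (channels, samples); the filter and threshold parameters follow the description above, while the function names and the notch quality factor are illustrative, and the actual pipeline relied on the o-CLEAN and EEGLAB routines cited in the text.

```python
import numpy as np
from scipy.signal import butter, iirnotch, filtfilt

FS = 125  # Mindtooth Touch sampling rate (Hz)

def preprocess_eeg(raw, fs=FS):
    """Band-pass (2-30 Hz, 5th-order Butterworth) and 50 Hz notch filtering.
    raw: (n_channels, n_samples) EEG in microvolts."""
    b, a = butter(5, [2, 30], btype="bandpass", fs=fs)
    eeg = filtfilt(b, a, raw, axis=1)
    bn, an = iirnotch(50, Q=30, fs=fs)  # notch quality factor is an assumption
    return filtfilt(bn, an, eeg, axis=1)

def reject_artifact_epochs(eeg, fs=FS, threshold_uv=80.0):
    """Split the signal into 1 s epochs and drop those exceeding +/- 80 uV."""
    n_epochs = eeg.shape[1] // fs
    epochs = eeg[:, : n_epochs * fs].reshape(eeg.shape[0], n_epochs, fs)
    clean = np.all(np.abs(epochs) <= threshold_uv, axis=(0, 2))
    return epochs[:, clean, :], clean
```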
Once the EEG pre-processing steps were completed, the Global Field Power (GFP) was calculated for the EEG frequency bands of interest for computing the mental states on which the present study focused. Therefore, the EEG GFP features were computed within the Theta, Alpha, and Beta frequency bands. It must be underlined that the GFP was chosen as the parameter of interest describing brain EEG activity since it has the advantage of representing, within the time domain, the degree of synchronization over a specific cortical region of interest in a specific frequency band [32,33]. In terms of technical implementation, the GFP was mathematically computed according to the approach described by Vecchiato and colleagues [34]. Concerning the EEG GFP features computation, the frequency bands were defined according to the Individual Alpha Frequency (IAF) value [35] computed for each participant. In order to compute the IAF, a 60 s-long experimental condition was collected while the participants kept their eyes closed, since the Alpha peak is consistently prominent in such a condition. Subsequently, the EEG GFP was computed over all the EEG channels for each 1 s-long epoch through a Hanning window of the same length (1 s, corresponding to a 1 Hz frequency resolution, in line with the time resolution required by the presented approach) as follows:
$$\mathrm{GFP}_{band,\,region} = \frac{1}{N}\sum_{i=1}^{N} x_{i,\,band}^{2}(t),$$
where $N$ is the number of the considered EEG channels and $x_{i,\,band}(t)$ is the $i$-th EEG channel filtered within the selected EEG frequency band. After the EEG data preprocessing, the EEG GFP-derived features were computed according to the research objectives. More specifically, the mental workload, attention, and cognitive engagement were computed according to the following:
$$\mathrm{Mental\ workload} = \frac{\mathrm{Frontal\ Theta}_{GFP}}{\mathrm{Parietal\ Alpha}_{GFP}} = \frac{\frac{1}{5}\sum_{i=1}^{5} x_{i,\,theta}^{2}(t)}{\frac{1}{3}\sum_{i=1}^{3} x_{i,\,alpha}^{2}(t)},$$

$$\mathrm{Attention} = \frac{\mathrm{Frontal\ Beta}_{GFP}}{\mathrm{Frontal\ Theta}_{GFP}} = \frac{\frac{1}{5}\sum_{i=1}^{5} x_{i,\,beta}^{2}(t)}{\frac{1}{5}\sum_{i=1}^{5} x_{i,\,theta}^{2}(t)},$$

$$\mathrm{Engagement} = \frac{\mathrm{Parietal\ Beta}_{GFP}}{\mathrm{Parietal\ Theta}_{GFP} + \mathrm{Parietal\ Alpha}_{GFP}} = \frac{\frac{1}{3}\sum_{i=1}^{3} x_{i,\,beta}^{2}(t)}{\frac{1}{3}\sum_{i=1}^{3} x_{i,\,theta}^{2}(t) + \frac{1}{3}\sum_{i=1}^{3} x_{i,\,alpha}^{2}(t)},$$
where $\mathrm{Frontal\ Theta}_{GFP}$ and $\mathrm{Frontal\ Beta}_{GFP}$ were computed by considering the AFz, AF3, AF4, AF7, and AF8 EEG channels, while $\mathrm{Parietal\ Theta}_{GFP}$, $\mathrm{Parietal\ Alpha}_{GFP}$, and $\mathrm{Parietal\ Beta}_{GFP}$ were computed by considering the Pz, P3, and P4 EEG channels. In this context, the mental workload index calculation was based on prior research using EEG to assess mental workload [7,36,37,38]. The Attention index was defined as the inverse of the Theta–Beta Ratio, an established EEG marker of attentional deficits in Attention Deficit Hyperactivity Disorder (ADHD) [39,40] and a validated measure of distributed attention during concurrent task performance [41,42]. The Engagement index was defined according to prior scientific works, which validated the cognitive aspect of engagement in a similar learning context [28]. Subsequently, the above-described neurophysiological metrics were z-score normalized with respect to the eyes-open condition, considered as baseline. Then, the mental workload and attention metrics were further combined to compute the Human Distraction Index (HDI), based on the assumption that a high workload does not necessarily imply task engagement, since mental resources can also be devoted to activities not related to the task being performed (i.e., mind wandering [43,44,45]). Such an index was already successfully proposed by Ronca and colleagues [26] in a highly realistic simulated driving context, and it was defined as follows:
$$\mathrm{HDI} = \frac{\mathrm{Mental\ workload}}{\mathrm{Attention}} = \frac{\mathrm{Frontal\ Theta}_{GFP}\,/\,\mathrm{Parietal\ Alpha}_{GFP}}{\mathrm{Frontal\ Beta}_{GFP}\,/\,\mathrm{Frontal\ Theta}_{GFP}}$$
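Purely as an illustrative sketch, the snippet below computes the GFP over the frontal and parietal channel subsets for a single artifact-free 1 s epoch and combines the band-specific values into the four metrics defined above; the band-filtered inputs, the channel ordering, and the time-averaging of the windowed signal within the epoch are assumptions of this sketch rather than the authors’ exact implementation.

```python
import numpy as np

FRONTAL = ["AFz", "AF3", "AF4", "AF7", "AF8"]
PARIETAL = ["Pz", "P3", "P4"]

def gfp(epoch_band, channel_idx):
    """Average squared amplitude of the Hanning-windowed, band-filtered epoch
    over the selected channels. epoch_band: (n_channels, n_samples)."""
    x = epoch_band[channel_idx] * np.hanning(epoch_band.shape[1])
    return np.mean(x ** 2)

def eeg_indices(theta, alpha, beta, ch_names):
    """theta/alpha/beta: the same 1 s epoch filtered in each IAF-adjusted band."""
    f = [ch_names.index(c) for c in FRONTAL]
    p = [ch_names.index(c) for c in PARIETAL]
    workload = gfp(theta, f) / gfp(alpha, p)
    attention = gfp(beta, f) / gfp(theta, f)          # inverse Theta/Beta ratio
    engagement = gfp(beta, p) / (gfp(theta, p) + gfp(alpha, p))
    hdi = workload / attention
    return workload, attention, engagement, hdi

def z_to_baseline(values, baseline_values):
    """z-score normalization with respect to the eyes-open baseline epochs."""
    return (np.asarray(values) - np.mean(baseline_values)) / np.std(baseline_values)
```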

2.3.2. Photoplethysmography (PPG)

Photoplethysmography (PPG) signals, which capture volumetric variations in blood content within biological tissues, were collected using the Shimmer3 GSR+ device (Shimmer Sensing, Dublin, Ireland). The instrument was securely positioned on the wrist of the participant’s non-dominant hand, ensuring stable and precise signal capture at a sampling rate of 64 Hz. The PPG data were digitally filtered using a 5th-order Butterworth band-pass filter spanning the 1–5 Hz frequency range. This filtering was designed to remove the continuous component and mitigate gradual signal drift, while preserving the characteristic pulsatile oscillations of the PPG signal, which are indicative of cardiac activity. The Pan–Tompkins QRS detection algorithm [46] was then employed to identify pulse-related peaks, from which the Inter-Beat Intervals (IBI signal) were computed. Since the resulting IBI series occasionally contained artifacts, the HRVAS Matlab (MathWorks Inc., Natick, MA, USA) suite [47] was used to refine the data quality. The cleaned IBI signals were then processed to derive the Heart Rate (HR), expressed in beats per minute. To ensure individual normalization, the derived HR values for each participant were adjusted by subtracting the participant’s baseline HR mean and dividing by the participant’s HR standard deviation.
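A minimal sketch of this chain is reported below, assuming the raw PPG is a one-dimensional NumPy array sampled at 64 Hz; a generic peak detector stands in for the Pan–Tompkins algorithm, no HRVAS-like artifact correction is included, and the refractory distance is an assumed value, so the snippet is an illustration rather than the actual processing.

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS_PPG = 64  # Shimmer3 GSR+ sampling rate (Hz)

def ppg_to_hr(ppg, fs=FS_PPG):
    """Band-pass the PPG, detect pulse peaks, and derive instantaneous HR."""
    b, a = butter(5, [1, 5], btype="bandpass", fs=fs)  # keeps the pulsatile component
    pulse = filtfilt(b, a, ppg)
    # Refractory distance of ~0.4 s limits double detections (assumed value)
    peaks, _ = find_peaks(pulse, distance=int(0.4 * fs))
    ibi = np.diff(peaks) / fs        # Inter-Beat Intervals (s)
    return 60.0 / ibi                # Heart Rate (beats per minute)

def normalize_hr(hr_task, hr_baseline):
    """Subtract the baseline HR mean and divide by the baseline HR standard deviation."""
    return (hr_task - np.mean(hr_baseline)) / np.std(hr_baseline)
```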

2.3.3. Electrodermal Activity (EDA)

Electrodermal activity (EDA) signals, which reflect fluctuations in the skin’s electrical conductance driven by sweat gland activity, were acquired with the same Shimmer3 GSR+ unit (Shimmer Sensing, Dublin, Ireland) mentioned previously. The device was positioned on the wrist of the participant’s non-dominant hand to guarantee optimal data fidelity, with EDA signals recorded at a sampling rate of 64 Hz. The raw EDA signals underwent a series of computational refinements. Initially, the signals were passed through a low-pass filter with a cut-off frequency of 1 Hz, to emphasize the frequency components of interest and attenuate high-frequency noise. After this filtering step, an artifact correction tool within the Matlab framework (version 2024b) was employed to repair and remove evident signal anomalies, such as sudden discontinuities and spurious spikes. The signals were then processed with the Ledalab suite [48], a specialized open-source toolbox integrated within the Matlab ecosystem (version 2024b) and tailored for EDA signal analysis. Employing continuous decomposition analysis [48], the suite enabled the separation of the EDA signal into its tonic (SCL) and phasic (SCR) components. The SCL, or Skin Conductance Level, represents the gradual and sustained modulations of the EDA signal and predominantly reflects the overall arousal state of the participant. Conversely, the SCR, or Skin Conductance Response, mirrors the transient and rapid fluctuations of the EDA and is conventionally attributed to discrete stimulus-driven physiological reactions. However, given the relatively low sampling frequency of the employed device, this study prioritized the more steady-state SCL component for its analyses. As a final normalization step, the extracted SCL values for each individual were adjusted by subtracting the participant-specific baseline SCL and dividing by the participant-specific SCL standard deviation.
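As a rough illustration only, the sketch below applies the 1 Hz low-pass filter and approximates the tonic SCL with a long moving average before baseline z-scoring; this deliberately stands in for Ledalab’s continuous decomposition analysis, and the filter order and smoothing window are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS_EDA = 64  # Shimmer3 GSR+ sampling rate (Hz)

def extract_scl(eda, fs=FS_EDA, tonic_window_s=10.0):
    """Low-pass the EDA at 1 Hz and estimate the tonic level (SCL) with a
    moving average; a crude surrogate for continuous decomposition analysis."""
    b, a = butter(4, 1.0, btype="lowpass", fs=fs)
    eda_filt = filtfilt(b, a, eda)
    win = int(tonic_window_s * fs)
    kernel = np.ones(win) / win
    return np.convolve(eda_filt, kernel, mode="same")

def normalize_scl(scl_task, scl_baseline):
    """Subtract the baseline SCL mean and divide by the baseline SCL standard deviation."""
    return (np.mean(scl_task) - np.mean(scl_baseline)) / np.std(scl_baseline)
```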

2.4. Subjective and Behavioral Data Collection

Following each experimental condition, participants completed a five-question quiz designed to assess comprehension and retention of the material presented. Crucially, the quiz questions were specific to the content of each individual modality, preventing potential learning transfer effects across conditions. In practical terms, although all three contents were related to the same topic, each quiz was designed to address only information contained in that specific content. In addition to the comprehension quiz, participants answered five questions evaluating their subjective learning experience, adapted from previously validated questionnaires [49,50]. More specifically, participants were asked to assign a rating, on a numerical scale from 1 to 10, reflecting their subjective experience concerning:
  • The simplicity with which they could comprehend the disseminated information.
  • The facility with which they could internalize the content.
  • The capacity to sustain attention during the entirety of the task.
  • The degree of interest elicited by the employed narrative modality.
  • The extent of engagement provoked by the narrative approach.
In the evaluation process, careful attention was paid to procedural consistency: every participant was presented with an identical set of questions, which bolstered the uniformity and comparability of the assessment metrics.

2.5. Statistical Analyses

As a preliminary step, the Shapiro–Wilk test [51] was used to assess the normality of the distribution of each of the considered features. In the case of normal distributions, a repeated measures Analysis of Variance (ANOVA) was performed to compare the experimental conditions. If the distributions’ normality was not confirmed by the Shapiro–Wilk test, the Friedman chi-squared test was performed. In case of a statistically significant main effect, post hoc tests were performed to assess pairwise differences between experimental conditions. For all tests, statistical significance was set at α = 0.05. Additionally, the repeated measures correlation analysis [52] was performed to validate the EEG-based Engagement index against the subjective measurements collected from the participants, both at the single-participant level and for the entire group.
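A compact sketch of this decision logic, using SciPy and the pingouin package on an assumed long-format table (columns ‘subject’, ‘condition’, and the metric of interest), is given below; it illustrates the workflow rather than the exact analysis scripts used in the study, and the column names are assumptions.

```python
import pandas as pd
import pingouin as pg
from scipy import stats

def compare_conditions(df: pd.DataFrame, metric: str):
    """Shapiro-Wilk normality check, then repeated measures ANOVA or Friedman test."""
    wide = df.pivot(index="subject", columns="condition", values=metric)
    normal = all(stats.shapiro(wide[c]).pvalue > 0.05 for c in wide.columns)
    if normal:
        return pg.rm_anova(data=df, dv=metric, within="condition", subject="subject")
    return pg.friedman(data=df, dv=metric, within="condition", subject="subject")

def validate_engagement(df: pd.DataFrame):
    """Repeated measures correlation between the EEG Engagement index and the
    subjective engagement perception (hypothetical column names)."""
    return pg.rm_corr(data=df, x="engagement_eeg", y="ep_score", subject="subject")
```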

3. Results

This section is divided into subparagraphs in order to organize the results on the basis of the related data source, i.e., questionnaires (subjective results), learning performance (behavioral results), and neurophysiological recordings (neurophysiological results).
Among all the comparisons, only the analyses providing statistically significant results are reported here.

3.1. Neurophysiological Results

The statistical analysis performed on the autonomic-derived parameters, i.e., SCL and SCR computed from the EDA and the PPG-derived features, did not reveal any statistical effect between the tested experimental conditions.
Concerning the EEG-based parameters, ANOVA showed a statistically significant main effect of the experimental conditions in terms of HDI and Engagement index (HDI: F = 6.560, p = 0.007; ω2 = 0.204; Engagement index: F = 19.641, p < 0.001; ω2 = 0.357) (Figure 2 and Figure 3). More specifically, the post hoc analysis showed that during the educational video condition, participants exhibited the lowest HDI (p < 0.01), and the highest Engagement index (p < 0.006). No statistically significant differences were observed between the Academic video and text reading conditions in terms of HDI and Engagement index (all p > 0.10).

3.2. Correlation Analysis

Finally, a conclusive statistical analysis was performed to assess the correlation between the neurophysiological measurements related to the impact of the different learning materials and the respective subjective scores provided by the participants across the experimental conditions. More specifically, the repeated measures correlation analysis [52] was performed between the EEG-based Engagement index and its respective subjective score (i.e., EP). As shown in Figure 4, the analysis revealed a strong and significant correlation between these two measurements (R = 0.621; p = 0.0002), indicating that the measured cognitive impact of the different learning materials in terms of engagement was temporally coherent with the subjective perceptions of the participants.

3.3. Subjective Results

The subjective data collected throughout the experimental protocol were combined in order to obtain two subjective scores. The first one is associated with the perception of the required cognitive resources (CRP) and was computed by averaging the subjective scores related to the first three dimensions of the above-described questionnaire (the higher the better, since high scores were associated with perceiving the task as easy). The second one is associated with the subjective engagement perception (EP) and was obtained by averaging the last two dimensions of the above-described questionnaire (the higher the better, since high scores were associated with perceiving the task as engaging).
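Purely as an illustration of this scoring scheme, the short snippet below derives the two scores from the five questionnaire ratings described in Section 2.4; the example values are hypothetical.

```python
import numpy as np

def subjective_scores(ratings):
    """ratings: the five answers (1-10), in questionnaire order."""
    r = np.asarray(ratings, dtype=float)
    crp = r[:3].mean()   # Cognitive Resources Perception: first three items
    ep = r[3:].mean()    # Engagement Perception: last two items
    return crp, ep

crp, ep = subjective_scores([8, 7, 9, 8, 9])  # hypothetical participant ratings
```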
The statistical analysis revealed a significant main effect of the experimental condition on the subjective CRP and EP scores (CRP: Friedman chi-squared = 10.474, p = 0.001, η2 = 0.335; EP: Friedman chi-squared = 15.436, p < 0.001, η2 = 0.459). More specifically, the post hoc analysis showed that the text reading condition was perceived as requiring more cognitive resources and as less engaging (all p < 0.01). Additionally, the post hoc analysis showed that the educational video condition was rated as significantly more engaging (p < 0.001) than the text reading one (Figure 5).

3.4. Behavioral Results

The non-parametric repeated measures analysis performed on the behavioral data, i.e., the correct answers recorded for each experimental condition, revealed that participants committed the highest error rate during the text reading condition (Friedman chi-squared = 6.178, p = 0.032, η2 = 0.116; post hoc: p = 0.02) (Figure 6).

4. Discussion

This study aimed to neurophysiologically characterize the impact of different learning materials by employing a multimodal approach that integrates EEG, EDA, and PPG recordings, in parallel with subjective measurements. The results demonstrated that different educational contents elicit distinct neurophysiological responses, highlighting how cognitive load, engagement, and attentional mechanisms vary depending on the format of the learning content. More importantly, the present research demonstrates how multimodal neurophysiological modeling of the impact of different learning materials is possible, by proposing an innovative use of biomedical devices—specifically, wearable EEG—in a real-life context.
The findings indicate that video-based learning materials, particularly educational videos enriched with visual elements and structured narration, were associated with higher engagement levels, as evidenced by EEG-derived metrics. This is consistent with previous literature suggesting that dynamic and multimodal learning experiences enhance cognitive processing and memory retention. Conversely, the text-based learning condition (i.e., text reading) elicited higher cognitive distraction and a lower Engagement index, as reflected in EEG-derived parameters. This aligns with prior research showing that passive reading demands sustained attention and greater cognitive effort, potentially leading to increased mental fatigue [53]. The strong correlation between subjective engagement scores and neurophysiological engagement indices further validates the robustness of the proposed multimodal framework. This consistency between self-reported experiences and objective measures highlights the potential of neurophysiological monitoring as a reliable tool for assessing learning effectiveness in real-world educational settings.
However, some limitations must be acknowledged. The study involved a relatively small sample size, limiting the generalizability of the results. This aspect is particularly relevant when considering the potential discrepancy between the different learning modules. In this regard, an additional statistical analysis was carried out to verify the similarity between the modules: the learning performances were compared according to the temporal order in which the modules were accessed, regardless of their content. This analysis revealed a marginally significant increase (p = 0.07) in the correct answers from the first to the third accessed module. Indeed, future research must include a larger sample size in order to independently assess subject matter retention and, possibly, assess the reliability of the proposed approach by considering different age ranges and sample sizes. Additionally, the experimental design focused on a single topic, which may not fully capture the variability of different subject matters. Future research should expand the dataset, include a broader range of learning materials and populations covering a wider age range, since the scientific literature indicates that age-related differences in EEG patterns could occur [54], and explore individual differences in cognitive responses. In this regard, it has to be underlined that the proposed HDI and Engagement indices would benefit from a wider and transversal validation. Finally, another interesting aspect to be considered through the presented neurophysiological approach consists of assessing the influence of group interaction on learning processes [28].
Despite these limitations, this study represents a step forward in understanding how learning materials shape cognitive processes and engagement, paving the way for more personalized and effective digital learning experiences.

5. Conclusions

This study investigated the impact of different learning materials on specific cognitive processes by employing a multimodal approach that integrated EEG, EDA, and PPG recordings with subjective measurements. The findings demonstrated that different educational modalities elicit distinct neurophysiological responses, providing objective insights into how cognitive load, attentional mechanisms, and engagement vary depending on the format of the learning content. This opens the way for the future development of adaptive technological solutions based on Brain–Computer Interface principles. The neurophysiological modeling of the impact of different learning materials constituted a first investigation and demonstration of the reliability of an EEG-based approach in objectively assessing the modulation of brain mechanisms in education. Given its exploratory nature, the work focused on a simple environment, but the observed results pave the way to further integrate such an approach in more complex learning scenarios.
The results underscore the potential of neurophysiological measures to optimize learning experiences by providing real-time, objective assessments of cognitive processes specifically related to learning. More importantly, the proposed multimodal approach relies on wearable equipment for collecting neurophysiological signals, coupled with signal processing methods fully compatible with real-world applications [6,55,56,57,58].

Future Trends

The effectiveness of the proposed neurophysiological framework in distinguishing between different learning conditions highlights its potential applicability in adaptive learning environments. By integrating real-time neurophysiological feedback, future e-learning systems could dynamically adjust content delivery to optimize learning efficiency and reduce cognitive overload.
Future e-learning platforms could leverage this approach to dynamically adjust instructional materials based on individual cognitive responses, enhancing learning efficiency and reducing distractive behaviors [59,60,61,62]. In conclusion, this research represents a preliminary but significant step forward in understanding how different learning materials impact cognitive processes and engagement, paving the way for more adaptive and personalized educational experiences.

Author Contributions

Conceptualization, G.D.F., V.R. and L.T.; methodology, V.R., P.A. and G.D.F.; software, V.R., P.A. and G.D.F.; validation, V.R., L.T., A.B. and G.D.F.; formal analysis, V.R. and L.T.; investigation, V.R., L.T., A.B. and G.D.F.; resources, P.A., V.R. and G.D.F.; data curation, V.R.; writing—original draft preparation, V.R.; writing—review and editing, V.R., P.A., A.B. and G.D.F.; visualization, V.R.; supervision, P.A. and G.D.F.; project administration, G.D.F.; funding acquisition, V.R., P.A. and G.D.F. All authors have read and agreed to the published version of the manuscript.

Funding

This work was co-funded by the HORIZON 2.5 project “CODA: COntroller adaptive Digital Assistant” (GA n. 101114765), the SESAR 3 Joint Undertaking project “TRUSTY: TRUStworthy inTelligent sYstem for remote digital tower” (GA n. 101114838), the ERASMUS+ Programme of the EU “WE-COLLAB” project under the Agreement 2021-1-HR01-KA220-HED-000027562, and “SOULSS” (KA2, 2022-1-IT02-KA220-HED-000090206).

Institutional Review Board Statement

The study was conducted in accordance with the Declaration of Helsinki and approved by the Institutional Ethics Committee of Sapienza University of Rome (protocol code 2024/03-002 approved on 21 March 2024).

Informed Consent Statement

Informed consent was obtained from all subjects involved in the study.

Data Availability Statement

The data that support the findings of this study are available from the corresponding authors upon reasonable request. The data are not publicly available since they are biometric data and are considered sensitive data under the EU GDPR (Regulation n. 2016/679).

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Byrnes, J.P.; Vu, L.T. Educational neuroscience: Definitional, methodological, and interpretive issues. Wiley Interdiscip. Rev. Cogn. Sci. 2015, 6, 221–234. [Google Scholar] [CrossRef]
  2. Darvishi, A.; Khosravi, H.; Sadiq, S.; Weber, B. Neurophysiological Measurements in Higher Education: A Systematic Literature Review. Int. J. Artif. Intell. Educ. 2021, 32, 413–453. [Google Scholar] [CrossRef]
  3. Michel, C.M.; Murray, M.M.; Lantz, G.; Gonzalez, S.; Spinelli, L.; De Peralta, R.G. EEG source imaging. Clin. Neurophysiol. 2004, 115, 2195–2222. [Google Scholar] [CrossRef] [PubMed]
  4. Protzak, J.; Gramann, K. Investigating established EEG parameter during real-world driving. Front. Psychol. 2018, 9, 2289. [Google Scholar] [CrossRef] [PubMed]
  5. EEG Correlates of Task Engagement and Mental Workload in Vigilance, Learning, and Memory Tasks. Available online: https://pubmed.ncbi.nlm.nih.gov/17547324/ (accessed on 3 March 2025).
  6. Di Flumeri, G.; Ronca, V.; Giorgi, A.; Vozzi, A.; Aricò, P.; Sciaraffa, N.; Zeng, H.; Dai, G.; Kong, W.; Babiloni, F.; et al. EEG-Based Index for Timely Detecting User’s Drowsiness Occurrence in Automotive Applications. Front. Hum. Neurosci. 2022, 16, 866118. [Google Scholar] [CrossRef]
  7. Di Flumeri, G.; Borghini, G.; Aricò, P.; Sciaraffa, N.; Lanzi, P.; Pozzi, S.; Vignali, V.; Lantieri, C.; Bichicchi, A.; Simone, A.; et al. EEG-based mental workload neurometric to evaluate the impact of different traffic and road conditions in real driving settings. Front. Hum. Neurosci. 2018, 12, 509. [Google Scholar] [CrossRef]
  8. Wang, Y.; Jung, T.-P. A collaborative brain-computer interface for improving human performance. PLoS ONE 2011, 6, e20422. [Google Scholar] [CrossRef]
  9. Zhu, Y.; Wang, Q.; Zhang, L. Study of EEG characteristics while solving scientific problems with different mental effort. Sci. Rep. 2021, 11, 23783. [Google Scholar] [CrossRef]
  10. Yu, Y.; Oh, Y.; Kounios, J.; Beeman, M. Dynamics of hidden brain states when people solve verbal puzzles. NeuroImage 2022, 255, 119202. [Google Scholar] [CrossRef]
  11. Kim, K.; Duc, N.T.; Choi, M.; Lee, B. EEG microstate features according to performance on a mental arithmetic task. Sci. Rep. 2021, 11, 343. [Google Scholar] [CrossRef]
  12. Varao-Sousa, T.L.; Kingstone, A. Memory for Lectures: How Lecture Format Impacts the Learning Experience. PLoS ONE 2015, 10, e0141587. [Google Scholar] [CrossRef]
  13. Wammes, J.D.; Boucher, P.O.; Seli, P.; Cheyne, J.A.; Smilek, D. Mind wandering during lectures I: Changes in rates across an entire semester. Sch. Teach. Learn. Psychol. 2016, 2, 13–32. [Google Scholar] [CrossRef]
  14. Munoz, D.A.; Tucker, C.S. Assessing Students’ Emotional States: An Approach to Identify Lectures That Provide an Enhanced Learning Experience. In Proceedings of the ASME Design Engineering Technical Conference, Buffalo, NY, USA, 17–20 August 2014; Volume 3. [Google Scholar] [CrossRef]
  15. Mazher, M.; Aziz, A.A.; Malik, A.S.; Amin, H.U. An EEG-Based Cognitive Load Assessment in Multimedia Learning Using Feature Extraction and Partial Directed Coherence. IEEE Access 2017, 5, 14819–14829. [Google Scholar] [CrossRef]
  16. Chen, C.-M.; Sun, Y.-C. Assessing the effects of different multimedia materials on emotions and learning performance for visual and verbal style learners. Comput. Educ. 2012, 59, 1273–1285. [Google Scholar] [CrossRef]
  17. Di Flumeri, G.; Giorgi, A.; Germano, D.; Ronca, V.; Vozzi, A.; Borghini, G.; Tamborra, L.; Simonetti, I.; Capotorto, R.; Ferrara, S.; et al. A Neuroergonomic Approach Fostered by Wearable EEG for the Multimodal Assessment of Drivers Trainees. Sensors 2023, 23, 8389. [Google Scholar] [CrossRef]
  18. Mutlu-Bayraktar, D.; Ozel, P.; Altindis, F.; Yilmaz, B. Split-attention effects in multimedia learning environments: Eye-tracking and EEG analysis. Multimedia Tools Appl. 2022, 81, 8259–8282. [Google Scholar] [CrossRef]
  19. Babiker, A.; Faye, I.; Mumtaz, W.; Malik, A.S.; Sato, H. EEG in classroom: EMD features to detect situational interest of students during learning. Multimedia Tools Appl. 2018, 78, 16261–16281. [Google Scholar] [CrossRef]
  20. Bashir, F.; Ali, A.; Soomro, T.A.; Marouf, M.; Bilal, M.; Chowdhry, B.S. Electroencephalogram (EEG) Signals for Modern Educational Research. In Innovative Education Technologies for 21st Century Teaching and Learning; CRC Press: Boca Raton, FL, USA, 2021; pp. 149–171. [Google Scholar] [CrossRef]
  21. Simonetti, I.; Tamborra, L.; Giorgi, A.; Ronca, V.; Vozzi, A.; Aricò, P.; Borghini, G.; Sciaraffa, N.; Trettel, A.; Babiloni, F.; et al. Neurophysiological Evaluation of Students’ Experience during Remote and Face-to-Face Lessons: A Case Study at Driving School. Brain Sci. 2023, 13, 95. [Google Scholar] [CrossRef] [PubMed]
  22. Zhang, N.; Liu, C.; Li, J.; Hou, K.; Shi, J.; Gao, W. A comprehensive review of research on indoor cognitive performance using electroencephalogram technology. Build. Environ. 2024, 257, 111555. [Google Scholar] [CrossRef]
  23. Yuvaraj, R.; Chadha, S.; Prince, A.A.; Murugappan, M.; Bin Islam, S.; Sumon, S.I.; Chowdhury, M.E.H. A Machine Learning Framework for Classroom EEG Recording Classification: Unveiling Learning-Style Patterns. Algorithms 2024, 17, 503. [Google Scholar] [CrossRef]
  24. Cuevas, J. Is learning styles-based instruction effective? A comprehensive analysis of recent research on learning styles. Theory Res. Educ. 2015, 13, 308–333. [Google Scholar] [CrossRef]
  25. Kumar, A.; Krishnamurthi, R.; Bhatia, S.; Kaushik, K.; Ahuja, N.J.; Nayyar, A.; Masud, M. Blended Learning Tools and Practices: A Comprehensive Analysis. IEEE Access 2021, 9, 85151–85197. [Google Scholar] [CrossRef]
  26. Ronca, V.; Brambati, F.; Napoletano, L.; Marx, C.; Trösterer, S.; Vozzi, A.; Aricò, P.; Giorgi, A.; Capotorto, R.; Borghini, G.; et al. A Novel EEG-Based Assessment of Distraction in Simulated Driving under Different Road and Traffic Conditions. Brain Sci. 2024, 14, 193. [Google Scholar] [CrossRef]
  27. Apicella, A.; Arpaia, P.; Frosolone, M.; Improta, G.; Moccaldi, N.; Pollastro, A. EEG-based measurement system for monitoring student engagement in learning 4.0. Sci. Rep. 2022, 12, 5857. [Google Scholar] [CrossRef] [PubMed]
  28. Rai, L.; Lee, H.; Becke, E.; Trenado, C.; Abad-Hernando, S.; Sperling, M.; Vidaurre, D.; Wald-Fuhrmann, M.; Richardson, D.C.; Ward, J.A.; et al. Delta-Band Inter-Brain Synchrony Reflects Collective Audience Engagement with Live Dance Performances. Available online: https://osf.io/uz573 (accessed on 5 May 2024).
  29. Sciaraffa, N.; Di Flumeri, G.; Germano, D.; Giorgi, A.; Di Florio, A.; Borghini, G.; Vozzi, A.; Ronca, V.; Babiloni, F.; Aricò, P. Evaluation of a New Lightweight EEG Technology for Translational Applications of Passive Brain-Computer Interfaces. Front. Hum. Neurosci. 2022, 16, 901387. [Google Scholar] [CrossRef] [PubMed]
  30. Ronca, V.; Di Flumeri, G.; Giorgi, A.; Vozzi, A.; Capotorto, R.; Germano, D.; Sciaraffa, N.; Borghini, G.; Babiloni, F.; Aricò, P. o-CLEAN: A novel multi-stage algorithm for the ocular artifacts’ correction from EEG data in out-of-the-lab applications. J. Neural Eng. 2024, 21, 056023. [Google Scholar] [CrossRef]
  31. Delorme, A.; Makeig, S. EEGLAB: An open source toolbox for analysis of single-trial EEG dynamics including independent component analysis. J. Neurosci. Methods 2004, 134, 9–21. [Google Scholar] [CrossRef]
  32. Klimesch, W. EEG alpha and theta oscillations reflect cognitive and memory performance: A review and analysis. Brain Res. Rev. 1999, 29, 169–195. [Google Scholar] [CrossRef]
  33. Sauseng, P.; Klimesch, W.; Schabus, M.; Doppelmayr, M. Fronto-parietal EEG coherence in theta and upper alpha reflect central executive functions of working memory. Int. J. Psychophysiol. 2005, 57, 97–103. [Google Scholar] [CrossRef]
  34. Vecchiato, G.; Babiloni, F.; Astolfi, L.; Toppi, J.; Cherubino, P.; Dai, J.; Kong, W.; Wei, D. Enhance of theta EEG spectral activity related to the memorization of commercial advertisings in Chinese and Italian subjects. In Proceedings of the 2011 4th International Conference on Biomedical Engineering and Informatics (BMEI), Shanghai, China, 15–17 October 2011; Volume 3, pp. 1491–1494. [Google Scholar] [CrossRef]
  35. Klimesch, W. Alpha-band oscillations, attention, and controlled access to stored information. Trends Cogn. Sci. 2012, 16, 606–617. [Google Scholar] [CrossRef]
  36. Babiloni, F. Mental Workload Monitoring: New Perspectives from Neuroscience. In Communications in Computer and Information Science; Springer: Cham, Switzerland, 2019; pp. 3–19. [Google Scholar]
  37. Borghini, G.; Ronca, V.; Vozzi, A.; Aricò, P.; Di Flumeri, G.; Babiloni, F. Monitoring performance of professional and occupational operators. In Handbook of Clinical Neurology; Elsevier B.V.: Amsterdam, The Netherlands, 2020; Volume 168, pp. 199–205. [Google Scholar] [CrossRef]
  38. Young, M.S.; Brookhuis, K.A.; Wickens, C.D.; Hancock, P.A. State of science: Mental workload in ergonomics. Ergonomics 2015, 58, 1–17. [Google Scholar] [CrossRef] [PubMed]
  39. Arns, M.; Conners, C.K.; Kraemer, H.C. A Decade of EEG Theta/Beta Ratio Research in ADHD: A Meta-Analysis. J. Atten. Disord. 2013, 17, 374–383. [Google Scholar] [CrossRef] [PubMed]
  40. Heinrich, H.; Busch, K.; Studer, P.; Erbe, K.; Moll, G.H.; Kratz, O. EEG spectral analysis of attention in ADHD: Implications for neurofeedback training? Front. Hum. Neurosci. 2014, 8, 611. [Google Scholar] [CrossRef] [PubMed]
  41. Ma, X.; Qiu, S.; He, H. Time-Distributed Attention Network for EEG-Based Motor Imagery Decoding From the Same Limb. IEEE Trans. Neural Syst. Rehabilitation Eng. 2022, 30, 496–508. [Google Scholar] [CrossRef]
  42. Morillas-Romero, A.; Tortella-Feliu, M.; Bornas, X.; Putman, P. Spontaneous EEG theta/beta ratio and delta–beta coupling in relation to attentional network functioning and self-reported attentional control. Cogn. Affect. Behav. Neurosci. 2015, 15, 598–606. [Google Scholar] [CrossRef]
  43. Di Flumeri, G.; De Crescenzio, F.; Berberian, B.; Ohneiser, O.; Kramer, J.; Aricò, P.; Borghini, G.; Babiloni, F.; Bagassi, S.; Piastra, S. Brain–Computer Interface-Based Adaptive Automation to Prevent Out-Of-The-Loop Phenomenon in Air Traffic Controllers Dealing With Highly Automated Systems. Front. Hum. Neurosci. 2019, 13, 296. [Google Scholar] [CrossRef]
  44. Zhang, Y.; Kumada, T. Relationship between workload and mind-wandering in simulated driving. PLoS ONE 2017, 12, e0176962. [Google Scholar] [CrossRef]
  45. Smallwood, J. Mind wandering and attention. In The Handbook of Attention; Boston Review: Cambridge, MA, USA, 2015; pp. 233–255. [Google Scholar]
  46. Pan, J.; Tompkins, W.J. A Real-Time QRS Detection Algorithm. IEEE Trans. Biomed. Eng. 1985, 32, 230–236. [Google Scholar] [CrossRef]
  47. Ramshur, J. Design, Evaluation, and Application of Heart Rate Variability Analysis Software (HRVAS). Electronic Theses and Dissertations. July 2010. Available online: https://digitalcommons.memphis.edu/etd/83 (accessed on 3 March 2025).
  48. Benedek, M.; Kaernbach, C. Decomposition of skin conductance data by means of nonnegative deconvolution. Psychophysiology 2010, 47, 647–658. [Google Scholar] [CrossRef]
  49. Kassab, S.E.; El-Baz, A.; Hassan, N.; Hamdy, H.; Mamede, S.; Schmidt, H.G. Construct validity of a questionnaire for measuring student engagement in problem-based learning tutorials. BMC Med. Educ. 2023, 23, 1–7. [Google Scholar] [CrossRef]
  50. Griffin, P.; Coates, H.; Mcinnis, C.; James, R. The Development of an Extended Course Experience Questionnaire. Qual. High. Educ. 2003, 9, 259–266. [Google Scholar] [CrossRef]
  51. González-Estrada, E.; Villaseñor, J.A.; Acosta-Pech, R. Shapiro-Wilk test for multivariate skew-normality. Comput. Stat. 2022, 37, 1985–2001. [Google Scholar] [CrossRef]
  52. Bakdash, J.Z.; Marusich, L.R. Repeated measures correlation. Front. Psychol. 2017, 8, 456. [Google Scholar] [CrossRef] [PubMed]
  53. Zivan, M.; Vaknin, S.; Peleg, N.; Ackerman, R.; Horowitz-Kraus, T. Higher theta-beta ratio during screen-based vs. printed paper is related to lower attention in children: An EEG study. PLoS ONE 2023, 18, e0283863. [Google Scholar] [CrossRef]
  54. Jabès, A.; Klencklen, G.; Ruggeri, P.; Antonietti, J.-P.; Lavenex, P.B.; Lavenex, P. Age-Related Differences in Resting-State EEG and Allocentric Spatial Working Memory Performance. Front. Aging Neurosci. 2021, 13, 704362. [Google Scholar] [CrossRef]
  55. Ronca, V.; Di Flumeri, G.; Vozzi, A.; Giorgi, A.; Arico, P.; Sciaraffa, N.; Babiloni, F.; Borghini, G. Validation of an EEG-based Neurometric for online monitoring and detection of mental drowsiness while driving. In Proceedings of the 2022 44th Annual International Conference of the IEEE Engineering in Medicine & Biology Society (EMBC), Glasgow, UK, 11–15 July 2022; Volume 2022, pp. 3714–3717. [Google Scholar] [CrossRef]
  56. Ronca, V.; Martinez-Levy, A.C.; Vozzi, A.; Giorgi, A.; Aricò, P.; Capotorto, R.; Borghini, G.; Babiloni, F.; Di Flumeri, G. Wearable Technologies for Electrodermal and Cardiac Activity Measurements: A Comparison between Fitbit Sense, Empatica E4 and Shimmer GSR3+. Sensors 2023, 23, 5847. [Google Scholar] [CrossRef] [PubMed]
  57. Capotorto, R.; Ronca, V.; Sciaraffa, N.; Borghini, G.; Di Flumeri, G.; Mezzadri, L.; Vozzi, A.; Giorgi, A.; Germano, D.; Babiloni, F.; et al. Cooperation objective evaluation in aviation: Validation and comparison of two novel approaches in simulated environment. Front. Neurosci. 2024, 18, 1409322. [Google Scholar] [CrossRef]
  58. Ronca, V.; Uflaz, E.; Turan, O.; Bantan, H.; MacKinnon, S.N.; Lommi, A.; Pozzi, S.; Kurt, R.E.; Arslan, O.; Kurt, Y.B.; et al. Neurophysiological Assessment of An Innovative Maritime Safety System in Terms of Ship Operators’ Mental Workload, Stress, and Attention in the Full Mission Bridge Simulator. Brain Sci. 2023, 13, 1319. [Google Scholar] [CrossRef]
  59. Khanal, S.; Pokhrel, S.R. Analysis, Modeling and Design of Personalized Digital Learning Environment. arXiv 2024. Available online: https://arxiv.org/abs/2405.10476v1 (accessed on 5 May 2024).
  60. Van Schoors, R.; Elen, J.; Raes, A.; Vanbecelaere, S.; Depaepe, F. The Charm or Chasm of Digital Personalized Learning in Education: Teachers’ Reported Use, Perceptions and Expectations. TechTrends 2023, 67, 315–330. [Google Scholar] [CrossRef]
  61. Technologies and Tools for Creating Adaptive E-Learning Content. Математика и информатика 2020, 63, 382–390.
  62. Apoki, U.C.; Al-Chalabi, H.K.M.; Crisan, G.C. From Digital Learning Resources to Adaptive Learning Objects: An Overview. Commun. Comput. Inf. Sci. 2020, 1126, 18–32. [Google Scholar] [CrossRef]
Figure 1. Experimental settings. The participant sat in front of the PC presenting the learning materials while wearing the EEG, PPG, and EDA data collection equipment.
Figure 2. The ANOVA performed on the Human Distraction Index (i.e., HDI) revealed that the neurophysiological distraction evaluation was statistically lower when accessing the educational video material. * indicates the statistical significance (p < 0.05).
Figure 3. The ANOVA performed on the EEG-based engagement index revealed that participants were neurophysiologically more engaged when exposed to the educational video material. * indicates the statistical significance (p < 0.05).
Figure 4. The repeated measure correlation analysis showed that the neurophysiological Engagement index and the respective subjective perceptions exhibited a similar temporal dynamic along the experimental conditions.
Figure 5. The statistical analysis of the subjective scores revealed that participants perceived the educational video as easier and more engaging than the text reading in terms of cognitive impact. * indicates the statistical significance (p < 0.05).
Figure 6. The statistical analysis showed that the participants provided fewer correct answers when accessing the text reading material. * indicates the statistical significance (p < 0.05).
Table 1. Mean artifact percentage for each participant over all the EEG channels.

Subject | Mean Artifact (%)
Participant 1 | 5.88
Participant 2 | 6.93
Participant 3 | 7.41
Participant 4 | 6.89
Participant 5 | 7.71
Participant 6 | 6.22
Participant 7 | 7.09
Participant 8 | 6.57
Participant 9 | 7.81
Participant 10 | 6.02

