NPFC-Test: A Multimodal Dataset from an Interactive Digital Assessment Using Wearables and Self-Reports
Abstract
1. Introduction
2. Methods
2.1. Objective
2.2. Research Questions
- Is facial coding activity from the learner associated with self-reported valence levels in the NPFC-Test?
- Is physiological activity from the learner associated with self-reported arousal levels in the NPFC-Test?
- Is neuronal activity from the learner associated with self-reported concentration levels in the NPFC-Test?
- Are self-reported valence and arousal levels from the NPFC-Test associated with the emotional states inferred by various facial coding algorithms?
2.3. NPFC-Test Structure
2.4. Multimodal Devices
2.4.1. Muse 2 Headband
2.4.2. Empatica EmbracePlus Smartwatch
2.4.3. Azure Kinect
2.5. NPFC-Test Details
- Participant Profile (01:30): A brief questionnaire collecting demographic data such as age, gender, dominant hand, and highest level of education, as well as general physical condition through questions on average hours of sleep, consumption of medication or psychoactive substances, and history of neurological and cardiac conditions. A summary of the participant demographics is presented in Table 2 to provide an overview of the dataset composition and support its potential for generalization.
Characteristic | Distribution |
---|---|
Number of participants | 41 |
Average age | 23.2 years |
Gender | Female (44%), Male (56%) |
Dominant hand | Right-handed (100%) |
Average sleep per night | 6–7 h (72%), more than 7 h (20%), 5–6 h (8%) |
Reported stress (at time of test) | A little stressed (48%), None (52%) |
Use of psychoactive substances (last 8 h) | None reported (100%) |
Neurological conditions (history) | None reported (100%) |
Cardiac conditions (history) | None reported (100%) |
Eyeglass use during session | Yes (56%), No (44%) |
Highest completed education | Bachelor's degree (52%), High school (36%), Not reported (12%) |
- Initial meditation (02:25): An initial 2 min 10 s meditation, which serves to relax the user and establish a neutral baseline for calibrating the user’s physiological states.
- Initial meditation self-report (00:25): A self-report for the user to indicate how they felt about their concentration, valence, and motivation regarding the task they just completed.
- Mathematics task (01:00): A mental arithmetic exercise in which the user is asked to continue a descending numerical series (e.g., 128, 121, 114, 107, 100, ...) within a maximum time of one minute.
- Mathematics task self-report (00:20): A self-report for the user to indicate how they felt about their concentration, valence, and motivation regarding the task they just completed.
- Auditory stimulation (02:37): The anthem of the institution is played for 2 min and 37 s in order to measure the response it elicits in members of the university community.
- Auditory stimulation self-report (00:20): A self-report for the user to indicate how they felt about their concentration, valence, and motivation regarding the task they just completed.
- Concentration video 1 (00:40): A 22 s video in which the user is asked to focus on four balls that bounce at different times while the balls change shape and the background changes color.
- Concentration video 1 task (00:40): A questionnaire where the user is asked how many times the balls bounced and if they noticed changes in the shape of the balls and in the color of the background. It also asks if any additional figures appeared in the video.
- Concentration video 2 (00:38): A 20-s video in which the user is asked to concentrate on the balls bouncing towards the center of the screen while the central target changes shape and the background changes color.
- Concentration video 2 task (00:50): A questionnaire asking the user how many times the balls bounced and if they noticed changes in the shape of the center target and in the color of the background. It also asks if any additional figures appeared in the video.
- Concentration videos self-report (00:20): A self-report for the user to indicate how they felt about their concentration, valence, and motivation regarding the task they just completed.
- Emotional video (03:00): An emotional story of two children who work hard to save enough money to buy a mobile phone, supporting each other through every difficulty along the way.
- Emotional video self-report (00:20): A self-report for the user to indicate how they felt about their concentration, valence, and motivation regarding the task they just completed.
- Writing tasks instructions (00:10): An information box informing the user that each of the next three pages contains an open-ended question about emotions, with 45 s to answer each one.
- Writing task 1 (00:45): The user is asked to mention at least two things they love.
- Writing task 2 (00:45): The user is asked to mention at least two things they hate.
- Writing task 3 (00:45): The user is asked to mention at least two things that make them feel angry.
- Writing tasks self-report (00:20): A self-report for the user to indicate how they felt about their concentration, valence, and motivation regarding the task they just completed.
- Final meditation (02:00): A final 1 min 47 s meditation, which serves to relax the user and return their physiological states to a neutral baseline.
- Final meditation self-report (00:20): A self-report for the user to indicate how they felt about their concentration, valence, and motivation regarding the task they just completed.
- User experience survey (01:00): A short questionnaire where participants are asked about the user experience, e.g., the comfort of using each of the 4 devices and the format of the test.
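For reference, the task sequence and planned durations listed above can be encoded as a small configuration object, which is convenient when segmenting the sensor streams by task. The sketch below is illustrative only and shows the first few entries; in the actual dataset the per-task start and end times come from the FlexiQuiz logs described in Section 4.2.

```python
# Planned NPFC-Test sections and durations (mm:ss converted to seconds), taken from the
# list above; the remaining sections follow the same pattern up to the user experience survey.
NPFC_SECTIONS = [
    ("Participant Profile", 90),
    ("Initial meditation", 145),
    ("Initial meditation self-report", 25),
    ("Mathematics task", 60),
    ("Mathematics task self-report", 20),
    ("Auditory stimulation", 157),
    # ...
]

def section_offsets(sections):
    """Return (name, start_s, end_s) tuples with cumulative offsets from the test start."""
    offsets, elapsed = [], 0
    for name, duration in sections:
        offsets.append((name, elapsed, elapsed + duration))
        elapsed += duration
    return offsets
```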
3. Experimental Protocol
3.1. Informed Consent Letter
3.2. Device Equipment
- EmbracePlus: The smartwatch was placed on the participant's non-dominant wrist, just behind the ulna, according to the recommendations by Empatica.
- Muse 2: Before the headband was installed on the participant, the forehead and the back of the ears were cleaned by the LL team using a saline solution to ensure proper connectivity with the electrodes. Once the headband had been correctly positioned, a connectivity check was performed using the data collection software, and the data recording process was initiated.
- Gaming Headphones: The headphones were carefully placed on the participant's head, ensuring that the Muse 2 headband remained in the correct position and maintained good connectivity in the data collection software.
3.3. NPFC-Test
3.4. Closing Activities
4. Dataset Processes
4.1. Data Collection
- FlexiQuiz: The digital test, hosted on the web platform FlexiQuiz.com, was started by entering the unique ID assigned to the participant. Once this was carried out, the Muse 2 and Empatica EmbracePlus devices were placed on the user, ensuring proper synchronization and proper camera configuration for the Azure Kinect. Once ready, the participant started with the test. FlexiQuiz stored information on the time spent in each of the sections, as well as the answers selected in the self-reports.
- Muse 2: The Muse 2 headband was connected via Bluetooth to a mobile application, Mind Monitor [22]. This app allowed the recording of raw brainwave activity and absolute band powers in a csv file, in addition to providing a real-time visualization of the measurements being taken. The app was configured for continuous data recording, meaning the device took approximately 260 measurements per second.
- EmbracePlus: The Empatica EmbracePlus smartwatch was synchronized with a mobile app developed by Empatica, called Care Lab. When properly synchronized with the device, this app sends all data to the cloud, which can be accessed through an Amazon Secure Bucket. The data are stored in Avro files, each containing a maximum of 15 min of information; depending on the starting point of a 15 min cycle, a single participant generated up to three Avro files.
- Azure Kinect: For recording the video with the Azure Kinect camera, the parameters were set to a duration of 1380 s (23 min) and a resolution of 720p, and the depth sensor and IMU mode of the camera were deactivated. The frame rate was left at its default value of 30 frames per second to limit the size of the video and the subsequent computational processing. The recordings were made with the Developer Kit (DK) Recorder, a command-line utility that records data from the device sensors via the SDK to a file, as sketched below.
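As an illustration, a recording with the configuration described above could be launched from Python by wrapping the k4arecorder command-line utility from the Azure Kinect SDK, as in the minimal sketch below; the device index, output folder, and file naming are assumptions, while the flag values mirror the stated parameters (720p color, depth and IMU off, 30 fps, 1380 s).

```python
import subprocess
from pathlib import Path

def record_session(participant_id: str, output_dir: str = "recordings") -> Path:
    """Record one 23 min NPFC-Test session with the Azure Kinect DK recorder."""
    out_file = Path(output_dir) / f"{participant_id}.mkv"
    out_file.parent.mkdir(parents=True, exist_ok=True)
    cmd = [
        "k4arecorder",
        "--device", "0",   # first connected Azure Kinect (assumed)
        "-c", "720p",      # color resolution used in the study
        "-d", "OFF",       # depth sensor deactivated
        "--imu", "OFF",    # IMU stream deactivated
        "-r", "30",        # default frame rate of 30 fps
        "-l", "1380",      # recording length: 1380 s (23 min)
        str(out_file),
    ]
    subprocess.run(cmd, check=True)  # blocks until the recording finishes
    return out_file

# record_session("IFE-EC-NPFC-T003-01")
```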
4.2. Data Extraction
- FlexiQuiz: The platform generated xlsx files containing the start and end times of the NPFC-Test for each participant, as well as the duration of each task. These files were later used to process the video recordings.
- Muse 2: The Mind Monitor app, where the Muse 2 headband was paired and where the recording was carried out, generated csv files containing the participants’ brainwave activity and timestamps for the duration of the NPFC-Test. Once the recording was finished, the csv files were sent to a Dropbox folder for extraction and further processing.
- EmbracePlus: The Avro files generated by the EmbracePlus smartwatch, which included the physiological activity and corresponding timestamps of the participants, were downloaded as a zip file via the Amazon Secure Bucket connected to the Care Lab monitoring platform.
- Azure Kinect: The video recordings of the participants, captured using the Azure Kinect camera, were stored on the computer utilized for the NPFC-Test. Subsequently, the recordings were transferred to a designated folder containing a Python script, where they were edited, processed, and analyzed for the purpose of emotion recognition. To enhance the efficiency of processing and emotion detection, as well as to ensure the manageability of the data, a sampling rate of one frame per second (i.e., one out of every 30 frames) was applied.
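The one-frame-per-second sampling could be reproduced, for example, with OpenCV as in the sketch below, assuming the color track of the recordings can be read directly as a standard video file; the file and folder names are illustrative.

```python
import os
import cv2

def sample_frames(video_path: str, every_n: int = 30):
    """Yield (second_index, frame) pairs, keeping one frame out of every `every_n`."""
    cap = cv2.VideoCapture(video_path)
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % every_n == 0:           # 1 out of every 30 frames of a 30 fps video
            yield index // every_n, frame
        index += 1
    cap.release()

if __name__ == "__main__":
    os.makedirs("frames", exist_ok=True)
    for second, frame in sample_frames("IFE-EC-NPFC-T003-01.mkv"):
        cv2.imwrite(f"frames/frame_{second:04d}.jpg", frame)
```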
4.3. Data Processing
4.3.1. FlexiQuiz
4.3.2. EmbracePlus
- Temperature: The sampling frequency of this biomarker is 0.999755859375 Hz, i.e., approximately one sample per second. Since the study is recorded at one-second granularity, the only transformation needed for this biomarker is to check for null values and, once cleaned, move it into the dataframe (see the consolidated sketch at the end of this subsection). Each temperature record in the Avro file contains the following keys:
- timestampStart → timestamp marking the start of the temperature recording in the file.
- samplingFrequency → (float) sampling frequency (Hz) of the sensor.
- values → array of temperature values (°C).
- Electrodermal Activity: This sensor measures the changes in skin conductivity produced by increases in the activity of the sweat glands, which in turn reflect changes in sympathetic nervous system activity. The sampling frequency for this biomarker is 3.9990363121032715 Hz, i.e., approximately 4 samples per second. The data were first validated and contained no null values. Given this sampling frequency, the raw data were aggregated to match the one-second granularity of the study: the 4 samples per second were averaged with the arithmetic mean to obtain 1 sample per second. The result was then added to the dataframe containing the temperature data, ensuring the same number of records per biomarker. Each EDA record in the Avro file contains the keys timestampStart, samplingFrequency, and values:
- timestampStart → timestamp marking the start of the EDA recording in the file.
- samplingFrequency → (float) sampling frequency (Hz) of the sensor.
- values → array of electrodermal activity values (µS).
- Blood Volume Pulse (BVP): Measures heart rate based on the volume of blood that passes through the tissue; the measurement is obtained with a photoplethysmography (PPG) sensor. This component measures changes in blood volume in the arteries and capillaries that correspond to changes in heart rate and blood flow (Jones, 2018). Blood volume pulse is a popular method for monitoring relative changes in peripheral blood flow, heart rate, and heart rate variability (Peper, Shaffer, and Lin). BVP detects heart beats by measuring the volume of blood passing the sensor under red or infrared light, and from it both heart rate (HR) and heart rate variability (HRV) can be calculated. Each BVP record in the Avro file contains the keys timestampStart, samplingFrequency, and values:
- timestampStart → timestamp marking the start of the BVP recording in the file.
- samplingFrequency → (float) sampling frequency (Hz) of the BVP sensor.
- values → array of light absorption values (nW).
Heart rate is one of the measurements considered in the study. Since it is not available as a signal per se, it was calculated from the BVP with the help of a Python library that takes the sampling frequency of the measurements as input and returns the heart rate per second. The typical output of the sensor is a signal in which each cardiac cycle is expressed as a pulse wave, and from the BVP the heart rate (HR) can be extracted. Python has a package called BioSPPy with a subpackage, biosppy.signals, that provides methods to process common physiological signals (biosignals). The module biosppy.signals.bvp processes a raw BVP signal and extracts the relevant signal features using default parameters.
**Parameters:**
- signal (array) – Raw BVP signal.
- sampling_rate (int, float, optional) – Sampling frequency (Hz).
- show (bool, optional) – If True, show a summary plot.
**Returns:**
- ts (array) – Signal time axis reference (seconds).
- filtered (array) – Filtered BVP signal.
- onsets (array) – Indices of BVP pulse onsets.
- heart_rate_ts (array) – Heart rate time axis reference (seconds).
- heart_rate (array) – Instantaneous heart rate (bpm).
All the dataframes were joined using a common timestamp as the key to ensure uniform sampling rates across all datasets. This approach guarantees that each observation in the resulting dataframe corresponds to the same point in time, facilitating coherent analysis and interpretation of the data. A consolidated sketch of these steps is given below.
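For illustration only, the sketch below strings together the EmbracePlus transformations described in this subsection: the temperature record is kept at 1 Hz after a null check, the 4 Hz EDA signal is averaged down to 1 Hz, the heart rate is derived from the raw BVP signal with biosppy, and all series are joined on a common per-second timestamp. The nesting of the Empatica Avro schema (record["rawData"][...]), the microsecond unit of timestampStart, the 64 Hz BVP sampling rate, and the variable names are assumptions to be verified against the actual files.

```python
from datetime import datetime, timezone

import pandas as pd
from biosppy.signals import bvp as bvp_module
from fastavro import reader  # pip install fastavro biosppy pandas

def load_sensor(record: dict, name: str) -> pd.Series:
    """Build a timestamp-indexed series from one EmbracePlus sensor record."""
    sensor = record["rawData"][name]            # keys: timestampStart, samplingFrequency, values
    start = datetime.fromtimestamp(sensor["timestampStart"] / 1e6, tz=timezone.utc)
    step = pd.Timedelta(seconds=1 / sensor["samplingFrequency"])
    index = pd.date_range(start=start, periods=len(sensor["values"]), freq=step)
    return pd.Series(sensor["values"], index=index, name=name)

def embraceplus_to_1hz(avro_path: str) -> pd.DataFrame:
    with open(avro_path, "rb") as f:
        record = next(reader(f))                # one record per Avro file (assumed)

    # Temperature: already ~1 Hz, so only a null check is needed.
    temperature = load_sensor(record, "temperature").dropna().resample("1s").mean()

    # EDA: ~4 Hz, aggregated to 1 Hz with the arithmetic mean.
    eda = load_sensor(record, "eda").dropna().resample("1s").mean()

    # BVP: derive instantaneous heart rate with biosppy, then average it per second.
    bvp = load_sensor(record, "bvp").dropna()
    out = bvp_module.bvp(signal=bvp.to_numpy(), sampling_rate=64.0, show=False)
    hr_index = bvp.index[0] + pd.to_timedelta(out["heart_rate_ts"], unit="s")
    heart_rate = pd.Series(out["heart_rate"], index=hr_index,
                           name="heart_rate_bpm").resample("1s").mean()

    # Join all biomarkers on the common per-second timestamp.
    return pd.concat(
        {"temperature_c": temperature, "eda_us": eda, "heart_rate_bpm": heart_rate},
        axis=1,
    ).dropna()

# fused = embraceplus_to_1hz("IFE-EC-NPFC-T003-01_part0.avro")
```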
4.3.3. Muse 2
- Raw data from the four headband sensors, expressed in microvolts.
- Absolute band powers (Delta, Theta, Alpha, Beta, and Gamma) for each of the four sensors, expressed in decibels.
- Battery level, reported as a percentage.
- Connectivity status, encoded in binary format (connected/disconnected).
- Movement data, captured by the accelerometer and reported in units of gravity.
- Orientation data, obtained via the gyroscope, expressed in degrees per second.
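As an example, the per-second aggregation of the absolute band powers could be carried out as in the sketch below; the column names (TimeStamp, Delta_TP9, ..., Gamma_TP10) follow Mind Monitor's usual CSV export and should be checked against the actual files.

```python
import pandas as pd

BANDS = ["Delta", "Theta", "Alpha", "Beta", "Gamma"]
SENSORS = ["TP9", "AF7", "AF8", "TP10"]

def band_powers_1hz(csv_path: str) -> pd.DataFrame:
    """Aggregate Mind Monitor absolute band powers (dB) to one row per second."""
    df = pd.read_csv(csv_path, parse_dates=["TimeStamp"])
    cols = [f"{band}_{sensor}" for band in BANDS for sensor in SENSORS]
    return df.set_index("TimeStamp")[cols].resample("1s").mean()

# eeg_1hz = band_powers_1hz("IFE-EC-NPFC-T003-01_muse.csv")
```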
4.3.4. Azure Kinect
4.4. Data Fusion
5. Data Records
- Timestamp (Datetime): A unique date and time identifier in the format yyyy/mm/dd hh:mm:ss. This allows for the temporal sequencing of participant interactions.
- SubjectID (String): A unique participant code formatted as IFE-EC-NPFC-T003-NN, enabling anonymous yet traceable user identification.
- TestTime (Time): Indicates the elapsed time for the overall test session, captured in the format HH:MM:SS.
- TaskNum (Number): Encodes the task type using numeric identifiers; for instance, one identifier represents demographic data collection, another corresponds to initial task calibration, and subsequent numbers denote cognitive or sensor-based tasks.
- TaskTime (Time): Captures the amount of time spent on each individual task, also recorded in HH:MM:SS format.
- Neuronal Activity (EEG): Derived from the Muse 2 headband, includes absolute band power values (Delta, Theta, Alpha, Beta, Gamma) per sensor. Data are represented in decibels and structured by second-level timestamps.
- Physiological Signals: Captured through the EmbracePlus device, which include
  - Electrodermal Activity (EDA): Measured in microsiemens (µS), at 1 Hz.
  - Blood Volume Pulse (BVP): Processed to derive heart rate (beats per minute).
  - Temperature: Recorded in degrees Celsius, one sample per second.
- Facial Coding Data: Extracted using the Azure Kinect and processed with the Py-Feat library; includes action units (AUs), which encode facial movement metrics (see the sketch after this list).
- Emotion Classification: Probability scores (via ResMaskNet) and binary presence indicators (via SVM) for emotions: anger, disgust, fear, happiness, sadness, surprise, and neutrality.
- Self-Report Metrics: Likert-scale responses related to valence, arousal, and concentration, obtained after each task segment.
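A minimal sketch of how such facial coding outputs could be produced with Py-Feat from the sampled frames is shown below; the emotion model names match those reported above, but the exact constructor arguments, Fex accessors, and the 0.5 threshold for binary presence depend on the installed Py-Feat version and are assumptions.

```python
from feat import Detector

# One detector per reported emotion model: ResMaskNet for probability scores and an
# SVM-based model for binary presence indicators (other models left at their defaults).
prob_detector = Detector(emotion_model="resmasknet")
svm_detector = Detector(emotion_model="svm")

def facial_coding(frame_path: str):
    """Return (action units, emotion probabilities, binary emotion flags) for one frame."""
    prob_out = prob_detector.detect_image(frame_path)    # Fex dataframe with AU and emotion columns
    svm_out = svm_detector.detect_image(frame_path)
    aus = prob_out.aus                                    # action unit estimates
    emotion_probs = prob_out.emotions                     # anger, disgust, ..., neutral probabilities
    emotion_flags = (svm_out.emotions > 0.5).astype(int)  # illustrative binarization threshold
    return aus, emotion_probs, emotion_flags

# aus, probs, flags = facial_coding("frames/frame_0000.jpg")
```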
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
Abbreviation | Definition |
---|---|
NPFC | Neuronal, physiological, and facial coding |
EC | Experiential classroom |
IFE | Institute for the Future of Education |
EEG | Electroencephalogram |
EDA | Electrodermal activity |
BVP | Blood volume pulse |
HRV | Heart rate variability |
HR | Heart rate |
PPG | Photoplethysmography |
CSV | Comma-separated values (data format) |
SDK | Software Development Kit |
ToF | Time-of-flight (used in depth cameras like Azure Kinect) |
HCI | Human–computer interaction |
AU | Action units (facial coding measurement units) |
SVM | Support vector machine (machine learning algorithm) |
Py-Feat | Python Facial Expression Analysis Toolbox |
API | Application programming interface |
LL | Living Lab |
ID | Identifier |
MDPI | Multidisciplinary Digital Publishing Institute |
DOAJ | Directory of Open Access Journals |
MMLA | Multimodal learning analytics |
References
- Ochoa, X. Multimodal Learning Analytics—Rationale, Process, Examples, and Direction. In Handbook of Learning Analytics, 2nd ed.; Lang, C., Wise, A.F., Siemens, G., Gašević, D., Merceron, A., Eds.; Society for Learning Analytics Research (SoLAR): Beaumont, AB, Canada, 2022; Chapter 6; pp. 54–65. [Google Scholar] [CrossRef]
- Rapanta, C.; Botturi, L.; Goodyear, P.; Guàrdia, L.; Koole, M. Online University Teaching During and After the Covid-19 Crisis: Refocusing Teacher Presence and Learning Activity. Postdigital Sci. Educ. 2020, 2, 923–945. [Google Scholar] [CrossRef] [PubMed]
- Morán-Mirabal, L.F.; Avarado-Uribe, J.; Ceballos, H.G. Using AI for Educational Research in Multimodal Learning Analytics. In What the AI Can Do: Knowledge Strengths, Biases and Resistances to Assume the Algorithmic Culture, 1st ed.; Cebral-Loureda, M., Rincón-Flores, E., Sanchez-Ante, G., Eds.; CRC Press: Boca Raton, FL, USA; Taylor & Francis Group: Boca Raton, FL, USA, 2023; Chapter 9; pp. 154–174. [Google Scholar] [CrossRef]
- Mu, S.; Cui, M.; Huang, X. Multimodal Data Fusion in Learning Analytics: A Systematic Review. Sensors 2020, 20, 6856. [Google Scholar] [CrossRef] [PubMed]
- Ouhaichi, H.; Spikol, D.; Vogel, B. Research trends in multimodal learning analytics: A systematic mapping study. Comput. Educ. Artif. Intell. 2023, 4, 100136. [Google Scholar] [CrossRef]
- Liu, Y.; Peng, S.; Song, T.; Zhang, Y.; Tang, Y.; Li, Z. Multi-Modal Emotion Recognition Based on Local Correlation Feature Fusion. Front. Neurosci. 2022, 16, 744737. [Google Scholar] [CrossRef]
- Horvers, A.; Tombeng, N.; Bosse, T.; Lazonder, A.W.; Molenaar, I. Emotion Recognition Using Wearable Sensors: A Review. Sensors 2021, 21, 7869. [Google Scholar] [CrossRef]
- Morán-Mirabal, L.F.; Ruiz-Ramírez, J.A.; González-Grez, A.A.; Torres-Rodríguez, S.N.; Ceballos, H. Applying the Living Lab Methodology for Evidence-Based Educational Technologies. In Proceedings of the 2025 IEEE Global Engineering Education Conference, EDUCON 2025, London, UK, 22–25 April 2025. [Google Scholar]
- Rojas Vistorte, A.O.; Deroncele-Acosta, A.; Martín Ayala, J.L.; Barrasa, A.; López-Granero, C.; Martí-González, M. Integrating artificial intelligence to assess emotions in learning environments: A systematic literature review. Front. Psychol. 2024, 15, 1387089. [Google Scholar] [CrossRef] [PubMed]
- Lian, H.; Lu, C.; Li, S.; Zhao, Y.; Tang, C.; Zong, Y. A Survey of Deep Learning-Based Multimodal Emotion Recognition: Speech, Text, and Face. Entropy 2023, 25, 1440. [Google Scholar] [CrossRef] [PubMed]
- Pekrun, R.; Linnenbrink-Garcia, L. Academic Emotions and Student Engagement. In Handbook of Research on Student Engagement; Christenson, S.L., Reschly, A.L., Wylie, C., Eds.; Springer: Boston, MA, USA, 2012; pp. 259–282. [Google Scholar] [CrossRef]
- Bustos-López, M.; Cruz-Ramírez, N.; Guerra-Hernández, A.; Sánchez-Morales, L.N.; Cruz-Ramos, N.A.; Alor-Hernández, G. Wearables for Engagement Detection in Learning Environments: A Review. Biosensors 2022, 12, 509. [Google Scholar] [CrossRef] [PubMed]
- Hernández-de Menéndez, M.; Morales-Menéndez, R.; Escobar, C.A.; Arinez, J. Biometric applications in education. Int. J. Interact. Des. Manuf. 2021, 15, 365–380. [Google Scholar] [CrossRef] [PubMed]
- Hernández-Mustieles, M.A.; Lima-Carmona, Y.E.; Pacheco-Ramírez, M.A.; Mendoza-Armenta, A.A.; Romero-Gómez, J.E.; Cruz-Gómez, C.F.; Rodríguez-Alvarado, D.C.; Arceo, A. Wearable Biosensor Technology in Education: A Systematic Review. Sensors 2024, 24, 2437. [Google Scholar] [CrossRef] [PubMed]
- Yu, S.; Androsov, A.; Yan, H.; Chen, Y. Bridging Computer and Education Sciences: A Systematic Review of Automated Emotion Recognition in Online Learning Environments. Comput. Educ. 2024, 220, 105111. [Google Scholar] [CrossRef]
- Apicella, A.; Arpaia, P.; Frosolone, M.; Improta, G.; Moccaldi, N.; Pollastro, A. EEG-based measurement system for monitoring student engagement in learning 4.0. Sci. Rep. 2022, 12, 5857. [Google Scholar] [CrossRef] [PubMed]
- Boothe, M.; Yu, C.; Lewis, A.; Ochoa, X. Towards a Pragmatic and Theory-Driven Framework for Multimodal Collaboration Feedback. In Proceedings of the LAK22: 12th International Learning Analytics and Knowledge Conference, New York, NY, USA, 21–25 March 2022; pp. 507–513. [Google Scholar] [CrossRef]
- Mirabal, L.F.M.; Álvarez, L.M.M.; Ramirez, J.A.R. Muse 2 Headband Specifications (Neuronal Tracking); Report; ITESM: Monterrey, Mexico, 2024; Available online: https://hdl.handle.net/11285/685108 (accessed on 18 June 2025).
- Morán Mirabal, L.F.; Favaroni Avila, M.; Ruiz Ramirez, J.A. Empatica Embrace Plus Specifications (Physiological Tracking); Report; ITESM: Monterrey, Mexico, 2024; Available online: https://hdl.handle.net/11285/685107 (accessed on 18 June 2025).
- Bamji, C.S.; Mehta, S.; Thompson, B.; Elkhatib, T.; Prather, L.A.; Snow, D.; O’Connor, P.; Payne, A.D.; Fenton, J.; Akbar, M. A 0.13 μm CMOS System-on-Chip for a 512 × 424 Time-of-Flight Image Sensor with Multi-Frequency Photo-Demodulation up to 130 MHz and 2 GS/s ADC. IEEE J. Solid-State Circuits 2015, 50, 303–319. [Google Scholar] [CrossRef]
- Microsoft Corporation. Azure Kinect Developer Kit Documentation; Microsoft Docs, 2023; Available online: https://learn.microsoft.com/en-us/azure/kinect-dk/ (accessed on 18 June 2025).
- Mind Monitor. Available online: https://mind-monitor.com/ (accessed on 18 June 2025).
- Morán Mirabal, L.F.; Güemes Frese, L.E.; Favarony Avila, M.; Torres Rodríguez, S.N.; Ruiz Ramirez, J.A. NPFC-Test 23A: A Dataset for Assessing Neuronal, Physiological, and Facial Coding Attributes in a Human-Computer Interaction Learning Scenario; Tecnológico de Monterrey: Monterrey, Mexico, 2024. [Google Scholar] [CrossRef]
Section Contents | Section Contents (cont.) |
---|---|
1.1 Participant Profile | 5.5 Concentration videos self-report |
2.1 Initial meditation | 6.1 Emotional video |
2.2 Initial meditation self-report | 6.2 Emotional video self-report |
3.1 Mathematics task | 7.1 Writing task instructions |
3.2 Mathematics task self-report | 7.2 Writing task 1 |
4.1 Auditory stimulation | 7.3 Writing task 2 |
4.2 Auditory stimulation self-report | 7.4 Writing task 3 |
5.1 Concentration video 1 | 7.5 Writing task self-report |
5.2 Concentration video 1 task | 8.1 Final meditation |
5.3 Concentration video 2 | 8.2 Final meditation self-report |
5.4 Concentration video 2 task | 9.1 User experience survey |
Question | Response Options |
---|---|
How alert did you feel during the task? | Nothing, Almost Nothing, Little, Quite a lot, A lot |
How much did you enjoy the task? | Nothing, Almost Nothing, Little, Quite a lot, A lot |
This task demanded a high level of focus and/or concentration. | Strongly disagree, Disagree, Neither agree nor disagree, Agree, Strongly agree |
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).