Systematic Review

Students’ Burnout Symptoms Detection Using Smartwatch Wearable Devices: A Systematic Literature Review

by
Paschalina Lialiou
* and
Ilias Maglogiannis
*
Department of Digital Systems, University of Piraeus, 18534 Piraeus, Greece
*
Authors to whom correspondence should be addressed.
AI Sens. 2025, 1(1), 2; https://doi.org/10.3390/aisens1010002
Submission received: 11 March 2025 / Revised: 18 April 2025 / Accepted: 28 April 2025 / Published: 8 May 2025

Abstract

(1) Background: The uses of smartwatch wearable devices have expanded; they are not only part of everyday routine life but also play a dynamic role in the early detection of many behavioral patterns of users. Furthermore, in the modern era, there is an increasing trend of mental disturbances even in early adolescence, a phenomenon that continues into academic life. Taking the current situation into account, this systematic literature review examines the role of AI wearable devices in the early detection of burnout symptoms in the student population. (2) Methods: A systematic literature review was designed based on the PRISMA guidelines. The general aim was to synthesize all the current research evidence on the effectiveness of wearable devices in the student population. (3) Results: The reviewed studies document the importance of physiological monitoring and AI-driven predictive models, in combination with self-reported scales, in assessing mental well-being. Stress is the most frequently studied burnout-related symptom, while heart rate (HR) and heart rate variability (HRV) are the most commonly used biomarkers for monitoring and evaluating early burnout detection. (4) Conclusions: Despite the promising potential of these technologies, several challenges and limitations must be addressed to enhance their effectiveness and reliability.

1. Introduction

The mental health of children and young people is a global health challenge and is a core element of the World Health Organization's definition of health [1]. Many research studies report associations between mental health and sociodemographic characteristics, as well as an extensive prevalence of burnout symptoms in schoolchildren and academic students [2]. Survey results from several countries indicate, for example, that some Portuguese medical students had already been diagnosed with a mental illness before medical school, while 15% were diagnosed during medical school [3]; it has also been confirmed that healthcare students suffer from the consequences of stress and burnout signs [4,5]. Furthermore, Swiss and Italian adolescents presented with school burnout, with Italian students showing higher fatigue and cynicism than their Swiss peers [6]. The prevalence of burnout in French pediatric residents was 37.4% and was not associated with the COVID-19 outbreak [4,6]; on the other hand, Danish schoolchildren seem to have generally good mental health [7].
Over the last decade, student well-being tools have been introduced, and many attempts have been made to explain and measure the effectiveness of the education system and its impact on the mental health of students [8]. However, current trends suggest that more students than ever suffer from burnout symptoms [9]. In several studies, burnout has been found to affect schoolwork and future academic life as well as students’ later health as adults [4,9]. A major concern is that studies have shown a direct association between burnout and both anxiety and depressive symptoms [9,10]. Students, owing to the nature of education, are overwhelmed with a variety of curriculum activities and everyday tasks. Moreover, a research study conducted on nursing students indicated that stress is the most significant factor aggravating academic burnout [11]. Further results support the view that stress can lead to exhaustion and disengagement. Research on medical students, in turn, showed that the demanding curriculum of coursework, clinical responsibilities, and the emotional strain of patient care affects the manifestation of burnout [12]. Furthermore, students in higher education develop mental health issues, including academic burnout, as a result of the multiple stressors they face [13]; factors such as gender, grade, monthly living expenses, smoking, parental education level, and study and stress conditions all impact academic burnout [14].
Student burnout is defined as a psychological syndrome caused by long-term exposure to stressful events and pressure in the school or academic environment. It is described using three dimensions: emotional exhaustion, cynicism, and a sense of inadequacy [4], and it carries a high risk of depression and anxiety. Emotional exhaustion can cause a lack of satisfaction with academic obligations; cynicism manifests as a lack of interest in social activities; and a sense of inadequacy leads to decreasing academic performance, competence, and achievement [13]. As the research shows, burnout can cause a student to drop out of their studies. As a result, considerable social and personal costs may arise, associated with poor mental and physical health, which can be connected with the emergence of suicidal ideation [15].
Based on the information given above, the early detection of burnout signs and symptoms is highly valuable nowadays. Wearable artificial intelligence (AI) has emerged as a valuable instrument for researchers as a non-invasive approach to psychobiological monitoring [16]. Wearable AI technology seems promising, especially for stress and burnout detection in students [17]. By recording biomarkers such as heart rate (HR), heart rate variability (HRV), and electrodermal activity (EDA) in real time, stress and anxiety can be detected [18,19]. Given the increasing appearance of innovative AI applications, there is an urgent need to assess the accuracy and performance of such technological devices. The limited existing literature on the use of smartwatch devices as a measurement indicator of burnout symptoms in the student population motivated this literature review. The aim of this work is to highlight the role of smartwatches in mental health assessment, especially in stress and burnout detection, and to consider the comparative AI predictive models and algorithms, ethical issues, future challenges, and perspectives. Moreover, the current work clearly defines the existing wearable AI technology used in stress and burnout detection, the accuracy and performance of the corresponding AI predictive models, and, finally, the potential need for AI applications in early mental health issues like stress. These research scopes and perspectives are structured below into seven research questions.

2. Objectives

Considering the importance of burnout symptoms such as stress, fatigue, and anxiety in education and their impact on mental health, together with the availability of a variety of wearable technologies, we identified the need for a retrospective work that summarizes previous experience and gives clear future directions. Therefore, the present study constitutes a systematic literature review of existing empirical studies and reveals the research aspects related to burnout syndrome in students (Figure 1). The objective of this systematic literature review is to uncover the current uses of smartwatch wearable devices in the early detection of behavioral patterns related to burnout syndrome in student populations. To find evidence about the effectiveness of AI wearable devices in burnout identification, the research questions (RQs) that oriented and built this review were as follows:
RQ1.
What are the research purposes, subjects, and the behavioral patterns of the reviewed studies?
RQ2.
Which wearable devices, AI technology, and AI predictive models are adopted in the reviewed studies?
RQ3.
Which surveys have been used in the reviewed studies and which mental disorders have been verified?
RQ4.
What challenges and limitations are stated in the reviewed studies?
RQ5.
What are the ethical considerations that participants had to handle during the usage of wearable AI technology?
RQ6.
What are the accuracy and the performance of the surveyed systems and how are they calculated?
RQ7.
How are the results of each study exploited and what are the main findings of each of them?

3. Materials and Methods

This review follows the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) Statement [20]. The PRISMA Statement is the most commonly used guidance for authors and reviewers, and it covers reporting of the whole literature search procedure [21]. PRISMA also ensures the quality of literature reports and addresses methodological issues related to the search strategy and study assessment [22]. The current study utilizes the 27-item PRISMA checklist to verify that each search component is completely reported and reproducible.
Figure 2 presents the steps of the review process. Firstly, the purposes and the specific research questions of the present SLR that motivated this research were defined. In turn, several digital databases were searched with specific predetermined search terms. Then, a primary selection of papers was performed and the initial inclusion and exclusion criteria were applied. The final set of studies and records was coded and mapped to the information associated with the RQs. In the final step, the extracted information from the selected studies was organized, compared, and discussed in order to answer the research questions. Zotero 7 for Windows [23,24], an open-access reference management tool, was used to remove duplicate papers, track citations, and organize the reference sources.

3.1. Inclusion and Exclusion Criteria

Following the PRISMA process described above [20], the aim was to assess the effectiveness of wearable devices in burnout prediction. Consequently, studies that used smartwatches or any other wristband technology for data collection were included. Furthermore, the included studies were conducted in student populations. The minimum age was defined as 6 years and the maximum as 28 years. The types of studies included were pilot studies, randomized controlled trials, and experimental studies.
Studies conducted in adult populations other than students, as well as in clinical populations, were excluded (Table 1). Moreover, studies involving participants with other mental disorders such as schizophrenia, bipolar disorder, or psychosis were excluded from this review. Research studies conducted in populations with autism spectrum disorder or any other learning disorder were also removed from the current SLR. As the primary goal of this SLR was to focus on the effectiveness of smartwatch and wristband devices, studies that used any other type of wearable digital sensor were excluded. The publication date range of the research papers was restricted to the previous 10 years. Review papers were also excluded.

3.2. Searching Strategies

Three databases were used: PubMed, Scopus, and Web of Science. A series of keywords such as “burnout”, “stress”, “wearable devices”, “smart watches”, and “students” were identified and combined into queries using the Boolean operators “AND” and “OR”. The search queries were as follows: for the PubMed database, (burnout) OR (stress) AND (wearable devices) OR (smart watches) and (anxiety) AND (wearable devices) OR (smart watches); for the Scopus database, (burnout) OR (stress) AND (wearable) AND (devices) AND (students); and for the Web of Science database, (TS = (wearable devices) AND TS = (burnout OR stress)) AND TS = (students). Then, the inclusion and exclusion criteria were applied. After that, the metadata of the retrieved studies, particularly titles and abstracts, were reviewed to assess their relevance to the defined research questions.
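For reproducibility, a Boolean query such as the PubMed search above can also be executed programmatically through NCBI's E-utilities. The following sketch is an illustration rather than part of the review's actual workflow; it uses Biopython's Entrez module, and the contact e-mail address and the retmax cap are placeholder assumptions.

```python
# Illustrative sketch: running the PubMed Boolean query programmatically via NCBI E-utilities.
# Assumes Biopython is installed; the e-mail address and retmax value are placeholders.
from Bio import Entrez

Entrez.email = "reviewer@example.org"  # hypothetical contact address required by NCBI

query = "((burnout) OR (stress)) AND ((wearable devices) OR (smart watches))"

handle = Entrez.esearch(db="pubmed", term=query, retmax=500)  # returns matching PubMed IDs
record = Entrez.read(handle)
handle.close()

print(f"Records found: {record['Count']}")
print("First PubMed IDs:", record["IdList"][:10])
```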

3.3. Open Data Repositories

Nowadays, there is a pressing need to support open science and data-driven healthcare innovations. As a consequence, public-health-related databases play a crucial role in advancing scientific research, offering diverse data ranging from a variety of physiological signals to social determinants of health [51]. Several challenges have emerged, such as access restrictions and data standardization. Nevertheless, the increasing number of open-access health-related repositories has undoubtedly transformed scientific research in its totality [52]. These data warehouses offer the scientific community the opportunity to explore various aspects of the healthcare sector [20], taking advantage of health data mining (Table 2). For instance, Kang et al., 2023 [53] provided research data to the Zenodo data warehouse for future research utilization and management. A table of the most prominent public repositories based on their primary goal in health research is presented below (Table 2).

3.4. Bias of the Selected Studies

In this review, thirteen studies utilizing wearable devices and machine learning techniques for mental health indicator detection were examined. A detailed analysis revealed seven kinds of bias across these studies, summarized in Table 3. In more detail, several studies [33,63,64,65] tend to emphasize successful outcomes without disclosing any methodological limitations or negative findings. Such reporting may inflate the perceived efficacy or generalizability, particularly in those studies that used commercial apps. Some studies [53,66,67,68] assumed that physiological indicators such as heart rate variability (HRV) or galvanic skin response could fully capture emotional patterns; this assumption reflects a tendency to oversimplify how human stress responses are captured. Moreover, in some studies researchers [53,66,68,69,70,71] relied on a specific participant population, and this lack of diversity reduces generalizability to vulnerable populations. A measurement bias was also identified in some devices like the Fitbit, which often lacks clinical accuracy and may yield unreliable results under some stressful conditions. In addition, small sample sizes, lack of randomization, and insufficient control groups limit the strength of the study results. Furthermore, the studies [53,68,70,71,72] referring to pandemic environments, artificial settings, or other global circumstances like COVID-19 may not reflect real-life stressors, and the data may conflict with results obtained outside those settings. The limited socioeconomic, age, and cultural diversity constitutes a demographic bias that may compromise the inclusiveness of wearable health devices [53,64,72].

4. Results

After a primary literature search, a total of 272 studies were gathered from the three scientific databases. In this initial search, the inclusion and exclusion criteria were applied. The remaining records were screened by title and abstract, and the studies that did not meet all the study requirements were removed. Duplicate records were also removed using the Zotero 7 software. A total of 43 reports were assessed for eligibility, of which 31 records were excluded because they were not relevant to the review’s purposes. Finally, 13 studies were included in the present SLR. The majority of the papers were published as journal articles and only one was published in conference proceedings. Two of these papers (15%) were published in the International Journal of Artificial Intelligence, and the remaining papers were published in the Journal of Medical Internet Research, JMIR mHealth and uHealth, International Journal of Environmental Research and Public Health, Scientific Data, Diagnostics, BMC Psychiatry, Archives of Design Research, Sensors, and PLOS One. Furthermore, one paper was published in 2018 and one each in 2019 and 2021, followed by two papers each in 2020 (15%), 2023 (15%), and 2024 (15%). Three (23%) of the reviewed papers were published in 2022. The results of the SLR, according to the defined research questions, are given in this section.

4.1. Purposes, Subjects, and Behavioral Patterns

Researchers have focused on a variety of purposes related to students’ burnout symptoms and their associated behavioral patterns. The reviewed papers were grouped into four categories according to their specific research purposes: (a) to predict a mental disorder associated with burnout symptoms, (b) to assess the efficacy of a technological innovation, (c) to detect burnout symptoms using an AI wearable application, and (d) to manage a mental disorder using an AI smart device. Table 4 presents the summarized categories.
In more detail, five studies (38%) focused on one of the main objectives, namely the prediction of stress levels using advanced AI technologies. Specifically, the use of deep learning has proven effective in predicting stress levels [69]. Additionally, the prediction of mental stress, as well as general mental well-being, depression, anxiety, and stress, has been extensively studied [33,66]. One study also focused on the predictive utility of pretreatment heart rate variability (HRV) in the effectiveness of group cognitive behavioral therapy (GCBT) in reducing symptoms of depression and anxiety [33]. Lastly, the prediction of stress when individuals are exposed to an acute stressor has also been examined [67].
Another important issue in the reviewed studies relates to the efficacy of specific interventions. For instance, one study [64] investigated the effectiveness of the BioBase application in managing anxiety and stress, showing positive effects. Other researchers have focused on stress level detection in various contexts, studying stress levels using different methods and tools [70,72], as well as ecological stress in everyday life [68]. Furthermore, fatigue detection and the response to psychological stress in everyday life have been explored [65,71]. Finally, stress management is a key area of research interest. One research study examined the management of attention as a means of reducing stress [53], and another examined stress management through interventions using smart devices and cognitive processes [73]. As a consequence, the authors advocated new guidelines that may prove promising for the future improvement of quality of life.

4.2. Wearable Devices, AI Technology, and AI Predictive Models

Table 5 presents the reviewed studies that use a wearable device to examine a mental symptom, grouped by the symptom that is measured. In more detail, the Fitbit Versa 2 (Fitbit) is the most extensively used device (46% of the studies) and measures anxiety, depression, and stress. The Empatica E4 wristband is used in 23% of the studies, two of which assess stress levels and one fatigue. This is followed by BioBeam, which is used by two research studies for anxiety and stress detection. The Apple Watch is also used for stress and attention studies. The Huawei Band 6, utilized for its photoplethysmography sensors, was used in two research studies for the evaluation of stress and depression. Finally, one study used the Microsoft Band 2 to assess the stress levels of the participants. The results demonstrate that stress is a core element of burnout, and the majority (76.9%) of the reviewed studies have utilized a variety of smart wrist devices for symptom tracking. Moreover, 61.5% of the reviewed studies have utilized AI-supportive technology to integrate a biomarker database, including the Biopac MP150, OpenBCI helmet, K-Emo EPOC Headset, NetHealth dataset, MacBook, iPad, iPhone, 3-lead ECG, and Fitbit API. Furthermore, four studies (30%) have developed or used existing AI predictive models. Regarding the Fitbit, which is the most popular wearable activity monitor, ref. [70] showed that the Fitbit had comparably greater error rates (ranging from 4.91% for very light exercise to 13.04% for very vigorous exercise) than the Apple Watch. Moreover, results of HR mean differences across the three experimental phases (anticipation, oral, and arithmetic) demonstrated the capacity of the Fitbit Versa 2 to detect short-term variations in levels of psychological stress. The mean HR from the anticipation phase was significantly lower than those from the oral (p < 0.001) and arithmetic (p = 0.02) phases. Finally, the mean HR from the oral phase was significantly higher than that of the arithmetic phase (p = 0.02) [70].

4.3. Measuring Mental Disorders Using Physiological Signals, Mental Scales, and AI Predictive Models

Many studies have explored the possibility of using various physiological signals to assess the presence of mental disorders. Examples of such signals are the electrocardiogram (ECG), electroencephalogram (EEG), galvanic skin response (GSR), and respiration, although heart rate (HR) and heart rate variability (HRV) appear to be the most commonly used in research studies [34]. HRV captures the increases and decreases in the intervals between consecutive heartbeats, reflecting the balance between the sympathetic and parasympathetic nervous systems and the status of the cardiovascular system. Taking all of the above into account, the measurement of these biomarkers indicates general cardiac activity, which supports the identification of multiple stress levels and the detection of depression and burnout symptoms more generally. Accordingly, wearable devices utilize a variety of sensors and collect data to detect and evaluate the mental status of healthy or diseased populations.
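To make the HRV notion concrete, the short sketch below computes two standard time-domain HRV metrics, SDNN and RMSSD, from a series of inter-beat (RR) intervals; the interval values are invented for illustration and the sketch is not tied to any device or dataset from the reviewed studies.

```python
import numpy as np

def hrv_time_domain(rr_intervals_ms):
    """Compute basic time-domain HRV metrics from RR intervals given in milliseconds."""
    rr = np.asarray(rr_intervals_ms, dtype=float)
    sdnn = np.std(rr, ddof=1)             # overall variability of the RR intervals
    diffs = np.diff(rr)                   # successive differences between adjacent beats
    rmssd = np.sqrt(np.mean(diffs ** 2))  # short-term (beat-to-beat) variability
    mean_hr = 60000.0 / rr.mean()         # mean heart rate in beats per minute
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd, "mean_HR_bpm": mean_hr}

# Hypothetical RR series (ms); a lower RMSSD would suggest reduced parasympathetic activity.
rr_example = [812, 790, 845, 801, 778, 830, 815, 799, 786, 842]
print(hrv_time_domain(rr_example))
```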
On the other hand, the most common methods in mental status assessment, according to the literature, are based on self-reported scales, such as the Perceived Stress Scale (PSS), the Stress Response Inventory (SRI), the State–Trait Anxiety Inventory (STAI), the Hamilton Depression Scale (HAMD), and the Beck Depression Inventory (BDI), as well as the Diagnostic and Statistical Manual of Mental Disorders (DSM-V).
Figure 3 presents the data of the 13 reviewed research papers, showing the association between physiological signals and burnout symptom detection, verified by a measurement scale. The interpretation of the results shows that stress is the most frequently measured mental symptom, spanning all of the physiological signals. Of the reviewed papers, 38.5% used HR for stress detection and 23% used HRV, followed by ecological momentary assessments (EMAs) or ecological physiological assessments (EPAs), while 15% used ST or EDA. Moreover, data related to activity patterns such as rest time, sleep, motion acceleration, steps, and total physical activity (indoor and outdoor) were collected. Thirty percent of the reviewed papers focused on the relaxation and rest phases, and thirty percent incorporated walking acceleration time and the total number of steps to calculate the activity level as an indicator of overall movement. Physical activity is estimated as an indicator in studies focused on stress assessment. Of the reviewed studies, 30.7% focused on behavioral traits such as openness, conscientiousness, neuroticism, extraversion, agreeableness, lifestyle, self-characteristics, and feelings of arousal. Finally, some behavioral patterns such as event stress, emotion, and attention were obtained and incorporated into the applied methods. Of the total number of studies, two (15%) used a self-reported scale for stress verification, and the same number did so for anxiety. HRV was used for depression assessment by one (7.6%) research study, and two (15%) studies used only self-reported scales. In more detail, Lin et al. [33] concluded that HRV may be a useful predictor of GCBT treatment efficacy. The correlation analysis revealed a significant negative association between HRV, heart rate (HR), and the scale scores of the PSS and PHQ. Specifically, the mean HR exhibited a negative correlation with both PHQ scores (rho = −0.313, p = 0.01) and PSS scores (rho = −0.313, p = 0.01). Similarly, the minimum HR showed negative correlations with PHQ scores (rho = −0.228, p = 0.048) and PSS scores (rho = −0.253, p = 0.03).
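As an illustration of the kind of correlation analysis reported by Lin et al. [33], the sketch below computes a Spearman rank correlation between mean HR values and questionnaire scores using SciPy; the arrays are invented placeholder data, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr

# Hypothetical per-participant values: mean HR (bpm) and PHQ questionnaire scores.
mean_hr = np.array([68, 72, 75, 70, 66, 80, 77, 73, 69, 74])
phq_score = np.array([14, 11, 9, 12, 15, 6, 8, 10, 13, 9])

# Spearman's rho quantifies the monotonic association between the two rankings.
rho, p_value = spearmanr(mean_hr, phq_score)
print(f"Spearman rho = {rho:.3f}, p = {p_value:.3f}")
```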
Finally, one (7.6%) study reported that fatigue was detected taking into account HRV, HR, skin temperature (ST), ECG, and EDA. Figure 3 further depicts that four (30.7%) research studies additionally used AI predictive models to support stress detection, in combination with the values of biomarkers and self-reported scales. In the study of Chalmers et al. [67], both the resting and the stress phase HR were significantly higher in the medical student population than in the general population (resting, p < 0.001; stress, p = 0.022). Moreover, they found a significant rise in low frequency (LF), high frequency (HF), and the LF:HF ratio (p < 0.001) during the stress task, with medical students showing a smaller increase in HRV parameters than the general population.

4.4. Challenges and Limitations in the Reviewed Studies

Wearable biosensors face challenges related to data reliability in real-world conditions. Some reviewed papers mention that physiological data can be noisy, especially in non-laboratory settings. The need to despike and filter the data to reduce noise raises concerns about data manipulation, and the integrity of the original signals must be taken into account [68]. Wearable devices can malfunction, or participants may not follow the instructions properly, leading to low-quality data. In the K-EmoPhone study, some participants provided faulty data due to device errors or failure to adhere to the instructions, with consequences for the dataset quality [53]. Also, in the study of Gagnon et al., 2022 [70], it is mentioned that the variability in accuracy, especially during high-intensity activities or under stress conditions, is a significant challenge for the reliability of wearables in precise stress monitoring. A core limitation of wearable technology is the accuracy of the sensors in achieving reliable and valid data, particularly in naturalistic settings [70,71].
Tutunji et al., 2023 [68] mentioned the challenge of the generalizability of results. Physiological responses vary widely between individuals, and the development of algorithms that work across diverse populations remains a difficult task. This variability must be addressed by ensuring that AI models are personalized to avoid false positives or negatives, which could lead to misdiagnosis or inappropriate interventions. Data variability can also arise because participants are influenced by their individual differences. One method to address this issue is to adjust the threshold for each participant when analyzing subjective data such as self-reported stress or emotion [53]. Individual variability is a challenge and highlights the ethical concern of fairness in data interpretation. In one study, the use of machine learning models on ego-centric data was a key predictor, suggesting that more personalized models can achieve higher accuracy. However, this reliance on personalization raises some ethical concerns about overfitting to individual characteristics, which may result in biased outcomes [63].
Some of the reviewed studies, furthermore, discussed limitations raised by class imbalances (such as the proportion of fatigue versus non-fatigue states) and small sample sizes. This challenge may lead to biased or non-generalizable results. Increasing sample sizes and ensuring a balanced representation of various states (such as stress and non-stress conditions) are essential to improve the statistical validity of the results [68]. Moreover, the small sample size in two of the reviewed studies may affect the generalization of the results, and a demand for further, larger studies is noted [70,71].
Furthermore, two major challenges that are mentioned in two of the reviewed studies are participant compliance when wearing a device and research dropout. Non-compliance can lead to skewed results, and participants with higher stress or mental health symptoms are more likely to drop out, leading to potential bias in outcomes [33,70].
The health risks and mental well-being of participants are also major issues mentioned in the reviewed research studies and must be handled carefully, because participants may experience distress or anxiety caused by continuous monitoring or by the excessive recording of their health data. Thus, the psychological impact of such studies must be considered [53]. The recalling of stressful events, as in ref. [71], or participation in conditions that may induce stress, such as exams or arithmetic tasks, can also cause excessive distress to participants.

4.5. Ethical Considerations That Participants Had to Handle During the Usage of Wearable AI Technology

All of the reviewed papers addressed essential ethical considerations related to the handling of health-related data. In more detail, in all the reviewed research papers, the required informed consent was obtained from the participants for data handling. The participants were aware of the studies’ aims, the procedures, and the potential risks involved in research related to mental health issues. Ethical clearance was obtained from the regional review boards, ensuring that institutional, national, and international ethical standards, such as the Declaration of Helsinki, were adhered to.
Some of the reviewed papers also highlight the need to safeguard participants’ privacy in relation to sensitive physiological data such as heart rate, skin conductance, stress levels, etc. [68,70]. The protection of privacy is a significant ethical concern, ensured through data anonymization, de-identification, and informed consent. In the K-EmoPhone study, the anonymity of sensitive data is ensured via encryption, while GPS data are obfuscated through the addition of noise [53]. Similarly, in the NetHealth study, some of the participants’ data are not shared publicly due to privacy concerns [63].

4.6. Performance and Accuracy of the Surveyed Systems

The evaluation of the accuracy and performance of the applications used for physiological and psychological assessment is crucial to ensure that the outcomes are reliable. The following subsections describe the various statistical and machine learning techniques that have been employed to measure error rates and model generalizability and to identify the best predictive features for mental fatigue and other physiological states.

4.6.1. Statistical and Analytical Techniques

Oweis et al. evaluated the application accuracy using one-way ANOVA, and further data analytics were employed [72]. The analysis [70] of the measurement error between the Biopac system and the Fitbit Versa 2 revealed a mean absolute error (MAE) of 5.87 (SD 6.57, 95% CI 3.57–8.16) beats per minute (bpm). This value is below the predefined clinically acceptable difference (CAD) of ±10 bpm, demonstrating the good accuracy of the Fitbit Versa 2 in heart rate monitoring.
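The agreement analysis described above can be reproduced in outline as follows. The sketch computes the mean absolute error between reference (Biopac-like) and wearable HR readings, together with its SD and a normal-approximation 95% confidence interval; the paired readings are invented for illustration and are not taken from ref. [70].

```python
import numpy as np

# Hypothetical paired HR readings (bpm): reference system vs. wrist-worn device.
reference_hr = np.array([72, 75, 80, 78, 90, 95, 88, 84, 76, 81])
wearable_hr = np.array([70, 79, 77, 84, 86, 99, 90, 80, 74, 85])

abs_err = np.abs(wearable_hr - reference_hr)
mae = abs_err.mean()
sd = abs_err.std(ddof=1)
sem = sd / np.sqrt(len(abs_err))
ci_low, ci_high = mae - 1.96 * sem, mae + 1.96 * sem  # normal-approximation 95% CI

cad = 10.0  # clinically acceptable difference of +/-10 bpm used in the study
print(f"MAE = {mae:.2f} bpm (SD {sd:.2f}, 95% CI {ci_low:.2f}-{ci_high:.2f})")
print("Within CAD:", mae < cad)
```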
The correlations between self-reported mental fatigue levels and the recorded physiological features were used by Ramírez-Moreno et al. to identify the best mental fatigue predictors. Three-class mental fatigue models were evaluated, and the best model obtained an accuracy of 88% using three features, the β/θ (C3) and α/θ (O2 and C3) ratios, derived from one minute of electroencephalography measurement [65].
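To illustrate how band-ratio features such as β/θ and α/θ can be derived, the sketch below estimates band powers from a single EEG channel with Welch's method and forms the two ratios; the synthetic signal, sampling rate, and channel naming are assumptions for illustration and do not reproduce the feature pipeline of ref. [65].

```python
import numpy as np
from scipy.signal import welch
from scipy.integrate import trapezoid

fs = 256                        # assumed sampling rate (Hz)
t = np.arange(0, 60, 1 / fs)    # one minute of signal, as in the cited study
rng = np.random.default_rng(0)
eeg = rng.normal(0, 1, t.size)  # synthetic stand-in for one EEG channel (e.g., C3)

freqs, psd = welch(eeg, fs=fs, nperseg=fs * 4)

def band_power(freqs, psd, lo, hi):
    """Integrate the power spectral density over a frequency band."""
    mask = (freqs >= lo) & (freqs < hi)
    return trapezoid(psd[mask], freqs[mask])

theta = band_power(freqs, psd, 4, 8)
alpha = band_power(freqs, psd, 8, 12)
beta = band_power(freqs, psd, 12, 30)

print(f"beta/theta = {beta / theta:.2f}, alpha/theta = {alpha / theta:.2f}")
```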
The study of Lin et al. aimed to predict the efficacy of group cognitive behavioral therapy (GCBT) for depression and anxiety using heart rate variability, with data recorded via smart wearable devices. The accuracy and performance of the proposed models were evaluated using statistical analyses such as repeated measures ANOVA (RANOVA), Spearman’s rank correlations, and multiple linear regression. The best predictive model for depression achieved R2 = 0.936 (p = 0.02), indicating strong predictive accuracy, and the best predictive model for anxiety achieved R2 = 0.954 (p = 0.002), demonstrating even stronger predictive performance. Moreover, ref. [73] used 14 stress measurement questions to calculate the PSS (PSS maximum = 56) and a one-way ANOVA to analyze the PSS results. The one-way ANOVA for total PSS indicated a significant difference between before and after use of the prototype (F = 33.47; p < 0.01), with mean scores of 33.25 before and 28.13 after the prototype. The findings of this study supported the proposal of new guidelines for smart devices that can help in stress relief.
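A minimal sketch of the kind of pre/post comparison of total PSS scores described above, using SciPy's one-way ANOVA, is given below; the score lists are invented placeholders, and treating the before and after measurements as independent groups mirrors the cited analysis but is an assumption here.

```python
from scipy.stats import f_oneway

# Hypothetical total PSS scores (maximum 56) before and after using the prototype.
pss_before = [36, 31, 34, 30, 35, 33, 37, 29]
pss_after = [29, 27, 30, 26, 28, 31, 25, 29]

f_stat, p_value = f_oneway(pss_before, pss_after)  # one-way ANOVA across the two conditions
print(f"Mean before = {sum(pss_before) / len(pss_before):.2f}, "
      f"mean after = {sum(pss_after) / len(pss_after):.2f}")
print(f"One-way ANOVA: F = {f_stat:.2f}, p = {p_value:.4f}")
```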
Statistical analysis of the collected data, using a paired t-test, Holm–Bonferroni correction, and Pearson’s correlations, was used by Chalmers et al. [67] for the accuracy and performance evaluation of a physiological algorithm for stress detection integrated into wearable technologies. This study demonstrated that the HRV feature can predict stress responses, but the model’s accuracy depends on baseline stress levels and individual differences. The results also suggest that future smartwatch-based stress detection algorithms should establish a personal baseline to improve individual prediction accuracy [67].
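The sketch below illustrates the combination of paired t-tests with a Holm–Bonferroni correction for several HRV-related features, using SciPy and statsmodels; the feature names and values are invented placeholders rather than those analyzed by Chalmers et al. [67].

```python
import numpy as np
from scipy.stats import ttest_rel
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(1)
n = 20  # hypothetical number of participants

# Invented baseline vs. stress-task values for three HRV-related features.
features = {
    "mean_HR": (rng.normal(70, 5, n), rng.normal(78, 5, n)),
    "RMSSD": (rng.normal(42, 8, n), rng.normal(35, 8, n)),
    "LF_HF": (rng.normal(1.8, 0.4, n), rng.normal(2.4, 0.4, n)),
}

p_values = []
for name, (baseline, stress) in features.items():
    t_stat, p = ttest_rel(baseline, stress)  # paired comparison within participants
    p_values.append(p)
    print(f"{name}: t = {t_stat:.2f}, raw p = {p:.4f}")

# Holm-Bonferroni correction controls the family-wise error rate across the three tests.
reject, p_adj, _, _ = multipletests(p_values, alpha=0.05, method="holm")
print("Adjusted p-values:", [f"{p:.4f}" for p in p_adj])
print("Significant after correction:", list(reject))
```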
Ponzo et al. [64] evaluated the BioBase mobile app and BioBeam wearable device in terms of their efficacy in reducing stress and anxiety among university students. The collected data, comprising both physical biomarkers and psychometrics, were evaluated using linear mixed models (LMMs), paired-sample t-tests, and effect size analysis (Cohen’s d). Significant reductions in anxiety and depression were found after 4 weeks in the intervention group, with sustained effects at the 6-week follow-up and no correlation between app engagement and outcome measures, suggesting that the effect was not solely driven by user interaction [64]. A similar approach was followed by Pakhomov et al. to investigate whether Fitbit fitness trackers can detect physiological responses to psychosocial stress in everyday life. Appropriate statistical controls were used to evaluate the model, and it was concluded that the Fitbit can effectively detect stress, including HR fluctuations, supporting its use in real-world stress monitoring [71].

4.6.2. Machine Learning Approaches

Various machine learning models (Figure 4), including random forest models, were applied to enhance the accuracy of physiological state prediction. The robustness of the models used in the detection of ecologically relevant stress states [68] was tested using different cross-validation techniques, such as the leave-one-beep-out (LOBO) and leave-one-trial-out (LOTO) methods, against a bootstrap error distribution. The performance of the LOBO models was compared to that of the leave-one-subject-out (LOSO) method to determine the generalizability of the machine learning models. The results indicated that all the models performed significantly better than chance at the individual level for all but one participant.
In Kang et al. [53], the proposed models’ performance in predicting valence, arousal, stress, and task disturbance was studied across different learning algorithms and oversampling settings. Overall, the performance of these predictive models surpassed the baseline model accuracy. Random forest and XGBoost models were trained to classify the states of valence, arousal, stress, and task disturbance into high and low categories. The model performance was evaluated using leave-one-subject-out (LOSO) cross-validation, ensuring that the predictions generalized to unseen participants, and the models performed better than the random forest baseline, except when predicting arousal [53]. The study demonstrates that multimodal data derived from smartphones and wearables can predict real-world emotional and cognitive states with reasonable accuracy.
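A minimal sketch of leave-one-subject-out cross-validation is shown below, implemented with scikit-learn's LeaveOneGroupOut and a random forest classifier; the feature matrix, labels, and subject identifiers are randomly generated placeholders rather than data from the reviewed studies, so the resulting accuracies simply hover around chance.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(42)

n_subjects, samples_per_subject, n_features = 10, 30, 8
X = rng.normal(size=(n_subjects * samples_per_subject, n_features))  # e.g., HR/EDA-derived features
y = rng.integers(0, 2, size=n_subjects * samples_per_subject)        # high vs. low stress labels
groups = np.repeat(np.arange(n_subjects), samples_per_subject)       # subject ID of each sample

# Each fold trains on all subjects except one and tests on the held-out subject,
# so the reported accuracy reflects generalization to unseen participants.
model = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(model, X, y, groups=groups, cv=LeaveOneGroupOut())

print("Per-subject accuracies:", np.round(scores, 2))
print(f"Mean LOSO accuracy: {scores.mean():.2f}")
```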
Data completion with diurnal regularizers (DCDR) and temporally hierarchical attention (THAN) were proposed by Jiang et al. to deal with data sparsity and precisely predict human stress levels. They show that diurnal behavioral patterns can significantly benefit missing data recovery, while user behaviors can be captured more effectively by exploiting the temporally hierarchical structure of sensor data. After a parameter sensitivity analysis demonstrating the robustness and effectiveness of the proposed approach, it is concluded that the model does not require data training to achieve satisfactory prediction performance, although parameter tuning is required to reach the best performance of both DCDR and THAN in sensor data completion and stress level prediction [69].
In the research work of [66], two machine learning classifiers, random forest and k-nearest neighbors, were implemented using the scikit-learn module of Python 3 on local computers. The proposed models classified stress into three levels: rest, moderate, and high. They achieved a classification accuracy of 99.98% using time–frequency domain features of EEG signals and an accuracy of 99.51% using EDA, HR, and ST signals.
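As a hedged illustration of the scikit-learn setup described in ref. [66], the sketch below trains random forest and k-nearest neighbors classifiers to separate three stress levels (rest, moderate, high) from a handful of physiological features; the data are synthetic and the feature choices are assumptions, so the near-perfect accuracies reported in the study should not be expected here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(7)
n_per_class = 100

def make_class(eda_mu, hr_mu, st_mu, label):
    """Generate synthetic EDA / HR / skin-temperature samples for one stress level."""
    X = np.column_stack([
        rng.normal(eda_mu, 0.5, n_per_class),  # electrodermal activity (a.u.)
        rng.normal(hr_mu, 4.0, n_per_class),   # heart rate (bpm)
        rng.normal(st_mu, 0.3, n_per_class),   # skin temperature (deg C)
    ])
    return X, np.full(n_per_class, label)

X0, y0 = make_class(2.0, 68, 33.5, 0)  # rest
X1, y1 = make_class(3.0, 78, 33.0, 1)  # moderate stress
X2, y2 = make_class(4.2, 90, 32.4, 2)  # high stress
X, y = np.vstack([X0, X1, X2]), np.concatenate([y0, y1, y2])

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("Random forest", RandomForestClassifier(random_state=0)),
                  ("k-NN", KNeighborsClassifier(n_neighbors=5))]:
    clf.fit(X_train, y_train)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"{name} accuracy: {acc:.3f}")
```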
Overall, the ecological momentary assessment (EMA) mood models exhibited superior performance in ref. [68] and physiology-based models classified beeps with an accuracy difference of only 3.85% compared to mood models. Combination models, integrating multiple data sources, yielded the highest accuracy.
Deep learning methodology and conventional prediction algorithms were compared by Saylam and Incel, 2024 [63]. The updated analyses for mental health multitasking [64] show that multitask learning (MTL) performances align closely with the baseline, with no significant improvement observed in the multitask scenario. The RF and XGBoost results have minimal differences, and substantial performance enhancements are achieved by incorporating the temporal aspect.

4.7. Results and Main Findings of Each Survey

Various studies have explored the accuracy and effectiveness of wearable devices, machine learning models, and statistical methods in monitoring physiological and psychological states. Oweis et al. (2018) [72] used one-way ANOVA and data analytics to evaluate the application accuracy of galvanic skin response (GSR) measurements of students under preset conditions over a whole semester, using a wearable smartwatch. The findings of this work show significant correlations between the GSR values and the activity level, results that should be confirmed in future work because this is the first study of its kind. Similarly, Gagnon et al. (2022) [70] assessed the measurement error of the Fitbit Versa 2 compared to the Biopac system, finding a mean absolute error of 5.87 bpm, which is within the clinically acceptable difference, indicating good heart rate monitoring accuracy. These results support the use of the Fitbit Versa 2 in capturing short-term stress variations. Notwithstanding Fitbit devices’ level of accuracy in HR measurement for stress recording, there is poor agreement with the ECG gold standard, so the Fitbit cannot replace ECG instruments when precision is of the utmost importance.
The reviewed studies indicate that machine learning models play a crucial role in mental fatigue and stress detection. Ramírez-Moreno et al. (2021) [65] employed EEG-based features to develop three-class mental fatigue models, achieving an 88% prediction accuracy. This pilot study demonstrates the viability and potential of short calibration procedures and intersubject classifiers in mental fatigue modeling. These results support the use of wearable devices in developing tools that aim to enhance the well-being of workers and students, as well as to improve daily life activities. Lin et al. [33] investigated the use of smart wearables in predicting the efficacy of group cognitive behavioral therapy (GCBT) for depression and anxiety, with predictive models achieving strong accuracy (R2 = 0.936 for depression and R2 = 0.954 for anxiety). The findings of this study show that HRV may be a useful predictor of GCBT treatment efficacy and that identifying predictors of treatment response can help personalize treatment to improve outcomes for individuals with depression and anxiety.
Chalmers et al. (2022) [67] used statistical methods such as paired t-tests, Holm–Bonferroni correction, and Pearson’s correlations to validate a physiological algorithm for stress detection in wearable technology. The findings emphasize that HRV supports stress detection, with the collected data depending on the baseline stress levels and individual differences. It is suggested that future models should be incorporated with a personal baseline for accuracy improvement.
Moreover, Ponzo et al. (2020) [64] evaluated the BioBase mobile app and BioBeam wearable, confirming significant reductions in anxiety and depression after four weeks, with sustained effects at six weeks. Results from this study demonstrate that digital interventions are effective in lowering self-reported anxiety and enhancing perceived well-being among UK university students. The findings also indicate that digital mental health interventions could offer an innovative approach to the stress and anxiety management of students, either as a standalone solution or in combination with existing therapeutic methods. Similarly, Pakhomov et al. (2020) [71] confirmed the Fitbit’s capability of detecting responses to stress in real-life settings. These findings align with previous laboratory research and suggest that consumer wearable fitness trackers could be a valuable tool for monitoring exposure to psychological stressors in real-world settings.
Machine learning techniques further enhance physiological state predictions. Tutunji et al. (2023) [68] applied various models, including random forests, testing their robustness using cross-validation techniques like leave-one-beep-out (LOBO) and leave-one-trial-out (LOTO). The results demonstrate that all the models performed significantly better than chance for nearly all of the participants. Kang et al. (2023) [53] utilized multimodal data from smartphones and wearables, applying random forest and XGBoost models to predict valence, arousal, stress, and task disturbances, outperforming the baseline accuracy in most cases. These studies highlight the potential of wearable biosensors for monitoring stress related to an existing mental disorder. They emphasize the importance of psychological context in the interpretation of physiological arousal, such as responses that can be linked to both positive and negative experiences. Additionally, the findings support a personalized approach, suggesting that stress is most accurately detected when compared to the individual’s own baseline data.
Jiang et al. (2019) [69] proposed data completion with diurnal regularizers (DCDR) and temporally hierarchical attention (THAN) for the prediction of human stress levels despite data sparsity. Their findings highlighted the importance of diurnal behavioral patterns in missing data recovery. Chandra and Sethia (2024) [66] implemented machine learning classifiers, such as random forest and k-nearest neighbors, for stress classification, achieving near-perfect accuracies of 99.98% using EEG signals and 99.51% using EDA, HR, and SKT signals. Furthermore, the proposed models outperform all previous studies on stress classification using EEG, EDA, HR, and SKT signals. This study is particularly innovative in demonstrating the effectiveness of wearable devices in developing accurate stress classification models, paving the way for real-time stress monitoring systems, a conclusion linked to the results of Jiang et al. (2019) [69], which reveal the robustness and effectiveness of the proposed models. Meanwhile, Saylam and Incel (2024) [63] compared deep learning with conventional prediction algorithms in a mental health multitask scenario. They concluded that while multitask learning did not significantly outperform single-task models, the incorporation of temporal aspects substantially improves results.
Overall, ecological momentary assessment (EMA)-based mood models exhibited superior performance according to ref. [68], while physiology-based models had only a slight accuracy gap (3.85%). Combining multiple data sources yielded the highest accuracy, emphasizing the potential of multimodal approaches in real-world psychological and physiological state monitoring.

5. Discussion

The findings of this systematic literature review highlight the increasing role of wearable AI technology, particularly smartwatches, in the detection and management of burnout symptoms among students. The reviewed studies emphasize the importance of physiological monitoring, AI-driven predictive models, and self-reported scales in assessing mental well-being.
Regarding the first research question, stress was the most frequently assessed symptom across the reviewed studies using wearable AI devices, followed by anxiety and depression. Stress is a major issue for students, as they have to face a variety of academic, social, and personal challenges. Moreover, the studies indicated that stress is a significant predictor of physiological distress and can manifest as anxiety and depressive symptoms [74]. Indeed, during the COVID-19 outbreak, many studies emerged, treating the pandemic as a global emergency alert for mental health monitoring [75,76,77]. Many studies focused on student populations and how their academic performance is influenced under high-stress circumstances [74]. Academic performance is inextricably linked with academic burnout; consequently, burnout syndrome is a result of experiencing high levels of workplace stress and is influenced by multiple factors [11,13,76,77,78]. Academic burnout can be regarded as an extension of career burnout in students who are in a learning environment, attending lectures and performing structured activities [37,79,80,81]. Academic stress also includes mental and emotional pressure, tension, and anxiety, with unquestioned significance in burnout syndrome [82]. Student burnout, as a phenomenon, can influence students’ lives and relationships later on, including academic dropout, future development, career motivation, and the quality of relationships in the workplace [11,13,80].
According to the findings related to the second research question, the most popular AI wearable device was the Fitbit, which plays an important role in anxiety, depression, and stress tracking. Lately, wearable sensors have garnered significant attention due to their vast potential in health monitoring [83,84,85,86]. Biosensor technology provides a variety of products covering different categories of usage, including strain sensors, chemical and biochemical wearable sensors, and optical wearable sensors [84]. Such wearables are of significant interest for multimodal signals, including HR, HRV, EEG, and EMG, for comprehensive and acute assessment of mental health [87]. Nonetheless, ref. [87] concluded that EEG is the most promising physiological signal for evaluating emotions and mental states. The results related to research question three of our study reveal that heart rate (HR) and heart rate variability (HRV) are the most commonly used biomarkers for tracking stress, the most frequent burnout-related symptom. The consistent use of these physiological signals suggests their viability as indicators of stress and mental fatigue [45], while focusing on physical and mental monitoring encourages supportive educational settings [88].
This review also underscores the significance of self-reported mental health assessments, which are frequently used alongside physiological data for validation. Some instruments such as the Perceived Stress Scale (PSS) and the State–Trait Anxiety Inventory (STAI) remain widely utilized, reinforcing the need for a hybrid approach that combines subjective self-assessments with objective physiological measurements [82,89,90]. However, reliance on self-reported data introduces biases related to participants’ perceptions and reporting tendencies, which could impact the overall validity of results.
While wearable biosensors offer many promising opportunities for physiological and emotional monitoring, several critical challenges are addressed in research question four. Issues such as data noise, device malfunctions, and participant non-compliance compromise the reliability and the validity of the collected data, especially in real-world conditions. Additionally, the variability in individual physiological responses underscores the importance of personalization in algorithm design, though this brings many ethical concerns regarding fairness and potential bias [80]. Small sample sizes and class imbalances further limit the generalizability of findings, highlighting the need for larger, more diverse datasets. Lastly, the psychological impact on participants in terms of mental well-being must be carefully managed in future studies. Addressing these challenges is essential for advancing the responsible use of wearable technologies in research and clinical applications.
Because all the reviewed papers comply with the ethical requirements of research conducted on humans, all researchers mention the potential concern of privacy assurance. Additionally, the results related to research question five reveal that participants’ privacy concerns relate to sensitive physiological data like heart rate, skin conductance, stress levels, and body motion [68,70,91]. Data anonymization, de-identification, and informed consent are some of the measures for privacy protection. The greatest threat to privacy is via data (usernames, passwords, personal information) [92]. The threat of hacking increases the security risk, and the use of encryption software is a necessity. Moreover, ensuring data security, participant privacy, and informed consent is paramount, given the sensitive nature of mental health data [67]. Sensor technology records day-to-day activity, which may be perceived as anything from a mere inconvenience to something more emotionally troubling, such as a loss of independence [93]. Based on the definition in [93], telemonitoring applications may cause cumulative obtrusiveness based on a number of characteristics or effects associated with the technology, or on one characteristic or effect that is especially important or prominent to the user. As clearly mentioned in [93], technology is subjectively assigned meaning by each user, so monitoring applications enter people’s private spaces with different psychological dynamics. Balancing risks with the potential benefits of advanced information technology requires ethically astute researchers who can address and distinguish the challenges that might arise from innovative sensor technology while improving the lives of patients and families [89]. Future steps must incorporate privacy-preserving machine learning techniques, such as federated learning, where data remain on a user’s device rather than being transferred to external servers [80]. Ethical compliance must also be ensured by transparent data governance policies and user consent protocols.
Research question six analyzes the unique role of AI in burnout detection and prediction [94]. According to this section, statistical techniques like linear mixed models (LMMs) and ANOVA are used to evaluate applications’ accuracy. Notably, the results refer to the machine learning algorithms proposed for stress and burnout detection. RF, MTL, and XGBoost show small differences in accuracy and performance, and such machine learning models have been used by a variety of researchers as predictive models for a variety of health disturbances. Further machine learning algorithms, such as linear regression, AdaBoost, decision tree, support vector machine, k-nearest neighbor, and the naïve Bayes classifier, have contributed to disease classification for Alzheimer’s disease, Parkinson’s disease, diabetes, breast cancer, tumor classification, chronic kidney disease, coronary heart disease, COVID-19, brain tumor, pediatric colonic inflammatory bowel disease, hypertension, melanoma skin cancer, liver cancer, and hepatocellular carcinoma [66,95,96,97,98,99,100,101]. In the last decade, machine learning and deep learning have grown in prominence in disease diagnosis. With the increasing manifestation of different kinds of diseases, machine learning algorithms address the emerging need to exploit massive datasets and extract patient outcomes in terms of diagnosis and precision [95]. Addressing these issues is required for the development of standardized AI frameworks that ensure equitable and unbiased outcomes across diverse populations [102]. Moving forward, researchers must establish uniform validation criteria, standardized datasets, and benchmarking protocols for stress detection and mental health prediction models.
According to the last research question, the Fitbit Versa 2 shows good short-term heart rate monitoring accuracy (mean absolute error of 5.87 bpm) but cannot replace ECG when precision is critical. On the other hand, wearables like BioBase, BioBeam, and the Fitbit are effective for real-life stress and anxiety detection and can therefore enhance well-being in university students. ML models accurately predict stress, arousal, valence, and task disturbances with data collected from smartphones and wearable devices as well. EEG-based models achieved up to 99.98% accuracy for stress classification, significantly outperforming previous studies [103,104,105]. Multimodal and personalized models like DCDR and THAN seem to enhance accuracy in disease prediction [83,106,107].
According to our study, despite all the challenges discussed, wearable AI technology offers a promising, non-invasive means of identifying early signs of burnout in student populations. Future work should prioritize the development of personalized, multimodal, and real-world adaptive systems. To achieve this, researchers must tune AI models for sensor accuracy, develop adaptive machine learning models, standardize validation methods, and enhance the applicability and reliability of findings. Interdisciplinary collaboration between mental health professionals, policymakers, AI researchers, and wearable technology developers, taking the UN perspectives into account, is essential to ensure that these tools are effectively integrated into academic and clinical settings, with ethical and security considerations for all (Figure 5).

6. Conclusions

In conclusion, while wearable AI devices present a transformative approach to burnout detection and stress management, their implementation must be accompanied by rigorous validation, ethical safeguards, and continuous refinement to maximize their potential benefits.
To our knowledge, this systematic literature review is one of the first in its field. Despite the fact that students’ mental health is crucial, it does not often receive the attention it deserves. Enhancing mental health can also significantly improve physical health and well-being. As a result, universities and policy stakeholders should invest more in mental health support services. This is why the current review underscores the potential of wearable AI devices in the early detection and management of burnout symptoms among students. The integration of physiological monitoring, AI predictive models, and self-reported assessments presents a comprehensive approach to understanding mental health trends in academic settings. However, several limitations, including sensor accuracy, data reliability, ethical considerations, and model generalizability, must be addressed to optimize the effectiveness of these technologies.
While existing studies have focused on stress detection in students, there remains a lack of research on whether burnout symptoms can be detected during academic studies. Smartwatches offer a variety of affordances in relation to educational settings, and researchers have used many wearable devices for educational interventions, monitoring, and self-monitoring [86]. With smartwatch applications, learners with intellectual and developmental disabilities are supported in university classrooms, increasing their physical activity. Moreover, future burnout implications can be predicted, study behaviors can be captured during new curricula programs, and learners’ experiences can be predicted [108]. Therefore, wearable monitoring contributes to building an academic system that helps students self-regulate their learning process, prevent stress, anxiety, and burnout implications, and integrate work into a healthy learning ecosystem [109]. Additionally, according to the Sustainable Development Agenda of the United Nations [110], there is a global demand to transform education due to economic, social, and environmental aspects [111] (Figure 5). A well-balanced distribution of academic workload, combined with clearly defined take-home lessons, can play a significant role in preventing student burnout by reducing stress and enhancing learning clarity. These goals require students’ involvement in the academic landscape and the building of affordable academic workplaces [112] for both students and educators [113].
Moving forward, future studies should focus on refining AI algorithms, enhancing wearable sensor capabilities, and ensuring ethical safeguards for data privacy and participants’ well-being. Collaborative efforts among researchers, healthcare professionals, and technology developers are crucial for advancing the application of AI-driven wearable devices in mental health monitoring. Indeed, the trajectory of burnout remains uncertain and debated, and it is essential to consider its symptoms and the appropriate recommendations. Depending on the severity of each case, early integrated interventions may prevent the onset of this phenomenon [74,80]. By overcoming current challenges, wearable AI technology can become a vital tool in promoting student well-being and preventing the long-term mental health consequences associated with burnout.

Author Contributions

Conceptualization, P.L. and I.M.; methodology, P.L.; formal analysis, P.L.; investigation, P.L.; resources, P.L.; data curation, I.M.; writing—original draft preparation, P.L.; writing—review and editing, I.M.; visualization, I.M.; supervision, I.M.; project administration, I.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by AI4Work EU Project GA No 101135990.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No new data were created or analyzed in this study. Data sharing is not applicable to this article.

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
AI: Artificial Intelligence
HR: Heart Rate
HRV: Heart Rate Variability
EDA: Electrodermal Activity
EMA: Ecological Momentary Assessment
EPA: Ecological Physiological Assessment
ECG: Electrocardiogram
EEG: Electroencephalogram
ST: Skin Temperature
CGBT: Cognitive Behavior Therapy
GST: Galvanic Skin Temperature
PSS: Perceived Stress Scale
SRI: Stress Response Inventory
STAI: State–Trait Anxiety Inventory
HAMD: Hamilton Depression Scale
BDI: Beck Depression Inventory Scale
DSM-IV: Diagnostic and Statistical Manual of Mental Disorders
LMMs: Linear Mixed Methods
LOBO: Leave-One-Beep-Out
LOSO: Leave-One-Subject-Out
LOTO: Leave-One-Trial-Out
RF: Random Forest
DCDR: Data Completion with Diurnal Regularizers
THAN: Temporally Hierarchical Attention
MTL: Multitask Learning

References

  1. WHO|Basic Documents. Available online: https://apps.who.int/gb/bd/ (accessed on 29 August 2024).
  2. Di Mario, S.; Rollo, E.; Gabellini, S.; Filomeno, L. How Stress and Burnout Impact the Quality of Life Amongst Healthcare Students: An Integrative Review of the Literature. Teach. Learn. Nurs. 2024, 19, 315–323. [Google Scholar] [CrossRef]
  3. Silva, E.; Aguiar, J.; Reis, L.P.; Sá, J.O.E.; Gonçalves, J.; Carvalho, V. Stress among Portuguese Medical Students: The EuStress Solution. J. Med. Syst. 2020, 44, 45. [Google Scholar] [CrossRef]
  4. Gabola, P.; Meylan, N.; Hascoët, M.; De Stasio, S.; Fiorilli, C. Adolescents’ School Burnout: A Comparative Study between Italy and Switzerland. Eur. J. Investig. Health Psychol. Educ. 2021, 11, 849–859. [Google Scholar] [CrossRef] [PubMed]
  5. Treluyer, L.; Tourneux, P. Burnout among Paediatric Residents during the COVID-19 Outbreak in France. Eur. J. Pediatr. 2021, 180, 627–633. [Google Scholar] [CrossRef]
  6. Walburg, V. Burnout among high school students: A literature review. Child. Youth Serv. Rev. 2014, 42, 28–33. [Google Scholar] [CrossRef]
  7. Beck, M.S.; Fjorback, L.O.; Juul, L. Associations between Mental Health and Sociodemographic Characteristics among Schoolchildren. A Cross-Sectional Survey in Denmark 2019. Scand. J. Public Health 2022, 50, 463–470. [Google Scholar] [CrossRef]
  8. Govorova, E.; Benítez, I.; Muñiz, J. How Schools Affect Student Well-Being: A Cross-Cultural Approach in 35 OECD Countries. Front. Psychol. 2020, 11, 431. [Google Scholar] [CrossRef]
  9. Fiorilli, C.; De Stasio, S.; Di Chiacchio, C.; Pepe, A.; Salmela-Aro, K. School Burnout, Depressive Symptoms and Engagement: Their Combined Effect on Student Achievement. Int. J. Educ. Res. 2017, 84, 1–12. [Google Scholar] [CrossRef]
  10. Liu, W.; Zhang, R.; Wang, H.; Rule, A.; Wang, M.; Abbey, C.; Singh, M.K.; Rozelle, S.; She, X.; Tong, L. Association between Anxiety, Depression Symptoms, and Academic Burnout among Chinese Students: The Mediating Role of Resilience and Self-Efficacy. BMC Psychol. 2024, 12, 335. [Google Scholar] [CrossRef]
  11. Sarfika, R.; Azzahra, W.; Ananda, Y.; Saifudin, I.M.; Moh., Y.; Abdullah, K.L. Academic Burnout among Nursing Students: The Role of Stress, Depression, and Anxiety within the Demand Control Model. Teach. Learn. Nurs. 2025, in press. [Google Scholar] [CrossRef]
  12. Kong, L.-N.; Yao, Y.; Chen, S.-Z.; Zhu, J.-L. Prevalence and Associated Factors of Burnout among Nursing Students: A Systematic Review and Meta-Analysis. Nurse Educ. Today 2023, 121, 105706. [Google Scholar] [CrossRef]
  13. Al-Awad, F.A. Academic Burnout, Stress, and the Role of Resilience in a Sample of Saudi Arabian Medical Students. Med. Arch. Sarajevo Bosnia Herzeg. 2024, 78, 39–43. [Google Scholar] [CrossRef]
  14. Liu, X.; Zhang, L.; Wu, G.; Yang, R.; Liang, Y. The Longitudinal Relationship between Sleep Problems and School Burnout in Adolescents: A Cross-Lagged Panel Analysis. J. Adolesc. 2021, 88, 14–24. [Google Scholar] [CrossRef] [PubMed]
  15. Chirkowska-Smolak, T.; Piorunek, M.; Górecki, T.; Garbacik, Ż.; Drabik-Podgórna, V.; Kławsiuć-Zduńczyk, A. Academic Burnout of Polish Students: A Latent Profile Analysis. Int. J. Environ. Res. Public. Health 2023, 20, 4828. [Google Scholar] [CrossRef] [PubMed]
  16. Koulouris, D.; Menychtas, A.; Maglogiannis, I. An IoT-Enabled Platform for the Assessment of Physical and Mental Activities Utilizing Augmented Reality Exergaming. Sensors 2022, 22, 3181. [Google Scholar] [CrossRef]
  17. Abd-Alrazaq, A.; Alajlani, M.; Ahmad, R.; AlSaad, R.; Aziz, S.; Ahmed, A.; Alsahli, M.; Damseh, R.; Sheikh, J. The Performance of Wearable AI in Detecting Stress Among Students: Systematic Review and Meta-Analysis. J. Med. Internet Res. 2024, 26, e52622. [Google Scholar] [CrossRef]
  18. Agarwal, A.K.; Gonzales, R.; Scott, K.; Merchant, R. Investigating the Feasibility of Using a Wearable Device to Measure Physiologic Health Data in Emergency Nurses and Residents: Observational Cohort Study. JMIR Form. Res. 2024, 8, e51569. [Google Scholar] [CrossRef]
  19. Mason, R.; Godfrey, A.; Barry, G.; Stuart, S. Wearables for Running Gait Analysis: A Study Protocol. PLoS ONE 2023, 18, e0291289. [Google Scholar] [CrossRef]
  20. Page, M.J.; McKenzie, J.E.; Bossuyt, P.M.; Boutron, I.; Hoffmann, T.C.; Mulrow, C.D.; Shamseer, L.; Tetzlaff, J.M.; Akl, E.A.; Brennan, S.E.; et al. The PRISMA 2020 Statement: An Updated Guideline for Reporting Systematic Reviews. BMJ 2021, 372, n71. [Google Scholar] [CrossRef]
  21. Rethlefsen, M.L.; Kirtley, S.; Waffenschmidt, S.; Ayala, A.P.; Moher, D.; Page, M.J.; Koffel, J.B.; PRISMA-S Group. PRISMA-S: An Extension to the PRISMA Statement for Reporting Literature Searches in Systematic Reviews. Syst. Rev. 2021, 10, 39. [Google Scholar] [CrossRef]
  22. Amir-Behghadami, M.; Janati, A. Reporting Systematic Review in Accordance with the PRISMA Statement Guidelines: An Emphasis on Methodological Quality. Disaster Med. Public Health Prep. 2021, 15, 544–545. [Google Scholar] [CrossRef]
  23. Zotero|Your Personal Research Assistant. Available online: https://www.zotero.org/ (accessed on 4 April 2025).
  24. Morgan, D.E. Zotero as a Teaching Tool for Independent Study Courses, Honors Contracts, and Undergraduate Research Mentoring. J. Microbiol. Biol. Educ. 2024, 25, e0013224. [Google Scholar] [CrossRef] [PubMed]
  25. Islam, T.Z.; Wu Liang, P.; Sweeney, F.; Pragner, C.; Thiagarajan, J.J.; Sharmin, M.; Ahmed, S. College Life Is Hard!—Shedding Light on Stress Prediction for Autistic College Students Using Data-Driven Analysis. In Proceedings of the 2021 IEEE 45th Annual Computers, Software, and Applications Conference (COMPSAC), Madrid, Spain, 12–16 July 2021; pp. 428–437. [Google Scholar] [CrossRef]
  26. Nguyen, J.; Cardy, R.E.; Anagnostou, E.; Brian, J.; Kushki, A. Examining the Effect of a Wearable, Anxiety Detection Technology on Improving the Awareness of Anxiety Signs in Autism Spectrum Disorder: A Pilot Randomized Controlled Trial. Mol. Autism 2021, 12, 72. [Google Scholar] [CrossRef]
  27. Van Laarhoven, T.R.; Johnson, J.W.; Andzik, N.R.; Fernandes, L.; Ackerman, M.; Wheeler, M.; Melody, K.; Cornell, V.; Ward, G.; Kerfoot, H. Using Wearable Biosensor Technology in Behavioral Assessment for Individuals with Autism Spectrum Disorders and Intellectual Disabilities Who Experience Anxiety. Adv. Neurodev. Disord. 2021, 5, 156–169. [Google Scholar] [CrossRef]
  28. Conderman, G.; Van Laarhoven, T.; Johnson, J.; Liberty, L. Wearable Technologies for Students with Disabilities. Support Learn. 2021, 36, 664–677. [Google Scholar] [CrossRef]
  29. Thammasan, N.; Stuldreher, I.; Schreuders, E.; Giletta, M.; Brouwer, A.-M. A Usability Study of Physiological Measurement in School Using Wearable Sensors. Sensors 2020, 20, 5380. [Google Scholar] [CrossRef]
  30. Lim, K.Y.T.; Nguyen Thien, M.T.; Nguyen Duc, M.A.; Posada-Quintero, H.F. Application of DIY Electrodermal Activity Wristband in Detecting Stress and Affective Responses of Students. Bioengineering 2024, 11, 291. [Google Scholar] [CrossRef]
  31. Harvey, R.H.; Peper, E.; Mason, L.; Joy, M. Effect of Posture Feedback Training on Health. Appl. Psychophysiol. Biofeedback 2020, 45, 59–65. [Google Scholar] [CrossRef]
  32. Chiovato, A.; Demarzo, M.; Notargiacomo, P. Evaluation of Mindfulness State for the Students Using a Wearable Measurement System. J. Med. Biol. Eng. 2021, 41, 690–703. [Google Scholar] [CrossRef]
  33. Lin, B.; Prickett, C.; Woltering, S. Feasibility of Using a Biofeedback Device in Mindfulness Training-a Pilot Randomized Controlled Trial. Pilot Feasibility Stud. 2021, 7, 84. [Google Scholar] [CrossRef]
  34. Jiao, Y.; Wang, X.; Liu, C.; Du, G.; Zhao, L.; Dong, H.; Zhao, S.; Liu, Y. Feasibility Study for Detection of Mental Stress and Depression Using Pulse Rate Variability Metrics via Various Durations. Biomed. Signal Process. Control 2023, 79, 104145. [Google Scholar] [CrossRef]
  35. Radhakrishnan, S.; Duvvuru, A.; Kamarthi, S.V. Investigating Discrete Event Simulation Method to Assess the Effectiveness of Wearable Health Monitoring Devices. Procedia Econ. Financ. 2014, 11, 838–856. [Google Scholar] [CrossRef]
  36. Nelson, B.W.; Harvie, H.M.K.; Jain, B.; Knight, E.L.; Roos, L.E.; Giuliano, R.J. Smartphone Photoplethysmography Pulse Rate Covaries with Stress and Anxiety During a Digital Acute Social Stressor. Psychosom. Med. 2023, 85, 577–584. [Google Scholar] [CrossRef] [PubMed]
  37. Mocny-Pachońska, K.; Doniec, R.J.; Sieciński, S.; Piaseczna, N.J.; Pachoński, M.; Tkacz, E.J. The Relationship between Stress Levels Measured by a Questionnaire and the Data Obtained by Smart Glasses and Finger Pulse Oximeters among Polish Dental Students. Appl. Sci. 2021, 11, 8648. [Google Scholar] [CrossRef]
  38. Ma, C.; Xu, H.; Yan, M.; Huang, J.; Yan, W.; Lan, K.; Wang, J.; Zhang, Z. Longitudinal Changes and Recovery in Heart Rate Variability of Young Healthy Subjects When Exposure to a Hypobaric Hypoxic Environment. Front. Physiol. 2022, 12, 688921. [Google Scholar] [CrossRef]
  39. Wu, W.; Pirbhulal, S.; Zhang, H.; Mukhopadhyay, S.C. Quantitative Assessment for Self-Tracking of Acute Stress Based on Triangulation Principle in a Wearable Sensor System. IEEE J. Biomed. Health Inform. 2019, 23, 703–713. [Google Scholar] [CrossRef]
  40. Abromavičius, V.; Serackis, A.; Katkevičius, A.; Kazlauskas, M.; Sledevic, T. Prediction of Exam Scores Using a Multi-Sensor Approach for Wearable Exam Stress Dataset with Uniform Preprocessing. Technol. Health Care 2023, 31, 2499–2511. [Google Scholar] [CrossRef]
  41. Choi, A.; Ooi, A.; Lottridge, D. Digital Phenotyping for Stress, Anxiety, and Mild Depression: Systematic Literature Review. JMIR mHealth uHealth 2024, 12, e40689. [Google Scholar] [CrossRef]
  42. Odaka, T.; Misaki, D. A proposal for the stress assessment of online education based on the use of a wearable device. J. Res. Appl. Mech. Eng. 2021, 9, 1–9. [Google Scholar] [CrossRef]
  43. De Arriba Perez, F.; Santos-Gago, J.M.; Caeiro-Rodriguez, M.; Fernandez Iglesias, M.J. Evaluation of Commercial-Off-The-Shelf Wrist Wearables to Estimate Stress on Students. JOVE-J. Vis. Exp. 2018, 136, e57590. [Google Scholar] [CrossRef]
  44. Johnson, J.; Conderman, G.; Van Laarhoven, T.; Liberty, L. Wearable Technologies: A New Way to Address Student Anxiety. Kappa Delta Pi Rec. 2022, 58, 124–129. [Google Scholar] [CrossRef]
  45. Price, M.; Hidalgo, J.E.; Bird, Y.M.; Bloomfield, L.S.P.; Buck, C.; Cerutti, J.; Dodds, P.S.; Fudolig, M.I.; Gehman, R.; Hickok, M.; et al. A Large Clinical Trial to Improve Well-Being during the Transition to College Using Wearables: The Lived Experiences Measured Using Rings Study. Contemp. Clin. Trials 2023, 133, 107338. [Google Scholar] [CrossRef] [PubMed]
  46. Takpor, T.O.; Atayero, A.A. Integrating Internet of Things and EHealth Solutions for Students’ Healthcare. In World Congress on Engineering, WCE 2015; Ao, S.I., Gelman, L., Hukins, D.W.L., Hunter, A., Korsunsky, A.M., Eds.; Lecture Notes in Engineering and Computer Science; International Association Engineers-IAENG: Hong Kong, China, 2015; Volume I, pp. 265–268. [Google Scholar]
  47. Bopp, T.; Vadeboncoueur, J.D. “It makes me want to take more steps”: Racially and economically marginalized youth experiences with and perceptions of Fitbit Zips® in a sport-based youth development program. J. Sport Dev. 2021, 9, 54–57. Available online: https://jsfd.org/2021/10/01/it-makes-me-want-to-take-more-steps-racially-and-economically-marginalized-youth-experiences-with-and-perceptions-of-fitbit-zips-in-a-sport-based-youth-development-program/ (accessed on 30 August 2024).
  48. Shui, X.; Chen, Y.; Hu, X.; Wang, F.; Zhang, D. Personality in Daily Life: Multi-Situational Physiological Signals Reflect Big-Five Personality Traits. IEEE J. Biomed. Health Inform. 2023, 27, 2853–2863. [Google Scholar] [CrossRef]
  49. Yen, H.-Y. Smart Wearable Devices as a Psychological Intervention for Healthy Lifestyle and Quality of Life: A Randomized Controlled Trial. Qual. Life Res. 2021, 30, 791–802. [Google Scholar] [CrossRef]
  50. Kim, S.Y.; Kim, D.H.; Kim, M.J.; Ko, H.J.; Jeong, O.R. XAI-Based Clinical Decision Support Systems: A Systematic Review. Appl. Sci. 2024, 14, 6638. [Google Scholar] [CrossRef]
  51. Wack, M.; Coulet, A.; Burgun, A.; Rance, B. Enhancing Clinical Data Warehouses with Provenance and Large File Management: The gitOmmix Approach for Clinical Omics Data. arXiv 2024, arXiv:2409.03288. [Google Scholar] [CrossRef]
  52. Tang, X.; Upadyaya, K.; Salmela-Aro, K. School Burnout and Psychosocial Problems among Adolescents: Grit as a Resilience Factor. J. Adolesc. 2021, 86, 77–89. [Google Scholar] [CrossRef]
  53. Kang, S.; Choi, W.; Park, C.Y.; Cha, N.; Kim, A.; Khandoker, A.H.; Hadjileontiadis, L.; Kim, H.; Jeong, Y.; Lee, U. K-EmoPhone: A Mobile and Wearable Dataset with In-Situ Emotion, Stress, and Attention Labels. Sci. Data 2023, 10, 351. [Google Scholar] [CrossRef]
  54. Zenodo: Open-access repository. Available online: https://about.zenodo.org/projects/ (accessed on 12 April 2024).
  55. Figshare—Credit for All Your Research. Available online: https://figshare.com/ (accessed on 12 April 2024).
  56. Dryad|Publish and Preserve Your Data. Available online: https://datadryad.org/ (accessed on 12 April 2024).
  57. OSF. Available online: https://osf.io/ (accessed on 12 April 2024).
  58. PhysioNet Databases. Available online: https://physionet.org/about/database/ (accessed on 12 April 2024).
  59. Dataverse. Available online: https://www.dataverse.gr/ (accessed on 12 April 2024).
  60. OpenNeuro. Available online: https://openneuro.org/ (accessed on 12 April 2024).
  61. European Open Science Cloud (EOSC)—European Commission. Available online: https://research-and-innovation.ec.europa.eu/strategy/strategy-research-and-innovation/our-digital-future/open-science/european-open-science-cloud-eosc_en (accessed on 12 April 2024).
  62. Find Open Datasets and Machine Learning Projects|Kaggle. Available online: https://www.kaggle.com/datasets (accessed on 12 April 2024).
  63. Saylam, B.; Incel, O.D. Multitask Learning for Mental Health: Depression, Anxiety, Stress (DAS) Using Wearables. Diagnostics 2024, 14, 501. [Google Scholar] [CrossRef]
  64. Ponzo, S.; Morelli, D.; Kawadler, J.M.; Hemmings, N.R.; Bird, G.; Plans, D. Efficacy of the Digital Therapeutic Mobile App BioBase to Reduce Stress and Improve Mental Well-Being Among University Students: Randomized Controlled Trial. JMIR mHealth uHealth 2020, 8, e17767. [Google Scholar] [CrossRef] [PubMed]
  65. Ramírez-Moreno, M.A.; Carrillo-Tijerina, P.; Candela-Leal, M.O.; Alanis-Espinosa, M.; Tudón-Martínez, J.C.; Roman-Flores, A.; Ramírez-Mendoza, R.A.; Lozoya-Santos, J.J. Evaluation of a Fast Test Based on Biometric Signals to Assess Mental Fatigue at the Workplace—A Pilot Study. Int. J. Environ. Res. Public. Health 2021, 18, 11891. [Google Scholar] [CrossRef]
  66. Chandra, V.; Sethia, D. Machine Learning-Based Stress Classification System Using Wearable Sensor Devices. IAES Int. J. Artif. Intell. 2024, 13, 337–347. [Google Scholar] [CrossRef]
  67. Chalmers, T.; Hickey, B.A.; Newton, P.; Lin, C.-T.; Sibbritt, D.; McLachlan, C.S.; Clifton-Bligh, R.; Morley, J.; Lal, S. Stress Watch: The Use of Heart Rate and Heart Rate Variability to Detect Stress: A Pilot Study Using Smart Watch Wearables. Sensors 2022, 22, 151. [Google Scholar] [CrossRef]
  68. Tutunji, R.; Kogias, N.; Kapteijns, B.; Krentz, M.; Krause, F.; Vassena, E.; Hermans, E.J. Detecting Prolonged Stress in Real Life Using Wearable Biosensors and Ecological Momentary Assessments: Naturalistic Experimental Study. J. Med. Internet Res. 2023, 25, e39995. [Google Scholar] [CrossRef]
  69. Jiang, J.-Y.; Chao, Z.; Bertozzi, A.L.; Wang, W.; Young, S.D.; Needell, D. Learning to Predict Human Stress Level with Incomplete Sensor Data from Wearable Devices. In Proceedings of the 28th ACM International Conference on Information & Knowledge Management (CIKM ’19), Beijing, China, 3–7 November 2019; Association Computing Machinery: New York, NY, USA, 2019; pp. 2773–2781. [Google Scholar] [CrossRef]
  70. Gagnon, J.; Khau, M.; Lavoie-Hudon, L.; Vachon, F.; Drapeau, V.; Tremblay, S. Comparing a Fitbit Wearable to an Electrocardiogram Gold Standard as a Measure of Heart Rate Under Psychological Stress: A Validation Study. JMIR Form. Res. 2022, 6, e37885. [Google Scholar] [CrossRef]
  71. Pakhomov, S.V.S.; Thuras, P.D.; Finzel, R.; Eppel, J.; Kotlyar, M. Using Consumer-Wearable Technology for Remote Assessment of Physiological Response to Stress in the Naturalistic Environment. PLoS ONE 2020, 15, e0229942. [Google Scholar] [CrossRef]
  72. Oweis, K.; Quteishat, H.; Zgoul, M.; Haddad, A. A Study on the Effect of Sports on Academic Stress Using Wearable Galvanic Skin Response. In Proceedings of the 2018 12th International Symposium on Medical Information and Communication Technology (ISMICT), Sydney, Australia, 26–28 March 2018; International Symposium on Medical Information and Communication Technology. IEEE: New York, NY, USA, 2018; pp. 99–104. [Google Scholar]
  73. Kang, J.; Park, D. Stress Management Design Guideline with Smart Devices during COVID-19. Arch. Des. Res. 2022, 35, 115–131. [Google Scholar] [CrossRef]
  74. Huckins, J.F.; daSilva, A.W.; Wang, W.; Hedlund, E.; Rogers, C.; Nepal, S.K.; Wu, J.; Obuchi, M.; Murphy, E.I.; Meyer, M.L.; et al. Mental Health and Behavior of College Students During the Early Phases of the COVID-19 Pandemic: Longitudinal Smartphone and Ecological Momentary Assessment Study. J. Med. Internet Res. 2020, 22, e20185. [Google Scholar] [CrossRef]
  75. Manchia, M.; Gathier, A.W.; Yapici-Eser, H.; Schmidt, M.V.; de Quervain, D.; van Amelsvoort, T.; Bisson, J.I.; Cryan, J.F.; Howes, O.D.; Pinto, L.; et al. The Impact of the Prolonged COVID-19 Pandemic on Stress Resilience and Mental Health: A Critical Review across Waves. Eur. Neuropsychopharmacol. 2022, 55, 22–83. [Google Scholar] [CrossRef]
  76. Chung, E.; Jiann, B.-P.; Nagao, K.; Hakim, L.; Huang, W.; Lee, J.; Lin, H.; Mai, D.B.T.; Nguyen, Q.; Park, H.J.; et al. COVID Pandemic Impact on Healthcare Provision and Patient Psychosocial Distress: A Multi-National Cross-Sectional Survey among Asia-Pacific Countries. World J. Mens Health 2021, 39, 797–803. [Google Scholar] [CrossRef]
  77. Al-Ajlouni, Y.A.; Park, S.H.; Alawa, J.; Shamaileh, G.; Bawab, A.; El-Sadr, W.M.; Duncan, D.T. Anxiety and Depressive Symptoms Are Associated with Poor Sleep Health during a Period of COVID-19-Induced Nationwide Lockdown: A Cross-Sectional Analysis of Adults in Jordan. BMJ Open 2020, 10, e041995. [Google Scholar] [CrossRef]
  78. Maslach, C.; Schaufeli, W.B.; Leiter, M.P. Job Burnout. Annu. Rev. Psychol. 2001, 52, 397–422. [Google Scholar] [CrossRef] [PubMed]
  79. Beckstrand, J.; Yanchus, N.; Osatuke, K. Only One Burnout Estimator Is Consistently Associated with Health Care Providers’ Perceptions of Job Demand and Resource Problems. Psychology 2017, 8, 1019–1041. [Google Scholar] [CrossRef]
  80. Lin, S.-H.; Huang, Y.-C. Life Stress and Academic Burnout. Act. Learn. High. Educ. 2014, 15, 77–90. [Google Scholar] [CrossRef]
  81. Balogun, J.A.; Hoeberlein-Miller, T.M.; Schneider, E.; Katz, J.S. Academic Performance Is Not a Viable Determinant of Physical Therapy Students’ Burnout. Percept. Mot. Skills 1996, 83, 21–22. [Google Scholar] [CrossRef]
  82. El-Masry, R.; Ghreiz, S.; Helal, R.; Audeh, A.; Shams, T. Perceived Stress and Burnout among Medical Students during the Clinical Period of Their Education. Ibnosina J. Med. Biomed. Sci. 2022, 05, 179–188. [Google Scholar] [CrossRef]
  83. Kodikara, C.; Wijekoon, S.; Meegahapola, L. FatigueSense: Multi-Device and Multimodal Wearable Sensing for Detecting Mental Fatigue. ACM Trans. Comput. Healthc. 2025, 6, 1–36. [Google Scholar] [CrossRef]
  84. Saeedi, F.; Ansari, R.; Haghgoo, M. Recent Development in Wearable Sensors for Healthcare Applications. Nano-Struct. Nano-Objects 2025, 42, 101473. [Google Scholar] [CrossRef]
  85. Mandal, A.; Paradkar, M.; Panindre, P.; Kumar, S. AI-Based Detection of Stress Using Heart Rate Data Obtained from Wearable Devices. Procedia Comput. Sci. 2023, 230, 749–759. [Google Scholar] [CrossRef]
  86. Schroeder, N.L.; Romine, W.L.; Kemp, S.E. A Scoping Review of Wrist-Worn Wearables in Education. Comput. Educ. Open 2023, 5, 100154. [Google Scholar] [CrossRef]
  87. Chen, F.; Zhao, L.; Pang, L.; Zhang, Y.; Lu, L.; Li, J.; Liu, C. Wearable Physiological Monitoring of Physical Exercise and Mental Health: A Systematic Review. Intell. Sports Health 2025, 1, 11–21. [Google Scholar] [CrossRef]
  88. Maglogiannis, I.; Trastelis, F.; Kalogeropoulos, M.; Khan, A.; Gallos, P.; Menychtas, A.; Panagopoulos, C.; Papachristou, P.; Islam, N.; Wolff, A.; et al. AI4Work Project: Human-Centric Digital Twin Approaches to Trustworthy AI and Robotics for Improved Working Conditions in Healthcare and Education Sectors. Stud. Health Technol. Inform. 2024, 316, 1013–1017. [Google Scholar] [CrossRef]
  89. Bieling, P.J.; Antony, M.M.; Swinson, R.P. The State--Trait Anxiety Inventory, Trait Version: Structure and Content Re-Examined. Behav. Res. Ther. 1998, 36, 777–788. [Google Scholar] [CrossRef]
  90. Knowles, K.A.; Olatunji, B.O. Specificity of Trait Anxiety in Anxiety and Depression: Meta-Analysis of the State-Trait Anxiety Inventory. Clin. Psychol. Rev. 2020, 82, 101928. [Google Scholar] [CrossRef]
  91. Ghazizadeh, E.; Deigner, H.-P.; Al-Bahrani, M.; Muzammil, K.; Daneshmand, N.; Naseri, Z. DNA Bioinspired by Polyvinyl Alcohol -MXene-Borax Hydrogel for Wearable Skin Sensors. Sens. Actuators Phys. 2025, 386, 116331. [Google Scholar] [CrossRef]
  92. Ulrich, C.M.; Demiris, G.; Kennedy, R.; Rothwell, E. The Ethics of Sensor Technology Use in Clinical Research. Nurs. Outlook 2020, 68, 720–726. [Google Scholar] [CrossRef]
  93. Hensel, B.K.; Demiris, G.; Courtney, K.L. Defining Obtrusiveness in Home Telehealth Technologies: A Conceptual Framework. J. Am. Med. Inform. Assoc. 2006, 13, 428–431. [Google Scholar] [CrossRef]
  94. Vouzis, E.; Maglogiannis, I. Prediction of Early Dropouts in Patient Remote Monitoring Programs. SN Comput. Sci. 2023, 4, 467. [Google Scholar] [CrossRef]
  95. Ahsan, M.M.; Luna, S.A.; Siddique, Z. Machine-Learning-Based Disease Diagnosis: A Comprehensive Review. Healthcare 2022, 10, 541. [Google Scholar] [CrossRef]
  96. Bifarin, O.O.; Fernández, F.M. Automated Machine Learning and Explainable AI (AutoML-XAI) for Metabolomics: Improving Cancer Diagnostics. BioRxiv 2023. [Google Scholar] [CrossRef]
  97. Madden, G.R.; Boone, R.H.; Lee, E.; Sifri, C.D.; Petri, W.A. Predicting Clostridioides Difficile Infection Outcomes with Explainable Machine Learning. EBioMedicine 2024, 106, 105244. [Google Scholar] [CrossRef] [PubMed]
  98. Nagavelli, U.; Samanta, D.; Chakraborty, P. Machine Learning Technology-Based Heart Disease Detection Models. J. Healthc. Eng. 2022, 2022, 7351061. [Google Scholar] [CrossRef]
  99. Gatos, I.; Tsantis, S.; Spiliopoulos, S.; Karnabatidis, D.; Theotokas, I.; Zoumpoulis, P.; Loupas, T.; Hazle, J.D.; Kagadis, G.C. A Machine-Learning Algorithm Toward Color Analysis for Chronic Liver Disease Classification, Employing Ultrasound Shear Wave Elastography. Ultrasound Med. Biol. 2017, 43, 1797–1810. [Google Scholar] [CrossRef]
  100. Mollica, G.; Francesconi, D.; Costante, G.; Moretti, S.; Giannini, R.; Puxeddu, E.; Valigi, P. Classification of Thyroid Diseases Using Machine Learning and Bayesian Graph Algorithms. IFAC-Pap. 2022, 55, 67–72. [Google Scholar] [CrossRef]
  101. Ghosal, S.; Jain, A. Depression and Suicide Risk Detection on Social Media Using fastText Embedding and XGBoost Classifier. Procedia Comput. Sci. 2023, 218, 1631–1639. [Google Scholar] [CrossRef]
  102. Pavlopoulos, A.; Rachiotis, T.; Maglogiannis, I. An Overview of Tools and Technologies for Anxiety and Depression Management Using AI. Appl. Sci. 2024, 14, 9068. [Google Scholar] [CrossRef]
  103. Chen, D.; Yao, Y.; Moser, E.D.; Wang, W.; Soliman, E.Z.; Mosley, T.; Pan, W. A Novel Electrocardiogram-Based Model for Prediction of Dementia—The Atherosclerosis Risk in Communities (ARIC) Study. J. Electrocardiol. 2025, 88, 153832. [Google Scholar] [CrossRef]
  104. Hasan, M.N.; Hossain, M.A.; Rahman, M.A. An Ensemble Based Lightweight Deep Learning Model for the Prediction of Cardiovascular Diseases from Electrocardiogram Images. Eng. Appl. Artif. Intell. 2025, 141, 109782. [Google Scholar] [CrossRef]
  105. Lu, S.C.; Chen, G.Y.; Liu, A.S.; Sun, J.T.; Gao, J.W.; Huang, C.H.; Tsai, C.L.; Fu, L.C. Deep Learning–Based Electrocardiogram Model (EIANet) to Predict Emergency Department Cardiac Arrest: Development and External Validation Study. J. Med. Internet Res. 2025, 27, e67576. [Google Scholar] [CrossRef]
  106. Wang, R.; Zhuang, P. A Strategy for Network Multi-Layer Information Fusion Based on Multimodel in User Emotional Polarity Analysis. Int. J. Cogn. Comput. Eng. 2025, 6, 120–130. [Google Scholar] [CrossRef]
  107. Zlatintsi, A.; Filntisis, P.P.; Garoufis, C.; Efthymiou, N.; Maragos, P.; Menychtas, A.; Maglogiannis, I.; Tsanakas, P.; Sounapoglou, T.; Kalisperakis, E.; et al. E-Prevention: Advanced Support System for Monitoring and Relapse Prevention in Patients with Psychotic Disorders Analyzing Long-Term Multimodal Data from Wearables and Video Captures. Sensors 2022, 22, 7544. [Google Scholar] [CrossRef] [PubMed]
  108. Liu, Z.; Ren, Y.; Kong, X.; Liu, S. Learning Analytics Based on Wearable Devices: A Systematic Literature Review From 2011 to 2021. J. Educ. Comput. Res. 2022, 60, 1514–1557. [Google Scholar] [CrossRef]
  109. Jeong, S.C.; Kim, S.-H.; Park, J.Y.; Choi, B. Domain-Specific Innovativeness and New Product Adoption: A Case of Wearable Devices. Telemat. Inform. 2017, 34, 399–412. [Google Scholar] [CrossRef]
  110. Albareda-Tiana, S.; Vidal-Raméntol, S.; Fernández-Morilla, M. Implementing the Sustainable Development Goals at University Level. Int. J. Sustain. High. Educ. 2018, 19, 473–497. [Google Scholar] [CrossRef]
  111. Chaleta, E.; Saraiva, M.; Leal, F.; Fialho, I.; Borralho, A. Higher Education and Sustainable Development Goals (SDG)—Potential Contribution of the Undergraduate Courses of the School of Social Sciences of the University of Évora. Sustainability 2021, 13, 1828. [Google Scholar] [CrossRef]
  112. Chong, P.L.; Ismail, D.; Ng, P.K.; Kong, F.Y.; Basir Khan, M.R.; Thirugnanam, S. A TRIZ Approach for Designing a Smart Lighting and Control System for Classrooms Based on Counter Application with Dual PIR Sensors. Sensors 2024, 24, 1177. [Google Scholar] [CrossRef]
  113. Vergara-Rodríguez, D.; Antón-Sancho, Á.; Fernández-Arias, P. Variables Influencing Professors’ Adaptation to Digital Learning Environments during the COVID-19 Pandemic. Int. J. Environ. Res. Public Health 2022, 19, 3732. [Google Scholar] [CrossRef]
Figure 1. An infographic of the current systematic literature review methodology, illustrating the defined research questions, the total number of retrieved studies, the main results of the review, and, finally, the proposed solutions and perspectives for the future of students’ burnout.
Figure 2. Systematic literature review process and results, presented in the PRISMA 2020 flow diagram.
Figure 3. Burnout symptoms, signals measured, and verification surveys.
Figure 4. Conceptual map of the machine learning workflow. Measured data are obtained through AI wearable devices and preprocessed; the appropriate model is then selected, accuracy and performance are evaluated, the model is deployed, and, finally, predictions and simulations are produced.
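As a hedged illustration of the workflow in Figure 4, the following Python sketch trains a Random Forest (RF) stress classifier on hypothetical heart rate (HR), HRV, EDA, and skin temperature features and evaluates it with Leave-One-Subject-Out (LOSO) cross-validation. The synthetic data, feature names, and hyperparameters are illustrative assumptions made for this sketch and are not taken from any of the reviewed studies.
```python
# Minimal sketch of the Figure 4 workflow: preprocessing -> model selection ->
# evaluation -> deployment/prediction. All data here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical per-window smartwatch features: mean HR, HRV (RMSSD), EDA level,
# and skin temperature, for 20 students with 30 windows each (assumption).
n_subjects, n_windows = 20, 30
X = rng.normal(size=(n_subjects * n_windows, 4))
y = rng.integers(0, 2, size=n_subjects * n_windows)    # 1 = self-reported stress (placeholder label)
groups = np.repeat(np.arange(n_subjects), n_windows)   # subject IDs used for LOSO folds

# Preprocessing (standardization) and model selection (Random Forest), as in Figure 4.
model = make_pipeline(StandardScaler(),
                      RandomForestClassifier(n_estimators=200, random_state=0))

# Leave-One-Subject-Out evaluation: each fold holds out all windows of one student.
loso = LeaveOneGroupOut()
scores = cross_val_score(model, X, y, cv=loso, groups=groups,
                         scoring="balanced_accuracy")
print(f"LOSO balanced accuracy: {scores.mean():.2f} +/- {scores.std():.2f}")

# Deployment/prediction step: fit on all data and score a new, unseen window.
model.fit(X, y)
new_window = rng.normal(size=(1, 4))
print("Predicted stress probability:", model.predict_proba(new_window)[0, 1])
```
Subject-wise splitting such as LOSO is highlighted here because, unlike random splits, it estimates how a model generalizes to students it has never seen, which is the scenario most relevant to deploying such systems in academic settings.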
Figure 5. Students’ burnout portrayed as a manifestation of the pressures inherent in contemporary academic life. Addressing this issue through comprehensive initiatives and informed policymaking is essential for promoting student well-being and advancing the overarching objective of quality education through the UN Sustainable Development Goals: SDG 3 (Good health and well-being), SDG 4 (Quality education), SDG 5 (Gender equality), and SDG 8 (Decent work and economic growth). https://www.un.org/sustainabledevelopment/news/communications-material/ (accessed on 17 April 2025).
Table 1. Reasons for excluding studies from the present systematic review.
Exclusion Reasons | Retrieved Studies
R1. Studied a mental disorder, e.g., depression, autism spectrum, etc. | [25,26,27,28]
R2. Did not use smartwatches | [29,30,31,32,33,34,35,36,37,38,39]
R3. Did not study student population | [40]
R4. Were pilot studies, research proposals, or reviews | [41,42,43,44,45]
R5. Not associated with research questions | [40,46,47,48,49,50]
Table 2. Public datasets for health data mining.
Public Database | Overview | Reference
Zenodo | Open-access repository developed by CERN for all research disciplines, including health and biomedical sciences. It provides broad interdisciplinary coverage, DOI assignment, and integration with other repositories. | [54]
Figshare | Digital repository for sharing research outputs, datasets, figures, and presentations. It provides a user-friendly interface, high visibility, and metadata support. | [55]
Dryad | Open repository for life science and medical research, primarily for datasets underlying publications. It provides peer-reviewed datasets and integration with journal submissions. | [56]
Open Science Framework | Collaborative platform for sharing and managing research data, including mental health and epidemiology studies. It has strong version control and project management tools. | [57]
PhysioNet | Provides access to biomedical datasets, including physiological signals such as ECG or EEG. It affords high-quality curated datasets widely used in clinical and machine learning research. | [58]
Dataverse | Open-source repository developed by Harvard University, hosting various datasets, including public health data. It offers well-structured metadata and institutional support. | [59]
OpenNeuro | Public repository for neuroimaging datasets, including fMRI, EEG, and MEG. It provides a standardized format and integration with neuroimaging software, and is focused on neuroimaging data. | [60]
European Open Science Cloud (EOSC) | European initiative for research data, including biomedical datasets. | [61]
Kaggle | Online platform that hosts datasets, notebooks, and machine learning competitions, including health-related datasets. It is a large community with strong support for data science and AI applications. | [62]
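Most of the repositories in Table 2 can be queried programmatically. As a hedged sketch only, the snippet below uses the open-source `wfdb` Python package (assumed installed via `pip install wfdb`) to fetch a record directly from PhysioNet; the MIT-BIH record “100” is a well-known ECG example chosen purely for illustration and is not one of the datasets used in the reviewed studies.
```python
# Hedged example: fetching a public physiological record from PhysioNet (Table 2).
# The database ("mitdb") and record ("100") are illustrative choices.
import wfdb

# Download roughly the first 10 s of record 100 (sampled at 360 Hz) from PhysioNet.
record = wfdb.rdrecord("100", pn_dir="mitdb", sampto=3600)

print("Signal names:", record.sig_name)        # e.g., ECG leads
print("Sampling frequency (Hz):", record.fs)
print("Sample array shape:", record.p_signal.shape)
```
Similar programmatic access (REST APIs or client libraries) is offered by Zenodo, Figshare, OSF, and Kaggle, which makes these repositories practical sources for reproducible health data mining experiments.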
Table 3. Determined bias by reviewed study.
Bias | Reviewed Studies
Reporting | [53,63,64,65]
Cognitive | [53,66,67,68]
Selection | [53,66,69,70,71,72]
Measurement | [67,68,69,70]
Design | [63,64,65,67]
Content | [53,65,68,69,70,71,72]
Demographic | [53,64,72]
Table 4. Research purposes, subjects, and behavioral patterns.
Purposes | Studies | N
Predict | Stress levels using deep learning machines [70], mental stress levels [71], mental well-being, depression, stress, and anxiety [72], the predictive utility of pretreatment HRV in effectiveness of GCBT in reducing depression and anxiety symptoms [73], prediction of stress when exposed to an acute stressor [68] | 5
Assess the efficacy | Efficacy of BioBase for anxiety and stress [65] | 1
Detect | Stress levels [71,73], ecological stress [71], fatigue detection [72], response to psychological stress in everyday life [66] | 5
Management | Attention management [54], stress management with cognitive process with smart device interventions [74] | 2
Table 5. Numbers of research studies, AI smart devices, and measured symptoms.
Symptom | Empatica E4 Wristband | Microsoft Band 2 | Fitbit Versa 2/Fitbit | BioBeam | Smart Wristband (Not Specified) | Huawei Band 6 with Photoplethysmography Sensors | Apple Watch
Anxiety 11 1
Depression 1 1
Stress2141 1
Fatigue1
Attention 1
