Article

Integrating Fitbit Wearables and Self-Reported Surveys for Machine Learning-Based State–Trait Anxiety Prediction

1 Department of Information Technology, St. Joseph’s College of Engineering, Chennai 600119, India
2 Department of Computer Science and Engineering, American University of Sharjah, Sharjah 26666, United Arab Emirates
3 Department of Electrical and Computer Engineering, University of Massachusetts, Amherst, MA 01003, USA
4 Independent Researcher, Bengaluru 560077, India
* Author to whom correspondence should be addressed.
Appl. Sci. 2025, 15(19), 10519; https://doi.org/10.3390/app151910519
Submission received: 28 August 2025 / Revised: 23 September 2025 / Accepted: 26 September 2025 / Published: 28 September 2025
(This article belongs to the Special Issue AI Technologies for eHealth and mHealth, 2nd Edition)

Abstract

Anxiety disorders represent a significant global health challenge, yet a substantial treatment gap persists, motivating the development of scalable digital health solutions. This study investigates the potential of integrating passive physiological data from consumer wearable devices with subjective self-reported surveys to predict state–trait anxiety. Leveraging the multi-modal, longitudinal LifeSnaps dataset, which captured “in the wild” data from 71 participants over four months, this research develops and evaluates a machine learning framework for this purpose. The methodology meticulously details a reproducible data curation pipeline, including participant-specific time zone harmonization, validated survey scoring, and comprehensive feature engineering from Fitbit Sense physiological data. A suite of machine learning models was trained to classify the presence of anxiety, defined by the State–Trait Anxiety Inventory (S-STAI). The CatBoost ensemble model achieved an accuracy of 77.6%, with high sensitivity (92.9%) but more modest specificity (48.9%). The positive predictive value (77.3%) and negative predictive value (78.6%) indicate balanced predictive utility across classes. The model obtained an F1-score of 84.3%, a Matthews correlation coefficient of 0.483, and an AUC of 0.709, suggesting good detection of anxious cases but more limited ability to correctly identify non-anxious cases. Post hoc explainability approaches (local and global) reveal that key predictors of state anxiety include measures of cardio-respiratory fitness (VO2Max), calorie expenditure, duration of light activity, resting heart rate, thermal regulation, and age. While additional sensitivity analysis and conformal prediction methods reveal that the size of the dataset contributes to overfitting, the features and the proposed approach are generally conducive to reasonable anxiety prediction.
These findings underscore the use of machine learning and ubiquitous sensing modalities for a more holistic and accurate digital phenotyping of state anxiety.

1. Introduction

Anxiety is a multifaceted psychological condition that affects both mental and physical well-being [1]. It is among the most prevalent mental disorders, with the World Health Organization estimating in 2019 that roughly 4% of the global population experienced an anxiety disorder [2]. Conceptually, anxiety is characterized as a distressing emotional state arising in anticipation of potential future threats and can be differentiated into state, trait, and social dimensions [3]. State anxiety reflects a transient emotional reaction to situational stressors, whereas trait anxiety denotes a stable predisposition to interpret circumstances as threatening [1]. While moderate levels of either form may serve adaptive functions by enhancing alertness, excessive or disproportionate anxiety can disrupt cognitive performance and daily functioning [4].
The burden of these disorders is further quantified by their substantial contribution to disability-adjusted life years (DALYs), highlighting the profound impact of lost health and productivity [5]. The World Health Organization further estimates that only about one in four individuals receives any kind of treatment, indicating the presence of a significant treatment gap. This gap is driven by a confluence of factors, including stigma, limited access to health professionals, high costs, and a lack of early screening capability [6,7].
With the proliferation of consumer wearable technologies, such as smartwatches and fitness trackers, there is the promise and potential of scalable and accessible tools for the early detection and continuous monitoring of anxiety [8,9]. These devices are equipped with an array of sensors that can passively and continuously collect high-resolution physiological and behavioral data in naturalistic settings. A critical advantage of such approaches is that they pair retrospective self-reports collected in clinical settings, the primary mode of psychiatric assessment [4], with objective measurements of physical behavior “in the wild” [10].
As the traditional methods can be time-consuming, difficult to repeat, subjective, and sensitive to recall bias [11,12], the addition of longitudinal data acquired through wearable devices can reveal subtle but significant changes in physiology and behavior that are indicative of fluctuating, or more importantly, deteriorating mental states.
A growing body of research has explored the application of machine learning techniques to data from wearable and mobile devices for mental health applications, including the detection, monitoring, and prediction of anxiety and depression [13,14]. The common approach sees the utilization of multi-modal data, combining physiological signals from wearables with self-reported symptoms, demographic information, and environmental factors [15,16,17]. Physiological data sources include heart rate (HR), heart rate variability (HRV), skin conductance (EDA/GSR), sleep patterns, and physical activity measures (e.g., step counts, calories) [7,11,18,19,20]. Some studies also incorporate neuroimaging (fMRI, EEG), social media posts, or clinical records [9,21,22]. The existing models have shown varying levels of accuracy in predicting anxiety, depression or related mental health conditions. For example, one study [15] achieved 81.3% accuracy for 7-day panic attack prediction using random forests on multifactorial variables, including wearable physiological data and questionnaires. Another study [22] predicted STAI-derived anxiety levels with up to 81% accuracy using a limited set of contextual and judgment variables, extracted from a brief (2–3 min) unsupervised picture-rating task based on the International Affective Picture System.
Specifically, our work focuses on predicting state anxiety, or the S-STAI form of the self-reported State–Trait Anxiety questionnaire, using variables captured by Fitbit smartwatches (San Francisco, CA, USA). This metric encapsulates transient anxiety, as opposed to the trait scale which evaluates the individual’s general likelihood of anxiety proneness and is relatively less responsive to change [1]. We leverage the LifeSnaps dataset, a multi-modal, longitudinal, and geographically-distributed collection of anthropological data for developing our machine learning models aimed at predicting the presence or absence of state anxiety [23]. This dataset was gathered unobtrusively over four months from 71 participants, utilizing Fitbit Sense smartwatches, validated psychological surveys, and ecological momentary assessments (EMAs). LifeSnaps provides a vast array of data types, from second to daily granularity, covering aspects like physical activity, sleep, heart rate, stress, mood, and demographics. A significant focus is placed on data privacy and anonymization, adhering to GDPR principles and employing k-anonymity to protect participant identities.
The primary contributions of this work are the following:
  • Examining the potential of implementing ML algorithms with wearable-derived physiological and behavioral data for classifying state anxiety in individuals.
  • Investigating the role of explainability with Shapley values in identifying important variables in naturalistic settings.
  • Releasing the code for the robust preprocessing and machine learning workflow at XXX, to further facilitate the development of predictive models for mental well-being from unobtrusive wearable data.
This paper is organized such that Section 2 introduces the dataset and outlines the methodology, Section 3 presents the results and their discussion, and Section 4 concludes the work.

2. Methodology

The increasing prevalence of ubiquitous wearable technologies offers unprecedented opportunities to monitor various aspects of human life, including physical activity, sleep, and mental health. As shown in Figure 1, we begin with a multi-step process for harmonizing the heterogeneous data sources into a coherent, analysis-ready format. Then, we deploy a suite of machine learning algorithms on the training set and examine the explainability of the features with the test set, followed by a performance evaluation.

2.1. Dataset

The experiments drew on the publicly available LifeSnaps dataset [23], comprising data from 71 participants collected between mid-2021 and early 2022 across four European countries (Greece, Cyprus, Italy, and Sweden). Data collection occurred in two phases: the first from 24 May to 26 July 2021 (n = 38), and the second from 15 November 2021 to 17 January 2022 (n = 34). As mentioned previously, the dataset comprises three main modalities: survey data, ecological momentary assessments (EMAs), and sensing data from Fitbit Sense smartwatches. The Fitbit Sense devices capture a wide array of data types, including steps, sleep, heart rate, skin temperature, oxygen saturation (SpO2), VO2Max, electrodermal activity (EDA) responses, and stress indicators (computed with proprietary algorithms). Psychological well-being was evaluated using validated instruments, including the State–Trait Anxiety Inventory (S-STAI), which was administered weekly to participants. Written informed consent was obtained from all participants, and the study protocol was approved by the Institutional Review Board of the Aristotle University of Thessaloniki.
In terms of the Fitbit Sense wearable device properties, it is equipped with a 3-axis accelerometer, gyroscope, altimeter, built-in GPS receiver, multi-path optical heart rate tracker, and on-wrist skin temperature, ambient light, and multipurpose electrical sensors [23]. It is worth mentioning that some raw data such as sensor signal measurements are not made accessible by the manufacturer, and as such are not available in the LifeSnaps dataset. While there are occasional concerns about the reliability and agreement of Fitbit sensors when compared to the gold standard heart rate monitors, accelerometers, and sleep tracking methods [24,25,26], the wearables have promise in capturing short-term variations in psychological stress [27].
To ensure scientific rigor and reproducibility, this study meticulously followed the data curation and preprocessing pipeline detailed in the original LifeSnaps data descriptor paper [23]. Given the geographically distributed nature of the study across four countries, participants were in different time zones. To facilitate valid analysis and between-user comparisons, a common temporal reference frame was established. All timestamped data with sub-daily granularity were converted from their original time zone (typically Coordinated Universal Time, UTC, for a subset of Fitbit data types) to the participant’s local time. This conversion was based on the time zone declared in each participant’s Fitbit profile. The process also accounted for Daylight Saving Time (DST), as all recruitment countries observed this practice during the study period. This critical step ensures that a participant’s “morning” activity in Greece is correctly aligned with another participant’s “morning” activity in Sweden, preventing temporal misalignment in the data.
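As an illustration of this harmonization step, the conversion from UTC to a participant's local wall-clock time (with DST handled automatically by the IANA zone rules) can be sketched with pandas; the column name and example zone are hypothetical, not taken from the LifeSnaps schema:

```python
from zoneinfo import ZoneInfo

import pandas as pd

def to_local_time(df: pd.DataFrame, ts_col: str, tz_name: str) -> pd.DataFrame:
    """Convert UTC timestamps to a participant's declared local time zone.

    Daylight Saving Time transitions are resolved by the IANA zone rules,
    so summer and winter records both land on the correct local time.
    """
    out = df.copy()
    out[ts_col] = (
        pd.to_datetime(out[ts_col], utc=True)   # interpret raw strings as UTC
          .dt.tz_convert(ZoneInfo(tz_name))     # shift to the participant's zone
    )
    return out

# Example: a record logged in UTC during the first study phase (DST active)
records = pd.DataFrame({"timestamp": ["2021-06-01 06:00:00"]})
local = to_local_time(records, "timestamp", "Europe/Athens")
# Athens observes UTC+3 (EEST) in June, so 06:00 UTC becomes 09:00 local
```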

2.2. Preprocessing

We first merged the survey records of the patients and the sensing data from Fitbit Sense smartwatches using the shared modality-agnostic user identification number assigned in the dataset. Initially, missing percentages for the various columns were assessed, revealing significant gaps in certain variables in the greater dataset, such as heart rate variability and oxygen saturation, which were removed. Any rows with more than 50% of their information missing were dropped as well. This left 45 unique patients and 1891 records, as the remaining 26 patients appeared to have poorer adherence to the LifeSnaps study. To prevent data leakage, we ensured that records from the same patient did not appear in both the training and test sets, which gave approximately 38 unique patients in the training set and 7 unique patients in the test set across a five-fold cross-validation. We kept a variety of sleep metrics, which capture fragmentation in sleep architecture [21,28].
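The patient-level split described above can be enforced with scikit-learn's GroupKFold, which guarantees that no participant contributes records to both the training and test folds. A minimal sketch on synthetic stand-in data (the feature values and IDs are illustrative, not from LifeSnaps):

```python
import numpy as np
from sklearn.model_selection import GroupKFold

rng = np.random.default_rng(0)
n_records = 200
X = rng.normal(size=(n_records, 5))           # stand-in for Fitbit features
y = rng.integers(0, 2, size=n_records)        # stand-in anxiety labels
groups = rng.integers(0, 45, size=n_records)  # one ID per participant

# GroupKFold keeps every participant's records on one side of the split,
# preventing identity leakage between training and test sets.
gkf = GroupKFold(n_splits=5)
for train_idx, test_idx in gkf.split(X, y, groups=groups):
    train_patients = set(groups[train_idx])
    test_patients = set(groups[test_idx])
    assert train_patients.isdisjoint(test_patients)
```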
For participants with multiple S-STAI records, the mean was calculated. The continuous S-STAI scores were converted into a binary target variable using a cut-off of 44: scores below 44 were classified as no anxiety, and scores of 44 or above as anxiety. This cut-off was selected initially based on the findings of [29,30,31] and then verified through empirical validation.
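A minimal sketch of this label construction, assuming a hypothetical table of per-record S-STAI scores keyed by user ID:

```python
import pandas as pd

STAI_CUTOFF = 44  # scores >= 44 labeled as anxiety, below as no anxiety

# Hypothetical weekly S-STAI records for three participants
surveys = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 3],
    "stai":    [38, 42, 50, 44, 46],
})

# Average multiple records per participant, then binarize at the cut-off
mean_stai = surveys.groupby("user_id")["stai"].mean()
labels = (mean_stai >= STAI_CUTOFF).astype(int)
# user 1 -> mean 40 -> 0 (no anxiety); users 2 and 3 -> 1 (anxiety)
```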
Table 1 below describes the final set of continuous features used in this paper after preprocessing, grouped by anxiety classes. We applied the Shapiro–Wilk test to assess normality and observed deviations from a Gaussian distribution across all variables. Consequently, the Mann–Whitney U test, which does not assume normality, was used for continuous variables, while the categorical variables in Table 2 were analyzed using the Chi-square test with Bonferroni-adjusted p-values. We observe that almost all features show statistically significant differences between the populations, suggesting potential correspondence between the features and the anxiety classes. The features with low significance are indicated in Table 1.
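The statistical testing procedure can be sketched with SciPy as follows; the sample data are synthetic stand-ins, not values from Table 1 or Table 2:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
# Hypothetical resting-heart-rate samples for the two classes (skewed, non-Gaussian)
rhr_anxiety = rng.gamma(shape=9.0, scale=8.0, size=120)
rhr_no_anxiety = rng.gamma(shape=7.0, scale=8.0, size=120)

# Step 1: Shapiro-Wilk normality check; a small p-value argues against Gaussianity
_, p_normal = stats.shapiro(rhr_anxiety)

# Step 2: when normality is in doubt, compare groups with the Mann-Whitney U test
_, p_mwu = stats.mannwhitneyu(rhr_anxiety, rhr_no_anxiety, alternative="two-sided")

# Categorical features: chi-square test on a contingency table, with a
# Bonferroni adjustment across the number of categorical comparisons
table = np.array([[30, 15], [20, 35]])  # e.g., hypothetical gender x class counts
chi2, p_chi = stats.chi2_contingency(table)[:2]
n_tests = 3
p_chi_bonferroni = min(p_chi * n_tests, 1.0)
```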

2.3. Machine Learning Models

To provide a comprehensive evaluation, we included both baseline classifiers and ensemble methods. The following models were trained to approximate the relationship between daily Fitbit-derived features (e.g., heart rate, VO2Max, sleep metrics, and activity minutes) and the binary target label of Anxiety, defined from State–Trait Anxiety Inventory (STAI) scores.
k-Nearest Neighbors (k-NN). A non-parametric classifier that assigns a label to each test instance based on the majority class among its k closest neighbors in the feature space (using Euclidean distance). In this context, k-NN evaluates whether individuals with similar daily Fitbit profiles exhibit similar anxiety states.
Logistic Regression (LR). A generalized linear model that estimates the probability of anxiety (binary outcome) from weighted combinations of input features. Regularization (L2) was used to prevent overfitting given correlated wearable features.
Support Vector Machine (SVM). A supervised classifier that seeks an optimal separating hyperplane in the feature space. The kernelized SVM allows non-linear decision boundaries, making it suitable for distinguishing Anxiety vs. Non-Anxiety profiles when features are not linearly separable.
Gaussian Naïve Bayes (GNB). This is a probabilistic classifier that applies Bayes’ theorem with the assumption of feature independence, modeling each continuous feature with a Gaussian distribution to estimate class probabilities.
Random Forest (RF). An ensemble of decision trees trained on bootstrap samples of the dataset with random feature selection at each split. RF captures non-linear interactions between Fitbit features (e.g., sleep × activity) while reducing overfitting compared to a single tree.
Extreme Gradient Boosting (XGB). A boosting-based ensemble where new trees iteratively correct the residual errors of previous trees. XGB includes shrinkage, column sampling, and regularization, making it efficient and robust for tabular datasets with mixed feature scales (e.g., sleep ratios, temperature).
Categorical Boosting (CB). A gradient boosting algorithm optimized for handling categorical variables without requiring one-hot encoding. In our dataset, this was advantageous for demographic features such as age group and gender, as CatBoost internally applies ordered encoding and prevents target leakage.
Light Gradient Boosting Machine (LGB). A gradient boosting approach that grows trees leaf-wise rather than level-wise. Combined with histogram-based binning and feature sampling, LightGBM efficiently handles large numbers of daily records while preserving predictive accuracy.

3. Results

Performance was evaluated using accuracy, sensitivity, specificity, PPV, NPV, F1-score, and AUC (Table 3). Accuracy reflects overall correctness; sensitivity and specificity capture the detection of Anxiety and Non-Anxiety cases, respectively; PPV and NPV quantify prediction reliability; the F1-score balances precision and recall; and AUC summarizes the sensitivity–specificity trade-off across decision thresholds, with the operating point selected via the Youden index.
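All of these threshold-based metrics can be derived from the binary confusion matrix; a small helper, shown with illustrative (hypothetical) counts rather than the paper's actual confusion matrix:

```python
import math

def clinical_metrics(tp: int, fn: int, tn: int, fp: int) -> dict:
    """Derive the reported evaluation metrics from a binary confusion matrix."""
    sensitivity = tp / (tp + fn)                # recall on the Anxiety class
    specificity = tn / (tn + fp)                # recall on the Non-Anxiety class
    ppv = tp / (tp + fp)                        # positive predictive value (precision)
    npv = tn / (tn + fn)                        # negative predictive value
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    mcc = (tp * tn - fp * fn) / math.sqrt(
        (tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)
    )
    return {
        "accuracy": accuracy, "sensitivity": sensitivity, "specificity": specificity,
        "ppv": ppv, "npv": npv, "f1": f1, "mcc": mcc,
    }

# Illustrative counts only
m = clinical_metrics(tp=8, fn=2, tn=6, fp=4)
```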
Figure 2 shows the global feature importance rankings derived from the CB classifier. The variables with the greatest relative importance for distinguishing between Anxiety and Non-Anxiety participants were calories burned, cardio-respiratory fitness (VO2Max), and lightly active minutes. Secondary importance was observed for demographic factors (age group ≥30 and <30), thermal regulation measures (daily temperature variation, nightly temperature), and moderate activity minutes. Lower but still notable contributions were observed for gender, sleep duration, deep sleep ratio, and resting heart rate, while features such as mindfulness sessions, steps, and minutes to fall asleep had minimal predictive influence. These rankings emphasize that daily energy balance, fitness, and activity patterns were the most informative predictors of anxiety as defined by S-STAI labels.
Figure 3 presents the Shapley values summary plot, which illustrates the relative impact and correlation of each feature with respect to individual predictions. Each point corresponds to one participant-day, with red indicating high raw feature values and blue indicating low values. Positive Shapley values (shifted right) indicate stronger contributions toward the Anxiety class, whereas negative Shapley values (shifted left) indicate protective contributions toward “Non-Anxiety”. The most influential features in order were VO2Max, BMI, and resting heart rate, followed by age, nightly temperature, and lightly active minutes. Higher VO2Max values were protective against anxiety (blue points shifting left), while elevated BMI and resting heart rate consistently increased the likelihood of anxiety classification (red points shifting right). Age over 30 showed a tendency to push predictions toward the anxiety class, whereas younger age categories had lower contributions. Features such as sleep efficiency and minutes asleep provided smaller but complementary contributions, with higher sleep quality generally being protective against anxiety.
For this subset of the LifeSnaps dataset and with the CB model, both explainability approaches demonstrate that both global rankings and local interpretability converge on the importance of certain variables. As such, the top predictors of S-STAI are VO2Max, calories, light activity, resting heart rate, age, and skin temperature fluctuations obtained from the wrist as the contact point.

4. Discussion

4.1. Research Implications

The ensemble models clearly outperformed the simpler baselines. CB, LGB, and XGB all achieved high performance across folds, with accuracies, sensitivities, specificities, F1-scores, and AUCs of 1.00. RF performed nearly as well, with an accuracy of 1.00, sensitivity of 1.00, specificity of 1.00, and AUC of 1.00, showing only marginal variability across folds. Among the simpler methods, LR achieved high accuracy (0.98 ± 0.04) and F1-score (0.99 ± 0.02), though its specificity was somewhat lower (0.84 ± 0.23). SVM showed moderate performance, with accuracy (0.89 ± 0.07) and AUC (0.92 ± 0.10), but specificity dropped to around 0.53 ± 0.15 despite nearly perfect sensitivity. k-NN struggled, with accuracy around 0.71 ± 0.03 and very low specificity (0.20 ± 0.06), indicating a strong bias toward predicting the positive (Anxiety) class. In contrast, GNB trivially predicted all samples as belonging to the same class, producing accuracy, sensitivity, specificity, and AUC values of 1.00 with zero variability. While this superficially appears “perfect,” it reflects a degenerate solution rather than meaningful discrimination, underscoring the limitations of oversimplified generative assumptions in high-dimensional, imbalanced clinical data. Overall, these results highlight that gradient-boosting ensembles capture complex feature interactions and handle imbalanced data far better than linear or distance-based classifiers in predicting anxiety status.
The superior performance of CB and related ensembles suggests that anxiety, as measured by S-STAI, is strongly linked to non-linear combinations of physiological signals and behavioral patterns. These models are designed to handle heterogeneous data (continuous Fitbit signals and categorical demographics) and are robust to class imbalance, which likely explains their gains over k-NN and linear classifiers. Overall, the findings indicate that tree-based gradient-boosting methods are well suited for anxiety detection tasks that combine passive wearable sensing with self-reported surveys.
The results also mirror the feature-importance analysis reported elsewhere in the manuscript: VO2Max, calories burned, lightly active minutes, resting heart rate, thermal regulation measures, and age were the top predictors, and the ensembles effectively integrated these signals to distinguish anxious from non-anxious states. The physiological markers are not disparate phenomena but are primarily downstream consequences of a unified neurobiological process: dysregulation of the Autonomic Nervous System (ANS) [32]. Anxiety is fundamentally characterized by a state of sympathetic nervous system (SNS) hyperactivity—the “fight-or-flight” response—and a concurrent withdrawal of parasympathetic nervous system (PNS) activity, which is responsible for the “rest-and-digest” state [33]. An elevated resting heart rate, for example, is a direct result of sympathetic activation [34], while decreased wrist skin temperature is a consequence of sympathetically-mediated peripheral vasoconstriction [35]. Cardio-respiratory fitness (VO2Max) could be a biomarker of a more resilient and adaptable ANS [36]. Similarly, the predictive power of calorie expenditure and light physical activity can be framed as responsive to the potent anxiolytic effects of exercise [37,38], and involving psychological mechanisms such as distraction and enhanced self-efficacy [39,40].
The sensitivity analysis across varying train–test splits, as in Figure 4, further highlights the robustness as well as the shortcomings of CatBoost (one of the best performers) under different data availability conditions. When training on a large proportion of patients (seven held out for testing), the model achieved near-perfect discrimination (Acc = 0.996, AUC = 0.994). However, as the test set increased in size (12, 23, 34, and 39 patients), performance gradually deteriorated, with specificity dropping sharply and AUC values declining to as low as 0.442. This pattern indicates that CatBoost can overfit in scenarios with limited heterogeneity in the training set, but maintains high sensitivity (often close to 1.0) even when generalization weakens. The consistent tendency to favor the anxious class underscores the challenge posed by class imbalance in the LifeSnaps dataset. Under this setting, the CB model attained an accuracy of 0.78, with high sensitivity (0.93) but more modest specificity (0.49). The model’s positive predictive value (PPV = 0.77) and negative predictive value (NPV = 0.79) indicate balanced predictive utility across both classes. Its F1-score of 0.84 reflects strong overall classification ability, while the Matthews correlation coefficient (MCC = 0.48) highlights moderate agreement between predictions and ground truth. This shows the general efficacy of the features and their utility in predicting anxiety, consistent with the feature importances reported in Figure 2 and Figure 3. A more practical level of performance was achieved using a 75:25 train/test split.
Finally, we extended our analysis by applying conformal prediction through the Model Agnostic Prediction Interval Estimator (MAPIE) [41] framework to quantify the distribution-free reliability of CatBoost predictions in Table 4. From the total of 1891 records, approximately 15% (≈280) were held out as the calibration set, with the remainder used to train the base model. While the empirical coverage systematically underperformed the target levels (e.g., 70.1% at a nominal 95% confidence), this result is informative. It highlights a tendency toward overconfident predictions, likely exacerbated by the modest size and imbalance of the calibration pool. Nevertheless, incorporating conformal prediction provides additional value beyond traditional performance metrics, as it enables us to explicitly assess predictive uncertainty—a critical requirement for clinical adoption. Future work should focus on larger and more balanced calibration cohorts, as well as advanced calibration strategies, to ensure that uncertainty estimates meet their nominal guarantees.

4.2. Comparison with Related Work

A recent study implemented an AdaBoost algorithm with data collected from undergraduate students wearing Fitbit devices, and classified participants based on the Perceived Stress Scale survey [42]. The survey score is self-reported, and provides insights about stress levels based on participant responses to questions about how unpredictable, uncontrollable, and overwhelming their lives have been in the week prior. The study observes the effects of using each feature individually from the set of heart rate, calories, steps, distance traveled, and sleep, as well as their combination. Interestingly, using only calorie expenditure at an aggregation level of 4 h performed the best and obtained the highest F1-score of 0.81, even compared to all features together. It is hypothesized that reduced signal conflict, lower data dimensionality, and the volume of data may have led to this result [42].
A framework called Kora [9] was developed, also leveraging the same LifeSnaps [23] dataset we have used, to support mental well-being. Their work differs from ours on two particular fronts. Kora is specifically designed for anxiety management in pre- and post-operative cardiac surgery patients, and recommends personalized relaxation techniques, such as music therapy, guided meditation, or breathing exercises. Due to variations in handling missingness and participant adherence, Kora only relies on age, gender, heart rate, sleep duration, oxygen saturation (SPO2 instead of VO2Max), and emotion state (alert, happy, sad, tense, and tired) to classify an anxiety category (low, medium, and high). With a gradient-boosting tree model, they obtain accuracy, sensitivity, and an F1-Score of 80.76%, 84.01%, and 82.33%, respectively, which is considerably lower than ours and suggests that it is easier to detect the presence or absence of anxiety than sub-categories of anxiety.
Another study introduces the UnStressMe framework [8], once again using the LifeSnaps dataset, to develop a gradient-boosting model for stress prediction, achieving a 92.3% accuracy. The primary difference is that their work focuses on classifying participants based on the daily stress score, automatically reported by the Fitbit device utilizing a proprietary algorithm. The score is based on some combination of the heart rate, sleep, and activity level data [43]. It is reasonable to expect that a model trained on Fitbit data will indeed find a strong relationship with a score also computed based on the same Fitbit data. Thus, our study differs in that it examines the ability of objectively measured Fitbit data in capturing the subjective self-reported S-STAI score, which is a clinically validated measure for anxiety [1,4].

4.3. Limitations and Future Work

The study was conducted on a cohort of 71 participants in Europe, and for analysis, we use 45 of them. While valuable, this sample is modest in size and may not be representative of the broader global population in terms of age, cultural background, socioeconomic status, or clinical severity. A critical challenge in “in the wild” sensing research is the potential for the measurement to influence the phenomenon being measured (a form of the Hawthorne effect) [44]. The very act of wearing a health-monitoring device and receiving feedback or alerts could, for some individuals, increase health anxiety or preoccupation. This study did not explicitly measure this effect. Future work should incorporate measures of technology-related anxiety to better understand and mitigate this potential confounding factor. The ground truth for anxiety in this study was the weekly S-STAI score. While this is a validated and reliable measure, its weekly frequency is coarse compared to the minute-by-minute data generated by the Fitbits. This temporal mismatch limits the ability to predict short-term, in-the-moment fluctuations in state anxiety. It is essential to emphasize that the developed models identify strong correlational patterns, not causal relationships. For example, the model found that poor sleep is a strong predictor of anxiety. However, this relationship is known to be bidirectional: anxiety can cause poor sleep, and poor sleep can exacerbate anxiety. The current modeling approach cannot disentangle this causality. Future research must aim to validate these models on larger, more diverse, and clinically diagnosed populations to ensure their generalizability and equity.

5. Conclusions

This study demonstrates the utility of the multi-modal LifeSnaps dataset for mental health research and establishes an initial baseline for anxiety prediction benchmarking. It also provides a first assessment of how anxiety relates to short-term (“in the wild”) physiological and behavioral states.
The rigorous preprocessing and harmonization steps, including data merging, binary label creation from S-STAI scores, and careful handling of missing values, lay a robust foundation for future studies utilizing such complex “in the wild” datasets. Gradient-boosting ensembles such as CB, LGB, and XGB achieved reasonable classification performance, with CB reaching 77.66% accuracy and an AUC of 0.709, confirming that non-linear models can effectively capture the complex interplay between cardio-respiratory fitness (VO2Max), activity, heart rate, and demographic factors. Simpler classifiers (LR, k-NN, SVM, and GNB) were much less effective, underscoring the importance of model choice for digital phenotyping tasks. These findings align with the emerging view that physiological fitness and lifestyle behaviors are tightly coupled with momentary mood and stress.
While wearable sensors provide a wealth of objective data on the body’s physiological state, they cannot capture the full, nuanced experience of anxiety, which is inherently a subjective, cognitive, and affective phenomenon. A racing heart, for instance, could be a sign of a panic attack or the result of vigorous exercise. By combining multiple modalities and incorporating finer-grained metrics, it becomes possible to build computational models that learn the specific physiological signatures associated with an individual’s self-reported anxiety, creating a more complete and personalized digital phenotype.

Author Contributions

Conceptualization, A.S. and F.A.; methodology, A.V., J.R., A.A. and S.G.; software, A.V. and J.R.; validation, A.A. and S.G.; formal analysis, A.V. and J.R.; investigation, A.A. and S.G.; resources, J.R., A.A. and S.G.; data curation, A.A. and S.G.; writing—original draft preparation, A.V., J.R., A.A. and S.G.; writing—review and editing, J.R., A.A. and S.G.; visualization, J.R., A.A. and S.G.; supervision, R.A., A.S. and F.A.; project administration, R.A., A.S. and F.A.; funding acquisition, A.S. and F.A. All authors have read and agreed to the published version of the manuscript.

Funding

The work in this paper was supported, in part, by the Open Access Program of the American University of Sharjah. This paper represents the opinions of the author(s) and is not meant to represent the position or opinions of the American University of Sharjah.

Informed Consent Statement

Not applicable.

Data Availability Statement

The dataset adopted in this research is openly available in Zenodo at https://zenodo.org/records/6832186 (accessed on 20 April 2024).

Conflicts of Interest

The authors declare no conflicts of interest.

Abbreviations

The following abbreviations are used in this manuscript:
STAI	State–Trait Anxiety Inventory
CB	CatBoost
GNB	Gaussian Naïve Bayes
k-NN	k-Nearest Neighbors
LGB	Light Gradient Boosting
LR	Logistic Regression
ML	Machine Learning
RF	Random Forest
SVM	Support Vector Machine
XGB	Extreme Gradient Boosting

References

  1. Spielberger, C.D.; Sydeman, S.J.; Owen, A.E.; Marsh, B.J. Measuring anxiety and anger with the State-Trait Anxiety Inventory (STAI) and the State-Trait Anger Expression Inventory (STAXI). In The Use of Psychological Testing for Treatment Planning and Outcomes Assessment, 2nd ed.; Lawrence Erlbaum Associates Publishers: Mahwah, NJ, USA, 1999; pp. 993–1021. [Google Scholar]
  2. Alonso, J.; Liu, Z.; Evans-Lacko, S.; Sadikova, E.; Sampson, N.; Chatterji, S.; Abdulmalik, J.; Aguilar-Gaxiola, S.; Al-Hamzawi, A.; Andrade, L.H.; et al. Treatment gap for anxiety disorders is global: Results of the World Mental Health Surveys in 21 countries. Depress. Anxiety 2018, 35, 195–208. [Google Scholar] [CrossRef]
  3. Regier, D.A.; Kuhl, E.A.; Kupfer, D.J. The DSM-5: Classification and criteria changes. World Psychiatry 2013, 12, 92–98. [Google Scholar] [CrossRef]
  4. Sánchez-Vincitore, L.V.; Castelló Gómez, M.E.; Lajara, B.; Duñabeitia, J.A.; Marte-Santana, H. Prevalence of state, trait, generalized, and social anxiety, and well-being among undergraduate students at a university in the Dominican Republic. Psychiatry Res. Commun. 2025, 5, 100225. [Google Scholar] [CrossRef]
  5. Xiong, P.; Liu, M.; Liu, B.; Hall, B.J. Trends in the incidence and DALYs of anxiety disorders at the global, regional, and national levels: Estimates from the Global Burden of Disease Study 2019. J. Affect. Disord. 2022, 297, 83–93. [Google Scholar] [CrossRef]
  6. Solatidehkordi, Z.; Ramesh, J.; Pasquier, M.; Sagahyroon, A.; Aloul, F. A Survey of Machine Learning Approaches for Detecting Depression Using Smartphone Data. In Proceedings of the 2022 IEEE International Conference on Industry 4.0, Artificial Intelligence, and Communications Technology (IAICT), Bali, Indonesia, 28–30 July 2022; pp. 184–190. [Google Scholar] [CrossRef]
  7. Ahmed, A.; Ramesh, J.; Ganguly, S.; Aburukba, R.; Sagahyroon, A.; Aloul, F. Evaluating multimodal wearable sensors for quantifying affective states and depression with neural networks. IEEE Sens. J. 2023, 23, 22788–22802. [Google Scholar] [CrossRef]
  8. Paraschou, E.; Yfantidou, S.; Vakali, A. UnStressMe: Explainable Stress Analytics and Self-tracking Data Visualizations. In Proceedings of the 2023 IEEE International Conference on Pervasive Computing and Communications Workshops and other Affiliated Events (PerCom Workshops), Atlanta, GA, USA, 13–17 March 2023; pp. 340–342. [Google Scholar] [CrossRef]
  9. Alexandra, A.M.M.; Gabriel, G.D.C.; Elio, N.V. Mobile Application for Managing Anxiety using Machine Learning Model. In Proceedings of the 2025 3rd Cognitive Models and Artificial Intelligence Conference (AICCONF), Prague, Czech Republic, 13–14 June 2025; pp. 1–7. [Google Scholar] [CrossRef]
  10. Ahmed, A.; Ramesh, J.; Ganguly, S.; Aburukba, R.; Sagahyroon, A.; Aloul, F. Investigating the Feasibility of Assessing Depression Severity and Valence-Arousal with Wearable Sensors Using Discrete Wavelet Transforms and Machine Learning. Information 2022, 13, 406. [Google Scholar] [CrossRef]
  11. Abd-alrazaq, A.; AlSaad, R.; Aziz, S.; Ahmed, A.; Denecke, K.; Househ, M.; Farooq, F.; Sheikh, J. Wearable Artificial Intelligence for Anxiety and Depression: Scoping Review. J. Med. Internet Res. 2023, 25, e42672. [Google Scholar] [CrossRef]
  12. Knowles, K.A.; Olatunji, B.O. Specificity of trait anxiety in anxiety and depression: Meta-analysis of the State-Trait Anxiety Inventory. Clin. Psychol. Rev. 2020, 82, 101928. [Google Scholar] [CrossRef] [PubMed]
  13. Jerath, R.; Syam, M.; Ahmed, S. The Future of Stress Management: Integration of Smartwatches and HRV Technology. Sensors 2023, 23, 7314. [Google Scholar] [CrossRef] [PubMed]
  14. Choudhury, A.; Asan, O. Impact of using wearable devices on psychological Distress: Analysis of the health information national Trends survey. Int. J. Med. Inform. 2021, 156, 104612. [Google Scholar] [CrossRef] [PubMed]
  15. Tsai, C.H.; Chen, P.C.; Liu, D.S.; Kuo, Y.Y.; Hsieh, T.T.; Chiang, D.L.; Lai, F.; Wu, C.T. Panic Attack Prediction Using Wearable Devices and Machine Learning: Development and Cohort Study. JMIR Med. Inform. 2022, 10, e33063. [Google Scholar] [CrossRef] [PubMed]
  16. De Angel, V.; Lewis, S.; White, K.; Oetzmann, C.; Leightley, D.; Oprea, E.; Lavelle, G.; Matcham, F.; Pace, A.; Mohr, D.C.; et al. Digital health tools for the passive monitoring of depression: A systematic review of methods. Npj Digit. Med. 2022, 5, 3. [Google Scholar] [CrossRef]
  17. Zhang, Y.; Stewart, C.; Ranjan, Y.; Conde, P.; Sankesara, H.; Rashid, Z.; Sun, S.; Dobson, R.J.B.; Folarin, A.A. Large-scale digital phenotyping: Identifying depression and anxiety indicators in a general UK population with over 10,000 participants. J. Affect. Disord. 2025, 375, 412–422. [Google Scholar] [CrossRef]
  18. Dai, R.; Kannampallil, T.; Kim, S.; Thornton, V.; Bierut, L.; Lu, C. Detecting Mental Disorders with Wearables: A Large Cohort Study. In Proceedings of the 8th ACM/IEEE Conference on Internet of Things Design and Implementation (IoTDI ’23), New York, NY, USA, 9–12 May 2023; pp. 39–51. [Google Scholar] [CrossRef]
  19. Ancillon, L.; Elgendi, M.; Menon, C. Machine Learning for Anxiety Detection Using Biosignals: A Review. Diagnostics 2022, 12, 1794. [Google Scholar] [CrossRef]
  20. Nakagome, K.; Makinodan, M.; Uratani, M.; Kato, M.; Ozaki, N.; Miyata, S.; Iwamoto, K.; Hashimoto, N.; Toyomaki, A.; Mishima, K.; et al. Feasibility of a wrist-worn wearable device for estimating mental health status in patients with mental illness. Front. Psychiatry 2023, 14, 1189765. [Google Scholar] [CrossRef] [PubMed]
  21. Ramesh, J.; Keeran, N.; Sagahyroon, A.; Aloul, F. Towards Validating the Effectiveness of Obstructive Sleep Apnea Classification from Electronic Health Records Using Machine Learning. Healthcare 2021, 9, 1450. [Google Scholar] [CrossRef] [PubMed]
  22. Bari, S.; Kim, B.W.; Vike, N.L.; Lalvani, S.; Stefanopoulos, L.; Maglaveras, N.; Block, M.; Strawn, J.; Katsaggelos, A.K.; Breiter, H.C. A novel approach to anxiety level prediction using small sets of judgment and survey variables. Npj Ment. Health Res. 2024, 3, 29. [Google Scholar] [CrossRef]
  23. Yfantidou, S.; Karagianni, C.; Efstathiou, S.; Vakali, A.; Palotti, J.; Giakatos, D.P.; Marchioro, T.; Kazlouski, A.; Ferrari, E.; Girdzijauskas, S. LifeSnaps, a 4-month multi-modal dataset capturing unobtrusive snapshots of our lives in the wild. Sci. Data 2022, 9, 663. [Google Scholar] [CrossRef]
  24. Park, J.E.; Ahn, E.K.; Yoon, K.; Kim, J. Performance of Fitbit Devices as Tools for Assessing Sleep Patterns and Associated Factors. J. Sleep Med. 2024, 21, 59–64. [Google Scholar] [CrossRef]
  25. Peake, J.M.; Kerr, G.; Sullivan, J.P. A Critical Review of Consumer Wearables, Mobile Applications, and Equipment for Providing Biofeedback, Monitoring Stress, and Sleep in Physically Active Populations. Front. Physiol. 2018, 9, 743. [Google Scholar] [CrossRef]
  26. Lederer, L.; Breton, A.; Jeong, H.; Master, H.; Roghanizad, A.R.; Dunn, J. The Importance of Data Quality Control in Using Fitbit Device Data From the All of Us Research Program. JMIR mHealth uHealth 2023, 11, e45103. [Google Scholar] [CrossRef]
  27. Gagnon, J.; Khau, M.; Lavoie-Hudon, L.; Vachon, F.; Drapeau, V.; Tremblay, S. Comparing a Fitbit wearable to an electrocardiogram gold standard as a measure of heart rate under psychological stress: A validation study. JMIR Form. Res. 2022, 6, e37885. [Google Scholar] [CrossRef]
  28. Ramesh, J.; Solatidehkordi, Z.; Sagahyroon, A.; Aloul, F. Multimodal Neural Network Analysis of Single-Night Sleep Stages for Screening Obstructive Sleep Apnea. Appl. Sci. 2025, 15, 1035. [Google Scholar] [CrossRef]
  29. Ercan, I.; Hafizoglu, S.; Ozkaya, G.; Kirli, S.; Yalcintas, E.; Akaya, C. Examining cut-off values for the state-trait anxiety inventory. Rev. Argent. Clin. Psicol. 2015, 24, 143–148. [Google Scholar]
  30. Mughal, F.; Raffe, W.; Stubbs, P.; Garcia, J. Towards depression monitoring and prevention in older populations using smart wearables: Quantitative Findings. In Proceedings of the 2022 IEEE 10th International Conference on Serious Games and Applications for Health(SeGAH), Sydney, Australia, 10–12 August 2022; pp. 1–8. [Google Scholar] [CrossRef]
  31. Linde, K.; Olm, M.; Teusen, C.; Akturk, Z.; Schrottenberg, V.; Hapfelmeier, A.; Dawson, S.; Rücker, G.; Löwe, B.; Schneider, A. The diagnostic accuracy of widely used self-report questionnaires for detecting anxiety disorders in adults. Cochrane Database Syst. Rev. 2022, 2022, CD015292. [Google Scholar] [CrossRef]
  32. Teed, A.R.; Feinstein, J.S.; Puhl, M.; Lapidus, R.C.; Upshaw, V.; Kuplicki, R.T.; Bodurka, J.; Ajijola, O.A.; Kaye, W.H.; Thompson, W.K.; et al. Association of Generalized Anxiety Disorder With Autonomic Hypersensitivity and Blunted Ventromedial Prefrontal Cortex Activity During Peripheral Adrenergic Stimulation. JAMA Psychiatry 2022, 79, 323–332. [Google Scholar] [CrossRef] [PubMed]
  33. Richards, J.C.; Bertram, S. Anxiety Sensitivity, State and Trait Anxiety, and Perception of Change in Sympathetic Nervous System Arousal. J. Anxiety Disord. 2000, 14, 413–427. [Google Scholar] [CrossRef] [PubMed]
  34. Gullett, N.; Zajkowska, Z.; Walsh, A.; Harper, R.; Mondelli, V. Heart rate variability (HRV) as a way to understand associations between the autonomic nervous system (ANS) and affective states: A critical review of the literature. Int. J. Psychophysiol. 2023, 192, 35–42. [Google Scholar] [CrossRef]
  35. Herborn, K.A.; Graves, J.L.; Jerem, P.; Evans, N.P.; Nager, R.; McCafferty, D.J.; McKeegan, D.E. Skin temperature reveals the intensity of acute stress. Physiol. Behav. 2015, 152, 225–230. [Google Scholar] [CrossRef]
  36. Rawliuk, T.; Thrones, M.; Cordingley, D.M.; Cornish, S.M.; Greening, S.G. Promoting brain health and resilience: The effect of three types of acute exercise on affect, brain-derived neurotrophic factor and heart rate variability. Behav. Brain Res. 2025, 493, 115675. [Google Scholar] [CrossRef]
  37. Daniela, M.; Catalina, L.; Ilie, O.; Paula, M.; Daniel-Andrei, I.; Ioana, B. Effects of Exercise Training on the Autonomic Nervous System with a Focus on Anti-Inflammatory and Antioxidants Effects. Antioxidants 2022, 11, 350. [Google Scholar] [CrossRef]
  38. Verma, A.; Balekar, N.; Rai, A. Navigating the Physical and Mental Landscape of Cardio, Aerobic, Zumba, and Yoga. Arch. Med. Health Sci. 2024, 12, 242. [Google Scholar] [CrossRef]
  39. Schuch, F.B.; Stubbs, B. Physical Activity, Physical Fitness, and Depression. In Oxford Research Encyclopedia of Psychology; Oxford University Press: Oxford, UK, 2017. [Google Scholar] [CrossRef]
  40. Anderson, E.; Shivakumar, G. Effects of Exercise and Physical Activity on Anxiety. Front. Psychiatry 2013, 4, 27. [Google Scholar] [CrossRef] [PubMed]
  41. Taquet, V.; Blot, V.; Morzadec, T.; Lacombe, L.; Brunel, N. MAPIE: An open-source library for distribution-free uncertainty quantification. arXiv 2022, arXiv:2207.12274. [Google Scholar] [CrossRef]
  42. Lopez, R.; Shrestha, A.; Hickey, K.; Guo, X.; Tlachac, M.; Liu, S.; Rundensteiner, E.A. Screening Students for Stress Using Fitbit Data. In Proceedings of the 2024 IEEE International Conference on Big Data (BigData), Washington, DC, USA, 15–18 December 2024; pp. 3931–3934. [Google Scholar] [CrossRef]
  43. How do I Track and Manage Stress with my Fitbit Device?—Fitbit Help Center. Available online: https://support.google.com/fitbit/answer/14237928?hl=en (accessed on 3 August 2024).
  44. Ahmed, A.; Aziz, S.; Alzubaidi, M.; Schneider, J.; Irshaidat, S.; Abu Serhan, H.; Abd-alrazaq, A.A.; Solaiman, B.; Househ, M. Wearable devices for anxiety & depression: A scoping review. Comput. Methods Programs Biomed. Update 2023, 3, 100095. [Google Scholar] [CrossRef]
Figure 1. Overview of the proposed experimental pipeline for classifying anxiety from physiological and behavioral data.
Figure 2. Sorted feature importance extracted from CB.
Figure 3. Global feature importance with Shapley values on the test set after creating Kernel Explainer using training data samples.
Figure 4. Sensitivity Analysis for CatBoost with different train/test ratios.
Table 1. Comparison of continuous features between Non-Anxiety and Anxiety participants. Mean ± SD are shown. Mann–Whitney U tests were used. All p–values were < 0.001 , except for variables marked with “†,” for which the p–values were < 0.05 . All data were collected or aggregated by Fitbit smartwatches daily, except for variables marked with &, which may sometimes have granularity of multiple days.
Feature Name | Non-Anxiety (398) | Anxiety (1493) | Description
BMI | 22.43 ± 2.15 | 22.86 ± 2.55 | Body Mass Index, collected as survey data (kg·m⁻²).
Steps | 8799.06 ± 5361.71 | 8354.81 ± 4620.94 | Number of steps taken by an individual.
Demographic VO2Max & | 52.39 ± 3.31 | 46.06 ± 6.56 | Maximum oxygen uptake, estimated every 2 days (mL·kg⁻¹·min⁻¹).
Resting HR & | 58.68 ± 7.22 | 66.82 ± 6.65 | Heart rate at rest, baselined every 4 days.
BPM | 72.32 ± 9.51 | 78.94 ± 7.67 | Heart rate during general activity.
Daily temperature variation | 1.38 ± 0.82 | 1.30 ± 0.91 | Measure of daily skin temperature variation from a 3-day baseline.
Nightly temperature | 33.75 ± 0.61 | 33.90 ± 0.76 | Measure of night-time skin temperature variation from a 3-day baseline.
Lightly active minutes | 206.16 ± 93.51 | 209.84 ± 87.39 | Time spent in light physical activity, calculated through METs *.
Moderately active minutes | 27.91 ± 31.47 | 22.97 ± 27.26 | Time spent in moderate physical activity, calculated through METs *.
Very active minutes | 27.52 ± 34.75 | 22.22 ± 31.93 | Time spent in very active physical activity, calculated through METs *.
Calories | 2640.23 ± 663.72 | 2475.15 ± 690.79 | Number of calories burned.
Sedentary minutes | 716.64 ± 132.81 | 708.13 ± 134.22 | Time spent unengaged in physical activity.
Mindfulness session | 0.01 ± 0.07 | 0.05 ± 0.21 | Number of mindfulness sessions.
Light sleep ratio | 0.99 ± 0.22 | 1.00 ± 0.24 | Ratio of time spent in light sleep.
REM sleep ratio | 1.01 ± 0.45 | 1.01 ± 0.38 | Ratio of time spent in REM sleep.
Deep sleep ratio | 1.00 ± 0.27 | 1.00 ± 0.33 | Ratio of time spent in deep sleep.
Sleep duration | (2.74 ± 0.56) × 10⁷ | (2.75 ± 0.61) × 10⁷ | Total duration of time spent in bed in milliseconds.
Minutes To Fall Asleep | 0.00 ± 0.00 | 0.01 ± 0.24 | Number of minutes for individual to fall asleep.
Minutes Asleep | 393.75 ± 80.62 | 397.98 ± 87.83 | Total minutes spent asleep.
Minutes After Wakeup | 1.02 ± 2.52 | 0.64 ± 1.96 | Number of minutes individual spent awake during sleep session.
Minutes Awake | 62.22 ± 20.65 | 59.37 ± 20.34 | Total minutes spent awake.
Sleep wake ratio | 1.00 ± 0.29 | 1.00 ± 0.31 | Ratio of time spent awake.
Sleep efficiency | 92.98 ± 4.08 | 94.43 ± 3.25 | Ratio of time spent asleep.
* METs are metabolic equivalents.
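The group comparisons in Table 1 use Mann–Whitney U tests. A sketch of the procedure on synthetic resting-HR samples drawn to mimic the table's reported means, standard deviations, and group sizes (the real per-participant data would be needed to reproduce the exact statistics):

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
# Synthetic resting-HR samples approximating Table 1's group statistics
# (58.68 ± 7.22 vs. 66.82 ± 6.65); group sizes match the table's counts.
non_anx = rng.normal(58.68, 7.22, 398)
anx = rng.normal(66.82, 6.65, 1493)

# Non-parametric two-sided test, as used for the continuous features.
stat, p = mannwhitneyu(non_anx, anx, alternative="two-sided")
print(f"U={stat:.0f}, p={p:.2e}")
```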
Table 2. Comparison of categorical features between Non-Anxiety and Anxiety participants. Chi–square tests were used; all p-values were < 0.001 .
Feature | Category | Non-Anxiety (398) | Anxiety (1493)
Age | ≥30 | 161 | 816
Age | <30 | 237 | 677
Gender | Female | 58 | 544
Gender | Male | 340 | 949
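The chi-square tests for Table 2 can be reproduced directly from the reported counts with SciPy:

```python
from scipy.stats import chi2_contingency

# Contingency tables taken directly from Table 2.
age = [[161, 816],    # age >= 30: non-anxiety, anxiety
       [237, 677]]    # age < 30
gender = [[58, 544],    # female
          [340, 949]]   # male

results = {}
for name, table in [("age", age), ("gender", gender)]:
    chi2, p, dof, _ = chi2_contingency(table)
    results[name] = p
    print(f"{name}: chi2={chi2:.1f}, p={p:.1e}")
```

Both tests come out highly significant, consistent with the caption's report that all p-values were < 0.001.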
Table 3. Classification performance measures across ensemble and traditional models for anxiety classification with five-fold cross-validation.
Model | Acc | Sen | Sp | F1-Score | PPV | NPV | AUC
k-NN | 0.71 ± 0.03 | 0.88 ± 0.06 | 0.20 ± 0.06 | 0.82 ± 0.02 | 0.78 ± 0.07 | 0.38 ± 0.24 | 0.56 ± 0.08
LR | 0.98 ± 0.04 | 1.00 ± 0 | 0.84 ± 0.23 | 0.99 ± 0.02 | 0.97 ± 0.04 | 1.00 ± 0 | 1.00 ± 0
SVM | 0.89 ± 0.07 | 1.00 ± 0 | 0.53 ± 0.15 | 0.93 ± 0.05 | 0.87 ± 0.08 | 0.99 ± 0.01 | 0.92 ± 0.10
GNB | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0
RF | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0.01 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0
XGB | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0
CB | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0
LGB | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0 | 1.00 ± 0
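The five-fold cross-validated metrics in Table 3 can be computed in outline with scikit-learn. Here a GradientBoostingClassifier stands in for CatBoost to keep the sketch dependency-light, and the synthetic data (class balance roughly mirroring the 398/1493 split) is an assumption, not the study's feature matrix:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_validate
from sklearn.metrics import make_scorer, recall_score

# Synthetic stand-in for the wearable feature matrix; the minority-class
# weight approximates the 398/1891 non-anxiety proportion.
X, y = make_classification(n_samples=1891, n_features=20, weights=[0.21],
                           random_state=42)

scoring = {
    "acc": "accuracy",
    "sen": "recall",                               # sensitivity (recall of class 1)
    "sp": make_scorer(recall_score, pos_label=0),  # specificity (recall of class 0)
    "f1": "f1",
    "auc": "roc_auc",
}
# Stand-in boosted ensemble; CatBoost/LightGBM/XGBoost would slot in here.
cv = cross_validate(GradientBoostingClassifier(random_state=42),
                    X, y, cv=5, scoring=scoring)
for k in scoring:
    s = cv[f"test_{k}"]
    print(f"{k}: {s.mean():.2f} ± {s.std():.2f}")
```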
Table 4. Conformal prediction results showing average prediction set size and empirical coverage at different target confidence levels.
Target Confidence (1 − α) | Avg. Set Size | Empirical Coverage
0.95 | 0.900 | 0.701
0.85 | 0.603 | 0.484
0.75 | 0.442 | 0.332
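The prediction sets in Table 4 follow the split-conformal recipe (the study cites MAPIE [41]). A from-scratch sketch on placeholder data shows the calibration step; under exchangeability the empirical coverage should approach the target, which makes the under-coverage reported in Table 4 a useful diagnostic of distribution shift or overfitting:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Placeholder data and classifier, not the study's pipeline.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, test_size=0.5, random_state=0)
X_cal, X_te, y_cal, y_te = train_test_split(X_rest, y_rest, test_size=0.5, random_state=0)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Non-conformity score: 1 - probability of the true class on calibration data.
cal_scores = 1 - clf.predict_proba(X_cal)[np.arange(len(y_cal)), y_cal]
alpha = 0.05
q = np.quantile(cal_scores,
                np.ceil((len(cal_scores) + 1) * (1 - alpha)) / len(cal_scores))

# Prediction set: every label whose score is within the calibrated quantile.
sets = (1 - clf.predict_proba(X_te)) <= q
coverage = sets[np.arange(len(y_te)), y_te].mean()
avg_size = sets.sum(axis=1).mean()
print(f"coverage={coverage:.3f}, avg set size={avg_size:.2f}")
```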
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Velu, A.; Ramesh, J.; Ahmed, A.; Ganguly, S.; Aburukba, R.; Sagahyroon, A.; Aloul, F. Integrating Fitbit Wearables and Self-Reported Surveys for Machine Learning-Based State–Trait Anxiety Prediction. Appl. Sci. 2025, 15, 10519. https://doi.org/10.3390/app151910519


