Article

Non-Invasive Sleep Stage Classification with Imbalance-Aware Machine Learning for Healthcare Monitoring

Department of Information Engineering (DII), Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
*
Author to whom correspondence should be addressed.
Big Data Cogn. Comput. 2026, 10(4), 116; https://doi.org/10.3390/bdcc10040116
Submission received: 3 February 2026 / Revised: 2 April 2026 / Accepted: 7 April 2026 / Published: 10 April 2026

Abstract

Sleep plays a fundamental role in human health and cognitive functioning, motivating the development of reliable and scalable methodologies for sleep stage classification (SSC). Recent advances in non-invasive and economically sustainable sensing technologies enable continuous sleep monitoring beyond laboratory settings. However, SSC remains a challenging data analytics task due to the intrinsic class imbalance among sleep stages. This study investigates the effectiveness of different imbalanced data management strategies within a machine learning framework for non-invasive SSC. The proposed approach relies exclusively on heart rate and motion signals, which can be acquired through wearable devices or contactless under-mattress sensors, making it suitable for longitudinal monitoring scenarios. Using the PhysioNet DREAMT dataset, 32 experimental scenarios are defined by combining data-level techniques (ADASYN oversampling with different balancing weights), algorithm-level strategies (cost-sensitive learning), and hybrid solutions. Four model families are evaluated—Decision Tree, k-Nearest Neighbors, Ensemble Classifiers, and Artificial Neural Networks—across classification tasks involving 2, 3, 4, and 5 sleep stages. The experimental results show that ensemble-based models provide robust and consistent performance under severe class imbalance, achieving macro accuracies of 82% for sleep–wake detection, 73% for 3-stage classification, 72% for 4-stage classification, and 64% for 5-stage classification. These findings confirm the relevance of imbalance-aware analytics and demonstrate the feasibility of accurate, minimally invasive SSC within big data and cognitive computing paradigms.

1. Introduction

Accurate monitoring and classification of sleep stages are critically important because sleep plays a fundamental role in regulating numerous physiological and cognitive processes, as highlighted by [1]. Sleep activity has been closely linked to immune function, metabolic regulation, memory consolidation, and emotional processing. Therefore, disrupted sleep patterns or misclassifications of sleep stages can hinder the timely identification of sleep-related disorders such as insomnia, sleep apnea, or narcolepsy. As such, the development of reliable computational models for sleep stage classification has significant implications not only for clinical diagnosis and personalized medicine, but also for broader efforts in preventive health and well-being monitoring.
In particular, adults who sleep less than seven hours a night are at greater risk of weight gain, diabetes, high blood pressure, heart disease, stroke, and depression. However, sleep duration is not the only factor worth monitoring: according to [2], sleep quality is equally essential. Monitoring sleep stages during the night enables an objective assessment of a person’s sleep quality. While sleeping, we spend different amounts of time in five different sleep stages (American Academy of Sleep Medicine, AASM, guidelines), as extensively described by [3]:
  • Wake (W) refers to 5–15% of a healthy adult’s night rest;
  • Rapid eye movement (REM) refers to 20–25% of a healthy adult’s night rest;
  • Non-REM light sleep (NREM1) refers to 2–5% of a healthy adult’s night rest;
  • Non-REM medium sleep (NREM2) refers to 45–55% of a healthy adult’s night rest;
  • Non-REM deep sleep (NREM3) refers to 10–20% of a healthy adult’s night rest.
Therefore, the physiology of sleep makes it an inherently imbalanced use case for data scientists aiming at the creation of SSC models.
The gold standard for monitoring sleep stages and the associated sleep quality is polysomnography (PSG), which consists of electroencephalography (EEG), electromyography (EMG), electrocardiography (ECG), and electrooculography (EOG) [4]. The raw PSG data are then manually annotated by professional technicians, who estimate the sleep stage of monitored patients within 30-s epochs. The commonly accepted silver standard is actigraphy (ACG), but its poor reliability and instability have been demonstrated by [5]. The performance of ACG was shown to be comparable to or poorer than that of other wearable and sport-tracking technologies, making ACG unattractive for sleep stage classification (SSC). PSG is therefore the most reliable solution, but it is associated with high resource consumption, high costs, patient discomfort, and limited interrater agreement. Moreover, it is usually performed in specialized laboratories and is thus not viable for longitudinal monitoring [6].
Since PSG, despite its reliability, is expensive, invasive, complex, and unsuitable for longitudinal studies, the present work aims to answer the following research questions (RQs):
(RQ1): 
Is it possible to develop reliable and efficient modeling solutions that perform SSC directly from non-invasive sensors measuring HR and motion?
(RQ2): 
Do imbalance management techniques improve the results achievable in non-invasive SSC based on HR and motion?
The remainder of this work is organized as follows: Section 2 reviews the literature on the two main topics of this work, non-invasive SSC and management techniques for imbalanced classification data. Section 3 presents the experiments performed and the dataset used in detail. Section 4 presents and discusses the achieved results. The final section summarizes the obtained results with respect to the RQs, compares our best models’ performance with that of similar works in the current literature, and recaps the findings and intuitions developed on the basis of the extensive set of experiments performed.

2. Related Works

Some researchers have attempted to automate the annotation task, owing to the development of solutions capable of extracting features from raw ECG signals, and then used these predictors for the classification of sleep stages. In [7], HR, respiration rate (RR), and motion were shown to be suitable for the SSC task. Two training scenarios (subject-specific and subject-independent) were explored, achieving 51% accuracy in discerning all five sleep stages (5-SSC) and 77% accuracy in 3-SSC (the NREM1-2-3 phases condensed into a single NREM class). Nonetheless, HR, RR, and motion were extracted from raw PSG signals, and may thus differ from the HR, RR, and motion that less intrusive monitoring technologies could provide. Similarly, ref. [8] used HR, RR, and movement (number of movements in a 30-s window) for SSC; these signals are likewise extracted and computed from the original PSG signals. Ref. [9] used HR data only (directly derived from ECGs) for the SSC task, achieving 66% accuracy in 5-SSC and 72% in 4-SSC (with NREM1 and NREM2 aggregated into a single state). Ref. [10] presented an open-source Python 3.12 package for sleep staging based on heart rate variability extracted from raw ECGs. Recent studies have also explored machine learning approaches for automated sleep analysis using physiological signals, highlighting the growing potential of data-driven techniques for unobtrusive sleep monitoring [11]. In [4], a generative adversarial network (GAN) was proposed for the management of class imbalance; the input of the network is the raw EEG signal from the acquired PSG, but the authors are not interested in non-invasive SSC.

2.1. Modern SSC Solutions

PSG’s weaknesses, together with advancements in sensing technologies and in data analysis and modeling solutions, fostered the adoption of wearable devices [12] and noncontact sensors to build models capable of accurately estimating sleep stages without the need for invasive and expensive PSG measurements [1]. Ref. [13] employed HR and motion count signals extracted from a wearable device, comparing several DL and ML models and some imbalance management techniques on an open dataset involving 31 subjects. They performed binary classification only (sleep–wake recognition), obtaining approximately 91–95% accuracy, 63–67% specificity, and 94–98% sensitivity. Ref. [14] solved the 4-SSC task via a Fitbit device (a wearable consumer tracker). Two sequential models were developed: the first cleans the sleep stages computed by the proprietary Fitbit algorithm; in the case of misclassified sleep stage detection, the second model computes the correct sleep stage from raw data collected by the wearable device. The authors use random upsampling (RUS) and random downsampling (RDS); the effectiveness of under/oversampling has been investigated in another dedicated work [15]. Ref. [6] used an EarlySense contactless device, focusing on 4-SSC and sleep-wake (SW) recognition. Ref. [5] included in their study 7 consumer devices for sleep stage monitoring (4 wearables, 1 EarlySense, and 2 based on RF waves), comparing them with PSG for both epoch-by-epoch (EBE) 4-SSC and aggregated sleep summary measures.
For the effective development of non-invasive SSC solutions based on ML and DL, the availability of open data is essential, and the recent review of [1] lists the datasets available up to 2022. Recently, an open dataset for sleep stage classification was published by [16] on the PhysioNet platform developed by [17]. It comprises 100 nights monitored via both the PSG and a wearable device (Empatica E4 bracelet), thus being a suitable reference for building and comparing data-driven modeling and data imbalance management solutions for sleep stage classification via simpler and less invasive monitoring technology. This open dataset (together with other open datasets) was used by [18], who focused on SW recognition through acceleration signals and self-supervised modeling.
In summary, much attention has been devoted to sleep monitoring over time; however, relatively few studies have focused on making it a non-invasive and easier task through the development of algorithms for wearable or noncontact device signals. Studies assessing the reliability of contactless technologies (such as under-mattress belts) are usually pilot studies with few participants (approximately 5 people). Among previous studies addressing our same objective, namely, developing algorithms based on easily collectable physiological signals for accurate sleep stage classification, and using this framework to evaluate data imbalance management techniques, only a few have considered both multi-class sleep stage recognition beyond the simple sleep–wake (SW) task and the impact of imbalance mitigation strategies [13].

2.2. Class Imbalance Management Techniques

Sleep stage classification, akin to numerous other tasks in biomedical data analysis, is notably characterized by pronounced class imbalance. This intrinsic characteristic of data necessitates the rigorous assessment of mitigation strategies aimed at addressing data complexity and enhancing the robustness and generalizability of data-driven models in highly imbalanced and heterogeneous settings such as the one selected.
The class imbalance problem is evident both in simple sleep/wake recognition and in more fine-grained sleep stage classification tasks targeting REM, NREM1 (also called light sleep), NREM2, NREM3 (also called deep sleep), and the wake state. Specifically, a normal human night’s sleep typically consists of 40 to 60% of the time spent in light sleep (NREM 1 and 2), 15 to 25% in REM sleep, 15 to 20% in deep sleep (NREM 3), and 5 to 15% in wakefulness [19].
There are several domains of ML applications characterized by a strong class imbalance, especially if we focus on biomedical application scenarios. Several strategies have been proposed and adopted over time.
We can summarize the main typologies as follows:
  • Data-level techniques—data resampling (over- and undersampling) such as the renowned synthetic minority oversampling technique (SMOTE);
  • Algorithm-level techniques—cost-sensitive learning (CSL), and imbalance robust models, such as ensembles;
  • Other approaches—hybrid combinations.
Ref. [20] provided an overview of imbalanced data for classification tasks in several domains. The authors distinguish between external solutions (which modify the data, but not the algorithms) and internal solutions (based on algorithms or training strategies that can manage imbalance). This review does not cover multilabel classification and is mostly focused on binary classification, which is not the only task addressed in our study. A very complete article is the one by [21], where CSL, resampling, and algorithmic strategies are reviewed. The authors highlight the difficulty of setting the costs in CSL; unfortunately, their discussion is limited to binary classification. However, they delineate several research directions, some of which have been explored in the time between their review and this paper. In [22], several resampling techniques (both undersampling and oversampling) and some imbalance-robust models (boosted ensembles, among others) are presented. The review in ref. [23] focuses on multi-class medical tasks, which are characterized by strong data imbalance; the authors concentrate on oversampling and provide a list of CSL and ensemble models capable of dealing with imbalanced data. Among the most recent contributions in this field, ref. [24] proposed a survey on oversampling (OS), undersampling (US), imbalance-robust algorithms, and CSL. This reference highlights the instability of the approaches tested and the need to carefully evaluate the effectiveness of imbalanced data management strategies case by case.
As briefly introduced, several strategies have been proposed to address class imbalance in machine learning. These methods can be broadly categorized into data-level approaches, algorithm-level approaches, and other complementary techniques.

2.2.1. Data-Level Imbalance-Handling Techniques

Ref. [25] explored undersampling, oversampling, and hybrid-sampling techniques, comparing 6 classifiers trained on 25 datasets with different imbalance ratios, but mostly made up of few features and few samples; moreover, they focused only on binary classification. The effectiveness of SMOTE for a Parkinson’s disease monitoring solution was tested in [26]. Ref. [27] described the adaptive synthetic (ADASYN) oversampling technique as an improvement over SMOTE: ADASYN generates synthetic minority instances according to a weighted density distribution, producing more samples for the minority instances that are harder to learn. The master’s thesis by [28] focused only on binary classification, covering non-medical use cases by testing logistic regression (LR), support vector machine (SVM), and random forest (RF) models with SMOTE, ADASYN, or no resampling. Their conclusions show that no preprocessing method consistently improves the performance of the trained models. In [29], a new oversampling technique is introduced: the Mahalanobis distance-based oversampling technique (MDO). The authors compare it with SMOTE and ADASYN on 20 multiclass datasets (3–26 classes each, 100–20 k samples each). MDO proved to be the best solution; however, this methodology has not received attention in the literature, which is why we decided to test the more widely used ADASYN strategy.

2.2.2. Algorithm-Level Imbalance-Handling Techniques

Ref. [30] demonstrated the effectiveness of CSL: based on the experiments performed, the authors conclude that the reliability of CSL and resampling depends on the specific characteristics of the dataset and the application context. Ref. [31] demonstrated that four different ML models reach better performance with the CSL configuration on four different medical datasets. Ref. [32] reviewed CSL for medical data; however, resampling was not considered in their work, and the authors highlight the importance of data and code sharing for this specific research topic. Furthermore, ref. [33] defined a new CSL method and validated its efficiency.
Analyzing hybrid techniques, the effectiveness of CSL coupled with oversampling to achieve better results has been demonstrated for bankruptcy modeling by [34]. Ref. [35] also used oversampling together with CSL on 4 non-biomedical datasets. The combination of these two approaches achieves better performance than both used separately. Ref. [36] tested single and ensemble classifiers, and ADASYN coupled with CSL. Nevertheless, they performed the experiments only on a proprietary dataset (regarding the freezing of gait in Parkinson’s patients). Ref. [37] investigated the effects of feature selection in conjunction with oversampling and CSL. The experiments use six binary classification biomedical datasets characterized by extremely high dimensionality, which are not comparable to our modeling scenario.

2.2.3. Other Imbalance-Handling Techniques

A work proposing a novel approach for binary classification is [38], in which the proposed methodology is validated on 11 datasets. Ref. [39] demonstrated the overfitting and noise resulting from the application of data-level approaches. Ref. [40] explored whether class imbalance (whose hindering effects on ML modeling are proven) also negatively influences DL models (multilayer perceptron and convolutional neural network). They used several datasets, some belonging to the biomedical domain, to test this hypothesis, concluding that both MLP and CNN suffer from data imbalance, especially when the available dataset size is limited; the MLP is highly affected, whereas the CNN is slightly less so. Ref. [41] focused on AI-based approaches for imbalanced datasets. Among modern AI-based methods, GANs are unstable and unreliable when used with limited data, while resampling techniques such as SMOTE and its variants, or ADASYN, proved either efficient or limited and inadequate depending on the use case.
Over time, different solutions have been proposed to address imbalanced data in binary and multiclass applications, but a clear and unified strategy for successfully accomplishing this objective is lacking. Ref. [42] highlighted that a common and clear usage of resampling techniques for imbalanced datasets is missing. The authors resampled the source data to create datasets with different imbalance ratios and sizes, using the usual 70/30% train/test split in each experiment. They conclude that SMOTE and ADASYN behave similarly when the amount of data is not large and the imbalance ratio is high. Their focus is on binary classification only, but on the basis of the reviewed literature, we can expect this to largely hold for multiclass applications as well. Ref. [41] reviewed the literature on oversampling methodologies, summarizing the efficiency and limitations of different approaches; the authors highlight the need for tailored strategies in specific use cases, in line with the majority of academic works.
Based on the reviewed literature, we believe that experiments based on classical ML modeling, without resorting to more complex, data-hungry DL techniques such as GANs, still need to be evaluated in the context of SSC. This will provide a clearer view of what minimally invasive SSC solutions can achieve when imbalanced data management techniques are taken into account.
The landscape emerging from this extended review on the management of imbalanced classification tasks is diverse. Several specific application domains have been explored over time, but most of the conclusions drawn by researchers highlight the dependency on specific characteristics of data, the instability of advanced DL-based solutions, and the consequent potential for additional research on this topic. Most of the reviewed literature agrees with the use-case specificity when evaluating the effectiveness and reliability of data-level resampling solutions and model-level cost-sensitive solutions.

3. Materials and Methods

As highlighted in the previous section, several strategies, focused on different conceptual levels of ML modeling, can be implemented to address imbalanced data classification. These range from doing nothing at all (given that imbalance is common in many practical use cases) to customizing the cost matrix used during model training, oversampling the minority classes, or selecting imbalance-robust supervised models.
In this work, we define 32 different scenarios by combining class-imbalance management strategies with different SSC tasks. Specifically, we address sleep-wake (SW) recognition, 3-SSC (wake vs. REM vs. NREM), 4-SSC (wake vs. NREM1-2 vs. NREM3 vs. REM), and 5-SSC (all the sleep stages labeled by sleep technicians). The labeling of the five sleep stages follows the American Academy of Sleep Medicine (AASM) guidelines; nonetheless, the imbalance associated with each of the other tasks is peculiar and different from that of 5-SSC, which is why we also present the results achieved for those classification tasks.
We tested several strategies (each called a scenario), both simple and hybrid, in each of the aforementioned tasks. The extensive description of the experiments performed allows us to evaluate evidence of the effectiveness in dealing with the SSC use case.
We used Python 3.9 for data import and preparation, together with MATLAB 2024b, to facilitate the model training and comparison phases. Hereafter, we are going to present the data used for the experiments.

3.1. Dataset

The dataset we used for the experiments is the “Dataset for Real-time sleep stage EstimAtion using Multisensor Wearable Technology (DREAMT)” [16], which is available on the PhysioNet platform [17]. Data are accessible upon registration and signing of a data use agreement, thus making it an open-access dataset. It is a novel (2024) and quite extensive repository in which the Empatica E4 wearable device (a validated biomedical device, [43]) was coupled with invasive PSG sensors to collect multiple signals, with the associated sleep stage accurately estimated by sleep technicians on the basis of the PSG for every 30-s window. Data were acquired from 100 subjects, both healthy and affected by various conditions (mainly obstructive apnea and obesity). The data acquisition protocol was the same for all participants.
For every participant’s night, Empatica E4 collects the following:
  • The timestamp [s] (64 Hz);
  • Blood volume pulse (BVP) derived from a photoplethysmography (PPG) sensor (64 Hz);
  • Interbeat interval (IBI) [ms] derived from the PPG (64 Hz);
  • Electrodermal activity (EDA) [μS] from the galvanic skin response sensor (4 Hz);
  • Skin temperature [°C] from the infrared thermopile sensor (4 Hz);
  • Triaxial accelerometry (32 Hz);
  • Heart rate (HR) [bpm] estimated from the BVP signal (1 Hz).
Owing to the synchronization of the acquisition systems, these data are aligned with those associated with the PSG to allow sleep stage annotation for each 30-s window.
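As an illustration, the 30-s epoching of signals sampled at different rates can be sketched as follows. This is a minimal example with synthetic arrays; the function and variable names are our own and not part of the dataset's tooling.

```python
import numpy as np

def segment_into_epochs(signal, fs, epoch_len_s=30):
    """Split a 1-D signal sampled at fs Hz into non-overlapping 30-s epochs.

    Trailing samples that do not fill a complete epoch are discarded,
    so every returned epoch can receive exactly one sleep-stage label.
    """
    samples_per_epoch = int(fs * epoch_len_s)
    n_epochs = len(signal) // samples_per_epoch
    trimmed = signal[: n_epochs * samples_per_epoch]
    return trimmed.reshape(n_epochs, samples_per_epoch)

# Example: 5 minutes of 1 Hz heart-rate data -> 10 epochs of 30 samples each
hr = np.arange(300, dtype=float)
epochs = segment_into_epochs(hr, fs=1)
print(epochs.shape)  # (10, 30)
```

The same function applies to the 32 Hz accelerometry (960 samples per 30-s epoch), so that HR and motion epochs stay aligned with the PSG-based annotations.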
Additional information about the dataset can be found in the original reference; the details reported here are those needed to understand the process followed in this study. In this study, the dataset split was performed at the epoch level, meaning that samples from the same subject may appear in both the training and testing sets. Although subject-independent evaluation protocols (e.g., leave-one-subject-out, LOSO) are commonly adopted, their application to this dataset is challenging due to the severe class imbalance and the absence of specific sleep stages in some subjects, which may lead to unstable evaluation conditions.

3.2. Preprocessing

We used the aforementioned dataset to create an analytical model that is able to estimate the correct sleep stage on the basis of HR and motion only, given that these physiological signals can be reliably acquired using contactless sleep monitoring technologies, such as under-mattress sensing systems. This choice was made to emulate the type of information typically available in unobtrusive long-term sleep monitoring scenarios.
The first preprocessing step is computing the magnitude of the three acceleration signals. Moreover, we filtered out the “preparation” and “missing” stages annotated, given their lack of consistency with our main aim, i.e., the development of a reliable SSC algorithm.
Feature Extraction: For all the remaining windows, we extracted relevant and commonly adopted statistical time-domain features [44,45] from both the HR sequence and the computed motion signal. The features extracted from both HR and motion are listed in Table 1:
The resulting dataset available for successive experiments is composed of 80,091 samples (overall number of 30-s windows associated with relevant sleep stages) of the 24 extracted features (12 extracted from the HR signal, and 12 extracted from the motion signal).
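A minimal sketch of this feature-extraction step is given below. The exact 12 features per signal listed in Table 1 are not reproduced here; the subset shown is an illustrative stand-in, and the function names are our own.

```python
import numpy as np

def motion_magnitude(ax, ay, az):
    """Magnitude of the triaxial acceleration signal."""
    return np.sqrt(ax ** 2 + ay ** 2 + az ** 2)

def time_domain_features(window):
    """Illustrative statistical time-domain features for one 30-s window
    (a hypothetical subset, not the exact feature list of Table 1)."""
    w = np.asarray(window, dtype=float)
    return {
        "mean": w.mean(),
        "std": w.std(),
        "min": w.min(),
        "max": w.max(),
        "range": np.ptp(w),
        "median": np.median(w),
        "iqr": np.percentile(w, 75) - np.percentile(w, 25),
        "rms": np.sqrt(np.mean(w ** 2)),
        "mean_abs_diff": np.mean(np.abs(np.diff(w))),
    }

# One feature vector per 30-s window, computed for both HR and motion
hr_window = np.array([62, 63, 61, 60, 64, 62], dtype=float)
feats = time_domain_features(hr_window)
print(feats["mean"], feats["range"])  # 62.0 4.0
```

Applying such a function to every retained window, for both signals, yields the 80,091 × 24 feature matrix described above.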
The degree of imbalance of the minority classes depends on the SSC task addressed among SW, 3-SSC, 4-SSC, and 5-SSC. The number of samples for each sleep stage in every SSC task is summarized in Table 2.
This dataset has been randomly split into 80/20% training/testing for the baseline scenarios, and into 60/40% training/testing for the data resampling-based scenarios. This different proportion is related to the fact that oversampling synthesizes minority class samples, thus increasing the amount of data available for model training, while the testing amount remains fixed. Hence, retaining more data for testing is fair for rebalancing-based scenarios.
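The two split configurations can be sketched as follows. The data here are synthetic stand-ins, and the use of scikit-learn's `train_test_split` is our assumption (the paper does not name a specific splitting routine).

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the 80,091 x 24 feature matrix (hypothetical data)
rng = np.random.default_rng(0)
X = rng.random((1000, 24))
y = rng.integers(0, 5, size=1000)

# Baseline scenarios: 80/20 train/test split
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.20, random_state=0)

# Resampling scenarios: 60/40 split, so that oversampling later enlarges
# only the 60% training portion while the test portion stays untouched
X_tr_r, X_te_r, y_tr_r, y_te_r = train_test_split(X, y, test_size=0.40, random_state=0)

print(len(X_te), len(X_te_r))  # 200 400
```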
Adaptive synthetic oversampling (ADASYN): Concerning oversampling, we reviewed the literature and identified several traditional and innovative methodologies. The effectiveness and reliability of most of them have been proven in specific scenarios only, which is why we decided to test one modern yet renowned approach, adaptive synthetic (ADASYN) oversampling, which generates new minority class samples on the basis of local density. With respect to the most cited and used SMOTE, ADASYN is better at emphasizing instances that are hard to classify. When oversampling, it is important to choose the best balancing ratio; this optimization is highly use-case specific, and no academic reference provides a quantitative way to set the best rebalancing weight. For this reason, we defined different scenarios, each characterized by a specific percentage (either 25, 50, or 100%) of the gap between each minority class and the majority class to be synthesized. Practically speaking, if the gap between the NREM1 class and the majority class (i.e., NREM2) is 20,000 samples, then 5000 NREM1 samples are synthesized in the ADA25 scenario, 10,000 in the ADA50 scenario, and in the ADA100 scenario the numerical gap between each minority class and the majority class is completely filled (yielding a fully balanced dataset after the oversampling phase). The selected ratios (25%, 50%, and 100%) therefore represent increasing levels of rebalancing, allowing us to analyze the effect of mild, moderate, and full oversampling on model performance.
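The partial gap-filling rule can be expressed as per-class target counts. The sketch below computes them with the standard library; passing such a dict to an ADASYN implementation (e.g., imbalanced-learn's `ADASYN(sampling_strategy=targets)`) is our assumption, since the paper does not name a library.

```python
from collections import Counter

def adasyn_targets(y, gap_fraction):
    """Per-class target counts that fill `gap_fraction` (in (0, 1]) of the
    gap between each minority class and the majority class.

    ADA25 -> gap_fraction=0.25, ADA50 -> 0.50, ADA100 -> 1.0 (full balance).
    """
    counts = Counter(y)
    n_major = max(counts.values())
    return {c: n + round(gap_fraction * (n_major - n))
            for c, n in counts.items()}

# The paper's example: a 20,000-sample gap between NREM1 and NREM2
y = ["NREM2"] * 40000 + ["NREM1"] * 20000
print(adasyn_targets(y, 0.25))  # ADA25: {'NREM2': 40000, 'NREM1': 25000}
print(adasyn_targets(y, 1.00))  # ADA100: fully balanced at 40000 each
```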
Customization of the misclassification cost matrix (CSL): Another strategy we explored is CSL. The misclassification costs were defined by considering the class distribution observed in the dataset: higher penalties were assigned to errors involving minority classes in proportion to their relative frequency, so that the resulting cost matrices reflect the imbalance ratio among sleep stages rather than relying solely on expert-defined values. Specifically, we adopted a basic approach that attributes the highest integer cost to the rarest minority class, the next lower integer to the second rarest class, and so on, down to the majority class, whose cost is 1. The misclassification cost matrices used in all the _CSL scenarios for every task are presented in Table 3, Table 4, Table 5 and Table 6.
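This frequency-ranked construction can be sketched as follows. The exact matrices used in the experiments are those of Tables 3–6; the layout below (costs placed on the rows of the true classes, diagonal set to 0) is an illustrative assumption.

```python
import numpy as np
from collections import Counter

def build_cost_matrix(y):
    """Integer misclassification costs derived from class frequencies:
    errors on the rarest class cost K (the number of classes), errors on
    the majority class cost 1, and correct predictions (diagonal) cost 0."""
    counts = Counter(y)
    rarest_first = sorted(counts, key=counts.get)
    cost_of = {c: len(rarest_first) - r for r, c in enumerate(rarest_first)}
    classes = sorted(counts)  # fixed alphabetical class order for the matrix
    k = len(classes)
    C = np.zeros((k, k), dtype=int)
    for i, true_class in enumerate(classes):
        for j in range(k):
            if i != j:
                C[i, j] = cost_of[true_class]
    return classes, C

# Toy 3-class example: NREM2 is the majority, W the rarest
y = ["NREM2"] * 60 + ["REM"] * 30 + ["W"] * 10
classes, C = build_cost_matrix(y)
print(classes)  # ['NREM2', 'REM', 'W']
print(C)
```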

3.3. Experimental Scenarios

A summary of the experimental scenarios defined with respect to the applied imbalance management techniques is presented hereafter. In the experiments, a compact notation is adopted to describe the different scenarios under evaluation: the term base refers to the use of the original, unbalanced dataset; the term CSL indicates that a cost matrix was applied during training; and the term ADAx indicates the use of ADASYN oversampling, with x being the percentage of the gap filled between each minority class and the majority class in the training set (either 25, 50, or 100%). For example, in the scenario ADA25_CSL, ADASYN was applied to the training data to fill a quarter of the gap between each minority class and the majority class, and a cost matrix was then applied during training.
Specifically, for each SSC task addressed, we experiment with different strategies, as summarized below:
  • baseline—use normalized features extracted from the original data (80% training and 20% testing);
  • base_CSL—use normalized features extracted from the original data + customized misclassification cost matrix during model training (80% training and 20% testing);
  • ADA100—use ADASYN oversampled normalized features (on the 60% used for training) up to a completely balanced dataset, and test the resulting models on 40% of the original data taken apart for testing;
  • ADA50—use ADASYN oversampled normalized features (on the 60% used for training) filling half the gap between each minority class and the majority class, and test the resulting models on 40% of the original data taken apart for testing;
  • ADA25—use ADASYN oversampled normalized features filling a quarter of the gap between each minority class and the majority class, and test the resulting models on 40% of the original data taken apart for testing;
  • ADA100_CSL—as ADA100, but also setting a customized misclassification cost, and then testing the resulting models on 40% of the original data taken apart;
  • ADA50_CSL—as ADA50, but also setting customized misclassification costs, and then testing the resulting models on 40% of the original data taken apart;
  • ADA25_CSL—as ADA25, but also setting customized misclassification costs, and then testing the resulting models on 40% of the original data taken apart.
Within each of the experimental scenarios presented, four different models were trained and compared with each other: decision tree (DT), k-nearest neighbors (KNN), ensemble (ENS), and deep artificial neural network (ANN). With respect to the models’ hyperparameters, every experiment trained the optimized version using 30 trials of Bayesian optimization.
Thanks to the proposed experimental setup, we are able to draw conclusions on several topics:
  • compare the effectiveness and reliability of different imbalanced data management techniques (with/without CSL, with ADASYN at varying degrees of synthesis) for every SSC task and hence under different imbalance ratios;
  • compare different models, namely DT, KNN, and ANN, as well as the imbalance-robust ENS, under every scenario and every SSC task.
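As a sketch of the algorithm-level strategy (CSL), one illustrative way to build a misclassification cost matrix is to weight errors on a true class inversely to its frequency. The function and the weighting scheme below are assumptions for illustration, not the exact cost matrix used in the paper.

```python
import numpy as np

def inverse_frequency_cost(class_counts):
    """Cost matrix C where C[i, j] is the cost of predicting class j
    when the true class is i; off-diagonal costs grow as the true
    class gets rarer (illustrative choice, not the paper's matrix)."""
    counts = np.asarray(class_counts, dtype=float)
    weights = counts.max() / counts          # rarer class -> larger weight
    C = np.tile(weights[:, None], (1, len(counts)))
    np.fill_diagonal(C, 0.0)                 # correct predictions cost nothing
    return C

# Hypothetical epoch counts for a 3-class task
print(inverse_frequency_cost([6000, 1200, 1800]))
```

A matrix of this form can be passed to cost-sensitive training routines so that errors on minority stages are penalized more heavily than errors on the majority stage.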

4. Results and Discussion

4.1. Performance Metrics

In line with the reviewed literature, we decided to include, together with well-known classification metrics computed for each class, such as accuracy, precision, sensitivity, specificity, and F1 score [46], other metrics that have proved robust to unbalanced classification. Specifically, we adopt both class-specific metrics, namely specificity, sensitivity, precision, F1 score, geometric mean, and the Matthews correlation coefficient (MCC) [47], and overall metrics, namely the imbalance accuracy metric (IAM) proposed by [48] and micro accuracy. Both the MCC and IAM range from −1 to 1, and values close to 1 are desirable, as they indicate better classification performance. Both metrics are more suitable than the previously mentioned ones in cases such as the one analyzed in this paper, i.e., imbalanced multiclass classification. The MCC includes all the values of the confusion matrix in its formula and is sensitive to imbalance; the IAM identifies asymmetric errors and can be adopted in unbalanced multiclass scenarios. Table 7 summarizes the detailed descriptions of the metrics, the associated formulas, and the category to which each belongs.
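Under one common formulation of these metrics (our reading of [47,48]; the authoritative formulas are those in Table 7), the per-class one-vs-rest quantities and the IAM can be computed directly from the confusion matrix:

```python
import numpy as np

def per_class_metrics(cm):
    """Per-class sensitivity, specificity, geometric mean and MCC from a
    confusion matrix (rows = true classes, columns = predicted classes),
    treating each class one-vs-rest."""
    cm = np.asarray(cm, dtype=float)
    out = {}
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        fn = cm[c].sum() - tp
        fp = cm[:, c].sum() - tp
        tn = cm.sum() - tp - fn - fp
        sens = tp / (tp + fn)
        spec = tn / (tn + fp)
        denom = np.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
        mcc = (tp * tn - fp * fn) / denom if denom else 0.0
        out[c] = {"sens": sens, "spec": spec,
                  "gm": np.sqrt(sens * spec), "mcc": mcc}
    return out

def iam(cm):
    """Imbalance accuracy metric: mean over classes of
    (TP - max(FP, FN)) / max(TP + FP, TP + FN); ranges from -1 to 1."""
    cm = np.asarray(cm, dtype=float)
    scores = []
    for c in range(cm.shape[0]):
        tp = cm[c, c]
        fn = cm[c].sum() - tp
        fp = cm[:, c].sum() - tp
        scores.append((tp - max(fp, fn)) / max(tp + fp, tp + fn))
    return float(np.mean(scores))

# Toy binary confusion matrix with a 9:1 class imbalance
print(per_class_metrics([[80, 10], [5, 5]]))
print(iam([[80, 10], [5, 5]]))
```

On this toy matrix the overall accuracy is 85%, yet the IAM is only about 0.22, illustrating why imbalance-aware metrics are preferred here.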

4.2. Experimental Setup

In this work, several experiments were performed under diverse modeling scenarios, addressing different SSC tasks. Specifically, the experiments embrace three main variability dimensions: different models, different data-level techniques (scenarios), and different SSC tasks. Presenting all the results at once could be overwhelming for the reader. For this reason, visually effective bar charts and bump charts are presented hereafter. Extensive tables containing all the detailed numerical results can be found in Appendix A, while the best results are described in the text.
The bump charts highlight the ranking of the metrics of the different modeling scenarios, with respect to every class (horizontal axis) for every model trained (groups on the horizontal axis) in every SSC task addressed (one chart for each task). All the performance metrics considered are such that higher values indicate better performance; we therefore assign rank 1 to the highest value (best performance) and rank 8 to the lowest.
On the other hand, bar charts highlight the magnitude of the values and the differences among the quantities represented. The bump charts and bar charts presented below make it easier to assess the effectiveness of the different scenarios and models.
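The ranking convention used in the bump charts (rank 1 for the best, i.e., highest, value) can be reproduced as follows; the F1 values are hypothetical and ties are assumed absent.

```python
import numpy as np

def bump_ranks(values):
    """Rank metric values so that rank 1 is the best (highest) and
    rank len(values) the worst, as in the bump charts."""
    order = np.argsort(-np.asarray(values))   # indices in descending order
    ranks = np.empty(len(values), dtype=int)
    ranks[order] = np.arange(1, len(values) + 1)
    return ranks

# Hypothetical F1 values of one class under the 8 scenarios
f1 = [0.79, 0.73, 0.86, 0.78, 0.83, 0.76, 0.88, 0.85]
print(bump_ranks(f1))  # the 0.88 scenario gets rank 1
```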

4.3. Comparative Analysis

To obtain a comprehensive view of the experiments performed, we present bump charts of the F1c score (Figure 1), MCCc (Figure 2), and GMc (Figure 3) for every task addressed, with the aim of understanding whether a single “best model” can be identified.
These bump charts allow us to deduce some general evidence:
  • There is high variability in the results achieved under different SSC tasks; only the KNN model shows slightly more coherent behavior across classes (majority and minority) for all three considered metrics.
  • The F1c and MCCc metrics are highly consistent with each other as the number of classes increases.
  • Different models are affected differently by the imbalance management technique (oversampling and/or CSL); in simpler tasks (binary or ternary), ANN and DT behave similarly, whereas in more complex tasks (4-SSC and 5-SSC) each model benefits from different modeling scenarios in a completely different way.
Considering the results shown above, the following plots provide a clearer focus on some specific points of interest. Figure 4 and Figure 5 represent the IAM and micro accuracy values, respectively, for every model and scenario under the different tasks considered.
These plots allow us to deduce further evidence to answer the RQs posed:
  • The more classes we try to detect, the worse the overall performance obtained, as expected;
  • Under every modeling scenario and classification task, the ensemble is usually the best-performing model;
  • The superiority of the ENS over the DT, KNN, and ANN classifiers is stable even in more complex multiclass tasks;
  • Focusing on the ENS, the base scenario, together with base_CSL and ADA25, is usually among the best scenarios;
  • In general, every model usually performs best under the base, base_CSL, and ADA25 scenarios, suggesting that aggressive oversampling hampers the performance of the trained models.

4.4. Ensemble Model Analysis

To clarify the ensemble’s superiority and examine the insights sketched above in more depth, we present heatmaps of the models’ MCCc metric (top values in bold green, lowest values in red) for every scenario-model combination and every class within each task addressed. Figure 6, Figure 7, Figure 8 and Figure 9 show detailed results for each task. The MCCc metric proved to be the most informative metric in the case of imbalanced data, such as the use case considered in this study.
From the heatmaps, by looking at the greener horizontal lines, it is again evident that the ensemble is the best-performing model for imbalanced data classification under almost any imbalance management technique (oversampling and/or CSL) and for every class (minority or majority).
Moreover, the greener vertical columns reveal that the base and base_CSL scenarios are generally the best in terms of the MCC, followed by the ADA25 scenario, i.e., the scenario where only a small number of minority-class samples are synthesized. Nonetheless, the bold values (the best MCC for every class) fall in cells linked to ADA100, ADA100_CSL, ADA25, base, or base_CSL.
In summary, the ensemble is always the best model (coupled with varying scenarios) except in the following cases:
  • REM (minority class in the 3-SSC task), where KNN is the best model;
  • NREM1 (third minority class in the 5-SSC task), where KNN is the best model;
  • REM (second minority class in the 5-SSC task), where KNN is the best model.
With respect to the scenarios, the landscape is more diverse:
  • The base is the best (coupled with either Ensemble or KNN) in both classes for the SW task, in the minority class (REM) for the 3-SSC task, and for the REM, NREM1 and wake classes in the 5-SSC task;
  • The base_CSL is the best (coupled with either Ensemble or KNN) in both classes for the SW task, in the minority class (REM) for the 3-SSC task, in the REM and wake classes for the 4-SSC task, and in the NREM2 and NREM3 classes for the 5-SSC task;
  • The ADA100 is the best (coupled with the ensemble only) in the NREM and wake classes for the 3-SSC task, in the NREM12 (majority) class for the 4-SSC task, and for the NREM2 (majority) class for the 5-SSC task;
  • The ADA25 is the best (coupled with the ensemble) in the NREM3 (minority) class for the 4-SSC task.
Hybrid scenarios, in which oversampling is coupled with CSL, are preferable when considering the other performance metrics (GM and F1), which can be inspected in the Appendix A tables.
The ensemble configuration selected by the Bayesian optimization is usually bagged trees. Other methods were selected only in the following scenarios:
  • Adaptive Logistic Boosting is the optimal ensemble method for the SW task—base scenario;
  • Adaptive Boosting is the optimal ensemble method for the 3-SSC task—base_CSL and ADA100_CSL, 4-SSC task—ADA25, and 5-SSC task—ADA50 scenarios;
  • Gentle Adaptive Boosting is the optimal ensemble method for the SW task—ADA100, ADA50, ADA25, ADA100_CSL, ADA50_CSL, and ADA25_CSL;
  • Random Undersampling Boosting is the optimal ensemble method for the 3-SSC task—ADA50_CSL and ADA25_CSL, 4-SSC task—ADA50_CSL, and 5-SSC task—ADA100_CSL and ADA50_CSL.
In order to provide additional performance metrics for the best modeling scenarios outlined, we present the ROC graphs for the best model in each of the 32 experimental scenarios in Appendix B (Figure A1 and Figure A2).
To further support the comparison among classifiers based on the global performance metrics reported in Appendix A, we conducted a Wilcoxon signed-rank test across the 32 experimental scenarios. The results of this statistical analysis are summarized in Table 8, which reports the p-values for the pairwise comparisons between the ensemble model and the other evaluated classifiers (ANN, DT, and KNN), considering both macro accuracy and IAM. As shown in Table 8, the ensemble model achieved statistically significant improvements over the other models in all comparisons (all p-values < 0.001).
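A paired Wilcoxon signed-rank test of this kind can be run with SciPy. The per-scenario accuracies below are simulated (not the paper's values) solely to show the procedure, with the ensemble given a small systematic advantage over a competitor model.

```python
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(0)
# Hypothetical macro-accuracy pairs across the 32 experimental scenarios
competitor = rng.uniform(0.45, 0.75, size=32)
ensemble = competitor + rng.uniform(0.02, 0.10, size=32)

# Paired, non-parametric comparison; a small p-value indicates a
# systematic difference between the two models across scenarios.
stat, p = wilcoxon(ensemble, competitor)
print(f"W = {stat:.1f}, p = {p:.2e}")
```

Because the test is paired and distribution-free, it is well suited to comparing models scenario by scenario without assuming normality of the accuracy differences.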

4.5. Model Explainability Analysis

To provide additional insights into model behavior and support the interpretability of the proposed solutions, a global SHAP (SHapley Additive exPlanations) analysis was conducted on the overall best-performing models identified for each SSC task.
Global SHAP values were computed on a representative subset of the test set (500 samples) using the interventional approach. Shapley values were estimated via Monte Carlo sampling (500 samples) with a maximum of 128 feature subsets. Feature importance was quantified as the mean absolute SHAP value aggregated across all classes, enabling a global interpretation of model behavior.
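The procedure above can be illustrated with a minimal Monte Carlo estimator of interventional Shapley values for a single instance; this is a simplified stand-in for the SHAP computation described in the text, and the predict function, background data, and toy linear model are illustrative assumptions.

```python
import numpy as np

def mc_shap(predict, x, background, n_samples=500, rng=None):
    """Monte Carlo estimate of interventional Shapley values for one
    instance x: features outside the sampled coalition are replaced by
    values drawn from a background dataset (a sketch, not the exact
    SHAP implementation)."""
    rng = np.random.default_rng(rng)
    d = len(x)
    phi = np.zeros(d)
    for _ in range(n_samples):
        perm = rng.permutation(d)
        z = background[rng.integers(len(background))].copy()
        prev = predict(z)
        for j in perm:                 # add features to the coalition one by one
            z[j] = x[j]
            curr = predict(z)
            phi[j] += curr - prev      # marginal contribution of feature j
            prev = curr
    return phi / n_samples

# Toy linear model: Shapley values should approach w * (x - E[background])
w = np.array([2.0, -1.0, 0.0])
predict = lambda z: float(w @ z)
background = np.zeros((50, 3))
x = np.array([1.0, 1.0, 1.0])
phi = mc_shap(predict, x, background, n_samples=200, rng=0)
print(phi)  # close to [2.0, -1.0, 0.0]
```

Averaging the absolute values of such estimates over a set of test instances (and over classes, for multiclass models) yields the kind of global feature importance discussed below.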
Regarding model selection, for the 2SSC task, the optimized ensemble model under the base_CSL scenario clearly provides the best overall performance. For the 3SSC task, no single model dominates across all classes; however, the optimized ensemble under the ADA100 scenario offers the most balanced performance, particularly for NREM and wake stages, while the KNN model performs better for REM. Considering the typical trade-off between majority and minority classes, the SHAP analysis is conducted on the optimized ensemble under the ADA100 scenario. For the 4SSC task, the optimized ensemble under the base_CSL scenario achieves the best or near-best performance across all classes, particularly for REM and wake stages, while remaining competitive for NREM12 and NREM3. Similarly, for the 5SSC task, the optimized ensemble under the base_CSL scenario represents the most reliable overall solution. Although it is not the top-performing model for REM and NREM1 (where KNN performs better), it provides superior or near-optimal performance for the remaining classes, making it the most balanced choice.
The resulting SHAP-based feature importance analyses for each SSC task are presented below.
For the 2SSC task (Figure 10a), the SHAP analysis shows that model predictions are mainly driven by variability- and range-based features derived from both motion (e.g., stdM, p2pM, rmsM) and heart rate (e.g., stdHR, p2pHR). Mean-based descriptors are not among the most relevant predictors, indicating that the discrimination between sleep and wake states primarily relies on dynamic fluctuations rather than on average signal levels.
For the 3SSC task (Figure 10b), the same variability-driven behavior is observed, with both motion and heart rate features contributing significantly. However, distributional descriptors (e.g., quartiles and kurtosis) begin to emerge among the relevant predictors, and feature importance becomes more evenly distributed, reflecting the increased complexity of multi-stage classification.
For the 4SSC task (Figure 10c), variability- and range-based features remain dominant, while distributional indicators (e.g., quartiles and median-related statistics) play a more structured role in capturing finer differences among sleep stages. In this setting, feature contributions are not uniformly shared across classes: some sleep stages (e.g., wake or NREM2) are strongly associated with the most influential predictors, whereas others are characterized by more diffuse and less distinctive patterns.
For the 5SSC task (Figure 10d), this trend is further emphasized. Although variability-driven features still provide the largest contributions, feature importance is more widely distributed and increasingly class-dependent. Certain stages are clearly associated with the dominant predictors, while others are harder to distinguish and rely on a combination of lower-impact features, confirming the higher intrinsic difficulty of the task.
Overall, the SHAP analysis reveals a consistent progression across SSC tasks: variability-based features dominate in simpler settings, while distributional descriptors and a more distributed importance structure become increasingly relevant as task complexity grows. These findings suggest that reliable sleep stage classification with minimal and non-invasive signals is feasible, provided that their dynamic and statistical properties are adequately exploited, in line with the physiological complexity of sleep. This behavior is evident from the class-wise SHAP contributions, where dominant features show unbalanced relevance across sleep stages.

5. Conclusions

This study contributes to the field of data analytics and cognitive computing by investigating the feasibility of sleep stage classification (SSC), which can be achieved using a minimal and non-invasive set of physiological signals—heart rate and motion—combined with imbalance-aware machine learning strategies (RQ1). Through a systematic evaluation of 32 experimental scenarios across multiple classification granularities and model families on the PhysioNet DREAMT dataset, we show that ensemble-based approaches provide robust and consistent performance even under severe class imbalance conditions.
In particular, the ensemble model generally outperformed the other considered classifiers across most of the evaluated scenarios. This confirms the ensemble’s superior capability in handling complex and imbalanced multiclass settings.
Our experiments also reveal that the effectiveness of imbalance management strategies depends on the intensity of oversampling and its interaction with algorithm-level techniques. Moderate configurations, such as ADA25, and baseline configurations combined with cost-sensitive learning (base_CSL), tended to provide better performance across several tasks. Conversely, aggressive oversampling strategies (e.g., ADA100) did not consistently improve classification performance and, in some cases, led to reduced effectiveness. This suggests that excessive synthetic sample generation might negatively affect model generalization.
From a data-driven perspective, these results highlight the effectiveness of combining data-level and algorithm-level imbalance management techniques to enhance model reliability in real-world scenarios characterized by skewed class distributions (RQ2). The observed stability of ensemble models across different tasks suggests their potential suitability for cognitive computing applications, where decision robustness and generalization are critical requirements.
A further key insight emerges from the SHAP-based explainability analysis, which reveals a progressive shift in feature relevance across SSC tasks: while variability- and range-based descriptors dominate simpler settings, more complex classifications increasingly rely on distributional features and a more distributed, class-dependent contribution of predictors. This behavior reflects the intrinsic physiological complexity of sleep and highlights the importance of capturing subtle signal dynamics when moving towards finer-grained sleep stage discrimination.
Beyond methodological aspects, the proposed framework supports scalable and computationally efficient analytics for non-invasive sleep monitoring. The reliance on easily collectible signals enables long-term, continuous data acquisition and integration into large-scale monitoring pipelines (RQ1), aligning with big data paradigms and distributed cognitive systems. Such characteristics are particularly relevant for applications requiring adaptive and personalized insights derived from heterogeneous physiological data streams.
Finally, this work emphasizes the importance of adopting imbalance-aware evaluation metrics, such as the Matthews Correlation Coefficient, to avoid misleading conclusions based solely on global accuracy. Although validated in the context of SSC, the proposed experimental framework and methodological insights are broadly applicable to other cognitive computing and biomedical data analytics tasks affected by class imbalance.
Despite the promising results, some limitations should be acknowledged. The results depend on the selected feature extraction pipeline and modeling configurations adopted in this study. Although multiple models and imbalance management strategies were systematically evaluated, alternative feature representations or hyperparameter settings could lead to different performance outcomes. Furthermore, the experiments were conducted on a single publicly available dataset (PhysioNet DREAMT), which may limit the generalizability of the findings to other populations, sensor configurations, or acquisition settings. However, the use of a publicly available dataset and clearly defined experimental scenarios supports their reproducibility. Future work will include further validation on additional datasets and cross-dataset evaluations to fully assess the generalization capabilities of the proposed methodology. Additionally, the proposed framework could be extended by investigating imbalance-aware training strategies within deep learning architectures, such as CNN or LSTM models applied to raw wearable signals. Future research will also investigate subject-independent evaluation protocols to further assess the generalizability of the proposed approach, particularly in the presence of highly imbalanced sleep stage distributions.

Author Contributions

Conceptualization, L.S., A.B., S.B. and S.R.; methodology, L.S., M.E. and S.R.; software, L.S. and M.E.; validation, L.S., A.B. and S.B.; formal analysis, L.S. and P.P.; investigation, L.S.; resources, P.P.; data curation, L.S.; writing—original draft preparation, L.S., S.B., M.E. and S.R.; writing—review and editing, A.B., S.B., M.E., S.R. and P.P.; visualization, L.S., M.E. and S.R.; supervision, P.P.; project administration, A.B. and P.P.; funding acquisition, P.P. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The data used for the experiments presented in this article can be freely and openly accessed on PhysioNet [16].

Acknowledgments

This study has been partially promoted within the project entitled “E-BED: Empowered Bed for Elderly and Disability”, funded within PR FESR 2021–2027 program—AXIS 1—SPECIFIC OBJECTIVE 1.1—ACTION 1.1.1—call “R&D TO INNOVATE THE MARCHE REGION”—Incentives for businesses to undertake industrial research and experimental development activities within the scope of the Regional Strategy for Smart Specialization, CUP B29J24000930007. During the preparation of this manuscript/study, the author(s) used ChatGPT (free version) to assist with language editing and to improve the clarity of the English expression. The authors have reviewed and edited the output and take full responsibility for the content of this publication.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A. Complete Results Tables

Table A1. F1c score, GMc and MCCc for every model and scenario in the SW task (best overall results for each metric in bold).
Class | Scenario | ANN (F1 / GM / MCC) | DT (F1 / GM / MCC) | ENS (F1 / GM / MCC) | KNN (F1 / GM / MCC)
sleep | ADA100 | 0.791 / 0.652 / 0.293 | 0.766 / 0.638 / 0.257 | 0.854 / 0.704 / 0.430 | 0.760 / 0.676 / 0.312
sleep | ADA100_CSL | 0.728 / 0.669 / 0.297 | 0.635 / 0.618 / 0.231 | 0.855 / 0.708 / 0.436 | 0.774 / 0.642 / 0.267
sleep | ADA25 | 0.861 / 0.610 / 0.357 | 0.862 / 0.617 / 0.366 | 0.878 / 0.642 / 0.429 | 0.849 / 0.658 / 0.371
sleep | ADA25_CSL | 0.783 / 0.675 / 0.315 | 0.786 / 0.670 / 0.311 | 0.878 / 0.640 / 0.428 | 0.796 / 0.684 / 0.337
sleep | ADA50 | 0.830 / 0.647 / 0.332 | 0.826 / 0.658 / 0.338 | 0.870 / 0.690 / 0.448 | 0.853 / 0.615 / 0.347
sleep | ADA50_CSL | 0.760 / 0.667 / 0.297 | 0.743 / 0.662 / 0.285 | 0.870 / 0.669 / 0.431 | 0.740 / 0.662 / 0.285
sleep | base | 0.877 / 0.576 / 0.392 | 0.877 / 0.558 / 0.385 | 0.887 / 0.618 / 0.455 | 0.872 / 0.542 / 0.354
sleep | base_CSL | 0.850 / 0.670 / 0.386 | 0.850 / 0.648 / 0.364 | 0.879 / 0.670 / 0.455 | 0.854 / 0.661 / 0.385
wake | ADA100 | 0.492 / 0.652 / 0.293 | 0.473 / 0.638 / 0.257 | 0.576 / 0.704 / 0.430 | 0.514 / 0.676 / 0.312
wake | ADA100_CSL | 0.508 / 0.669 / 0.297 | 0.472 / 0.618 / 0.231 | 0.581 / 0.708 / 0.436 | 0.478 / 0.642 / 0.267
wake | ADA25 | 0.483 / 0.610 / 0.357 | 0.491 / 0.617 / 0.366 | 0.532 / 0.642 / 0.429 | 0.521 / 0.658 / 0.371
wake | ADA25_CSL | 0.509 / 0.675 / 0.315 | 0.505 / 0.670 / 0.311 | 0.531 / 0.640 / 0.428 | 0.523 / 0.684 / 0.337
wake | ADA50 | 0.502 / 0.647 / 0.332 | 0.511 / 0.658 / 0.338 | 0.575 / 0.690 / 0.448 | 0.485 / 0.615 / 0.347
wake | ADA50_CSL | 0.503 / 0.667 / 0.297 | 0.498 / 0.662 / 0.285 | 0.554 / 0.669 / 0.431 | 0.498 / 0.662 / 0.285
wake | base | 0.466 / 0.576 / 0.392 | 0.448 / 0.558 / 0.385 | 0.522 / 0.618 / 0.455 | 0.425 / 0.542 / 0.354
wake | base_CSL | 0.536 / 0.670 / 0.386 | 0.513 / 0.648 / 0.364 | 0.564 / 0.670 / 0.455 | 0.530 / 0.661 / 0.385
Table A2. F1c score, GMc and MCCc for every model and scenario in the 3-SSC task (best overall results for each metric in bold).
Class | Scenario | ANNopt (F1 / GM / MCC) | DTopt (F1 / GM / MCC) | ENSopt (F1 / GM / MCC) | KNNopt (F1 / GM / MCC)
NREM | ADA100 | 0.680 / 0.648 / 0.286 | 0.671 / 0.626 / 0.242 | 0.789 / 0.729 / 0.448 | 0.657 / 0.614 / 0.219
NREM | ADA100_CSL | 0.505 / 0.557 / 0.227 | 0.640 / 0.625 / 0.252 | 0.802 / 0.721 / 0.448 | 0.660 / 0.611 / 0.212
NREM | ADA25 | 0.749 / 0.649 / 0.308 | 0.717 / 0.635 / 0.267 | 0.817 / 0.672 / 0.424 | 0.780 / 0.598 / 0.292
NREM | ADA25_CSL | 0.649 / 0.645 / 0.299 | 0.681 / 0.639 / 0.267 | 0.777 / 0.726 / 0.437 | 0.739 / 0.661 / 0.317
NREM | ADA50 | 0.730 / 0.652 / 0.301 | 0.692 / 0.633 / 0.257 | 0.802 / 0.701 / 0.427 | 0.734 / 0.643 / 0.289
NREM | ADA50_CSL | 0.621 / 0.627 / 0.278 | 0.658 / 0.630 / 0.254 | 0.777 / 0.714 / 0.419 | 0.679 / 0.613 / 0.219
NREM | base | 0.798 / 0.536 / 0.303 | 0.796 / 0.502 / 0.283 | 0.820 / 0.607 / 0.403 | 0.815 / 0.697 / 0.444
NREM | base_CSL | 0.746 / 0.659 / 0.320 | 0.740 / 0.632 / 0.280 | 0.792 / 0.485 / 0.257 | 0.815 / 0.697 / 0.444
REM | ADA100 | 0.316 / 0.632 / 0.224 | 0.311 / 0.602 / 0.217 | 0.491 / 0.711 / 0.428 | 0.276 / 0.574 / 0.174
REM | ADA100_CSL | 0.287 / 0.659 / 0.201 | 0.323 / 0.636 / 0.233 | 0.512 / 0.688 / 0.457 | 0.270 / 0.564 / 0.166
REM | ADA25 | 0.335 / 0.584 / 0.249 | 0.322 / 0.581 / 0.233 | 0.465 / 0.589 / 0.450 | 0.240 / 0.423 / 0.187
REM | ADA25_CSL | 0.321 / 0.660 / 0.235 | 0.333 / 0.618 / 0.244 | 0.498 / 0.721 / 0.435 | 0.364 / 0.626 / 0.282
REM | ADA50 | 0.344 / 0.601 / 0.256 | 0.320 / 0.591 / 0.226 | 0.483 / 0.647 / 0.431 | 0.278 / 0.533 / 0.181
REM | ADA50_CSL | 0.317 / 0.649 / 0.225 | 0.325 / 0.617 / 0.232 | 0.474 / 0.689 / 0.407 | 0.265 / 0.541 / 0.160
REM | base | 0.105 / 0.244 / 0.128 | 0.023 / 0.109 / 0.045 | 0.378 / 0.492 / 0.428 | 0.603 / 0.767 / 0.554
REM | base_CSL | 0.329 / 0.578 / 0.240 | 0.271 / 0.502 / 0.181 | 0.000 / 0.000 / 0.000 | 0.603 / 0.767 / 0.554
wake | ADA100 | 0.501 / 0.653 / 0.321 | 0.465 / 0.627 / 0.263 | 0.589 / 0.719 / 0.444 | 0.447 / 0.611 / 0.240
wake | ADA100_CSL | 0.487 / 0.650 / 0.284 | 0.466 / 0.632 / 0.256 | 0.584 / 0.712 / 0.441 | 0.436 / 0.602 / 0.227
wake | ADA25 | 0.485 / 0.631 / 0.319 | 0.459 / 0.618 / 0.270 | 0.563 / 0.679 / 0.437 | 0.485 / 0.618 / 0.344
wake | ADA25_CSL | 0.514 / 0.669 / 0.332 | 0.466 / 0.631 / 0.263 | 0.586 / 0.723 / 0.436 | 0.484 / 0.635 / 0.308
wake | ADA50 | 0.501 / 0.649 / 0.328 | 0.461 / 0.622 / 0.261 | 0.578 / 0.702 / 0.439 | 0.495 / 0.639 / 0.326
wake | ADA50_CSL | 0.501 / 0.661 / 0.304 | 0.469 / 0.633 / 0.261 | 0.575 / 0.709 / 0.422 | 0.439 / 0.602 / 0.236
wake | base | 0.478 / 0.593 / 0.378 | 0.454 / 0.571 / 0.362 | 0.534 / 0.635 / 0.442 | 0.512 / 0.632 / 0.387
wake | base_CSL | 0.518 / 0.657 / 0.356 | 0.499 / 0.642 / 0.331 | 0.441 / 0.562 / 0.346 | 0.512 / 0.632 / 0.387
Table A3. F1c score, GMc and MCCc for every model and scenario in the 4-SSC task.
Class | Scenario | ANNopt (F1 / GM / MCC) | DTopt (F1 / GM / MCC) | ENSopt (F1 / GM / MCC) | KNNopt (F1 / GM / MCC)
NREM12 | ADA100 | 0.662 / 0.631 / 0.256 | 0.653 / 0.632 / 0.260 | 0.761 / 0.725 / 0.441 | 0.604 / 0.586 / 0.174
NREM12 | ADA100_CSL | 0.521 / 0.570 / 0.238 | 0.589 / 0.602 / 0.233 | 0.634 / 0.664 / 0.391 | 0.604 / 0.586 / 0.174
NREM12 | ADA25 | 0.705 / 0.626 / 0.258 | 0.696 / 0.645 / 0.285 | 0.800 / 0.678 / 0.428 | 0.724 / 0.573 / 0.214
NREM12 | ADA25_CSL | 0.529 / 0.573 / 0.230 | 0.646 / 0.625 / 0.247 | 0.749 / 0.721 / 0.433 | 0.659 / 0.603 / 0.202
NREM12 | ADA50 | 0.680 / 0.640 / 0.273 | 0.668 / 0.633 / 0.261 | 0.783 / 0.716 / 0.440 | 0.652 / 0.621 / 0.238
NREM12 | ADA50_CSL | 0.516 / 0.565 / 0.227 | 0.611 / 0.612 / 0.237 | 0.742 / 0.712 / 0.415 | 0.631 / 0.603 / 0.204
NREM12 | base | 0.774 / 0.527 / 0.286 | 0.770 / 0.475 / 0.253 | 0.801 / 0.612 / 0.401 | 0.781 / 0.569 / 0.326
NREM12 | base_CSL | 0.703 / 0.644 / 0.286 | 0.699 / 0.639 / 0.275 | 0.797 / 0.687 / 0.433 | 0.704 / 0.626 / 0.258
NREM3 | ADA100 | 0.416 / 0.696 / 0.398 | 0.426 / 0.749 / 0.418 | 0.616 / 0.817 / 0.603 | 0.305 / 0.685 / 0.296
NREM3 | ADA100_CSL | 0.399 / 0.729 / 0.389 | 0.397 / 0.758 / 0.395 | 0.537 / 0.848 / 0.540 | 0.305 / 0.685 / 0.296
NREM3 | ADA25 | 0.412 / 0.698 / 0.394 | 0.433 / 0.731 / 0.420 | 0.641 / 0.743 / 0.638 | 0.258 / 0.589 / 0.235
NREM3 | ADA25_CSL | 0.278 / 0.786 / 0.308 | 0.409 / 0.747 / 0.402 | 0.604 / 0.828 / 0.595 | 0.354 / 0.703 / 0.344
NREM3 | ADA50 | 0.399 / 0.688 / 0.381 | 0.417 / 0.728 / 0.405 | 0.643 / 0.800 / 0.630 | 0.377 / 0.724 / 0.368
NREM3 | ADA50_CSL | 0.332 / 0.744 / 0.335 | 0.366 / 0.729 / 0.360 | 0.545 / 0.790 / 0.533 | 0.329 / 0.690 / 0.318
NREM3 | base | 0.373 / 0.510 / 0.401 | 0.000 / 0.000 / 0.000 | 0.587 / 0.671 / 0.606 | 0.520 / 0.638 / 0.528
NREM3 | base_CSL | 0.375 / 0.644 / 0.353 | 0.425 / 0.669 / 0.405 | 0.636 / 0.742 / 0.633 | 0.323 / 0.633 / 0.302
REM | ADA100 | 0.312 / 0.579 / 0.220 | 0.342 / 0.628 / 0.253 | 0.503 / 0.718 / 0.441 | 0.251 / 0.535 / 0.143
REM | ADA100_CSL | 0.315 / 0.627 / 0.220 | 0.328 / 0.636 / 0.238 | 0.438 / 0.754 / 0.379 | 0.251 / 0.535 / 0.143
REM | ADA25 | 0.303 / 0.545 / 0.216 | 0.378 / 0.620 / 0.299 | 0.499 / 0.613 / 0.488 | 0.153 / 0.333 / 0.094
REM | ADA25_CSL | 0.294 / 0.632 / 0.198 | 0.349 / 0.627 / 0.263 | 0.515 / 0.726 / 0.455 | 0.259 / 0.514 / 0.162
REM | ADA50 | 0.313 / 0.584 / 0.222 | 0.332 / 0.598 / 0.244 | 0.502 / 0.670 / 0.450 | 0.283 / 0.557 / 0.186
REM | ADA50_CSL | 0.303 / 0.642 / 0.210 | 0.313 / 0.610 / 0.220 | 0.483 / 0.714 / 0.419 | 0.263 / 0.538 / 0.161
REM | base | 0.085 / 0.216 / 0.114 | 0.004 / 0.045 / 0.046 | 0.360 / 0.474 / 0.421 | 0.214 / 0.361 / 0.237
REM | base_CSL | 0.313 / 0.559 / 0.223 | 0.298 / 0.546 / 0.206 | 0.501 / 0.607 / 0.498 | 0.244 / 0.467 / 0.157
wake | ADA100 | 0.492 / 0.652 / 0.305 | 0.464 / 0.626 / 0.272 | 0.596 / 0.732 / 0.452 | 0.430 / 0.598 / 0.226
wake | ADA100_CSL | 0.487 / 0.657 / 0.275 | 0.469 / 0.637 / 0.265 | 0.585 / 0.738 / 0.426 | 0.430 / 0.598 / 0.226
wake | ADA25 | 0.468 / 0.615 / 0.298 | 0.476 / 0.629 / 0.290 | 0.568 / 0.681 / 0.444 | 0.457 / 0.592 / 0.315
wake | ADA25_CSL | 0.507 / 0.657 / 0.329 | 0.465 / 0.625 / 0.267 | 0.597 / 0.733 / 0.446 | 0.434 / 0.593 / 0.239
wake | ADA50 | 0.498 / 0.649 / 0.319 | 0.472 / 0.630 / 0.281 | 0.592 / 0.717 / 0.453 | 0.463 / 0.621 / 0.271
wake | ADA50_CSL | 0.495 / 0.657 / 0.295 | 0.467 / 0.631 / 0.264 | 0.580 / 0.718 / 0.427 | 0.443 / 0.606 / 0.242
wake | base | 0.468 / 0.585 / 0.371 | 0.459 / 0.574 / 0.372 | 0.542 / 0.643 / 0.450 | 0.482 / 0.598 / 0.380
wake | base_CSL | 0.527 / 0.667 / 0.363 | 0.490 / 0.639 / 0.312 | 0.591 / 0.704 / 0.463 | 0.503 / 0.646 / 0.335
Table A4. F1c score, GMc and MCCc for every model and scenario in the 5-SSC task.
Class | Scenario | ANNopt (F1 / GM / MCC) | DTopt (F1 / GM / MCC) | ENSopt (F1 / GM / MCC) | KNNopt (F1 / GM / MCC)
NREM1 | ADA100 | 0.218 / 0.499 / 0.093 | 0.210 / 0.487 / 0.084 | 0.259 / 0.515 / 0.154 | 0.197 / 0.466 / 0.072
NREM1 | ADA100_CSL | 0.211 / 0.533 / 0.069 | 0.218 / 0.508 / 0.090 | 0.232 / 0.488 / 0.121 | 0.198 / 0.463 / 0.073
NREM1 | ADA25 | 0.120 / 0.283 / 0.069 | 0.191 / 0.434 / 0.076 | 0.162 / 0.328 / 0.124 | 0.178 / 0.407 / 0.069
NREM1 | ADA25_CSL | 0.226 / 0.503 / 0.103 | 0.192 / 0.458 / 0.064 | 0.268 / 0.504 / 0.171 | 0.175 / 0.410 / 0.061
NREM1 | ADA50 | 0.192 / 0.416 / 0.092 | 0.197 / 0.456 / 0.076 | 0.205 / 0.396 / 0.136 | 0.211 / 0.461 / 0.100
NREM1 | ADA50_CSL | 0.220 / 0.520 / 0.090 | 0.206 / 0.489 / 0.077 | 0.240 / 0.485 / 0.136 | 0.182 / 0.428 / 0.064
NREM1 | base | 0.004 / 0.045 / 0.016 | 0.023 / 0.109 / 0.048 | 0.094 / 0.229 / 0.120 | 0.294 / 0.535 / 0.203
NREM1 | base_CSL | 0.000 / 0.000 / 0.000 | 0.200 / 0.439 / 0.097 | 0.207 / 0.389 / 0.153 | 0.293 / 0.581 / 0.192
NREM2 | ADA100 | 0.536 / 0.593 / 0.258 | 0.597 / 0.634 / 0.300 | 0.706 / 0.721 / 0.451 | 0.551 / 0.597 / 0.237
NREM2 | ADA100_CSL | 0.141 / 0.276 / 0.118 | 0.520 / 0.585 / 0.277 | 0.658 / 0.682 / 0.381 | 0.567 / 0.610 / 0.256
NREM2 | ADA25 | 0.664 / 0.637 / 0.285 | 0.624 / 0.642 / 0.289 | 0.742 / 0.699 / 0.435 | 0.599 / 0.620 / 0.248
NREM2 | ADA25_CSL | 0.510 / 0.579 / 0.276 | 0.554 / 0.606 / 0.271 | 0.694 / 0.715 / 0.449 | 0.588 / 0.611 / 0.229
NREM2 | ADA50 | 0.614 / 0.632 / 0.268 | 0.601 / 0.631 / 0.280 | 0.738 / 0.715 / 0.442 | 0.621 / 0.647 / 0.308
NREM2 | ADA50_CSL | 0.409 / 0.504 / 0.225 | 0.523 / 0.585 / 0.261 | 0.682 / 0.703 / 0.420 | 0.564 / 0.597 / 0.210
NREM2 | base | 0.701 / 0.535 / 0.281 | 0.703 / 0.529 / 0.284 | 0.744 / 0.658 / 0.419 | 0.742 / 0.706 / 0.439
NREM2 | base_CSL | 0.629 / 0.642 / 0.288 | 0.638 / 0.651 / 0.305 | 0.739 / 0.722 / 0.451 | 0.690 / 0.710 / 0.433
NREM3 | ADA100 | 0.364 / 0.769 / 0.370 | 0.405 / 0.717 / 0.392 | 0.599 / 0.818 / 0.588 | 0.361 / 0.713 / 0.352
NREM3 | ADA100_CSL | 0.199 / 0.781 / 0.242 | 0.403 / 0.744 / 0.396 | 0.498 / 0.751 / 0.482 | 0.381 / 0.729 / 0.373
NREM3 | ADA25 | 0.397 / 0.752 / 0.394 | 0.437 / 0.739 / 0.426 | 0.644 / 0.784 / 0.633 | 0.334 / 0.720 / 0.333
NREM3 | ADA25_CSL | 0.351 / 0.775 / 0.363 | 0.391 / 0.743 / 0.387 | 0.594 / 0.856 / 0.592 | 0.359 / 0.695 / 0.348
NREM3 | ADA50 | 0.335 / 0.764 / 0.346 | 0.412 / 0.734 / 0.403 | 0.618 / 0.760 / 0.608 | 0.408 / 0.760 / 0.405
NREM3 | ADA50_CSL | 0.285 / 0.776 / 0.310 | 0.385 / 0.741 / 0.381 | 0.523 / 0.794 / 0.514 | 0.318 / 0.692 / 0.311
NREM3 | base | 0.245 / 0.393 / 0.289 | 0.219 / 0.363 / 0.281 | 0.596 / 0.680 / 0.612 | 0.594 / 0.756 / 0.580
NREM3 | base_CSL | 0.354 / 0.651 / 0.335 | 0.464 / 0.724 / 0.448 | 0.673 / 0.790 / 0.665 | 0.470 / 0.842 / 0.483
REM | ADA100 | 0.306 / 0.605 / 0.210 | 0.342 / 0.597 / 0.257 | 0.508 / 0.719 / 0.447 | 0.277 / 0.544 / 0.178
REM | ADA100_CSL | 0.259 / 0.596 / 0.150 | 0.332 / 0.614 / 0.242 | 0.422 / 0.654 / 0.350 | 0.294 / 0.560 / 0.200
REM | ADA25 | 0.282 / 0.525 / 0.190 | 0.352 / 0.593 / 0.269 | 0.502 / 0.643 / 0.464 | 0.282 / 0.539 / 0.187
REM | ADA25_CSL | 0.325 / 0.640 / 0.233 | 0.336 / 0.604 / 0.247 | 0.505 / 0.740 / 0.443 | 0.277 / 0.528 / 0.182
REM | ADA50 | 0.283 / 0.548 / 0.187 | 0.356 / 0.600 / 0.274 | 0.514 / 0.666 / 0.468 | 0.332 / 0.593 / 0.244
REM | ADA50_CSL | 0.292 / 0.610 / 0.193 | 0.330 / 0.609 / 0.239 | 0.484 / 0.714 / 0.420 | 0.253 / 0.509 / 0.153
REM | base | 0.064 / 0.186 / 0.086 | 0.031 / 0.126 / 0.065 | 0.425 / 0.537 / 0.446 | 0.628 / 0.757 / 0.588
REM | base_CSL | 0.322 / 0.606 / 0.228 | 0.295 / 0.541 / 0.202 | 0.524 / 0.657 / 0.488 | 0.579 / 0.808 / 0.531
wake | ADA100 | 0.457 / 0.598 / 0.307 | 0.442 / 0.595 / 0.266 | 0.572 / 0.698 / 0.433 | 0.425 / 0.583 / 0.240
wake | ADA100_CSL | 0.380 / 0.510 / 0.295 | 0.438 / 0.599 / 0.245 | 0.534 / 0.675 / 0.375 | 0.435 / 0.591 / 0.254
wake | ADA25 | 0.504 / 0.632 / 0.367 | 0.454 / 0.603 / 0.282 | 0.582 / 0.700 / 0.451 | 0.446 / 0.596 / 0.275
wake | ADA25_CSL | 0.472 / 0.617 / 0.308 | 0.450 / 0.607 / 0.260 | 0.586 / 0.715 / 0.441 | 0.427 / 0.584 / 0.243
wake | ADA50 | 0.485 / 0.618 / 0.341 | 0.439 / 0.594 / 0.256 | 0.577 / 0.701 / 0.438 | 0.449 / 0.598 / 0.277
wake | ADA50_CSL | 0.461 / 0.604 / 0.299 | 0.438 / 0.597 / 0.244 | 0.557 / 0.691 / 0.405 | 0.410 / 0.568 / 0.224
wake | base | 0.498 / 0.621 / 0.370 | 0.494 / 0.616 / 0.369 | 0.583 / 0.695 / 0.458 | 0.464 / 0.581 / 0.369
wake | base_CSL | 0.524 / 0.675 / 0.342 | 0.494 / 0.636 / 0.329 | 0.599 / 0.728 / 0.453 | 0.485 / 0.604 / 0.374
Table A5. Global metrics (IAM and macro accuracy) of every model under every modeling scenario for the different SSC tasks.
Task | Scenario | ANNopt (IAM / ACCmacro) | DTopt (IAM / ACCmacro) | ENSopt (IAM / ACCmacro) | KNNopt (IAM / ACCmacro)
SW | ADA100 | 0.186 / 0.704 | 0.113 / 0.676 | 0.421 / 0.783 | 0.098 / 0.679
SW | ADA100_CSL | 0.022 / 0.649 | −0.154 / 0.569 | 0.424 / 0.784 | 0.136 / 0.685
SW | ADA25 | 0.236 / 0.781 | 0.246 / 0.783 | 0.280 / 0.806 | 0.344 / 0.770
SW | ADA25_CSL | 0.148 / 0.698 | 0.158 / 0.701 | 0.276 / 0.806 | 0.188 / 0.714
SW | ADA50 | 0.327 / 0.746 | 0.308 / 0.743 | 0.385 / 0.800 | 0.246 / 0.771
SW | ADA50_CSL | 0.097 / 0.676 | 0.055 / 0.660 | 0.338 / 0.798 | 0.048 / 0.658
SW | base | 0.163 / 0.800 | 0.136 / 0.799 | 0.228 / 0.817 | 0.114 / 0.791
SW | base_CSL | 0.367 / 0.773 | 0.320 / 0.770 | 0.333 / 0.811 | 0.342 / 0.777
3-SSC | ADA100 | −0.128 / 0.574 | −0.152 / 0.562 | 0.179 / 0.702 | −0.195 / 0.542
3-SSC | ADA100_CSL | −0.351 / 0.448 | −0.206 / 0.538 | 0.249 / 0.717 | −0.197 / 0.541
3-SSC | ADA25 | 0.008 / 0.636 | −0.063 / 0.602 | 0.092 / 0.731 | −0.102 / 0.670
3-SSC | ADA25_CSL | −0.179 / 0.552 | −0.137 / 0.572 | 0.146 / 0.692 | −0.005 / 0.627
3-SSC | ADA50 | −0.007 / 0.622 | −0.111 / 0.579 | 0.195 / 0.715 | −0.032 / 0.618
3-SSC | ADA50_CSL | −0.222 / 0.530 | −0.170 / 0.554 | 0.157 / 0.688 | −0.158 / 0.558
3-SSC | base | −0.238 / 0.692 | −0.297 / 0.687 | −0.062 / 0.731 | 0.204 / 0.725
3-SSC | base_CSL | 0.025 / 0.638 | −0.002 / 0.628 | −0.315 / 0.681 | 0.204 / 0.725
4-SSC | ADA100 | −0.168 / 0.560 | −0.191 / 0.548 | 0.144 / 0.681 | −0.327 / 0.491
4-SSC | ADA100_CSL | −0.340 / 0.471 | −0.282 / 0.504 | −0.137 / 0.582 | −0.327 / 0.491
4-SSC | ADA25 | −0.112 / 0.590 | −0.091 / 0.590 | 0.098 / 0.718 | −0.306 / 0.599
4-SSC | ADA25_CSL | −0.369 / 0.459 | −0.196 / 0.545 | 0.120 / 0.674 | −0.225 / 0.538
4-SSC | ADA50 | −0.147 / 0.572 | −0.163 / 0.562 | 0.237 / 0.702 | −0.225 / 0.539
4-SSC | ADA50_CSL | −0.372 / 0.459 | −0.273 / 0.514 | 0.064 / 0.659 | −0.278 / 0.516
4-SSC | base | −0.328 / 0.663 | −0.496 / 0.656 | −0.088 / 0.713 | −0.194 / 0.676
4-SSC | base_CSL | −0.093 / 0.599 | −0.089 / 0.587 | 0.118 / 0.719 | −0.150 / 0.589
5-SSC | ADA100 | −0.397 / 0.424 | −0.304 / 0.461 | −0.020 / 0.588 | −0.380 / 0.423
5-SSC | ADA100_CSL | −0.684 / 0.230 | −0.371 / 0.417 | −0.142 / 0.536 | −0.355 / 0.437
5-SSC | ADA25 | −0.314 / 0.529 | −0.250 / 0.489 | −0.048 / 0.635 | −0.343 / 0.460
5-SSC | ADA25_CSL | −0.401 / 0.417 | −0.346 / 0.437 | −0.048 / 0.587 | −0.339 / 0.450
5-SSC | ADA50 | −0.328 / 0.479 | −0.286 / 0.468 | −0.012 / 0.626 | −0.290 / 0.481
5-SSC | ADA50_CSL | −0.494 / 0.359 | −0.375 / 0.417 | −0.096 / 0.565 | −0.395 / 0.426
5-SSC | base | −0.525 / 0.573 | −0.540 / 0.573 | −0.207 / 0.640 | −0.007 / 0.619
5-SSC | base_CSL | −0.346 / 0.511 | −0.217 / 0.505 | 0.000 / 0.636 | −0.164 / 0.563

Appendix B. ROC-AUC Graphs

Figure A1. ROC-AUC curves for the best model in each scenario and task, under the base, base_CSL, ADA100, ADA50, ADA25, and ADA100_CSL scenarios (different scenarios in different rows, different SSC tasks in each column, as described in the sub-captions).
Figure A2. ROC-AUC curves for the best model in each scenario and task, under the ADA50_CSL and ADA25_CSL scenarios.

References

  1. Hussain, Z.; Sheng, Q.Z.; Zhang, W.E.; Ortiz, J.; Pouriyeh, S. Non-invasive Techniques for Monitoring Different Aspects of Sleep: A comprehensive review. ACM Trans. Comput. Healthc. 2022, 3, 1–26. [Google Scholar] [CrossRef]
  2. Boostani, R.; Karimzadeh, F.; Nami, M. A comparative review on sleep stage classification methods in patients and healthy individuals. Comput. Methods Programs Biomed. 2017, 140, 77–91. [Google Scholar] [CrossRef]
  3. Kryger, M.H.; Roth, T.; Dement, W.C. Principles and Practice of Sleep Medicine E-Book: Expert Consult-Online and Print; Elsevier Health Science: Amsterdam, The Netherlands, 2010. [Google Scholar]
  4. Zhou, D.; Xu, Q.; Wang, J.; Xu, H.; Kettunen, L.; Chang, Z.; Cong, F. Alleviating Class Imbalance Problem in Automatic Sleep Stage Classification. IEEE Trans. Instrum. Meas. 2022, 71, 4006612. [Google Scholar] [CrossRef]
  5. Chinoy, E.D.; Cuellar, J.A.; Huwa, K.E.; Jameson, J.T.; Watson, C.H.; Bessman, S.C.; Hirsch, D.A.; Cooper, A.D.; Drummond, S.P.; Markwald, R.R. Performance of seven consumer sleep-tracking devices compared with polysomnography. Sleep 2021, 44, zsaa291. [Google Scholar] [CrossRef]
  6. Tal, A.; Shinar, Z.; Shaki, D.; Codish, S.; Goldbart, A. Validation of contact-free sleep monitoring device with comparison to polysomnography. J. Clin. Sleep Med. 2017, 13, 517–522. [Google Scholar] [CrossRef]
  7. Gaiduk, M.; Penzel, T.; Ortega, J.A.; Seepold, R. Automatic sleep stages classification using respiratory, heart rate and movement signals. Physiol. Meas. 2018, 39, 124008. [Google Scholar] [CrossRef]
  8. Morokuma, S.; Hayashi, T.; Kanegae, M.; Mizukami, Y.; Asano, S.; Kimura, I.; Tateizumi, Y.; Ueno, H.; Ikeda, S.; Niizeki, K. Deep learning-based sleep stage classification with cardiorespiratory and body movement activities in individuals with suspected sleep disorders. Sci. Rep. 2023, 13, 17730. [Google Scholar] [CrossRef]
  9. Mitsukura, Y.; Fukunaga, K.; Yasui, M.; Mimura, M. Sleep stage detection using only heart rate. Health Inform. J. 2020, 26, 376–387. [Google Scholar] [CrossRef] [PubMed]
  10. Brunner, C.; Hofer, F. SleepECG: A Python package for sleep staging based on heart rate. J. Open Source Softw. 2023, 8, 5411. [Google Scholar] [CrossRef]
  11. Yi, R.; Enayati, M.; Keller, J.M.; Popescu, M.; Skubic, M. Non-invasive in-home sleep stage classification using a ballistocardiography bed sensor. In Proceedings of the 2019 IEEE EMBS International Conference on Biomedical & Health Informatics (BHI), Chicago, IL, USA, 19–22 May 2019; IEEE: Piscataway, NJ, USA, 2019; pp. 1–4. [Google Scholar]
  12. Pierleoni, P.; Belli, A.; Palma, L.; Pernini, L.; Valenti, S. An accurate device for real-time altitude estimation using data fusion algorithms. In Proceedings of the 2014 IEEE/ASME 10th International Conference on Mechatronic and Embedded Systems and Applications (MESA), Senigallia, Italy, 10–12 September 2014; pp. 1–5. [Google Scholar] [CrossRef]
  13. Pardamean, B.; Budiarto, A.; Mahesworo, B.; Hidayat, A.A.; Sudigyo, D. Supervised Learning for Imbalance Sleep Stage Classification Problem. Commun. Math. Biol. Neurosci. 2023, 2023, 131. [Google Scholar] [CrossRef]
  14. Liang, Z.; Chapa-Martell, M.A. A Multi-Level Classification Approach for Sleep Stage Prediction With Processed Data Derived From Consumer Wearable Activity Trackers. Front. Digit. Health 2021, 3, 665946. [Google Scholar] [CrossRef] [PubMed]
  15. Liang, Z.; Chapa-Martell, M.A. Combining Resampling and Machine Learning to Improve Sleep-Wake Detection of Fitbit Wristbands. In Proceedings of the 2019 IEEE International Conference on Healthcare Informatics (ICHI), Xi’an, China, 10–13 June 2019; pp. 1–3. [Google Scholar] [CrossRef]
  16. Wang, K.; Yang, J.; Shetty, A.; Dunn, J. DREAMT: Dataset for Real-time sleep stage EstimAtion using Multisensor wearable Technology. PhysioNet, 2024; RRID:SCR_007345. [CrossRef]
  17. Goldberger, A.L.; Amaral, L.A.N.; Glass, L.; Hausdorff, J.M.; Ivanov, P.C.; Mark, R.G.; Mietus, J.E.; Moody, G.B.; Peng, C.K.; Stanley, H.E. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. Circulation 2000, 101, e215–e220. [Google Scholar] [CrossRef] [PubMed]
  18. Logacjov, A.; Bach, K.; Mork, P.J. Long-term self-supervised learning for accelerometer-based sleep–wake recognition. Eng. Appl. Artif. Intell. 2025, 141, 109758. [Google Scholar] [CrossRef]
  19. Ohayon, M.; Wickwire, E.; Hirshkowitz, M.; Albert, S.; Avidan, A.; Daly, F.; Dauvilliers, Y.; Ferri, R.; Fung, C.; Gozal, D.; et al. National Sleep Foundation’s sleep quality recommendations: First report. Sleep Health 2017, 3, 6–19. [Google Scholar] [CrossRef] [PubMed]
  20. Wang, L.; Han, M.; Li, X.; Zhang, N.; Cheng, H. Review of Classification Methods on Unbalanced Data Sets. IEEE Access 2021, 9, 64606–64628. [Google Scholar] [CrossRef]
  21. Kaur, H.; Pannu, H.S.; Malhi, A.K. A systematic review on imbalanced data challenges in machine learning: Applications and solutions. ACM Comput. Surv. 2019, 52, 79. [Google Scholar] [CrossRef]
  22. Jafarigol, E.; Trafalis, T. A Review of Machine Learning Techniques in Imbalanced Data and Future Trends. arXiv 2023, arXiv:2310.07917. [Google Scholar] [CrossRef]
  23. Yang, Y.; Khorshidi, H.A.; Aickelin, U. A review on over-sampling techniques in classification of multi-class imbalanced datasets: Insights for medical problems. Front. Digit. Health 2024, 6, 1430245. [Google Scholar] [CrossRef]
  24. Altalhan, M.; Algarni, A.; Alouane, M.T.H. Imbalanced Data Problem in Machine Learning: A Review. IEEE Access 2025, 13, 13686–13699. [Google Scholar] [CrossRef]
  25. Jadhav, A.; Mostafa, S.M.; Elmannai, H.; Karim, F.K. An Empirical Assessment of Performance of Data Balancing Techniques in Classification Task. Appl. Sci. 2022, 12, 3928. [Google Scholar] [CrossRef]
  26. Balakrishnan, A.; Medikonda, J.; Namboothiri, P.K.; Natarajan, M. Parkinson’s Disease Stage Classification with Gait Analysis using Machine Learning Techniques and SMOTE-based Approach for Class Imbalance Problem. In Proceedings of the 2022 IEEE International Conference on Distributed Computing, VLSI, Electrical Circuits and Robotics, DISCOVER 2022—Proceedings, Shivamogga, India, 14–15 October 2022; Institute of Electrical and Electronics Engineers Inc.: Piscataway, NJ, USA, 2022; pp. 277–281. [Google Scholar] [CrossRef]
  27. Spelmen, V.S.; Porkodi, R. A Review on Handling Imbalanced Data. In Proceedings of the 2018 IEEE International Conference on Current Trends Toward Converging Technologies, Coimbatore, India, 1–3 March 2018; Institute of Electrical and Electronics Engineers: Piscataway, NJ, USA, 2018; p. 115. [Google Scholar] [CrossRef]
  28. Brandt, J.; Lanzén, E. A Comparative Review of SMOTE and ADASYN in Imbalanced Data Classification. Master’s Thesis, Uppsala University, Uppsala, Sweden, 2020. Available online: https://www.diva-portal.org/smash/get/diva2:1519153/FULLTEXT01.pdf (accessed on 1 January 2026).
  29. Abdi, L.; Hashemi, S. To Combat Multi-Class Imbalanced Problems by Means of Over-Sampling Techniques. IEEE Trans. Knowl. Data Eng. 2016, 28, 238–251. [Google Scholar] [CrossRef]
  30. Moghadas-Dastjerdi, H.; Sha-E-Tallat, H.R.; Sannachi, L.; Sadeghi-Naini, A.; Czarnota, G.J. A priori prediction of tumour response to neoadjuvant chemotherapy in breast cancer patients using quantitative CT and machine learning. Sci. Rep. 2020, 10, 10936. [Google Scholar] [CrossRef]
  31. Mienye, I.D.; Sun, Y. Performance analysis of cost-sensitive learning methods with application to imbalanced medical data. Inform. Med. Unlocked 2021, 25, 10069. [Google Scholar] [CrossRef]
  32. Araf, I.; Idri, A.; Chairi, I. Cost-sensitive learning for imbalanced medical data: A review. Artif. Intell. Rev. 2024, 57, 80. [Google Scholar] [CrossRef]
  33. Ren, Z.; Zhu, Y.; Kang, W.; Fu, H.; Niu, Q.; Gao, D.; Yan, K.; Hong, J. Adaptive cost-sensitive learning: Improving the convergence of intelligent diagnosis models under imbalanced data. Knowl.-Based Syst. 2022, 241, 108296. [Google Scholar] [CrossRef]
  34. Le, T.; Vo, M.T.; Vo, B.; Lee, M.Y.; Baik, S.W. A Hybrid Approach Using Oversampling Technique and Cost-Sensitive Learning for Bankruptcy Prediction. Complexity 2019, 2019, 8460934. [Google Scholar] [CrossRef]
  35. El-Amir, S.; El-Henawy, I. An Improved Model Using Oversampling Technique and Cost-Sensitive Learning for Imbalanced Data Problem. Inf. Sci. Appl. 2024, 2, 33–50. [Google Scholar] [CrossRef]
  36. Naghavi, N.; Miller, A.; Wade, E. Towards real-time prediction of freezing of gait in patients with parkinson’s disease: Addressing the class imbalance problem. Sensors 2019, 19, 3898. [Google Scholar] [CrossRef]
  37. Pes, B. Learning from high-dimensional biomedical datasets: The issue of class imbalance. IEEE Access 2020, 8, 13527–13540. [Google Scholar] [CrossRef]
  38. Feng, F.; Li, K.C.; Shen, J.; Zhou, Q.; Yang, X. Using Cost-Sensitive Learning and Feature Selection Algorithms to Improve the Performance of Imbalanced Classification. IEEE Access 2020, 8, 69979–69996. [Google Scholar] [CrossRef]
  39. Solanki, Y.S.; Chakrabarti, P.; Jasinski, M.; Leonowicz, Z.; Bolshev, V.; Vinogradov, A.; Jasinska, E.; Gono, R.; Nami, M. A hybrid supervised machine learning classifier system for breast cancer prognosis using feature selection and data imbalance handling approaches. Electronics 2021, 10, 699. [Google Scholar] [CrossRef]
  40. Ghosh, K.; Bellinger, C.; Corizzo, R.; Branco, P.; Krawczyk, B.; Japkowicz, N. The class imbalance problem in deep learning. Mach. Learn. 2024, 113, 4845–4901. [Google Scholar] [CrossRef]
  41. Apu, K.U.; Ali, M. A Systematic Literature Review on AI Approaches to Address Data Imbalance in Machine Learning. Front. Appl. Eng. Technol. 2025, 2, 58–77. [Google Scholar] [CrossRef]
  42. Zhu, J.; Pu, S.; He, J.; Su, D.; Cai, W.; Xu, X.; Liu, H. Processing imbalanced medical data at the data level with assisted-reproduction data as an example. BioData Min. 2024, 17, 29. [Google Scholar] [CrossRef] [PubMed]
  43. McCarthy, C.; Pradhan, N.; Redpath, C.; Adler, A. Validation of the Empatica E4 wristband. In Proceedings of the 2016 IEEE EMBS International Student Conference (ISC), Ottawa, ON, Canada, 29–31 May 2016; pp. 1–4. [Google Scholar] [CrossRef]
  44. Prieto, M.D.; Cirrincione, G.; Espinosa, A.G.; Ortega, J.A.; Henao, H. Bearing Fault Detection by a Novel Condition-Monitoring Scheme Based on Statistical-Time Features and Neural Networks. IEEE Trans. Ind. Electron. 2013, 60, 3398–3407. [Google Scholar] [CrossRef]
  45. Nayana, B.R.; Geethanjali, P. Analysis of Statistical Time-Domain Features Effectiveness in Identification of Bearing Faults From Vibration Signal. IEEE Sens. J. 2017, 17, 5618–5625. [Google Scholar] [CrossRef]
  46. Grandini, M.; Bagli, E.; Visani, G. Metrics for Multi-Class Classification: An Overview. arXiv 2020, arXiv:2008.05756. [Google Scholar] [CrossRef]
  47. Chicco, D.; Jurman, G. The Matthews correlation coefficient (MCC) should replace the ROC-AUC as the standard metric for assessing binary classification. BioData Min. 2023, 16, 4. [Google Scholar] [CrossRef] [PubMed]
  48. Mortaz, E. Imbalance accuracy metric for model selection in multi-class imbalance classification problems. Knowl.-Based Syst. 2020, 210, 106490. [Google Scholar] [CrossRef]
Figure 1. Bump charts (one for each task) ranking the models of the different scenarios by their per-class F1 scores (F1c).
Figure 2. Bump charts (one for each task) ranking the models of the different scenarios by their per-class Matthews correlation coefficients (MCCc).
Figure 3. Bump charts (one for each task) ranking the models of the different scenarios by their per-class geometric means (GMc).
Figure 4. Bar charts (one for each task) of the IAM metric value reached by each model (bar color) in every scenario (horizontal axis).
Figure 5. Bar charts (one for each task) of the Macro-Accuracy metric value reached by each model (bar color) in every scenario (horizontal axis).
Figure 6. Heat map of the MCCc values of the considered models under the different scenarios associated with the SW task (best value for each class in bold).
Figure 7. Heat map of the MCCc values of the considered models under the different scenarios associated with the 3-SSC task (best value for each class in bold).
Figure 8. Heat map of the MCCc values of the considered models under the different scenarios associated with the 4-SSC task (best value for each class in bold).
Figure 9. Heat map of the MCCc values of the considered models under the different scenarios associated with the 5-SSC task (best value for each class in bold).
Figure 10. Global SHAP-based feature importance (top 14) for each of the four SSC tasks.
Table 1. Extracted features and associated equations.

| Feature | Equation |
|---|---|
| Mean | $\mu = \frac{1}{N}\sum_{i=1}^{N} x_i$ |
| Standard Deviation | $\sigma = \sqrt{\frac{1}{N-1}\sum_{i=1}^{N}(x_i - \mu)^2}$ |
| First Quartile (Q1) | value of $x$ at the 25th percentile |
| Median (Q2) | value of $x$ at the 50th percentile |
| Third Quartile (Q3) | value of $x$ at the 75th percentile |
| Minimum | $\min(x_1, x_2, \ldots, x_N)$ |
| Maximum | $\max(x_1, x_2, \ldots, x_N)$ |
| Root Mean Square (RMS) | $\mathrm{RMS} = \sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$ |
| Kurtosis | $\mathrm{Kurt}(x) = \frac{1}{N}\sum_{i=1}^{N}\frac{(x_i - \mu)^4}{\sigma^4}$ |
| Peak-to-Peak | $\mathrm{P2P} = \max(x_i) - \min(x_i)$ |
| Crest Factor | $\mathrm{CF} = \frac{\max(|x_i|)}{\mathrm{RMS}}$ |
| Skewness | $\mathrm{Skew}(x) = \frac{1}{N}\sum_{i=1}^{N}\frac{(x_i - \mu)^3}{\sigma^3}$ |
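As an illustration, the features of Table 1 can be computed per signal window with a few lines of NumPy. This is a sketch of ours (function and key names are assumptions, not the paper's code), following the table's conventions: sample standard deviation with $N-1$, and $1/N$ moment sums for kurtosis and skewness.

```python
import numpy as np

def window_features(x):
    """Statistical time-domain features (Table 1) for one signal window.
    Illustrative sketch; names are ours, not taken from the paper's code."""
    x = np.asarray(x, dtype=float)
    mu = x.mean()
    sigma = x.std(ddof=1)                    # sample std, 1/(N-1) as in Table 1
    rms = np.sqrt(np.mean(x ** 2))
    return {
        "mean": mu,
        "std": sigma,
        "q1": np.percentile(x, 25),
        "median": np.percentile(x, 50),
        "q3": np.percentile(x, 75),
        "min": x.min(),
        "max": x.max(),
        "rms": rms,
        "kurtosis": np.mean((x - mu) ** 4) / sigma ** 4,   # 1/N moment sum
        "p2p": x.max() - x.min(),
        "crest_factor": np.max(np.abs(x)) / rms,
        "skewness": np.mean((x - mu) ** 3) / sigma ** 3,   # 1/N moment sum
    }
```

Applied to every window of the heart rate and motion signals, this yields the 12-dimensional feature vector per channel used as model input.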
Table 2. Number of windows for each sleep stage in each SSC task addressed.

| SSC Task | Wake | REM | NREM 1 | NREM 2 | NREM 3 |
|---|---|---|---|---|---|
| SW | 20,113 (25%) | sleep (REM + NREM): 59,978 (75%) | | | |
| 3-SSC | 20,113 (25%) | 8406 (11%) | NREM 1–3: 51,572 (64%) | | |
| 4-SSC | 20,113 (25%) | 8406 (11%) | NREM 1–2: 48,869 (61%) | | 2703 (3%) |
| 5-SSC | 20,113 (25%) | 8406 (11%) | 8824 (11%) | 40,045 (50%) | 2703 (3%) |
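From the counts in Table 2, the majority-to-minority imbalance ratio of each task can be derived directly, which is what motivates the imbalance-aware scenarios. A quick check, with the counts copied from the table:

```python
# Class counts per task, copied from Table 2.
tasks = {
    "SW":    [20113, 59978],
    "3-SSC": [20113, 8406, 51572],
    "4-SSC": [20113, 8406, 48869, 2703],
    "5-SSC": [20113, 8406, 8824, 40045, 2703],
}

# Majority-to-minority ratio per task: roughly 3:1 for SW, rising sharply
# once the small NREM 3 class enters the task definition.
for name, counts in tasks.items():
    ratio = max(counts) / min(counts)
    print(f"{name}: imbalance ratio = {ratio:.1f}")
# SW: 3.0, 3-SSC: 6.1, 4-SSC: 18.1, 5-SSC: 14.8
```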
Table 3. Cost matrix for the _CSL scenarios addressing the SW task (sleep and wake). Rows give the true class, columns the predicted class.

| True \ Predicted | Sleep | Wake |
|---|---|---|
| sleep | 0 | 1 |
| wake | 2 | 0 |
Table 4. Cost matrix for the _CSL scenarios addressing the 3-SSC task (NREM, REM, and wake). Rows give the true class, columns the predicted class.

| True \ Predicted | NREM | REM | Wake |
|---|---|---|---|
| NREM | 0 | 1 | 1 |
| REM | 3 | 0 | 3 |
| wake | 2 | 2 | 0 |
Table 5. Cost matrix for the _CSL scenarios addressing the 4-SSC task (NREM12, NREM3, REM, and wake). Rows give the true class, columns the predicted class.

| True \ Predicted | NREM12 | NREM3 | REM | Wake |
|---|---|---|---|---|
| NREM12 | 0 | 1 | 1 | 1 |
| NREM3 | 4 | 0 | 4 | 4 |
| REM | 3 | 3 | 0 | 3 |
| wake | 2 | 2 | 2 | 0 |
Table 6. Cost matrix for the _CSL scenarios addressing the 5-SSC task (NREM1, NREM2, NREM3, REM, and wake). Rows give the true class, columns the predicted class.

| True \ Predicted | NREM1 | NREM2 | NREM3 | REM | Wake |
|---|---|---|---|---|---|
| NREM1 | 0 | 3 | 3 | 3 | 3 |
| NREM2 | 1 | 0 | 1 | 1 | 1 |
| NREM3 | 5 | 5 | 0 | 5 | 5 |
| REM | 3 | 3 | 3 | 0 | 3 |
| wake | 2 | 2 | 2 | 2 | 0 |
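Cost matrices like those in Tables 3–6 can be folded into cost-sensitive training in several ways. One common simplification, sketched below as an assumption on our side (not necessarily the paper's implementation), collapses each row of the matrix into a single per-class weight, for classifiers that accept class weights rather than full cost matrices. Shown here for the 3-SSC matrix of Table 4:

```python
import numpy as np

# Cost matrix for the 3-SSC task (Table 4): rows = true class, cols = predicted.
classes = ["NREM", "REM", "wake"]
cost = np.array([[0, 1, 1],
                 [3, 0, 3],
                 [2, 2, 0]], dtype=float)

# Mean off-diagonal cost per row: misclassifying a REM epoch then counts
# three times as much as misclassifying a NREM one.
row_cost = cost.sum(axis=1) / (cost.shape[1] - 1)
class_weight = {c: float(w) for c, w in zip(classes, row_cost)}
print(class_weight)   # {'NREM': 1.0, 'REM': 3.0, 'wake': 2.0}
```

The resulting dictionary has the shape expected by `class_weight`-style parameters of common learners; models that support a full cost matrix can of course consume Tables 3–6 directly.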
Table 7. Performance metrics used to measure the multiclass (C classes) experiments.

| Metric | Formula/Definition | Type |
|---|---|---|
| True Positives | $TP_c$ = correctly c-classified instances of class c | class |
| True Negatives | $TN_c$ = correctly not c-classified instances of other classes | class |
| False Positives | $FP_c$ = incorrectly c-classified instances of other classes | class |
| False Negatives | $FN_c$ = incorrectly not c-classified instances of class c | class |
| Sensitivity | $\mathrm{Sens}_c = \frac{TP_c}{TP_c + FN_c}$ | class |
| Specificity | $\mathrm{Spec}_c = \frac{TN_c}{TN_c + FP_c}$ | class |
| Precision | $\mathrm{Prec}_c = \frac{TP_c}{TP_c + FP_c}$ | class |
| F1 Score | $F1_c = \frac{2 \cdot \mathrm{Prec}_c \cdot \mathrm{Sens}_c}{\mathrm{Prec}_c + \mathrm{Sens}_c}$ | class |
| Geometric Mean | $GM_c = \sqrt{\mathrm{Sens}_c \cdot \mathrm{Spec}_c}$ | class |
| Matthews Correl. Coef. | $MCC_c = \frac{TP_c \cdot TN_c - FP_c \cdot FN_c}{\sqrt{(TP_c + FP_c)(TP_c + FN_c)(TN_c + FP_c)(TN_c + FN_c)}}$ | class |
| Micro Accuracy | $\mathrm{MicroAcc} = \frac{\sum_c TP_c}{\sum_c (TP_c + FP_c)}$ | global |
| Imbalance Accuracy Metric | $\mathrm{IAM} = \frac{1}{C} \sum_{c=1}^{C} \frac{TP_c - \max(FP_c, FN_c)}{\max(TP_c + FP_c,\, TP_c + FN_c)}$ | global |
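For concreteness, the per-class counts and the IAM of Table 7 can be computed from a confusion matrix as follows (a minimal sketch; helper names are ours):

```python
import numpy as np

def per_class_counts(cm):
    """TP/FP/FN/TN per class from a C x C confusion matrix (rows = true class)."""
    cm = np.asarray(cm, dtype=float)
    tp = np.diag(cm)
    fp = cm.sum(axis=0) - tp          # predicted as c but belonging elsewhere
    fn = cm.sum(axis=1) - tp          # belonging to c but predicted elsewhere
    tn = cm.sum() - tp - fp - fn
    return tp, fp, fn, tn

def iam(cm):
    """Imbalance Accuracy Metric (Table 7): mean over classes of
    (TP_c - max(FP_c, FN_c)) / max(TP_c + FP_c, TP_c + FN_c)."""
    tp, fp, fn, _ = per_class_counts(cm)
    return float(np.mean((tp - np.maximum(fp, fn)) /
                         np.maximum(tp + fp, tp + fn)))
```

A perfect classifier yields IAM = 1, while errors concentrated on a minority class pull the average down, which is why the paper uses it alongside macro accuracy for model selection.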
Table 8. Wilcoxon signed-rank test results for the comparison between the ensemble model and the other evaluated classifiers across the 32 experimental scenarios, using macro accuracy and IAM as global performance metrics.

| Comparison | ACCmacro p-Value | IAM p-Value |
|---|---|---|
| ENSopt vs. ANNopt | $7.95 \times 10^{-7}$ | $2.50 \times 10^{-7}$ |
| ENSopt vs. DTopt | $7.94 \times 10^{-7}$ | $6.03 \times 10^{-6}$ |
| ENSopt vs. KNNopt | $1.16 \times 10^{-8}$ | $4.40 \times 10^{-5}$ |
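Comparisons like those in Table 8 can be reproduced in outline with SciPy's paired Wilcoxon signed-rank test. The scores below are invented placeholders for illustration, not the paper's per-scenario results:

```python
from scipy.stats import wilcoxon

# Paired, non-parametric comparison of two models' global scores across
# scenarios (the paper pairs over its 32 scenarios; these values are made up).
ens_scores = [0.82, 0.73, 0.72, 0.64, 0.81, 0.74, 0.70, 0.65]
knn_scores = [0.78, 0.69, 0.68, 0.61, 0.77, 0.70, 0.66, 0.62]

stat, p_value = wilcoxon(ens_scores, knn_scores)
# A small p-value indicates a systematic difference between the paired scores.
print(f"W={stat:.1f}, p={p_value:.4f}")
```

The test is appropriate here because the same scenarios are evaluated by both models (paired samples) and no normality of the score differences is assumed.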
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Sabbatini, L.; Belli, A.; Bruschi, S.; Esposito, M.; Raggiunto, S.; Pierleoni, P. Non-Invasive Sleep Stage Classification with Imbalance-Aware Machine Learning for Healthcare Monitoring. Big Data Cogn. Comput. 2026, 10, 116. https://doi.org/10.3390/bdcc10040116
