Search Results (6)

Search Parameters:
Keywords = classification of sleep sounds

23 pages, 5274 KiB  
Article
High-Precision Contactless Stereo Acoustic Monitoring in Polysomnographic Studies of Children
by Milan Smetana and Ladislav Janousek
Sensors 2025, 25(16), 5093; https://doi.org/10.3390/s25165093 - 16 Aug 2025
Viewed by 292
Abstract
This paper focuses on designing a robust stereophonic measurement set-up for sleep sound recording. The system is employed throughout the night during polysomnographic examinations of children in a pediatric sleep laboratory at a university hospital. Deep learning methods were used to classify the sounds in the recordings into four categories (snoring, breathing, silence, and other sounds). Specifically, a recurrent neural network with two long short-term memory layers was employed for classification. The network was trained on a dataset containing 1500 sounds from each category and achieved an accuracy of 91.16%. We developed an innovative sound classification algorithm, optimized for accuracy. The results were presented in a detailed report that included graphical representations and sound categorization throughout the night.
(This article belongs to the Special Issue AI on Biomedical Signal Sensing and Processing for Health Monitoring)
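The classifier described above (a recurrent network with two stacked LSTM layers and a four-class output) can be sketched as a plain-numpy forward pass. Everything below is illustrative only: the weights are random rather than trained, and the frame count, feature dimension, and hidden size are assumptions, since the abstract does not report them.

```python
import numpy as np

rng = np.random.default_rng(0)

def lstm_layer(x_seq, n_hidden, rng):
    """Run one LSTM layer over a (T, n_in) feature sequence.

    Weights are random here purely for illustration; in the paper's
    setting they would be learned from the labelled sound clips.
    """
    n_in = x_seq.shape[1]
    # One stacked weight matrix for the input, forget, cell and output gates.
    W = rng.standard_normal((4 * n_hidden, n_in + n_hidden)) * 0.1
    b = np.zeros(4 * n_hidden)
    h = np.zeros(n_hidden)
    c = np.zeros(n_hidden)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    outputs = []
    for x in x_seq:
        z = W @ np.concatenate([x, h]) + b
        i, f, g, o = np.split(z, 4)
        c = sig(f) * c + sig(i) * np.tanh(g)
        h = sig(o) * np.tanh(c)
        outputs.append(h)
    return np.array(outputs)

# A fake clip: 50 frames of 13 spectral features (hypothetical sizes).
clip = rng.standard_normal((50, 13))

# Two stacked LSTM layers, as in the paper, then a softmax over the
# four classes (snoring, breathing, silence, other sounds).
h1 = lstm_layer(clip, 32, rng)
h2 = lstm_layer(h1, 32, rng)
logits = rng.standard_normal((4, 32)) @ h2[-1]  # random readout, illustration only
probs = np.exp(logits - logits.max())
probs /= probs.sum()
print(probs.shape)
```

In practice the readout and gate weights would be trained end to end on the 1500-sounds-per-category dataset; the sketch only shows the shape of the computation.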

29 pages, 2664 KiB  
Article
Coherent Feature Extraction with Swarm Intelligence Based Hybrid Adaboost Weighted ELM Classification for Snoring Sound Classification
by Sunil Kumar Prabhakar, Harikumar Rajaguru and Dong-Ok Won
Diagnostics 2024, 14(17), 1857; https://doi.org/10.3390/diagnostics14171857 - 25 Aug 2024
Viewed by 1315
Abstract
Snoring is common in patients with obstructive sleep apnea and other sleep-related breathing disorders, and it greatly degrades quality of life both for the patients and for those around them. Because snoring is used as a screening parameter for diagnosing obstructive sleep apnea, accurate detection and classification of snoring sounds is important, and automated, high-precision snoring analysis and classification algorithms are required. In this work, features are first extracted from six domains: the time domain, frequency domain, Discrete Wavelet Transform (DWT) domain, sparse domain, eigenvalue domain, and cepstral domain. The extracted features are then selected using three feature selection techniques: Golden Eagle Optimization (GEO), the Salp Swarm Algorithm (SSA), and a Refined SSA. The selected features are finally classified with eight traditional machine learning classifiers and two proposed classifiers: a Firefly Algorithm-Weighted Extreme Learning Machine hybrid with an Adaboost model (FA-WELM-Adaboost) and a Capuchin Search Algorithm-Weighted Extreme Learning Machine hybrid with an Adaboost model (CSA-WELM-Adaboost). The analysis is performed on the MPSSC Interspeech dataset. The best results, an Unweighted Average Recall (UAR) of 74.23%, are obtained with DWT features, the Refined SSA feature selection technique, and the FA-WELM-Adaboost hybrid classifier; the second-best results, a UAR of 73.86%, are obtained with DWT features, the GEO feature selection technique, and the CSA-WELM-Adaboost hybrid classifier.
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
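The WELM-Adaboost idea above (a weighted extreme learning machine as the base learner inside an AdaBoost loop) can be sketched generically. This is not the authors' exact model: the swarm-optimized hyperparameters (firefly/capuchin search) are replaced by fixed guesses, and the toy data stands in for the selected DWT snore features.

```python
import numpy as np

rng = np.random.default_rng(1)

def welm_fit(X, y, sample_w, n_hidden=40, C=10.0, rng=rng):
    """Weighted Extreme Learning Machine: a random sigmoid hidden layer,
    then a sample-weighted ridge regression to one-hot targets."""
    A = rng.standard_normal((X.shape[1], n_hidden))
    b = rng.standard_normal(n_hidden)
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    T = np.eye(y.max() + 1)[y]  # one-hot targets
    W = np.diag(sample_w)
    beta = np.linalg.solve(H.T @ W @ H + np.eye(n_hidden) / C, H.T @ W @ T)
    return A, b, beta

def welm_predict(X, model):
    A, b, beta = model
    H = 1.0 / (1.0 + np.exp(-(X @ A + b)))
    return (H @ beta).argmax(axis=1)

# Toy two-class data standing in for the selected DWT snore features.
X = np.vstack([rng.normal(-1, 0.4, (60, 5)), rng.normal(1, 0.4, (60, 5))])
y = np.array([0] * 60 + [1] * 60)

# AdaBoost loop: up-weight the samples the current WELM gets wrong.
w = np.full(len(y), 1.0 / len(y))
ensemble = []
for _ in range(5):
    model = welm_fit(X, y, w)
    pred = welm_predict(X, model)
    err = w[pred != y].sum()
    alpha = 0.5 * np.log((1 - err + 1e-10) / (err + 1e-10))
    ensemble.append((alpha, model))
    w *= np.exp(alpha * (pred != y))
    w /= w.sum()

# Final decision: weighted vote of the ensemble.
votes = sum(a * np.eye(2)[welm_predict(X, m)] for a, m in ensemble)
acc = (votes.argmax(axis=1) == y).mean()
print(acc)
```

The swarm algorithms in the paper would tune quantities such as the WELM sample weights and regularization instead of leaving them fixed as here.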

12 pages, 1958 KiB  
Article
Validation of Tracheal Sound-Based Respiratory Effort Monitoring for Obstructive Sleep Apnoea Diagnosis
by Mireia Muñoz Rojo, Renard Xaviero Adhi Pramono, Nikesh Devani, Matthew Thomas, Swapna Mandal and Esther Rodriguez-Villegas
J. Clin. Med. 2024, 13(12), 3628; https://doi.org/10.3390/jcm13123628 - 20 Jun 2024
Cited by 1 | Viewed by 1566
Abstract
Background: Respiratory effort is considered important for the diagnosis of obstructive sleep apnoea (OSA) and other sleep disorders. However, current monitoring techniques can be obtrusive and interfere with a patient's natural sleep. This study examines the reliability of an unobtrusive tracheal sound-based approach to monitoring respiratory effort in the context of OSA, using manually marked respiratory inductance plethysmography (RIP) signals as the gold standard for validation. Methods: In total, 150 patients were trained on the use of a type III cardiorespiratory polygraph, which they took home to use alongside a neck-worn AcuPebble system. The respiratory effort channels obtained from the tracheal sound recordings were compared to the effort measured by the RIP bands in automatic and manual marking experiments. A total of 133 central apnoeas, 218 obstructive apnoeas, 263 obstructive hypopnoeas, and 270 randomly selected normal breathing segments were shuffled and blindly marked by a Registered Polysomnographic Technologist (RPSGT) in both types of channels. The RIP signals had previously been marked independently by another expert clinician in the context of diagnosing those patients, without access to the AcuPebble effort channel. The classification achieved with the acoustically obtained effort was assessed with statistical metrics, and the average amplitude distributions per respiratory event type for each channel were also studied to assess the overlap between event types. Results: Evaluated on the events where both scorers agreed in marking the gold-standard reference channel, the acoustic effort channel showed an average sensitivity of 90.5%, a specificity of 98.6%, and an accuracy of 96.8% against the reference standard with blind expert marking.
In addition, when the reference standard was classified with the Embla Remlogic 4.0 automatic software instead of expert marking, the acoustic channels outperformed the RIP channels (acoustic sensitivity: 71.9%; acoustic specificity: 97.2%; RIP sensitivity: 70.1%; RIP specificity: 76.1%). The amplitude trends across event types also showed that the acoustic channels differentiated better between the amplitude distributions of different event types, which can help during manual interpretation. Conclusions: The results show that the acoustically obtained effort channel extracted with AcuPebble is an accurate, reliable, and more patient-friendly alternative to RIP in the context of OSA.
(This article belongs to the Section Respiratory Medicine)
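The sensitivity, specificity, and accuracy figures quoted above come from standard binary confusion-matrix arithmetic, which is easy to reproduce. The counts below are illustrative only, not the study's data.

```python
def binary_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and accuracy, as used to score the
    acoustic effort channel against the expert-marked RIP reference."""
    sens = tp / (tp + fn)  # effort events correctly detected
    spec = tn / (tn + fp)  # effort-free events correctly rejected
    acc = (tp + tn) / (tp + fn + tn + fp)
    return sens, spec, acc

# Illustrative counts only (not the study's data): 200 effort events,
# 180 detected; 300 no-effort events, 296 rejected.
sens, spec, acc = binary_metrics(tp=180, fn=20, tn=296, fp=4)
print(round(sens, 3), round(spec, 3), round(acc, 3))
```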

26 pages, 12093 KiB  
Article
Sleep Pattern Analysis in Unconstrained and Unconscious State
by Won-Ho Jun, Hyung-Ju Kim and Youn-Sik Hong
Sensors 2022, 22(23), 9296; https://doi.org/10.3390/s22239296 - 29 Nov 2022
Cited by 3 | Viewed by 5431
Abstract
Sleep accounts for one-third of an individual’s life and is a measure of health. Both sleep time and quality are essential, and a person requires sound sleep to stay healthy. Generally, sleep patterns are influenced by genetic factors and differ among people. Therefore, analyzing whether individual sleep patterns guarantee sufficient sleep is necessary. Here, we aimed to acquire information regarding the sleep status of individuals in an unconstrained and unconscious state to consequently classify the sleep state. Accordingly, we collected data associated with the sleep status of individuals, such as frequency of tosses and turns, snoring, and body temperature, as well as environmental data, such as room temperature, humidity, illuminance, carbon dioxide concentration, and ambient noise. The sleep state was classified into two stages: nonrapid eye movement and rapid eye movement sleep, rather than the general four stages. Furthermore, to verify the validity of the sleep state classifications, we compared them with heart rate.
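A two-stage (REM/NREM) epoch classifier over such unconstrained sensor data could look like the toy rule below. The feature choices and thresholds are hypothetical placeholders: the paper derives its classification from the collected body and environment data and validates it against heart rate, rather than publishing fixed cut-offs.

```python
def classify_epoch(tosses_per_min, snore_db, heart_rate_bpm):
    """Label one sleep epoch REM or NREM from unconstrained sensor readings.

    Thresholds are hypothetical placeholders for illustration only.
    """
    movement_score = tosses_per_min + 0.1 * snore_db
    # REM sleep typically shows muscle atonia (little movement) but a
    # relatively elevated, variable heart rate.
    if movement_score < 1.0 and heart_rate_bpm > 62:
        return "REM"
    return "NREM"

print(classify_epoch(tosses_per_min=0.2, snore_db=3.0, heart_rate_bpm=68))
print(classify_epoch(tosses_per_min=2.5, snore_db=10.0, heart_rate_bpm=58))
```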

20 pages, 1680 KiB  
Article
Predicting Polysomnography Parameters from Anthropometric Features and Breathing Sounds Recorded during Wakefulness
by Ahmed Elwali and Zahra Moussavi
Diagnostics 2021, 11(5), 905; https://doi.org/10.3390/diagnostics11050905 - 19 May 2021
Cited by 6 | Viewed by 3336
Abstract
Background: The apnea/hypopnea index (AHI) is the primary outcome of a polysomnography (PSG) assessment for determining obstructive sleep apnea (OSA) severity. However, other OSA severity parameters (e.g., total arousal index and mean oxygen saturation (SpO2%)) are crucial for a full diagnosis of OSA and for deciding on a treatment option. PSG assessments and home sleep tests measure these parameters, but there is no screening tool to estimate or predict OSA severity parameters other than the AHI. In this study, we investigated whether a combination of breathing sounds recorded during wakefulness and anthropometric features could be predictive of PSG parameters. Methods: Anthropometric information and five tracheal breathing sound cycles were recorded during wakefulness from 145 individuals referred for an overnight PSG study. The dataset was divided into training, validation, and blind testing sets. Spectral and bispectral features of the sounds were evaluated in correlation and classification analyses against the PSG parameters collected from the PSG sleep reports. Results: Many sound and anthropometric features had significant correlations (up to 0.56) with PSG parameters. Using combinations of sound and anthropometric features in a bilinear model for each PSG parameter resulted in correlation coefficients up to 0.84. Using the evaluated models with a two-class random-forest classifier resulted in a blind testing classification accuracy of up to 88.8% for predicting key PSG parameters such as the arousal index. Conclusions: These results add new value to current OSA screening tools and offer a promising possibility of predicting PSG parameters from only a few seconds of breathing sounds recorded during wakefulness, without conducting an overnight PSG study.
(This article belongs to the Special Issue Diagnosis and Management of Obstructive Sleep Apnea)
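Interpreting the "bilinear model" above as a linear model with an interaction (product) term between a sound feature and an anthropometric feature, which is one common reading but an assumption here, the correlation-coefficient evaluation can be sketched on synthetic data:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic stand-ins: 100 subjects, one sound feature and one
# anthropometric feature, predicting a generic PSG parameter.
# The data and coefficients are purely illustrative.
sound = rng.standard_normal(100)
anthro = rng.standard_normal(100)
psg = 1.5 * sound - 0.8 * anthro + 0.4 * sound * anthro + rng.normal(0, 0.3, 100)

# "Bilinear" design: both features plus their product term.
X = np.column_stack([np.ones(100), sound, anthro, sound * anthro])
coef, *_ = np.linalg.lstsq(X, psg, rcond=None)
pred = X @ coef

# Pearson correlation between model prediction and the PSG parameter,
# the statistic the abstract reports (up to 0.84 on their data).
r = np.corrcoef(pred, psg)[0, 1]
print(np.isfinite(r))
```

On the paper's real data this fit would be done per PSG parameter on the training split and the correlation reported on held-out subjects.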

17 pages, 1244 KiB  
Article
Noncontact Sleep Study by Multi-Modal Sensor Fusion
by Ku-young Chung, Kwangsub Song, Kangsoo Shin, Jinho Sohn, Seok Hyun Cho and Joon-Hyuk Chang
Sensors 2017, 17(7), 1685; https://doi.org/10.3390/s17071685 - 21 Jul 2017
Cited by 33 | Viewed by 7519
Abstract
Polysomnography (PSG) is considered the gold standard for determining sleep stages, but because its sensor attachments are obtrusive, sleep stage classification algorithms using noninvasive sensors have been developed over the years. However, these approaches have not yet been proven reliable, and most such products are designed for healthy consumers rather than for patients with sleep disorders. We present a novel approach to classifying sleep stages via low-cost, noncontact multi-modal sensor fusion, which extracts sleep-related vital signals from radar signals and a sound-based context-awareness technique. This work is uniquely designed around the PSG data of sleep disorder patients, received and certified by professionals at Hanyang University Hospital. The proposed algorithm further incorporates medical/statistical knowledge to determine personally adjusted thresholds and to devise post-processing. Its efficiency is highlighted by contrasting sleep stage classification performance between single-sensor and sensor-fusion algorithms. To assess commercial viability, the classification results were also compared with those of a commercial sleep monitoring device, the ResMed S+. The algorithm was evaluated on randomly selected patients following PSG examination, and the results show a promising approach to determining sleep stages in a low-cost and unobtrusive manner.
(This article belongs to the Special Issue Multi-Sensor Integration and Fusion)
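One simple form of the multi-modal fusion described above is late fusion: each modality (radar-derived vitals, sound-based context) produces per-stage probabilities that are combined before the final decision. The stage set, weighting, and probabilities below are illustrative assumptions, not the paper's values.

```python
import numpy as np

STAGES = ["wake", "light", "deep", "REM"]

def fuse(radar_probs, sound_probs, w_radar=0.6):
    """Late fusion: per-stage probabilities from each modality are
    combined with a weighted average before the argmax decision.
    The weight and stage labels are illustrative placeholders."""
    fused = w_radar * np.asarray(radar_probs) + (1 - w_radar) * np.asarray(sound_probs)
    return STAGES[int(fused.argmax())], fused

# Radar vitals hint at deep sleep; the sound context is ambiguous.
stage, fused = fuse([0.1, 0.2, 0.6, 0.1], [0.2, 0.4, 0.3, 0.1])
print(stage)
```

The paper additionally adjusts per-patient thresholds and applies post-processing; a fixed weighted average is only the simplest fusion rule.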
