Search Results (44)

Search Parameters:
Keywords = lung sound detection

19 pages, 1135 KiB  
Article
Can Lung Ultrasound Act as a Diagnosis and Monitoring Tool in Children with Community Acquired Pneumonia? Correlation with Risk Factors, Clinical Indicators and Biologic Results
by Raluca Isac, Alexandra-Monica Cugerian-Ratiu, Andrada-Mara Micsescu-Olah, Alexandra Daniela Bodescu, Laura-Adelina Vlad, Anca Mirela Zaroniu, Mihai Gafencu and Gabriela Doros
J. Clin. Med. 2025, 14(15), 5304; https://doi.org/10.3390/jcm14155304 - 27 Jul 2025
Abstract
Background: Community-acquired pneumonia (CAP) is the leading cause of mortality in children in middle- to low-income countries; diagnosing CAP involves clinical evaluation, laboratory testing and pulmonary imaging. Lung ultrasound (LUS) is a sensitive, accessible, non-invasive, radiation-free method for accurately evaluating lung involvement in acute diseases. Whether LUS findings can be correlated with CAP severity or sepsis risk remains debatable. This study aimed to demonstrate the value of LUS in diagnosing and monitoring CAP. We analyzed 102 children aged 1 month to 18 years admitted to hospital with CAP. Mean age was 5.71 ± 4.85 years. Underweight was encountered in 44.11% of children, especially those below 5 years, while overweight was encountered in 11.36% of older children and adolescents. Patients with CAP presented with fever (79.41%), cough (97.05%), tachypnea (18.62%), respiratory failure symptoms (20.58%), chest pain (12.74%) or poor feeding. Although 21.56% had clinically occult CAP and six patients (5.88%) had radiologically occult pneumonia, the CAP diagnosis was established from abnormalities detected on LUS. Conclusions: Detailed clinical examination revealing abnormal/modified breath sounds and/or tachypnea is suggestive of acute pneumonia. LUS is a sensitive diagnostic tool. Including LUS in the diagnostic algorithm for CAP should be considered. Full article
(This article belongs to the Special Issue Clinical Updates in Lung Ultrasound)

24 pages, 637 KiB  
Review
Deep Learning Network Selection and Optimized Information Fusion for Enhanced COVID-19 Detection: A Literature Review
by Olga Adriana Caliman Sturdza, Florin Filip, Monica Terteliu Baitan and Mihai Dimian
Diagnostics 2025, 15(14), 1830; https://doi.org/10.3390/diagnostics15141830 - 21 Jul 2025
Abstract
The rapid spread of COVID-19 increased the need for speedy diagnostic tools, which led scientists to conduct extensive research on deep learning (DL) applications that use chest imaging, such as chest X-ray (CXR) and computed tomography (CT). This review examines the development and performance of DL architectures, notably convolutional neural networks (CNNs) and emerging vision transformers (ViTs), in identifying COVID-19-related lung abnormalities. Individual ResNet architectures, along with CNN models, demonstrate strong diagnostic performance through transfer learning; however, ViTs provide better performance, with improved interpretability and reduced data requirements. Multimodal diagnostic systems now incorporate alternative methods, in addition to imaging, which use lung ultrasounds, clinical data, and cough sound evaluation. Information fusion techniques, which operate at the data, feature, and decision levels, enhance diagnostic performance. However, progress in COVID-19 detection is hindered by ongoing issues stemming from restricted and non-uniform datasets, as well as domain differences in image standards and complications with both diagnostic overfitting and poor generalization capabilities. Recent developments involve constructing large, heterogeneous datasets, creating AI algorithms oriented to clinical workflows, and implementing distributed learning protocols that preserve data privacy and system stability. While deep learning-based COVID-19 detection systems show strong potential for clinical application, broader validation, regulatory approvals, and continuous adaptation remain essential for their successful deployment and for preparing future pandemic response strategies. Full article
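The decision-level fusion mentioned in this abstract can be illustrated with a minimal sketch: two hypothetical classifiers (say, a CXR model and a CT model) emit logits for the same patient, and their softmax probabilities are averaged before the final decision. All model outputs and class names here are invented for illustration, not taken from any paper in this list.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def decision_fusion(logits_list, weights=None):
    """Decision-level fusion: (weighted) average of per-model class probabilities."""
    probs = np.stack([softmax(l) for l in logits_list])
    if weights is None:
        weights = np.full(len(logits_list), 1.0 / len(logits_list))
    return np.tensordot(weights, probs, axes=1)

# Hypothetical logits for one patient; classes: [normal, COVID-19, other pneumonia]
cxr_logits = np.array([0.2, 2.1, 0.5])   # chest X-ray model
ct_logits = np.array([0.1, 1.4, 1.2])    # CT model
fused = decision_fusion([cxr_logits, ct_logits])
pred = int(np.argmax(fused))             # fused class decision
```

Feature-level fusion would instead concatenate intermediate embeddings before a shared classifier; decision-level fusion like this is the simplest to retrofit onto existing models.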

9 pages, 1717 KiB  
Proceeding Paper
Generative AI Respiratory and Cardiac Sound Separation Using Variational Autoencoders (VAEs)
by Arshad Jamal, R. Kanesaraj Ramasamy and Junaidi Abdullah
Comput. Sci. Math. Forum 2025, 10(1), 9; https://doi.org/10.3390/cmsf2025010009 - 1 Jul 2025
Abstract
The separation of respiratory and cardiac sounds is a significant challenge in biomedical signal processing due to their overlapping frequency and time characteristics. Traditional methods struggle with accurate extraction in noisy or diverse clinical environments. This study explores the application of machine learning, particularly convolutional neural networks (CNNs), to overcome these obstacles. Advanced machine learning models, denoising algorithms, and domain adaptation strategies address challenges such as frequency overlap, external noise, and limited labeled datasets. This study presents a robust methodology for detecting heart and lung diseases from audio signals using advanced preprocessing, feature extraction, and deep learning models. The approach integrates adaptive filtering and bandpass filtering as denoising techniques and variational autoencoders (VAEs) for feature extraction. The extracted features are input into a CNN, which classifies audio signals into different heart and lung conditions. The results highlight the potential of this combined approach for early and accurate disease detection, contributing to the development of reliable diagnostic tools for healthcare. Full article

8 pages, 1216 KiB  
Proceeding Paper
Enhanced Lung Disease Detection Using Double Denoising and 1D Convolutional Neural Networks on Respiratory Sound Analysis
by Reshma Sreejith, R. Kanesaraj Ramasamy, Wan-Noorshahida Mohd-Isa and Junaidi Abdullah
Comput. Sci. Math. Forum 2025, 10(1), 7; https://doi.org/10.3390/cmsf2025010007 - 24 Jun 2025
Abstract
The accurate and early detection of respiratory diseases is vital for effective diagnosis and treatment. This study presents a new approach for classifying lung sounds using a double denoising method combined with a 1D Convolutional Neural Network (CNN). The preprocessing uses Fast Fourier Transform to clean up sounds and High-Pass Filtering to improve the quality of breathing sounds by eliminating noise and low-frequency interruptions. The Short-Time Fourier Transform (STFT) extracts features that capture localised frequency variations, crucial for distinguishing normal and abnormal respiratory sounds. These features are input into the 1D CNN, which classifies diseases such as bronchiectasis, pneumonia, asthma, COPD, healthy, and URTI. The dual denoising method enhances signal clarity and classification performance. The model achieved 96% validation accuracy, highlighting its reliability in detecting respiratory conditions. The results emphasise the effectiveness of combining signal augmentation with deep learning for automated respiratory sound analysis, with future research focusing on dataset expansion and model refinement for clinical use. Full article
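The two-stage denoising and STFT front end described in this abstract can be sketched on a synthetic signal: an FFT-domain cleanup removes sub-100 Hz content, a Butterworth high-pass suppresses residual low-frequency interference, and the STFT magnitude supplies the localised frequency features. The 100 Hz cutoff, sampling rate, and test tones below are illustrative assumptions, not the paper's parameters.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, stft

fs = 4000
t = np.arange(0, 2.0, 1 / fs)
# Synthetic "recording": a 400 Hz breath-band tone plus 50 Hz mains hum.
x = np.sin(2 * np.pi * 400 * t) + 2.0 * np.sin(2 * np.pi * 50 * t)

# Stage 1: FFT-domain cleanup, zeroing bins below 100 Hz.
X = np.fft.rfft(x)
freqs = np.fft.rfftfreq(len(x), 1 / fs)
X[freqs < 100] = 0
x1 = np.fft.irfft(X, n=len(x))

# Stage 2: 4th-order Butterworth high-pass at 100 Hz (zero-phase).
sos = butter(4, 100, btype="highpass", fs=fs, output="sos")
x2 = sosfiltfilt(sos, x1)

# STFT features: magnitude spectrogram capturing localised frequency content.
f, seg_t, Z = stft(x2, fs=fs, nperseg=256)
features = np.abs(Z)

hum_power = features[np.argmin(np.abs(f - 50))].mean()
tone_power = features[np.argmin(np.abs(f - 400))].mean()
```

After denoising, the 400 Hz component dominates the bin near 50 Hz by orders of magnitude, which is the point of the double-denoising stage before classification.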

28 pages, 11981 KiB  
Review
Artificial Intelligence in Respiratory Health: A Review of AI-Driven Analysis of Oral and Nasal Breathing Sounds for Pulmonary Assessment
by Shiva Shokouhmand, Smriti Bhatt and Miad Faezipour
Electronics 2025, 14(10), 1994; https://doi.org/10.3390/electronics14101994 - 14 May 2025
Abstract
Continuous monitoring of pulmonary function is crucial for effective respiratory disease management. The COVID-19 pandemic has also underscored the need for accessible and convenient diagnostic tools for respiratory health assessment. While traditional lung sound auscultation has been the primary method for evaluating pulmonary function, emerging research highlights the diagnostic potential of nasal and oral breathing sounds. These sounds, shaped by the upper airway, serve as valuable non-invasive biomarkers for pulmonary health and disease detection. Recent advancements in artificial intelligence (AI) have significantly enhanced respiratory sound analysis by enabling automated feature extraction and pattern recognition from spectral and temporal characteristics or even raw acoustic signals. AI-driven models have demonstrated promising accuracy in detecting respiratory conditions, paving the way for real-time, smartphone-based respiratory monitoring. This review examines the potential of AI-enhanced respiratory sound analysis, discussing methodologies, available datasets, and future directions toward scalable and accessible diagnostic solutions. Full article
(This article belongs to the Special Issue Medical Applications of Artificial Intelligence)

12 pages, 725 KiB  
Article
Use of Ultrasonography for the Evaluation of Lung Lesions in Lambs with Respiratory Complex
by Alejandro Sánchez-Fernández, Juan Carlos Gardón, Carla Ibáñez and Joel Bueso-Ródenas
Animals 2025, 15(8), 1153; https://doi.org/10.3390/ani15081153 - 17 Apr 2025
Abstract
The ovine respiratory complex significantly affects lamb welfare and production efficiency, necessitating accurate diagnostic methods for pulmonary lesions. This study explores the relationship between clinical scoring, auscultation, ultrasonography, and macroscopic post-mortem evaluation to assess respiratory disease in 111 lambs. A standardized clinical scoring system, adapted from bovine models, evaluated ocular and nasal discharge, head tilt, cough, and rectal temperature. Auscultation categorized pulmonary sounds, while ultrasonography identified lung abnormalities, including B-lines, consolidations, pleural effusion, and abscesses. Macroscopic post-mortem examinations confirmed lesion extent. Kendall–Tau-B correlation coefficient analysis revealed significant associations between the methods (p < 0.01), with a high correlation between auscultation and clinical scoring τ of 0.634 (95% CI: 0.489 to 0.765), auscultation and ultrasonography τ of 0.611 (95% CI: 0.500 to 0.710), and ultrasonography and post-mortem findings τ 0.608 (95% CI: 0.460 to 0.731). While auscultation and clinical scoring provided useful insights, ultrasonography exhibited superior sensitivity in detecting subclinical and early-stage lesions, aligning closely with post-mortem evaluations. These findings emphasize ultrasonography as an effective tool for diagnosing respiratory disease in lambs, improving diagnostic accuracy and enabling timely interventions to mitigate disease impact and reduce antimicrobial use. Full article
(This article belongs to the Collection Diseases of Small Ruminants)
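Kendall tau-b, used above because it handles the ties inherent in ordinal grades, can be computed directly with scipy. The grades below are hypothetical stand-ins, not the study's data.

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical ordinal scores for eight lambs:
# auscultation grade (0-3) and ultrasound lesion grade (0-3).
auscultation = np.array([0, 1, 1, 2, 2, 3, 3, 0])
ultrasound = np.array([0, 1, 2, 2, 3, 3, 2, 1])

# scipy's default kendalltau variant is tau-b, which corrects for ties.
tau, p_value = kendalltau(auscultation, ultrasound)
```

Values near the paper's reported τ ≈ 0.6 indicate strong rank agreement between the two grading methods.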

24 pages, 4555 KiB  
Review
Biophysics of Voice Onset: A Comprehensive Overview
by Philippe H. DeJonckere and Jean Lebacq
Bioengineering 2025, 12(2), 155; https://doi.org/10.3390/bioengineering12020155 - 6 Feb 2025
Abstract
Voice onset is the sequence of events between the first detectable movement of the vocal folds (VFs) and the stable vibration of the vocal folds. It is considered a critical phase of phonation, and the different modalities of voice onset and their distinctive characteristics are analysed. Oscillation of the VFs can start from either a closed glottis with no airflow or an open glottis with airflow. The objective of this article is to provide a comprehensive survey of this transient phenomenon, from a biomechanical point of view, in normal modal (i.e., nonpathological) conditions of vocal emission. This synthetic overview mainly relies upon a number of recent experimental studies, all based on in vivo physiological measurements, and using a common, original and consistent methodology which combines high-speed imaging, sound analysis, electro-, photo-, flow- and ultrasound glottography. In this way, the two basic parameters—the instantaneous glottal area and the airflow—can be measured, and the instantaneous intraglottal pressure can be automatically calculated from the combined records, which gives a detailed insight, both qualitative and quantitative, into the onset phenomenon. The similarity of the methodology enables a link to be made with the biomechanics of sustained phonation. Essential is the temporal relationship between the glottal area and intraglottal pressure. The three key findings are (1) From the initial onset cycles onwards, the intraglottal pressure signal leads that of the opening signal, as in sustained voicing, which is the basic condition for an energy transfer from the lung pressure to the VF tissue. (2) This phase lead is primarily due to the skewing of the airflow curve to the right with respect to the glottal area curve, a consequence of the compressibility of air and the inertance of the vocal tract. (3) In case of a soft, physiological onset, the glottis shows a spindle-shaped configuration just before the oscillation begins. 
Using the same parameters (airflow, glottal area, intraglottal pressure), the mechanism of triggering the oscillation can be explained by the intraglottal aerodynamic condition. From the first cycles on, the VFs oscillate on either side of a paramedian axis. The amplitude of these free oscillations increases progressively before the first contact on the midline. Whether the first movement is lateral or medial cannot be defined. Moreover, this comprehensive synthesis of onset biomechanics and the links it creates sheds new light on comparable phenomena at the level of sound attack in wind instruments, as well as phenomena such as the production of intervals in the sung voice. Full article
(This article belongs to the Special Issue The Biophysics of Vocal Onset)
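The abstract notes that instantaneous intraglottal pressure can be calculated automatically from the combined airflow and glottal-area records. As a rough quasi-steady Bernoulli sketch only (ignoring viscous losses and unsteady and inertive terms, and not the authors' actual computation), with hypothetical input values:

```python
RHO = 1.14  # approx. density of humid air at body temperature, kg/m^3

def intraglottal_pressure(p_sub, flow, area):
    """Quasi-steady Bernoulli estimate: P_ig = P_sub - 0.5 * rho * (U/A)^2.
    flow in m^3/s, area in m^2, pressures in Pa. Order-of-magnitude only."""
    velocity = flow / area          # mean glottal airflow velocity, m/s
    return p_sub - 0.5 * RHO * velocity**2

# Hypothetical values: subglottal pressure 800 Pa (~8 cmH2O),
# airflow 0.2 L/s through a 10 mm^2 glottal opening.
p = intraglottal_pressure(800.0, 0.2e-3, 10e-6)
```

The estimate drops below the subglottal pressure as the glottis narrows, which is the qualitative behaviour the phase relationship between area and pressure relies on.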

4 pages, 1765 KiB  
Interesting Images
Dynamic Digital Radiography (DDR) in the Diagnosis of a Diaphragm Dysfunction
by Elisa Calabrò, Tiana Lisnic, Maurizio Cè, Laura Macrì, Francesca Lucrezia Rabaiotti and Michaela Cellina
Diagnostics 2025, 15(1), 2; https://doi.org/10.3390/diagnostics15010002 - 24 Dec 2024
Abstract
Dynamic digital radiography (DDR) is a recent imaging technique that allows for real-time visualization of thoracic and pulmonary movement in synchronization with the breathing cycle, providing useful clinical information. A 46-year-old male, a former smoker, was evaluated for unexplained dyspnea and reduced exercise tolerance. His medical history included a SARS-CoV-2 infection in 2021. On physical examination, decreased breath sounds were noted at the right-lung base. Spirometry showed results below predicted values. A standard chest radiograph revealed an elevated right hemidiaphragm, a finding not present in a previous CT scan performed during his SARS-CoV-2 infection. To better assess the diaphragmatic function, a posteroanterior DDR study was performed in the standing position with X-ray equipment (AeroDR TX, Konica Minolta Inc., Tokyo, Japan) during forced breath, with the following acquisition parameters: tube voltage, 100 kV; tube current, 50 mA; pulse duration of pulsed X-ray, 1.6 ms; source-to-image distance, 2 m; additional filter, 0.5 mm Al + 0.1 mm Cu. The exposure time was 12 s. The pixel size was 388 × 388 μm, the matrix size was 1024 × 768, and the overall image area was 40 × 30 cm. The dynamic imaging, captured at 15 frames/s, was then assessed on a dedicated workstation (Konica Minolta Inc., Tokyo, Japan). The dynamic acquisition showed a markedly reduced motion of the right diaphragm. The diagnosis of diaphragm dysfunction can be challenging due to its range of symptoms, which can vary from mild to severe dyspnea. The standard chest X-ray is usually the first exam to detect an elevated hemidiaphragm, which may suggest motion impairment or paralysis but fails to predict diaphragm function. Ultrasound (US) allows for the direct visualization of the diaphragm and its motion. Still, its effectiveness depends highly on the operator’s experience and could be limited by gas and abdominal fat. 
Moreover, ultrasound offers limited information regarding the lung parenchyma. On the other hand, high-resolution CT can be useful in identifying causes of diaphragmatic dysfunction, such as atrophy or eventration. However, it does not allow for the quantitative assessment of diaphragmatic movement and the differentiation between paralysis and dysfunction, especially in bilateral dysfunction, which is often overlooked due to the elevation of both hemidiaphragms. Dynamic Digital Radiography (DDR) has emerged as a valuable and innovative imaging technique due to its unique ability to evaluate diaphragm movement in real time, integrating dynamic functional information with static anatomical data. DDR provides both visual and quantitative analysis of the diaphragm’s motion, including excursion and speed, which leads to a definitive diagnosis. Additionally, DDR offers a range of post-processing techniques that provide information on lung movement and pulmonary ventilation. Based on these findings, the patient was referred to a thoracic surgeon and deemed a candidate for surgical plication of the right diaphragm. Full article
(This article belongs to the Special Issue Diagnosis of Cardio-Thoracic Diseases)
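The acquisition parameters quoted above are internally consistent, which a quick arithmetic check confirms: the detector matrix times the 388 μm pixel pitch reproduces the stated 40 × 30 cm image area, and the 12 s exposure at 15 frames/s gives the dynamic frame count.

```python
# Cross-check of the reported DDR acquisition geometry.
pixel_mm = 0.388                    # 388 um pixel pitch
width_cm = 1024 * pixel_mm / 10     # ~39.7 cm (stated: 40 cm)
height_cm = 768 * pixel_mm / 10     # ~29.8 cm (stated: 30 cm)
n_frames = 12 * 15                  # 12 s at 15 frames/s -> 180 frames
```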

16 pages, 624 KiB  
Article
Towards the Development of the Clinical Decision Support System for the Identification of Respiration Diseases via Lung Sound Classification Using 1D-CNN
by Syed Waqad Ali, Muhammad Munaf Rashid, Muhammad Uzair Yousuf, Sarmad Shams, Muhammad Asif, Muhammad Rehan and Ikram Din Ujjan
Sensors 2024, 24(21), 6887; https://doi.org/10.3390/s24216887 - 27 Oct 2024
Abstract
Respiratory disorders are commonly regarded as complex disorders to diagnose due to their multi-factorial nature, encompassing the interplay between hereditary variables, comorbidities, environmental exposures, and therapies, among other contributing factors. This study presents a Clinical Decision Support System (CDSS) for the early detection of respiratory disorders using a one-dimensional convolutional neural network (1D-CNN) model. The ICBHI 2017 Breathing Sound Database, which contains samples of different breathing sounds, was used in this research. During pre-processing, audio clips were resampled to a uniform rate, and breathing cycles were segmented into individual instances of the lung sound. A 1D-CNN consisting of convolutional, max pooling, dropout, and fully connected layers was designed to classify the processed clips into four categories: normal, crackles, wheezes, and combined crackles and wheezes. To address class imbalance, the Synthetic Minority Over-sampling Technique (SMOTE) was applied to the training data. Hyperparameters were optimized using grid search with k-fold cross-validation. The model achieved an overall accuracy of 0.95, outperforming state-of-the-art methods. In particular, the normal and crackles categories attained the highest F1-scores of 0.97 and 0.95, respectively. The model’s robustness was further validated through 5-fold and 10-fold cross-validation experiments. This research highlighted an essential aspect of diagnosing lung disorders from lung sounds using artificial intelligence and utilized the 1D-CNN to classify lung sounds accurately. This advancement should enable healthcare practitioners to diagnose lung disorders in an improved manner, leading to better patient care. Full article
(This article belongs to the Special Issue AI-Based Automated Recognition and Detection in Healthcare)
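SMOTE, used above to balance the training classes, synthesises new minority samples by interpolating between a real sample and one of its nearest minority-class neighbours. A minimal numpy sketch with random stand-in feature vectors (production pipelines would typically use the imbalanced-learn implementation):

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE: synthesise n_new minority samples by interpolating
    between a random minority sample and one of its k nearest minority
    neighbours."""
    rng = np.random.default_rng(rng)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        d = np.linalg.norm(X_min - X_min[i], axis=1)
        neighbours = np.argsort(d)[1:k + 1]   # skip the sample itself
        j = rng.choice(neighbours)
        lam = rng.random()                    # interpolation factor in [0, 1)
        synthetic.append(X_min[i] + lam * (X_min[j] - X_min[i]))
    return np.array(synthetic)

# Hypothetical feature vectors: 8 minority-class (e.g. wheeze) cycles, 16-D.
rng = np.random.default_rng(0)
X_wheeze = rng.normal(size=(8, 16))
X_new = smote(X_wheeze, n_new=32, rng=1)      # oversample toward balance
```

Because each synthetic point lies on a segment between two real minority samples, the oversampled set stays inside the minority class's feature range.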

14 pages, 464 KiB  
Article
Empowering Healthcare: TinyML for Precise Lung Disease Classification
by Youssef Abadade, Nabil Benamar, Miloud Bagaa and Habiba Chaoui
Future Internet 2024, 16(11), 391; https://doi.org/10.3390/fi16110391 - 25 Oct 2024
Abstract
Respiratory diseases such as asthma pose significant global health challenges, necessitating efficient and accessible diagnostic methods. The traditional stethoscope is widely used as a non-invasive and patient-friendly tool for diagnosing respiratory conditions through lung auscultation. However, it has limitations, such as a lack of recording functionality, dependence on the expertise and judgment of physicians, and the absence of noise-filtering capabilities. To overcome these limitations, digital stethoscopes have been developed to digitize and record lung sounds. Recently, there has been growing interest in the automated analysis of lung sounds using Deep Learning (DL). Nevertheless, the execution of large DL models in the cloud often leads to latency, dependency on internet connectivity, and potential privacy issues due to the transmission of sensitive health data. To address these challenges, we developed Tiny Machine Learning (TinyML) models for the real-time detection of respiratory conditions by using lung sound recordings, deployable on low-power, cost-effective devices like digital stethoscopes. We trained three machine learning models—a custom CNN, an Edge Impulse CNN, and a custom LSTM—on a publicly available lung sound dataset. Our data preprocessing included bandpass filtering and feature extraction through Mel-Frequency Cepstral Coefficients (MFCCs). We applied quantization techniques to ensure model efficiency. The custom CNN model achieved the highest performance, with 96% accuracy and 97% precision, recall, and F1-scores, while maintaining moderate resource usage. These findings highlight the potential of TinyML to provide accessible, reliable, and real-time diagnostic tools, particularly in remote and underserved areas, demonstrating the transformative impact of integrating advanced AI algorithms into portable medical devices. This advancement facilitates the prospect of automated respiratory health screening using lung sounds. Full article
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)
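The MFCC feature-extraction stage described above can be sketched from first principles: frame the signal, take the power spectrum, apply a triangular mel filterbank, take logs, and decorrelate with a DCT. Frame length, band count, and the test tone are illustrative choices, and the bandpass-filtering and model-quantisation stages are omitted.

```python
import numpy as np
from scipy.fft import dct

def hz_to_mel(f):
    return 2595 * np.log10(1 + f / 700)

def mel_to_hz(m):
    return 700 * (10 ** (m / 2595) - 1)

def mfcc(signal, fs, n_fft=512, hop=256, n_mels=26, n_coeffs=13):
    """Minimal MFCC pipeline: frame -> power spectrum -> mel filterbank
    -> log -> DCT."""
    # Frame the signal with a Hann window.
    n_frames = 1 + (len(signal) - n_fft) // hop
    idx = np.arange(n_fft)[None, :] + hop * np.arange(n_frames)[:, None]
    frames = signal[idx] * np.hanning(n_fft)
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2 / n_fft

    # Triangular mel filterbank between 0 Hz and Nyquist.
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(fs / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)

    log_mel = np.log(power @ fb.T + 1e-10)
    return dct(log_mel, type=2, axis=1, norm="ortho")[:, :n_coeffs]

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
x = np.sin(2 * np.pi * 300 * t)     # stand-in for a lung sound segment
feats = mfcc(x, fs)                 # (frames, 13) coefficient matrix
```

The resulting small, decorrelated coefficient matrix is exactly the kind of compact input that makes TinyML deployment on low-power devices feasible.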

18 pages, 1554 KiB  
Article
A Framework for Detecting Pulmonary Diseases from Lung Sound Signals Using a Hybrid Multi-Task Autoencoder-SVM Model
by Khwanjit Orkweha, Khomdet Phapatanaburi, Wongsathon Pathonsuwan, Talit Jumphoo, Atcharawan Rattanasak, Patikorn Anchuen, Watcharakorn Pinthurat, Monthippa Uthansakul and Peerapong Uthansakul
Symmetry 2024, 16(11), 1413; https://doi.org/10.3390/sym16111413 - 23 Oct 2024
Abstract
Recent research has examined the efficacy of Multi-Task Autoencoder (MTAE) models in signal classification, owing to their ability to handle multiple tasks while improving feature extraction. However, their use with lung sounds (LSs) for pulmonary disease detection has not been thoroughly investigated. This paper introduces a new framework that utilizes an MTAE model to detect lung diseases based on LS signals. The model integrates an autoencoder and a supervised classifier, simultaneously optimizing both classification accuracy and signal reconstruction. Furthermore, we propose a hybrid approach that combines an MTAE and a Support Vector Machine (MTAE-SVM) to enhance performance. We evaluated our model using LS signals from a publicly available database from King Abdullah University Hospital. The model attained an accuracy of 89.47% for four classes (normal, pneumonia, asthma, and chronic obstructive pulmonary disease) and 90.22% for three classes (normal, pneumonia, and asthma). Using the MTAE-SVM, the accuracy was further improved to 91.49% for four classes and 93.08% for three classes. The results indicate that the MTAE and MTAE-SVM have considerable potential for detecting pulmonary diseases from lung sound signals. This could aid in the creation of more user-friendly and effective diagnostic tools. Full article
(This article belongs to the Section Computer)
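The MTAE's joint objective, simultaneously optimizing signal reconstruction and classification, reduces to a weighted sum of two losses. A numpy sketch on a hypothetical batch (the 0.5 weighting and the loss choices are assumed hyperparameters, not the paper's):

```python
import numpy as np

def multitask_loss(x, x_hat, logits, y, weight=0.5):
    """Joint MTAE-style objective: reconstruction MSE plus weighted
    cross-entropy on the classifier head."""
    recon = np.mean((x - x_hat) ** 2)
    # Log-softmax for a numerically stable cross-entropy.
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    ce = -log_probs[np.arange(len(y)), y].mean()
    return recon + weight * ce, recon, ce

# Hypothetical batch: 4 lung-sound feature vectors, 3 disease classes.
rng = np.random.default_rng(0)
x = rng.normal(size=(4, 10))
x_hat = x + 0.1 * rng.normal(size=(4, 10))  # decoder output (imperfect)
logits = rng.normal(size=(4, 3))            # classifier head output
y = np.array([0, 1, 2, 1])                  # true labels
total, recon, ce = multitask_loss(x, x_hat, logits, y)
```

Training against this combined loss is what forces the shared encoder to learn features that are simultaneously reconstructive and discriminative; the SVM in the hybrid variant is then fitted on those encoder features.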

12 pages, 15147 KiB  
Article
Design and Analysis of a Contact Piezo Microphone for Recording Tracheal Breathing Sounds
by Walid Ashraf and Zahra Moussavi
Sensors 2024, 24(17), 5511; https://doi.org/10.3390/s24175511 - 26 Aug 2024
Abstract
Analysis of tracheal breathing sounds (TBS) is a significant area of study in medical diagnostics and monitoring for respiratory diseases and obstructive sleep apnea (OSA). Recorded at the suprasternal notch, TBS can provide detailed insights into the respiratory system’s functioning and health. This method has been particularly useful for non-invasive assessment and is used in various clinical settings to evaluate OSA, asthma, respiratory infections, and lung function, during either wakefulness or sleep. One of the challenges and limitations of TBS recording is background noise, including speech, movement, and even non-tracheal breathing sounds propagating through the air. The breathing sounds captured from the nose or mouth can interfere with the tracheal breathing sounds, making it difficult to isolate the sounds necessary for accurate diagnostics. In this study, two surface microphones are proposed to record TBS acquired solely from the trachea. The frequency response of each microphone is compared with a reference microphone. Additionally, this study evaluates the tracheal and lung breathing sounds of six participants recorded using the proposed microphones versus a commercial omnidirectional microphone, in environments both with and without background white noise. The proposed microphones demonstrated reduced susceptibility to background noise, particularly in the frequency ranges 1800–2199 Hz and 2200–2599 Hz, with maximum deviations of 2 dB and 2.1 dB, respectively, compared to 9 dB observed in the commercial microphone. The findings of this study have potential implications for improving the accuracy and reliability of respiratory diagnostics in clinical practice. Full article

19 pages, 3746 KiB  
Article
An Accelerometer-Based Wearable Patch for Robust Respiratory Rate and Wheeze Detection Using Deep Learning
by Brian Sang, Haoran Wen, Gregory Junek, Wendy Neveu, Lorenzo Di Francesco and Farrokh Ayazi
Biosensors 2024, 14(3), 118; https://doi.org/10.3390/bios14030118 - 22 Feb 2024
Abstract
Wheezing is a critical indicator of various respiratory conditions, including asthma and chronic obstructive pulmonary disease (COPD). Current diagnosis relies on subjective lung auscultation by physicians. Enabling this capability via a low-profile, objective wearable device for remote patient monitoring (RPM) could offer pre-emptive, accurate respiratory data to patients. With this goal as our aim, we used a low-profile accelerometer-based wearable system that utilizes deep learning to objectively detect wheezing along with respiration rate using a single sensor. The miniature patch consists of a sensitive wideband MEMS accelerometer and low-noise CMOS interface electronics on a small board, which was then placed on nine conventional lung auscultation sites on the patient’s chest walls to capture the pulmonary-induced vibrations (PIVs). A deep learning model was developed and compared with a deterministic time–frequency method to objectively detect wheezing in the PIV signals using data captured from 52 diverse patients with respiratory diseases. The wearable accelerometer patch, paired with the deep learning model, demonstrated high fidelity in capturing and detecting respiratory wheezes and patterns across diverse and pertinent settings. It achieved accuracy, sensitivity, and specificity of 95%, 96%, and 93%, respectively, with an AUC of 0.99 on the test set—outperforming the deterministic time–frequency approach. Furthermore, the accelerometer patch outperforms the digital stethoscopes in sound analysis while offering immunity to ambient sounds, which not only enhances data quality and performance for computational wheeze detection by a significant margin but also provides a robust sensor solution that can quantify respiration patterns simultaneously. Full article
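A deterministic time–frequency wheeze detector, of the general kind the paper benchmarks its deep learning model against, can be sketched as persistence of a narrowband peak: a frame counts as tonal when one spectrogram bin dominates the band average, and a wheeze is declared when tonal frames persist long enough. All thresholds and the synthetic signals below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def has_wheeze(x, fs, min_ms=100, band=(100, 1000), dominance=8.0):
    """Flag a wheeze when one frequency bin dominates its frame's band
    average (a tonal peak) for at least min_ms of consecutive frames."""
    f, frame_t, S = spectrogram(x, fs=fs, nperseg=256, noverlap=128)
    S = S[(f >= band[0]) & (f <= band[1])]
    tonal = S.max(axis=0) > dominance * S.mean(axis=0)
    run = best = 0                      # longest run of tonal frames
    for flag in tonal:
        run = run + 1 if flag else 0
        best = max(best, run)
    hop_ms = (256 - 128) / fs * 1000.0  # hop = nperseg - noverlap
    return best * hop_ms >= min_ms

fs = 4000
t = np.arange(0, 1.0, 1 / fs)
rng = np.random.default_rng(0)
noise = 0.3 * rng.normal(size=t.size)               # breath-like noise only
tone = np.where((t > 0.3) & (t < 0.7), np.sin(2 * np.pi * 400 * t), 0.0)
wheeze = noise + tone                               # 400 ms tonal segment
```

Such hand-tuned thresholds are exactly what the learned model avoids, which is why the paper reports the deep learning approach outperforming the deterministic one.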
23 pages, 1282 KiB  
Article
Classification of Adventitious Sounds Combining Cochleogram and Vision Transformers
by Loredana Daria Mang, Francisco David González Martínez, Damian Martinez Muñoz, Sebastián García Galán and Raquel Cortina
Sensors 2024, 24(2), 682; https://doi.org/10.3390/s24020682 - 21 Jan 2024
Cited by 8 | Viewed by 3165
Abstract
Early identification of respiratory irregularities is critical for improving lung health and reducing global mortality rates. The analysis of respiratory sounds plays a significant role in characterizing the respiratory system’s condition and identifying abnormalities. The main contribution of this study is to investigate the performance of the Vision Transformer (ViT) architecture when fed cochleogram input data; to our knowledge, this is the first time this input–classifier combination has been applied to adventitious sound classification. Although ViT has shown promising results in audio classification tasks by applying self-attention to spectrogram patches, we extend this approach with the cochleogram, which captures spectro-temporal features specific to adventitious sounds. The proposed methodology is evaluated on the ICBHI dataset. We compare the classification performance of ViT against other state-of-the-art CNN approaches using the spectrogram, Mel frequency cepstral coefficients, the constant-Q transform, and the cochleogram as input data. Our results confirm the superior classification performance of combining the cochleogram and ViT, highlighting the potential of ViT for reliable respiratory sound classification. This study contributes to ongoing efforts to develop automatic intelligent techniques that significantly increase the speed and effectiveness of respiratory disease detection, addressing a critical need in the medical field. Full article
(This article belongs to the Special Issue Advanced Machine Intelligence for Biomedical Signal Processing)
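A cochleogram front end can be approximated in a few lines. Below is a simplified sketch assuming NumPy, in which triangular filters spaced on the ERB (equivalent rectangular bandwidth) scale stand in for the gammatone filters of a true cochleogram; all function names and parameter values are illustrative, not taken from the paper:

```python
import numpy as np

def erb_space(low_hz, high_hz, n_bands):
    """Center frequencies equally spaced on the ERB-number scale
    (Glasberg & Moore), as used for cochleogram-style filterbanks."""
    erb = lambda f: 21.4 * np.log10(4.37 * f / 1000.0 + 1.0)
    inv = lambda e: (10 ** (e / 21.4) - 1.0) * 1000.0 / 4.37
    return inv(np.linspace(erb(low_hz), erb(high_hz), n_bands))

def cochleogram(signal, sr, n_bands=32, frame=512, hop=256):
    """Simplified cochleogram: magnitude STFT pooled into ERB-spaced
    triangular bands (a stand-in for gammatone filtering).
    Returns an array of shape (time_frames, n_bands)."""
    freqs = np.fft.rfftfreq(frame, 1.0 / sr)
    centers = erb_space(50.0, sr / 2.0 * 0.9, n_bands + 2)
    fb = np.zeros((n_bands, len(freqs)))
    for i in range(n_bands):  # triangle between neighboring centers
        lo, c, hi = centers[i], centers[i + 1], centers[i + 2]
        fb[i] = np.clip(np.minimum((freqs - lo) / (c - lo),
                                   (hi - freqs) / (hi - c)), 0, None)
    frames = [signal[s:s + frame] * np.hanning(frame)
              for s in range(0, len(signal) - frame + 1, hop)]
    mag = np.abs(np.fft.rfft(np.array(frames), axis=1))
    return np.log1p(mag @ fb.T)

if __name__ == "__main__":
    sr = 4000
    t = np.arange(sr) / sr
    C = cochleogram(np.sin(2 * np.pi * 440 * t), sr)  # 1 s test tone
    print(C.shape)  # → (14, 32)
```

Each time frame of this 2D representation can then be patchified and fed to a ViT exactly as a spectrogram would be; the ERB spacing concentrates resolution at the low frequencies where crackles and wheezes carry most of their energy.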
15 pages, 10824 KiB  
Review
Lung Ultrasound and Pleural Artifacts: A Pictorial Review
by Ehsan Safai Zadeh, Christian Görg, Helmut Prosch, Daria Kifjak, Christoph Frank Dietrich, Christian B. Laursen and Hajo Findeisen
Diagnostics 2024, 14(2), 179; https://doi.org/10.3390/diagnostics14020179 - 13 Jan 2024
Viewed by 7291
Abstract
Lung ultrasound is a well-established diagnostic approach used in detecting pathological changes near the pleura of the lung. At the acoustic boundary of the lung surface, it is necessary to differentiate between the primary visualization of pleural parenchymal pathologies and the appearance of secondary artifacts when sound waves enter the lung or are reflected at the visceral pleura. The aims of this pictorial essay are to demonstrate the sonographic patterns of various pleural interface artifacts and to illustrate the limitations and pitfalls of the use of ultrasound findings in diagnosing any underlying pathology. Full article
(This article belongs to the Section Medical Imaging and Theranostics)
