Search Results (26)

Search Parameters:
Keywords = digital auscultation

26 pages, 1521 KiB  
Article
AI-Based Classification of Pediatric Breath Sounds: Toward a Tool for Early Respiratory Screening
by Lichuan Liu, Wei Li and Beth Moxley
Appl. Sci. 2025, 15(13), 7145; https://doi.org/10.3390/app15137145 - 25 Jun 2025
Viewed by 446
Abstract
Context: Respiratory morbidity is a leading cause of children’s consultations with general practitioners. Auscultation, the act of listening to breath sounds, is a crucial diagnostic method for respiratory system diseases. Problem: Parents and caregivers often lack the necessary knowledge and experience to identify subtle differences in children’s breath sounds. Furthermore, obtaining reliable feedback from young children about their physical condition is challenging. Methods: The use of a human–artificial intelligence (AI) tool is an essential component for screening and monitoring young children’s respiratory diseases. We propose novel methods for recognizing and classifying children’s breath sounds, designed and validated using clinical data. Different breath sound signals were analyzed in the time domain, frequency domain, and using spectrogram representations. Breath sound detection and segmentation were performed using digital signal processing techniques. Multiple features—including Mel–Frequency Cepstral Coefficients (MFCCs), Linear Prediction Coefficients (LPCs), Linear Prediction Cepstral Coefficients (LPCCs), spectral entropy, and Dynamic Linear Prediction Coefficients (DLPCs)—were extracted to capture both time and frequency characteristics. These features were then fed into various classifiers, including K-Nearest Neighbor (KNN), artificial neural networks (ANNs), hidden Markov models (HMMs), logistic regression, and decision trees, for recognition and classification. Main Findings: Experimental results from 120 infants and preschoolers (2 months to 6 years; 30 with asthma, 30 with croup, 30 with pneumonia, and 30 with normal breath sounds) verified the performance of the proposed approaches. Conclusions: The proposed AI system provides a real-time diagnostic platform to improve clinical respiratory management and outcomes in young children, thereby reducing healthcare costs. Future work exploring additional respiratory diseases is warranted. Full article
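The feature-and-classifier pipeline summarized above can be illustrated with a minimal sketch, not the authors' code: MFCC statistics are extracted from a breath-sound recording and passed to a K-Nearest Neighbor classifier. The file names, labels, sampling rate, and hyperparameters below are illustrative assumptions (Python, librosa, scikit-learn).

```python
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def mfcc_features(path, sr=8000, n_mfcc=13):
    """Summarize a breath-sound recording with the mean and std of its MFCCs."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

# Placeholder recordings and labels; the study's classes were asthma, croup, pneumonia, and normal.
recordings = ["breath_001.wav", "breath_002.wav"]
labels = ["asthma", "normal"]

X = np.array([mfcc_features(p) for p in recordings])
clf = KNeighborsClassifier(n_neighbors=1).fit(X, labels)
print(clf.predict(X[:1]))   # predicted class for the first recording
```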

14 pages, 838 KiB  
Article
Cardiovascular Disease Screening in Primary School Children
by Alena Bagkaki, Fragiskos Parthenakis, Gregory Chlouverakis, Emmanouil Galanakis and Ioannis Germanakis
Children 2025, 12(1), 38; https://doi.org/10.3390/children12010038 - 29 Dec 2024
Viewed by 1540
Abstract
Background: Screening for cardiovascular disease (CVD) and its associated risk factors in childhood facilitates early detection and timely preventive interventions. However, limited data are available regarding screening tools and their diagnostic yield when applied in unselected pediatric populations. Aims: To evaluate the performance of a CVD screening program, based on history, 12-lead ECG and phonocardiography, applied in primary school children. Methods: A prospective study was conducted with voluntary participation of third-grade primary school children in the region of Crete, Greece, over 6 years (2018–2024). Personal and family history were collected using a standardized questionnaire, a physical evaluation (including weight, height, and blood pressure measurement) was performed, and cardiac auscultation (digital phonocardiography (PCG)) and a 12-lead electrocardiogram (ECG) were recorded at local health stations (Phase I). Following expert verification of the responses and obtained data, assisted by a designated electronic health record with incorporated decision support algorithms (Phase II), pediatric cardiology evaluation at the tertiary referral center followed (Phase III). Results: A total of 944 children participated (49.6% boys). A total of 790 (83.7%) had a Phase I referral indication, confirmed in 311 (32.9%) during Phase II evaluation. Adiposity and hypertension were documented as CVD risk factors in 10.8% and 3.2% of the total population, respectively. During Phase III evaluations (n = 201), the majority (n = 132, 14% of total) of children were considered to have a further indication for evaluation by other pediatric subspecialties for their reported symptoms. Abnormal CVD findings were present in 69 (7.3%) of the study population, including minor/trivial structural heart disease in 23 (2.4%) and 17 (1.8%), respectively, referred due to abnormal cardiac auscultation, and ECG abnormalities in 29 (3%), of which 6 (0.6%) were considered potentially significant (including 1 case of genetically confirmed channelopathy, LQT syndrome). Conclusions: CVD screening programs in school children can be very helpful for the early detection of CVD risk factors and for assessing children's general health. Expert cardiac auscultation and 12-lead ECG allow for the detection of structural and arrhythmogenic heart disease, respectively. Further study is needed regarding the performance of individual components, accuracy of interpretation (including computer-assisted diagnosis), and cost-effectiveness before large-scale application of CVD screening in unselected pediatric populations. Full article
(This article belongs to the Section Pediatric Cardiology)

14 pages, 464 KiB  
Article
Empowering Healthcare: TinyML for Precise Lung Disease Classification
by Youssef Abadade, Nabil Benamar, Miloud Bagaa and Habiba Chaoui
Future Internet 2024, 16(11), 391; https://doi.org/10.3390/fi16110391 - 25 Oct 2024
Cited by 4 | Viewed by 3589
Abstract
Respiratory diseases such as asthma pose significant global health challenges, necessitating efficient and accessible diagnostic methods. The traditional stethoscope is widely used as a non-invasive and patient-friendly tool for diagnosing respiratory conditions through lung auscultation. However, it has limitations, such as a lack of recording functionality, dependence on the expertise and judgment of physicians, and the absence of noise-filtering capabilities. To overcome these limitations, digital stethoscopes have been developed to digitize and record lung sounds. Recently, there has been growing interest in the automated analysis of lung sounds using Deep Learning (DL). Nevertheless, the execution of large DL models in the cloud often leads to latency, dependency on internet connectivity, and potential privacy issues due to the transmission of sensitive health data. To address these challenges, we developed Tiny Machine Learning (TinyML) models for the real-time detection of respiratory conditions by using lung sound recordings, deployable on low-power, cost-effective devices like digital stethoscopes. We trained three machine learning models—a custom CNN, an Edge Impulse CNN, and a custom LSTM—on a publicly available lung sound dataset. Our data preprocessing included bandpass filtering and feature extraction through Mel-Frequency Cepstral Coefficients (MFCCs). We applied quantization techniques to ensure model efficiency. The custom CNN model achieved the highest performance, with 96% accuracy and 97% precision, recall, and F1-scores, while maintaining moderate resource usage. These findings highlight the potential of TinyML to provide accessible, reliable, and real-time diagnostic tools, particularly in remote and underserved areas, demonstrating the transformative impact of integrating advanced AI algorithms into portable medical devices. This advancement facilitates the prospect of automated respiratory health screening using lung sounds. Full article
(This article belongs to the Special Issue Edge Intelligence: Edge Computing for 5G and the Internet of Things)
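As an illustration of the TinyML workflow outlined above, here is a minimal sketch, not the authors' model: a small 1-D CNN over MFCC frames followed by post-training quantization with TensorFlow Lite, the step that makes a model small enough for a low-power device such as a digital stethoscope. The input shape, layer sizes, and class count are assumptions.

```python
import tensorflow as tf

N_FRAMES, N_MFCC, N_CLASSES = 100, 13, 4   # assumed input dimensions and class count

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(N_FRAMES, N_MFCC)),
    tf.keras.layers.Conv1D(16, 5, activation="relu"),
    tf.keras.layers.MaxPooling1D(2),
    tf.keras.layers.Conv1D(32, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling1D(),
    tf.keras.layers.Dense(N_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# Post-training dynamic-range quantization shrinks the model for microcontroller deployment.
converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
tflite_model = converter.convert()
open("lung_sound_classifier.tflite", "wb").write(tflite_model)
```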

20 pages, 1645 KiB  
Article
Classification of Acoustic Tones and Cardiac Murmurs Based on Digital Signal Analysis Leveraging Machine Learning Methods
by Nataliya Shakhovska and Ivan Zagorodniy
Computation 2024, 12(10), 208; https://doi.org/10.3390/computation12100208 - 17 Oct 2024
Cited by 2 | Viewed by 2184
Abstract
Heart murmurs are abnormal heart sounds that can indicate various heart diseases. Although traditional auscultation methods are effective, they depend heavily on specialists’ knowledge, making an accurate diagnosis difficult. This paper presents a machine learning-based framework for the classification of acoustic sounds and heart murmurs using digital signal analysis. Using advanced machine learning algorithms, we aim to improve the accuracy, speed, and accessibility of heart murmur detection. The proposed method includes feature extraction from digital auscultatory recordings, preprocessing using signal processing techniques, and classification using state-of-the-art machine learning models. We evaluated the performance of different machine learning algorithms, such as convolutional neural networks (CNNs), random forests (RFs), and support vector machines (SVMs), on a selected heart sound dataset. The results show that our framework achieves high accuracy in differentiating normal heart sounds from different types of heart murmurs and provides a robust tool for clinical decision-making. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health)
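The classifier comparison described in the abstract can be sketched as follows; this is not the authors' pipeline, and the feature matrix and murmur labels are random placeholders standing in for features extracted from auscultatory recordings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 26))      # placeholder feature matrix (e.g., MFCC statistics)
y = rng.integers(0, 2, size=60)    # placeholder labels: 0 = normal, 1 = murmur

for name, clf in [("random forest", RandomForestClassifier(n_estimators=100, random_state=0)),
                  ("SVM (RBF)", SVC(kernel="rbf", C=1.0))]:
    scores = cross_val_score(clf, X, y, cv=5)
    print(f"{name}: mean accuracy {scores.mean():.2f}")
```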

9 pages, 616 KiB  
Article
The Evolving Stethoscope: Insights Derived from Studying Phonocardiography in Trainees
by Matthew A. Nazari, Jaeil Ahn, Richard Collier, Joby Jacob, Halen Heussner, Tara Doucet-O’Hare, Karel Pacak, Venkatesh Raman and Erin Farrish
Sensors 2024, 24(16), 5333; https://doi.org/10.3390/s24165333 - 17 Aug 2024
Viewed by 1663
Abstract
Phonocardiography (PCG) is used as an adjunct to teach cardiac auscultation and is now a function of PCG-capable stethoscopes (PCS). To evaluate the efficacy of PCG and PCS, the authors investigated the impact of providing PCG data and PCSs on how frequently murmurs, rubs, and gallops (MRGs) were correctly identified by third-year medical students. Following their internal medicine rotation, third-year medical students from the Georgetown University School of Medicine completed a standardized auscultation assessment. Sound files of 10 different MRGs with a corresponding clinical vignette and physical exam location were provided with and without PCG (with interchangeable question stems) as 10 paired questions (20 total questions). A subset of 32 students also received a PCS to use during their rotation. Discrimination/difficulty indexes, comparative chi-squared, and McNemar test p-values were calculated. The addition of phonocardiograms to audio data was associated with more frequent identification of mitral stenosis, S4, and cardiac friction rub, but less frequent identification of ventricular septal defect, S3, and tricuspid regurgitation. Students with a PCS had a higher frequency of identifying a cardiac friction rub. PCG may improve the identification of low-frequency, usually diastolic, heart sounds but appears to worsen or have little effect on the identification of higher-frequency, often systolic, heart sounds. As digital and phonocardiography-capable stethoscopes become more prevalent, insights regarding their strengths and weaknesses may be incorporated into medical school curricula, bedside rounds (to enhance teaching and diagnosis), and telemedicine/tele-auscultation efforts. Full article
(This article belongs to the Section Biomedical Sensors)
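For readers unfamiliar with the paired analysis mentioned above, here is a minimal sketch of a McNemar test on responses to the same item answered with and without a phonocardiogram; the 2x2 counts are invented for illustration and are not the study's data.

```python
from statsmodels.stats.contingency_tables import mcnemar

# Rows: correct / incorrect without PCG; columns: correct / incorrect with PCG (invented counts).
table = [[30, 5],
         [12, 10]]
result = mcnemar(table, exact=True)
print(f"McNemar statistic = {result.statistic}, p-value = {result.pvalue:.3f}")
```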

17 pages, 3905 KiB  
Article
SonicGuard Sensor—A Multichannel Acoustic Sensor for Long-Term Monitoring of Abdominal Sounds Examined through a Qualification Study
by Zahra Mansour, Verena Uslar, Dirk Weyhe, Danilo Hollosi and Nils Strodthoff
Sensors 2024, 24(6), 1843; https://doi.org/10.3390/s24061843 - 13 Mar 2024
Cited by 2 | Viewed by 2472
Abstract
Auscultation is a fundamental diagnostic technique that provides valuable information about different parts of the body. With the increasing prevalence of digital stethoscopes and telehealth applications, there is a growing trend towards digitizing the capture of bodily sounds, thereby enabling subsequent analysis using machine learning algorithms. This study introduces the SonicGuard sensor, which is a multichannel acoustic sensor designed for long-term recordings of bodily sounds. We conducted a series of qualification tests, with a specific focus on bowel sounds ranging from controlled experimental environments to phantom measurements and real patient recordings. These tests demonstrate the effectiveness of the proposed sensor setup. The results show that the SonicGuard sensor is comparable to commercially available digital stethoscopes, which are considered the gold standard in the field. This development opens up possibilities for collecting and analyzing bodily sound datasets using machine learning techniques in the future. Full article
(This article belongs to the Special Issue Physiological Sound Acquisition and Processing (Volume II))

15 pages, 3023 KiB  
Article
Breath Measurement Method for Synchronized Reproduction of Biological Tones in an Augmented Reality Auscultation Training System
by Yukiko Kono, Keiichiro Miura, Hajime Kasai, Shoichi Ito, Mayumi Asahina, Masahiro Tanabe, Yukihiro Nomura and Toshiya Nakaguchi
Sensors 2024, 24(5), 1626; https://doi.org/10.3390/s24051626 - 1 Mar 2024
Cited by 1 | Viewed by 1725
Abstract
An educational augmented reality auscultation system (EARS) is proposed to enhance the realism of auscultation training with a simulated patient. The conventional EARS cannot accurately reproduce breath sounds according to the breathing of a simulated patient because the system dictates the breathing rhythm. In this study, we propose breath measurement methods that can be integrated into the chest piece of a stethoscope, investigating approaches based on thoracic variations and on the frequency characteristics of breath sounds. An accelerometer, a magnetic sensor, a gyro sensor, a pressure sensor, and a microphone were selected as the sensors. For measurement with the magnetic sensor, we proposed detecting the breathing waveform as changes in the magnetic field produced by a magnet as the stethoscope surface deforms with thoracic movement. For breath sound measurement, the frequency spectra of the breath sounds acquired by the built-in microphone were calculated, and the breathing waveforms were obtained from the difference in spectral characteristics between exhalation and inhalation. The average correlation coefficient with the reference value reached 0.45, indicating the effectiveness of this approach as a breath measurement method. The evaluations also suggest that more accurate breathing waveforms can be obtained by selecting the measurement method according to the breathing method and measurement point. Full article
(This article belongs to the Section Biomedical Sensors)
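A hedged sketch of the general idea behind the microphone-based measurement, not the authors' algorithm: track short-time energy in an assumed breath-sound band and compare the resulting waveform with a reference via the correlation coefficient. The sampling rate, band edges, and frame length are assumptions, and both signals are synthetic.

```python
import numpy as np
from scipy.signal import butter, filtfilt

fs = 4000                                   # assumed sampling rate [Hz]
t = np.arange(0, 20, 1 / fs)
mic = np.random.randn(t.size) * (1 + 0.5 * np.sin(2 * np.pi * 0.25 * t))  # synthetic stand-in
reference = np.sin(2 * np.pi * 0.25 * t)    # reference breathing waveform (e.g., from a belt sensor)

b, a = butter(4, [200, 800], btype="bandpass", fs=fs)   # assumed breath-sound band
filtered = filtfilt(b, a, mic)

frame = int(0.2 * fs)                        # 200 ms frames
energy = np.array([np.mean(filtered[i:i + frame] ** 2)
                   for i in range(0, filtered.size - frame, frame)])
ref_frames = reference[: energy.size * frame].reshape(-1, frame).mean(axis=1)

r = np.corrcoef(energy, ref_frames)[0, 1]
print(f"correlation with reference: {r:.2f}")
```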

19 pages, 3746 KiB  
Article
An Accelerometer-Based Wearable Patch for Robust Respiratory Rate and Wheeze Detection Using Deep Learning
by Brian Sang, Haoran Wen, Gregory Junek, Wendy Neveu, Lorenzo Di Francesco and Farrokh Ayazi
Biosensors 2024, 14(3), 118; https://doi.org/10.3390/bios14030118 - 22 Feb 2024
Cited by 7 | Viewed by 5015
Abstract
Wheezing is a critical indicator of various respiratory conditions, including asthma and chronic obstructive pulmonary disease (COPD). Current diagnosis relies on subjective lung auscultation by physicians. Enabling this capability via a low-profile, objective wearable device for remote patient monitoring (RPM) could offer pre-emptive, accurate respiratory data to patients. To this end, we used a low-profile accelerometer-based wearable system that utilizes deep learning to objectively detect wheezing along with respiration rate using a single sensor. The miniature patch consists of a sensitive wideband MEMS accelerometer and low-noise CMOS interface electronics on a small board, which was then placed on nine conventional lung auscultation sites on the patient’s chest walls to capture the pulmonary-induced vibrations (PIVs). A deep learning model was developed and compared with a deterministic time–frequency method to objectively detect wheezing in the PIV signals using data captured from 52 diverse patients with respiratory diseases. The wearable accelerometer patch, paired with the deep learning model, demonstrated high fidelity in capturing and detecting respiratory wheezes and patterns across diverse and pertinent settings. It achieved accuracy, sensitivity, and specificity of 95%, 96%, and 93%, respectively, with an AUC of 0.99 on the test set, outperforming the deterministic time–frequency approach. Furthermore, the accelerometer patch outperforms digital stethoscopes in sound analysis while offering immunity to ambient sounds, which not only enhances data quality and performance for computational wheeze detection by a significant margin but also provides a robust sensor solution that can quantify respiration patterns simultaneously. Full article
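The deterministic time–frequency baseline is not detailed in the abstract; the following is a hedged sketch of one common heuristic in that family, flagging spectrogram frames dominated by a single narrowband peak in an assumed wheeze band. Thresholds, band limits, and the input signal are placeholders for illustration only.

```python
import numpy as np
from scipy.signal import spectrogram

fs = 4000
x = np.random.randn(10 * fs)                       # placeholder chest-vibration signal
f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024, noverlap=512)

band = (f >= 100) & (f <= 1000)                    # assumed wheeze frequency band
peak_ratio = Sxx[band].max(axis=0) / (Sxx[band].sum(axis=0) + 1e-12)
wheeze_frames = peak_ratio > 0.3                   # assumed dominance threshold
print(f"fraction of wheeze-like frames: {wheeze_frames.mean():.2f}")
```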

20 pages, 7654 KiB  
Article
Exploring Microphone Technologies for Digital Auscultation Devices
by Matteo Zauli, Lorenzo Mistral Peppi, Luca Di Bonaventura, Valerio Antonio Arcobelli, Alberto Spadotto, Igor Diemberger, Valerio Coppola, Sabato Mellone and Luca De Marchi
Micromachines 2023, 14(11), 2092; https://doi.org/10.3390/mi14112092 - 12 Nov 2023
Cited by 4 | Viewed by 2581
Abstract
The aim of this work is to present a preliminary study for the design of a digital auscultation system, i.e., a novel wearable device for patient chest auscultation and a digital stethoscope. The development and testing of the electronic stethoscope prototype are reported, with an emphasis on the description and selection of sound transduction systems and analog electronic processing. The comparison of various microphone technologies, such as micro-electro-mechanical systems (MEMSs), electret condensers, and piezoelectric diaphragms, is intended to identify the most suitable transducer for auscultation. In addition, we report on the design and development of a digital acquisition system for human body sound recording, using a modular device approach to accommodate the chosen analog and digital microphones. Tests were performed on a designed phantom setup, and a qualitative comparison between the sounds recorded with the newly developed acquisition device and those recorded with two commercial digital stethoscopes is reported. Full article
(This article belongs to the Special Issue MEMS in Italy 2023)

22 pages, 5219 KiB  
Article
Real-Time Implementation of a Frequency Shifter for Enhancement of Heart Sounds Perception on VLIW DSP Platform
by Vincenzo Muto, Emilio Andreozzi, Carmela Cappelli, Jessica Centracchio, Gennaro Di Meo, Daniele Esposito, Paolo Bifulco and Davide De Caro
Electronics 2023, 12(20), 4359; https://doi.org/10.3390/electronics12204359 - 20 Oct 2023
Cited by 3 | Viewed by 1980
Abstract
Auscultation of heart sounds is important for cardiovascular assessment. External noises may limit heart sound perception. In addition, heart sound bandwidth is concentrated at very low frequencies, where the human ear has poor sensitivity. Therefore, the acoustic perception of the operator can be significantly improved by shifting the heart sound spectrum toward higher frequencies. This study proposes a real-time frequency shifter based on the Hilbert transform. Key system components are the Hilbert transformer implemented as a Finite Impulse Response (FIR) filter, and a Direct Digital Frequency Synthesizer (DDFS), which allows agile modification of the frequency shift. The frequency shifter has been implemented on a VLIW Digital Signal Processor (DSP) by devising a novel piecewise quadratic approximation technique for efficient DDFS implementation. The performance has been compared with other DDFS implementations based on both a piecewise linear technique and the standard sine/cosine library functions of the DSP. Piecewise techniques allow a more than 50% reduction in execution time compared to the DSP library. The piecewise quadratic technique also allows a more than 50% reduction in total required memory size compared to the piecewise linear one. The theoretical analysis of the dynamic power dissipation exhibits a more than 20% reduction using piecewise techniques with respect to the DSP library. Real-time operation has also been verified on the DSK6713 rapid prototyping board featuring the Texas Instruments C6713 DSP. Audiologic tests have also been performed to assess the actual improvement of heart sound perception. To this aim, heart sound recordings were corrupted by additive white Gaussian noise, crowded street noise, and helicopter noise, with different signal-to-noise ratios. All recordings were collected from public databases. Statistical analyses of the audiological test results confirm that the proposed approach provides a clear improvement in heartbeat perception in noisy environments. Full article
(This article belongs to the Special Issue Feature Papers in Circuit and Signal Processing)
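The core frequency-shifting principle, a single-sideband shift via the Hilbert transform, can be sketched in a few lines; this is not the authors' fixed-point VLIW DSP implementation, and the sampling rate, shift amount, and test signal are assumptions.

```python
import numpy as np
from scipy.signal import hilbert

fs = 8000                       # assumed sampling rate [Hz]
f_shift = 200.0                 # shift heart-sound energy upward by 200 Hz (assumption)
t = np.arange(0, 2.0, 1 / fs)
heart = np.sin(2 * np.pi * 40 * t) + 0.5 * np.sin(2 * np.pi * 80 * t)  # synthetic low-frequency tones

analytic = hilbert(heart)                             # heart + j * HilbertTransform(heart)
shifted = np.real(analytic * np.exp(2j * np.pi * f_shift * t))
# 'shifted' carries the same envelope with its spectrum moved from 40/80 Hz to 240/280 Hz,
# a range where the human ear is more sensitive.
```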

9 pages, 981 KiB  
Article
Interpretation of Heart and Lungs Sounds Acquired via Remote, Digital Auscultation Reached Fair-to-Substantial Levels of Consensus among Specialist Physicians
by Diana Magor, Evgeny Berkov, Dmitry Siomin, Eli Karniel, Nir Lasman, Liat Radinsky Waldman, Irina Gringauz, Shai Stern, Reut Lerner Kassif, Galia Barkai, Hadas Lewy and Gad Segal
Diagnostics 2023, 13(19), 3153; https://doi.org/10.3390/diagnostics13193153 - 9 Oct 2023
Cited by 5 | Viewed by 1666
Abstract
Background. Technological advancement may bridge gaps between long-practiced medical competencies and modern technologies. One such domain is the application of digital stethoscopes for physical examination in telemedicine. This study aimed to validate the level of consensus among physicians regarding the interpretation of remote, digital auscultation of heart and lung sounds. Methods. Seven specialist physicians assessed both the technical quality and the clinical interpretation of pre-recorded heart and lung sounds of patients hospitalized in their homes. The TytoCare™ system was used as a remote, digital stethoscope. Results. In total, 140 sounds (70 heart and 70 lung) were presented to seven specialists. The level of agreement was measured using the Fleiss’ Kappa (FK) statistic. Agreement relating to heart sounds reached low-to-moderate consensus: overall technical quality (FK = 0.199), rhythm regularity (FK = 0.328), presence of murmurs (FK = 0.469), appreciation of sounds as remote (FK = 0.011), and an overall diagnosis as normal or pathologic (FK = 0.304). The interpretation of some of the lung sounds reached a higher consensus: overall technical quality (FK = 0.169), crepitus (FK = 0.514), wheezing (FK = 0.704), bronchial sounds (FK = 0.034), and an overall diagnosis as normal or pathological (FK = 0.386). Most Fleiss’ Kappa values were in the range of "fair consensus", while in the domains of diagnosing lung crepitus and wheezing, the values increased to the "substantial" level. Conclusions. Bio-signals, such as recorded heart and lung sound auscultations used in the clinical assessment of remotely situated patients, do not achieve a sufficiently high level of agreement between specialized physicians. These findings should serve as a catalyst for improving the acquisition of telemedicine bio-signals and their clinical interpretation. Full article
(This article belongs to the Section Point-of-Care Diagnostics and Devices)
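For reference, the agreement statistic used in this study can be computed with statsmodels as in the following minimal sketch; the rating matrix is invented for illustration (rows are auscultation clips, columns are the seven raters, values are 0 = normal / 1 = pathologic) and does not reproduce the study's data.

```python
import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

ratings = np.array([
    [1, 1, 1, 0, 1, 1, 1],
    [0, 0, 0, 0, 1, 0, 0],
    [1, 1, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 0, 0],
])
table, _ = aggregate_raters(ratings)    # subjects x categories count table
print(f"Fleiss' kappa = {fleiss_kappa(table):.3f}")
```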

16 pages, 6447 KiB  
Article
StethAid: A Digital Auscultation Platform for Pediatrics
by Youness Arjoune, Trong N. Nguyen, Tyler Salvador, Anha Telluri, Jonathan C. Schroeder, Robert L. Geggel, Joseph W. May, Dinesh K. Pillai, Stephen J. Teach, Shilpa J. Patel, Robin W. Doroshow and Raj Shekhar
Sensors 2023, 23(12), 5750; https://doi.org/10.3390/s23125750 - 20 Jun 2023
Cited by 6 | Viewed by 4382
Abstract
(1) Background: Mastery of auscultation can be challenging for many healthcare providers. Artificial intelligence (AI)-powered digital support is emerging as an aid to assist with the interpretation of auscultated sounds. A few AI-augmented digital stethoscopes exist but none are dedicated to pediatrics. Our goal was to develop a digital auscultation platform for pediatric medicine. (2) Methods: We developed StethAid—a digital platform for artificial intelligence-assisted auscultation and telehealth in pediatrics—that consists of a wireless digital stethoscope, mobile applications, customized patient-provider portals, and deep learning algorithms. To validate the StethAid platform, we characterized our stethoscope and used the platform in two clinical applications: (1) Still’s murmur identification and (2) wheeze detection. The platform has been deployed in four children’s medical centers to build the first and largest pediatric cardiopulmonary datasets, to our knowledge. We have trained and tested deep-learning models using these datasets. (3) Results: The frequency response of the StethAid stethoscope was comparable to those of the commercially available Eko Core, Thinklabs One, and Littman 3200 stethoscopes. The labels provided by our expert physician offline were in concordance with the labels of providers at the bedside using their acoustic stethoscopes for 79.3% of lung cases and 98.3% of heart cases. Our deep learning algorithms achieved high sensitivity and specificity for both Still’s murmur identification (sensitivity of 91.9% and specificity of 92.6%) and wheeze detection (sensitivity of 83.7% and specificity of 84.4%). (4) Conclusions: Our team has created a technically and clinically validated pediatric digital AI-enabled auscultation platform. Use of our platform could improve efficacy and efficiency of clinical care for pediatric patients, reduce parental anxiety, and result in cost savings. Full article
(This article belongs to the Section Biomedical Sensors)
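The reported sensitivity and specificity can be reproduced from a binary confusion matrix, as in the following minimal sketch; the label and prediction arrays are placeholders, not StethAid outputs.

```python
import numpy as np
from sklearn.metrics import confusion_matrix

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])   # 1 = Still's murmur present (placeholder labels)
y_pred = np.array([1, 0, 0, 0, 1, 1, 1, 0])   # placeholder model predictions

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)    # true-positive rate
specificity = tn / (tn + fp)    # true-negative rate
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```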

24 pages, 1494 KiB  
Review
Digital Pulmonology Practice with Phonopulmography Leveraging Artificial Intelligence: Future Perspectives Using Dual Microwave Acoustic Sensing and Imaging
by Arshia K. Sethi, Pratyusha Muddaloor, Priyanka Anvekar, Joshika Agarwal, Anmol Mohan, Mansunderbir Singh, Keerthy Gopalakrishnan, Ashima Yadav, Aakriti Adhikari, Devanshi Damani, Kanchan Kulkarni, Christopher A. Aakre, Alexander J. Ryu, Vivek N. Iyer and Shivaram P. Arunachalam
Sensors 2023, 23(12), 5514; https://doi.org/10.3390/s23125514 - 12 Jun 2023
Cited by 4 | Viewed by 4298
Abstract
Respiratory disorders, among the leading causes of disability worldwide, have driven constant evolution in management technologies, resulting in the incorporation of artificial intelligence (AI) in the recording and analysis of lung sounds to aid diagnosis in clinical pulmonology practice. Although lung sound auscultation is a common clinical practice, its use in diagnosis is limited due to its high variability and subjectivity. We review the origin of lung sounds, various auscultation and processing methods developed over the years, and their clinical applications to understand the potential for a lung sound auscultation and analysis device. Respiratory sounds result from the intra-pulmonary collision of molecules contained in the air, leading to turbulent flow and subsequent sound production. These sounds have been recorded via an electronic stethoscope and analyzed using back-propagation neural networks, wavelet transform models, Gaussian mixture models and recently with machine learning and deep learning models with possible use in asthma, COVID-19, asbestosis and interstitial lung disease. The purpose of this review was to summarize lung sound physiology, recording technologies and diagnostic methods using AI for digital pulmonology practice. Future research and development in recording and analyzing respiratory sounds in real time could revolutionize clinical practice for both patients and healthcare personnel. Full article
(This article belongs to the Special Issue Microwave and Antenna System in Medical Applications)

24 pages, 5289 KiB  
Review
Acoustic-Based Deep Learning Architectures for Lung Disease Diagnosis: A Comprehensive Overview
by Alyaa Hamel Sfayyih, Ahmad H. Sabry, Shymaa Mohammed Jameel, Nasri Sulaiman, Safanah Mudheher Raafat, Amjad J. Humaidi and Yasir Mahmood Al Kubaiaisi
Diagnostics 2023, 13(10), 1748; https://doi.org/10.3390/diagnostics13101748 - 16 May 2023
Cited by 27 | Viewed by 5729
Abstract
Lung auscultation has long been used as a valuable medical tool to assess respiratory health and has received considerable attention in recent years, notably following the coronavirus pandemic. Lung auscultation is used to assess a patient’s respiratory function. Modern technological progress has guided the growth of computer-based respiratory sound investigation, a valuable tool for detecting lung abnormalities and diseases. Several recent studies have reviewed this important area, but none are specific to lung sound analysis with deep-learning architectures, and the information they provide is not sufficient for a good understanding of these techniques. This paper gives a complete review of prior deep-learning-based lung sound analysis architectures. Deep-learning-based respiratory sound analysis articles were retrieved from different databases, including PLOS, the ACM Digital Library, Elsevier, PubMed, MDPI, Springer, and IEEE. More than 160 publications were extracted and submitted for assessment. This paper discusses different trends in pathology and lung sounds, the common features for classifying lung sounds, several considered datasets, classification methods, signal processing techniques, and some statistical information based on previous study findings. Finally, the assessment concludes with a discussion of potential future improvements and recommendations. Full article
(This article belongs to the Special Issue Classification of Diseases Using Machine Learning Algorithms)

19 pages, 1699 KiB  
Review
Review on the Advancements of Stethoscope Types in Chest Auscultation
by Jun Jie Seah, Jiale Zhao, De Yun Wang and Heow Pueh Lee
Diagnostics 2023, 13(9), 1545; https://doi.org/10.3390/diagnostics13091545 - 25 Apr 2023
Cited by 33 | Viewed by 12939
Abstract
Stethoscopes were originally designed for the auscultation of a patient’s chest for the purpose of listening to lung and heart sounds. These aid medical professionals in their evaluation of the cardiovascular and respiratory systems, as well as in other applications, such as listening to bowel sounds in the gastrointestinal system or assessing for vascular bruits. Listening to internal sounds during chest auscultation aids healthcare professionals in their diagnosis of a patient’s illness. We performed an extensive literature review on the currently available stethoscopes specifically for use in chest auscultation. By understanding the specificities of the different stethoscopes available, healthcare professionals can capitalize on their beneficial features to serve both clinical and educational purposes. Additionally, the ongoing COVID-19 pandemic has highlighted the unique application of digital stethoscopes for telemedicine. Thus, the advantages and limitations of digital stethoscopes are reviewed. Lastly, to determine the best available stethoscopes in the healthcare industry, this literature review explored various benchmarking methods that can be used to identify areas of improvement for existing stethoscopes, as well as to serve as a standard for the general comparison of stethoscope quality. The potential use of digital stethoscopes for telemedicine amidst ongoing technological advancements in wearable sensors and modern communication facilities such as 5G is also discussed. Given the ongoing advancements in wearable technology, telemedicine, and smart hospitals, understanding the benefits and limitations of the digital stethoscope is an essential consideration for potential equipment deployment, especially during the height of the current COVID-19 pandemic and, more importantly, for future healthcare crises when human and resource mobility is restricted. Full article
(This article belongs to the Special Issue Point-of-Care Diagnostics Technology and Applications)