Search Results (83)

Search Parameters:
Keywords = cardiac sounds

33 pages, 1512 KiB  
Review
Advances and Challenges in Deep Learning for Acoustic Pathology Detection: A Review
by Florin Bogdan and Mihaela-Ruxandra Lascu
Technologies 2025, 13(8), 329; https://doi.org/10.3390/technologies13080329 - 1 Aug 2025
Viewed by 219
Abstract
Recent advancements in data collection technologies, data science, and speech processing have fueled significant interest in the computational analysis of biological sounds. This enhanced analytical capability shows promise for improved understanding and detection of various pathological conditions, extending beyond traditional speech analysis to encompass other forms of acoustic data. A particularly promising and rapidly evolving area is the application of deep learning techniques for the detection and analysis of diverse pathologies, including respiratory, cardiac, and neurological disorders, through sound processing. This paper provides a comprehensive review of the current state-of-the-art in using deep learning for pathology detection via analysis of biological sounds. It highlights key successes achieved in the field, identifies existing challenges and limitations, and discusses potential future research directions. This review aims to serve as a valuable resource for researchers and clinicians working in this interdisciplinary domain. Full article

24 pages, 864 KiB  
Article
Application of Acoustic Cardiography in Assessment of Cardiac Function in Horses with Atrial Fibrillation Before and After Cardioversion
by Mélodie J. Schneider, Isabelle L. Piotrowski, Hannah K. Junge, Glenn van Steenkiste, Ingrid Vernemmen, Gunther van Loon and Colin C. Schwarzwald
Animals 2025, 15(13), 1993; https://doi.org/10.3390/ani15131993 - 7 Jul 2025
Viewed by 333
Abstract
Left atrial mechanical dysfunction is common in horses following the treatment of atrial fibrillation (AF). This study aimed to evaluate the use of an acoustic cardiography monitor (Audicor®) in quantifying cardiac mechanical and hemodynamic function in horses with AF before and after treatment and to correlate these findings with echocardiographic measures. Twenty-eight horses with AF and successful transvenous electrical cardioversion were included. Audicor® recordings with concomitant echocardiographic examinations were performed one day before, one day after, and two to seven days after cardioversion. Key variables measured by Audicor® included electromechanical activating time (EMAT), heart rate-corrected EMATc, left ventricular systolic time (LVST), heart rate-corrected LVSTc, systolic dysfunction index (SDI), and the intensity and persistence of the third and fourth heart sounds (S3 and S4). A repeated-measures ANOVA with Tukey’s test was used to compare these variables over time, and linear regression and Bland–Altman analyses were applied to assess associations with echocardiographic findings. Following conversion to sinus rhythm, there was a significant decrease in EMATc and LVSTc (p < 0.0001) and a significant increase in LVST (p = 0.0001), indicating improved ventricular systolic function, with strong agreement between Audicor® snapshot and echocardiographic measures. However, S4 quantification did not show clinical value for assessing left atrial function after conversion. Full article

9 pages, 1717 KiB  
Proceeding Paper
Generative AI Respiratory and Cardiac Sound Separation Using Variational Autoencoders (VAEs)
by Arshad Jamal, R. Kanesaraj Ramasamy and Junaidi Abdullah
Comput. Sci. Math. Forum 2025, 10(1), 9; https://doi.org/10.3390/cmsf2025010009 - 1 Jul 2025
Viewed by 262
Abstract
The separation of respiratory and cardiac sounds is a significant challenge in biomedical signal processing due to their overlapping frequency and time characteristics. Traditional methods struggle with accurate extraction in noisy or diverse clinical environments. This study explores the application of machine learning, particularly convolutional neural networks (CNNs), to overcome these obstacles. Advanced machine learning models, denoising algorithms, and domain adaptation strategies address challenges such as frequency overlap, external noise, and limited labeled datasets. This study presents a robust methodology for detecting heart and lung diseases from audio signals using advanced preprocessing, feature extraction, and deep learning models. The approach integrates adaptive filtering and bandpass filtering as denoising techniques and variational autoencoders (VAEs) for feature extraction. The extracted features are input into a CNN, which classifies audio signals into different heart and lung conditions. The results highlight the potential of this combined approach for early and accurate disease detection, contributing to the development of reliable diagnostic tools for healthcare. Full article
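As a rough illustration of the denoising front end named in this abstract, the sketch below applies a zero-phase band-pass filter before any feature extraction. The sampling rate, cutoff frequencies, and placeholder signal are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of a band-pass denoising step preceding VAE feature extraction.
# Cutoffs and sampling rate are assumed values for illustration only.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bandpass(signal: np.ndarray, fs: float, low_hz: float, high_hz: float, order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth band-pass filter."""
    sos = butter(order, [low_hz, high_hz], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, signal)

fs = 4000                              # assumed sampling rate (Hz)
mixed = np.random.randn(10 * fs)       # placeholder for a recorded chest sound

# Heart sounds carry most energy at low frequencies; lung sounds sit higher.
heart_band = bandpass(mixed, fs, 20, 400)
lung_band = bandpass(mixed, fs, 100, 1800)
```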

29 pages, 4916 KiB  
Review
Pulsatile Tinnitus: A Comprehensive Clinical Approach to Diagnosis and Management
by Sofía Pacheco-López, Jose Pablo Martínez-Barbero, Heriberto Busquier-Hernández, Juan García-Valdecasas-Bernal and Juan Manuel Espinosa-Sánchez
J. Clin. Med. 2025, 14(13), 4428; https://doi.org/10.3390/jcm14134428 - 22 Jun 2025
Viewed by 1662
Abstract
Pulsatile tinnitus (PT) is a subtype of tinnitus characterized by a perception of heartbeat-synchronous sound. It represents approximately 5–10% of all tinnitus cases and may have either a vascular or non-vascular etiology. Accurate diagnosis is crucial due to the potentially serious implications this condition can entail. Assessment through anamnesis and physical examination may often suggest a diagnosis of PT, but it is rarely definitive. Therefore, a comprehensive and specific imaging diagnostic protocol is essential when evaluating PT. A lack of consensus has been identified regarding the use of a standardized protocol for both pulsatile and non-pulsatile tinnitus, whether unilateral or bilateral. Consequently, neuroradiologists, otologists, and otoneurologists from a tertiary hospital have developed a new imaging diagnostic protocol for PT. The aim of this article is to present an updated approach to the diagnostic and therapeutic management of PT, aiming to establish a protocol that serves as a guide for clinicians assessing this symptom. In patients with bilateral PT, systemic conditions leading to increased cardiac output should generally be ruled out; in unilateral cases, focused imaging studies should be performed to exclude organic etiologies at the cervical and cranial levels. Full article
(This article belongs to the Section Otolaryngology)

19 pages, 3002 KiB  
Article
A Novel Method for ECG-Free Heart Sound Segmentation in Patients with Severe Aortic Valve Disease
by Elza Abdessater, Paniz Balali, Jimmy Pawlowski, Jérémy Rabineau, Cyril Tordeur, Vitalie Faoro, Philippe van de Borne and Amin Hossein
Sensors 2025, 25(11), 3360; https://doi.org/10.3390/s25113360 - 27 May 2025
Viewed by 557
Abstract
Severe aortic valve diseases (AVD) cause changes in heart sounds, making phonocardiogram (PCG) analyses challenging. This study presents a novel method for segmenting heart sounds without relying on an electrocardiogram (ECG), specifically targeting patients with severe AVD. Our algorithm enhances traditional Hidden Semi-Markov Models by incorporating signal envelope calculations and statistical tests to improve the detection of the first and second heart sounds (S1 and S2). We evaluated the method on the PhysioNet/CinC 2016 Challenge dataset and a newly acquired AVD-specific dataset. The method was tested on a total of 27,400 cardiac cycles. The proposed approach outperformed the existing methods, achieving a higher sensitivity and positive predictive value for S2, especially in the presence of severe heart murmurs. Notably, in patients with severe aortic stenosis, our proposed ECG-free method improved S2 sensitivity from 41% to 70%. Full article
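The envelope computation that underpins this kind of ECG-free segmentation can be sketched as below. This is not the authors' Hidden Semi-Markov Model pipeline; the sampling rate, smoothing window, and peak-picking thresholds are illustrative assumptions.

```python
# Minimal sketch of a signal-envelope step for ECG-free heart sound segmentation.
# Thresholds and the refractory gap are assumptions, not the paper's settings.
import numpy as np
from scipy.signal import hilbert, find_peaks

def smoothed_envelope(pcg: np.ndarray, fs: float, smooth_s: float = 0.02) -> np.ndarray:
    """Hilbert-based amplitude envelope, smoothed with a moving average."""
    env = np.abs(hilbert(pcg))
    win = max(1, int(smooth_s * fs))
    return np.convolve(env, np.ones(win) / win, mode="same")

fs = 2000                              # assumed sampling rate (Hz)
pcg = np.random.randn(10 * fs)         # placeholder PCG recording

env = smoothed_envelope(pcg, fs)
# Candidate S1/S2 locations: envelope peaks at least 200 ms apart (assumed gap).
peaks, _ = find_peaks(env, distance=int(0.2 * fs), height=np.percentile(env, 90))
```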

12 pages, 2458 KiB  
Article
Abnormal Heart Sound Detection Using Common Spatial Patterns and Random Forests
by Turky N. Alotaiby, Nuwayyir A. Alsahle and Gaseb N. Alotibi
Electronics 2025, 14(8), 1512; https://doi.org/10.3390/electronics14081512 - 9 Apr 2025
Viewed by 612
Abstract
Early and accurate diagnosis of heart conditions is pivotal for effective treatment. Phonocardiography (PCG) has become a standard diagnostic tool for evaluating and detecting cardiac abnormalities. While traditional cardiac auscultation remains widely used, its accuracy is highly dependent on the clinician’s experience and auditory skills. Consequently, there is a growing need for automated, objective methods of heart sound analysis. This study explores the efficacy of the Common Spatial Patterns (CSP) feature extraction algorithm paired with the Random Forest (RF) classifier to distinguish between normal and pathological heart sounds. The signal is denoised, transformed, and segmented into fixed-length segments. CSP is applied to extract discriminative features (a set of Spatial Patterns), which are then fed into the classifier for cardiac diagnosis. The proposed method was evaluated using PhysioNet/CinC Challenge 2016 and Yaseen2018 (Heart Sound Murmur) datasets. On the testing set of the PhysioNet dataset, the RF classifier achieved 100% precision, recall, accuracy, F1 score, and AUC. Similarly, on the testing set of the Yaseen2018 dataset, it achieved 96.30% precision, 1.00 recall, 98.08% accuracy, 98.11% F1 score, and 99.41% AUC. Full article
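A minimal sketch of CSP-plus-Random-Forest classification is given below, assuming multi-trial input of shape (trials, channels, samples); how the segmented single-channel PCG is arranged into that shape, and all model settings, are assumptions rather than the paper's configuration.

```python
# Sketch of Common Spatial Patterns (CSP) feature extraction feeding a Random Forest.
import numpy as np
from scipy.linalg import eigh
from sklearn.ensemble import RandomForestClassifier

def csp_filters(X_a: np.ndarray, X_b: np.ndarray, n_filters: int = 4) -> np.ndarray:
    """Solve the generalized eigenvalue problem C_a w = lambda (C_a + C_b) w."""
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)
    C_a, C_b = mean_cov(X_a), mean_cov(X_b)
    eigvals, eigvecs = eigh(C_a, C_a + C_b)
    order = np.argsort(eigvals)
    # Filters from both ends of the spectrum carry the most discriminative variance.
    picks = np.concatenate([order[: n_filters // 2], order[-n_filters // 2:]])
    return eigvecs[:, picks].T                      # (n_filters, n_channels)

def csp_features(X: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Log-variance of spatially filtered trials, the usual CSP feature."""
    filtered = np.einsum("fc,tcs->tfs", W, X)
    var = filtered.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Toy data: 40 "normal" and 40 "abnormal" trials with 8 channels of 2000 samples each.
rng = np.random.default_rng(0)
X_norm = rng.standard_normal((40, 8, 2000))
X_abn = rng.standard_normal((40, 8, 2000))
W = csp_filters(X_norm, X_abn)
X = np.concatenate([csp_features(X_norm, W), csp_features(X_abn, W)])
y = np.array([0] * 40 + [1] * 40)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
```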

22 pages, 7716 KiB  
Article
A Deep-Learning Approach to Heart Sound Classification Based on Combined Time-Frequency Representations
by Leonel Orozco-Reyes, Miguel A. Alonso-Arévalo, Eloísa García-Canseco, Roilhi F. Ibarra-Hernández and Roberto Conte-Galván
Technologies 2025, 13(4), 147; https://doi.org/10.3390/technologies13040147 - 7 Apr 2025
Cited by 2 | Viewed by 1712
Abstract
Worldwide, heart disease is the leading cause of mortality. Cardiac auscultation, when conducted by a trained professional, is a non-invasive, cost-effective, and readily available method for the initial assessment of cardiac health. Automated heart sound analysis offers a promising and accessible approach to supporting cardiac diagnosis. This work introduces a novel method for classifying heart sounds as normal or abnormal by leveraging time-frequency representations. Our approach combines three distinct time-frequency representations—short-time Fourier transform (STFT), mel-scale spectrogram, and wavelet synchrosqueezed transform (WSST)—to create images that enhance classification performance. These images are used to train five convolutional neural networks (CNNs): AlexNet, VGG-16, ResNet50, a CNN specialized in STFT images, and our proposed CNN model. The method was trained and tested using three public heart sound datasets: PhysioNet/CinC Challenge 2016, CirCor DigiScope Phonocardiogram Dataset 2022, and another open database. While individual representations achieve maximum accuracy of ≈85.9%, combining STFT, mel, and WSST boosts accuracy to ≈99%. By integrating complementary time-frequency features, our approach demonstrates robust heart sound analysis, achieving consistent classification performance across diverse CNN architectures, thus ensuring reliability and generalizability. Full article
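The idea of stacking several time-frequency representations into one image can be sketched as follows. Only the STFT and mel-spectrogram channels are shown (a WSST channel would come from a dedicated library such as ssqueezepy), and the placeholder signal, sampling rate, and image size are assumptions.

```python
# Sketch: build a multi-channel time-frequency "image" from a heart sound clip.
import numpy as np
import librosa

sr = 2000                                            # assumed sampling rate (Hz)
y = np.random.randn(10 * sr).astype(np.float32)      # placeholder; a real PCG would be loaded from audio

stft_db = librosa.amplitude_to_db(np.abs(librosa.stft(y, n_fft=256, hop_length=64)))
mel_db = librosa.power_to_db(
    librosa.feature.melspectrogram(y=y, sr=sr, n_fft=256, hop_length=64, n_mels=64)
)

def to_unit_image(S: np.ndarray, size=(224, 224)) -> np.ndarray:
    """Normalize to [0, 1] and resample to a common shape by index selection."""
    S = (S - S.min()) / (S.max() - S.min() + 1e-8)
    rows = np.linspace(0, S.shape[0] - 1, size[0]).astype(int)
    cols = np.linspace(0, S.shape[1] - 1, size[1]).astype(int)
    return S[np.ix_(rows, cols)]

# Channels-last image ready for a CNN; a third WSST channel would be stacked the same way.
image = np.stack([to_unit_image(stft_db), to_unit_image(mel_db)], axis=-1)
```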

19 pages, 3377 KiB  
Article
AI-Enhanced Detection of Heart Murmurs: Advancing Non-Invasive Cardiovascular Diagnostics
by Maria-Alexandra Zolya, Elena-Laura Popa, Cosmin Baltag, Dragoș-Vasile Bratu, Simona Coman and Sorin-Aurel Moraru
Sensors 2025, 25(6), 1682; https://doi.org/10.3390/s25061682 - 8 Mar 2025
Viewed by 1438
Abstract
Cardiovascular diseases (CVDs) are the leading cause of death worldwide, claiming over 17 million lives annually. Early detection of conditions like heart murmurs, often indicative of heart valve abnormalities, is critical for improving patient outcomes. Traditional diagnostic methods, including physical auscultation and advanced imaging techniques, are constrained by their reliance on specialized clinical expertise, inherent procedural invasiveness, substantial financial costs, and limited accessibility, particularly in resource-limited healthcare environments. This study presents a novel convolutional recurrent neural network (CRNN) model designed for the non-invasive classification of heart murmurs. The model processes heart sound recordings using advanced pre-processing techniques such as z-score normalization, band-pass filtering, and data augmentation (Gaussian noise, time shift, and pitch shift) to enhance robustness. By combining convolutional and recurrent layers, the CRNN captures spatial and temporal features in audio data, achieving an accuracy of 90.5%, precision of 89%, and recall of 87%. These results underscore the potential of machine-learning technologies to revolutionize cardiac diagnostics by offering scalable, accessible solutions for the early detection of cardiovascular conditions. This approach paves the way for broader applications of AI in healthcare, particularly in underserved regions where traditional resources are scarce. Full article
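The pre-processing and augmentation steps listed above (z-score normalization, band-pass filtering, Gaussian noise, time shift, pitch shift) could look roughly like the sketch below; the filter band, noise level, and shift ranges are assumptions, not the study's settings.

```python
# Sketch of pre-processing and data augmentation for heart sound recordings.
import numpy as np
import librosa
from scipy.signal import butter, sosfiltfilt

def preprocess(y: np.ndarray, sr: int) -> np.ndarray:
    sos = butter(4, [25, 400], btype="bandpass", fs=sr, output="sos")   # assumed band
    y = sosfiltfilt(sos, y)
    return (y - y.mean()) / (y.std() + 1e-8)                            # z-score normalization

def augment(y: np.ndarray, sr: int, rng: np.random.Generator) -> np.ndarray:
    y = y + rng.normal(0.0, 0.01, size=y.shape)                         # Gaussian noise
    y = np.roll(y, rng.integers(-sr // 2, sr // 2))                     # time shift (up to 0.5 s)
    y = librosa.effects.pitch_shift(y, sr=sr, n_steps=rng.uniform(-1.0, 1.0))
    return y

sr = 4000
rng = np.random.default_rng(0)
clip = rng.standard_normal(5 * sr)       # placeholder heart sound clip
x = augment(preprocess(clip, sr), sr, rng)
```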

21 pages, 6656 KiB  
Article
A Flexible PVDF Sensor for Forcecardiography
by Salvatore Parlato, Jessica Centracchio, Eliana Cinotti, Gaetano D. Gargiulo, Daniele Esposito, Paolo Bifulco and Emilio Andreozzi
Sensors 2025, 25(5), 1608; https://doi.org/10.3390/s25051608 - 6 Mar 2025
Cited by 1 | Viewed by 1659
Abstract
Forcecardiography (FCG) uses force sensors to record the mechanical vibrations induced on the chest wall by cardiac and respiratory activities. FCG is usually performed via piezoelectric lead-zirconate titanate (PZT) sensors, which simultaneously record the very slow respiratory movements of the chest, the slow infrasonic vibrations due to emptying and filling of heart chambers, the faster infrasonic vibrations due to movements of heart valves, which are usually recorded via Seismocardiography (SCG), and the audible vibrations corresponding to heart sounds, commonly recorded via Phonocardiography (PCG). However, PZT sensors are not flexible and do not adapt very well to the deformations of soft tissues on the chest. This study presents a flexible FCG sensor based on a piezoelectric polyvinylidene fluoride (PVDF) transducer. The PVDF FCG sensor was compared with a well-assessed PZT FCG sensor, as well as with an electro-resistive respiratory band (ERB), an accelerometric SCG sensor, and an electronic stethoscope for PCG. Simultaneous recordings were acquired with these sensors and an electrocardiography (ECG) monitor from a cohort of 35 healthy subjects (16 males and 19 females). The PVDF sensor signals were compared in terms of morphology with those acquired simultaneously via the PZT sensor, the SCG sensor and the electronic stethoscope. Moreover, the estimation accuracies of PVDF and PZT sensors for inter-beat intervals (IBIs) and inter-breath intervals (IBrIs) were assessed against reference ECG and ERB measurements. The results of statistical analyses confirmed that the PVDF sensor provides FCG signals with very high similarity to those acquired via PZT sensors (median cross-correlation index of 0.96 across all subjects) as well as with SCG and PCG signals (median cross-correlation indices of 0.85 and 0.80, respectively). Moreover, the PVDF sensor provides very accurate estimates of IBIs, with R2 > 0.99 and Bland–Altman limits of agreement (LoA) of [−5.30; 5.00] ms, and of IBrIs, with R2 > 0.96 and LoA of [−0.510; 0.513] s. The flexibility of the PVDF sensor makes it more comfortable and ideal for wearable applications. Unlike PZT, PVDF is lead-free, which increases safety and biocompatibility for prolonged skin contact. Full article
(This article belongs to the Special Issue Sensors for Heart Rate Monitoring and Cardiovascular Disease)
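The inter-beat-interval agreement analysis reported here (R² and Bland–Altman limits of agreement) can be reproduced in outline as below; the interval series are synthetic placeholders, not the study's measurements.

```python
# Sketch of R^2 and Bland-Altman limits of agreement between two IBI series.
import numpy as np

def bland_altman_loa(a: np.ndarray, b: np.ndarray):
    """Mean bias and 95% limits of agreement between two measurement series."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

rng = np.random.default_rng(1)
ibi_ecg = rng.normal(0.85, 0.05, 300)              # reference IBIs from ECG (s)
ibi_fcg = ibi_ecg + rng.normal(0.0, 0.002, 300)    # sensor-derived IBIs (s)

r2 = np.corrcoef(ibi_ecg, ibi_fcg)[0, 1] ** 2
bias, loa = bland_altman_loa(ibi_fcg, ibi_ecg)
print(f"R^2 = {r2:.3f}, bias = {bias * 1000:.2f} ms, LoA = {loa[0] * 1000:.2f}..{loa[1] * 1000:.2f} ms")
```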

11 pages, 3028 KiB  
Article
A New, Easy-to-Learn, Fear-Free Method to Stop Purring During Cardiac Auscultation in Cats
by Tessa Vliegenthart and Viktor Szatmári
Animals 2025, 15(2), 236; https://doi.org/10.3390/ani15020236 - 16 Jan 2025
Cited by 1 | Viewed by 4207
Abstract
Background: Purring in cats can interfere with cardiac auscultation. If the produced noise is loud enough, purring makes it impossible to perform a meaningful auscultation as it is much louder than heart sounds and murmurs. Our study introduced and tested a new, simple, fear-free, cat-friendly method to stop purring during auscultation. Methods: The technique involves grasping the cat’s larynx from ventral with one hand, while simultaneously holding the stethoscope in the other hand to perform the auscultation. Results: The incidence of purring was evaluated in 582 cats, in a veterinary teaching hospital and in a cat-friendly private practice. Fifty-one (8.8%) cats were purring during their physical examination. The tested method had a success rate of 89% in terminating purring. A comparison between investigators (a veterinary student versus an experienced veterinary cardiology specialist) showed no significant difference in the effectiveness of the method (p = 0.57). The incidence of purring was not significantly different between the teaching hospital and the cat-friendly practice (p = 1.00). Sick and older cats purred more often than healthy and younger cats. Conclusions: This new, simple, easy-to-master method is an improvement over previously reported techniques and supports the need for stress-free, cat-friendly handling in veterinary practice. Full article
(This article belongs to the Section Companion Animals)

15 pages, 4340 KiB  
Article
Prototype of Self-Service Electronic Stethoscope to Be Used by Patients During Online Medical Consultations
by Iwona Chuchnowska and Katarzyna Białas
Sensors 2025, 25(1), 226; https://doi.org/10.3390/s25010226 - 3 Jan 2025
Viewed by 1691
Abstract
This article presents the authors’ design of an electronic stethoscope intended for use during online medical consultations for patient auscultation. The goal of the project was to design an instrument that is durable, user-friendly, and affordable. Existing electronic components were used to create the device and a traditional single-sided chest piece. Three-dimensional printing technology was employed to manufacture the prototype. Following the selection of the material, a static tensile strength test was conducted on the printed samples as part of the pre-implementation investigations. Results: Tests on samples made of PLA with a 50% hexagonal infill demonstrated a tensile strength of 36 MPa and an elongation of 4–5%, which was deemed satisfactory for the intended application in the stethoscope’s manufacture. The designed and manufactured electronic stethoscope presented in the article can be connected to headphones or speakers, enabling remote medical consultation. According to the opinion of doctors who tested it, it provides the appropriate sound quality for auscultation. This stethoscope facilitates the rapid detection and recognition of cardiac and respiratory activity in humans. Full article
(This article belongs to the Special Issue Non-Intrusive Sensors for Human Activity Detection and Recognition)

7 pages, 2675 KiB  
Proceeding Paper
“Smart Clothing” Technology for Heart Function Monitoring During a Session of “Dry” Immersion
by Liudmila Gerasimova-Meigal, Alexander Meigal, Vyacheslav Dimitrov, Maria Gerasimova and Anna Sklyarova
Eng. Proc. 2024, 82(1), 24; https://doi.org/10.3390/ecsa-11-20475 - 26 Nov 2024
Viewed by 809
Abstract
The study aimed to obtain a precise view of the modification of heart rate variability (HRV) and respiratory rate with the help of “smart clothes” (the Hexoskin Smart Shirt, Hexoskin Smart Sensors & AI, Montreal, QC, Canada) during a 45 min session of “dry” immersion (DI), which is considered a model of Earth-based weightlessness. Eight healthy subjects aged 19 to 21 years participated in the study. The Hexoskin Smart Shirt provided a .wav sound file. For analysis, the ecg_peaks function of the neurokit2 library was applied. HRV parameters were calculated within 5 min segments with the help of the pyHRV toolbox. Time-domain (HR and SDNN) and frequency-domain (HF, LF, and VLF) HRV parameters, as well as sample and approximate entropy, were calculated. Thus, the “smart clothes” technology appears to be a reliable telemetric instrument to monitor cardiac and respiratory regulation during the DI session. Full article
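A minimal sketch of the described pipeline, assuming neurokit2's ecg_peaks interface for R-peak detection and computing two time-domain parameters by hand (the study used the pyHRV toolbox for the full parameter set), is shown below; the simulated ECG and sampling rate are placeholders.

```python
# Sketch: R-peak detection with neurokit2, then HR and SDNN for one 5-min segment.
import numpy as np
import neurokit2 as nk

fs = 256                                                            # assumed sampling rate (Hz)
ecg = nk.ecg_simulate(duration=300, sampling_rate=fs, heart_rate=70)  # placeholder 5-min segment

_, info = nk.ecg_peaks(ecg, sampling_rate=fs)
r_peaks = np.asarray(info["ECG_R_Peaks"])

nn_ms = np.diff(r_peaks) / fs * 1000.0   # NN intervals in ms
hr_bpm = 60000.0 / nn_ms.mean()          # mean heart rate
sdnn_ms = nn_ms.std(ddof=1)              # SDNN
print(f"HR = {hr_bpm:.1f} bpm, SDNN = {sdnn_ms:.1f} ms")
```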

37 pages, 4062 KiB  
Article
Heart Sound Classification Using Harmonic and Percussive Spectral Features from Phonocardiograms with a Deep ANN Approach
by Anupinder Singh, Vinay Arora and Mandeep Singh
Appl. Sci. 2024, 14(22), 10201; https://doi.org/10.3390/app142210201 - 6 Nov 2024
Cited by 3 | Viewed by 1940
Abstract
Cardiovascular diseases (CVDs) are a leading cause of mortality worldwide, with a particularly high burden in India. Non-invasive methods like Phonocardiogram (PCG) analysis capture the acoustic activity of the heart. This holds significant potential for the early detection and diagnosis of heart conditions. However, the complexity and variability of PCG signals pose considerable challenges for accurate classification. Traditional methods of PCG signal analysis, including time-domain, frequency-domain, and time-frequency domain techniques, often fall short in capturing the intricate details necessary for reliable diagnosis. This study introduces an innovative approach that leverages harmonic–percussive source separation (HPSS) to extract distinct harmonic and percussive spectral features from PCG signals. These features are then utilized to train a deep feed-forward artificial neural network (ANN), classifying heart conditions as normal or abnormal. The methodology involves advanced digital signal processing techniques applied to PCG recordings from the PhysioNet 2016 dataset. The feature set comprises 164 attributes, including the Chroma STFT, Chroma CENS, Mel-frequency cepstral coefficients (MFCCs), and statistical features. These are refined using the ROC-AUC feature selection method to ensure optimal performance. The deep feed-forward ANN model was rigorously trained and validated on a balanced dataset. Techniques such as noise reduction and outlier detection were used to improve model training. The proposed model achieved a validation accuracy of 93.40% with sensitivity and specificity rates of 82.40% and 80.60%, respectively. These results underscore the effectiveness of harmonic-based features and the robustness of the ANN in heart sound classification. This research highlights the potential for deploying such models in non-invasive cardiac diagnostics, particularly in resource-constrained settings. It also lays the groundwork for future advancements in cardiac signal analysis. Full article
(This article belongs to the Special Issue Machine Learning in Biomedical Applications)
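The harmonic/percussive feature path could be sketched as follows with librosa; the placeholder signal, sampling rate, CQT range, and the mean/std summarization are assumptions rather than the paper's 164-attribute feature set.

```python
# Sketch: HPSS on a PCG clip, then Chroma STFT, Chroma CENS, and MFCC summaries.
import numpy as np
import librosa

sr = 4000                                            # assumed sampling rate (Hz)
y = np.random.randn(10 * sr).astype(np.float32)      # placeholder; a real recording would be loaded from audio

harmonic, percussive = librosa.effects.hpss(y)

def summarize(feat: np.ndarray) -> np.ndarray:
    """Collapse a time-varying feature matrix to per-coefficient mean and std."""
    return np.concatenate([feat.mean(axis=1), feat.std(axis=1)])

features = np.concatenate([
    summarize(librosa.feature.mfcc(y=harmonic, sr=sr, n_mfcc=13)),
    summarize(librosa.feature.chroma_stft(y=percussive, sr=sr)),
    # n_octaves kept small so the CQT stays below the Nyquist frequency at this sampling rate.
    summarize(librosa.feature.chroma_cens(y=harmonic, sr=sr, n_octaves=5)),
])
# `features` would form one row of the tabular input to the feed-forward ANN.
```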

20 pages, 1645 KiB  
Article
Classification of Acoustic Tones and Cardiac Murmurs Based on Digital Signal Analysis Leveraging Machine Learning Methods
by Nataliya Shakhovska and Ivan Zagorodniy
Computation 2024, 12(10), 208; https://doi.org/10.3390/computation12100208 - 17 Oct 2024
Cited by 2 | Viewed by 2184
Abstract
Heart murmurs are abnormal heart sounds that can indicate various heart diseases. Although traditional auscultation methods are effective, they depend heavily on the specialist’s knowledge, which makes accurate diagnosis difficult. This paper presents a machine learning-based framework for the classification of acoustic sounds and heart murmurs using digital signal analysis. Using advanced machine learning algorithms, we aim to improve the accuracy, speed, and accessibility of heart murmur detection. The proposed method includes feature extraction from digital auscultatory recordings, preprocessing using signal processing techniques, and classification using state-of-the-art machine learning models. We evaluated the performance of different machine learning algorithms, such as convolutional neural networks (CNNs), random forests (RFs), and support vector machines (SVMs), on a selected heart sound dataset. The results show that our framework achieves high accuracy in differentiating normal heart sounds from different types of heart murmurs and provides a robust tool for clinical decision-making. Full article
(This article belongs to the Special Issue Artificial Intelligence Applications in Public Health)
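A compact sketch of the classical-model comparison (Random Forest versus SVM under cross-validation on pre-extracted features; the CNN branch operating on spectrograms is omitted) is given below, with a synthetic feature matrix standing in for the heart sound data.

```python
# Sketch: cross-validated comparison of Random Forest and SVM classifiers.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 40))   # placeholder: e.g., MFCC summary features per recording
y = rng.integers(0, 2, 200)          # 0 = normal, 1 = murmur

models = {
    "RandomForest": RandomForestClassifier(n_estimators=300, random_state=0),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="accuracy")
    print(f"{name}: {scores.mean():.3f} ± {scores.std():.3f}")
```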

12 pages, 750 KiB  
Article
Phonocardiogram (PCG) Murmur Detection Based on the Mean Teacher Method
by Yi Luo, Zuoming Fu, Yantian Ding, Xiaojian Chen and Kai Ding
Sensors 2024, 24(20), 6646; https://doi.org/10.3390/s24206646 - 15 Oct 2024
Viewed by 2429
Abstract
Cardiovascular diseases (CVDs) are among the primary causes of mortality globally, highlighting the critical need for early detection to mitigate their impact. Phonocardiograms (PCGs), which record heart sounds, are essential for the non-invasive assessment of cardiac function, enabling the early identification of abnormalities such as murmurs. Particularly in underprivileged regions with high birth rates, the absence of early diagnosis poses a significant public health challenge. In pediatric populations, the analysis of PCG signals is invaluable for detecting abnormal sound waves indicative of congenital and acquired heart diseases, such as septal defects and defective cardiac valves. In the PhysioNet 2022 challenge, the murmur score is a weighted accuracy metric that reflects detection accuracy based on clinical significance. In our research, we proposed a mean teacher method tailored for murmur detection, making full use of the PhysioNet 2022 and PhysioNet 2016 PCG datasets and achieving state-of-the-art (SOTA) performance with a murmur score of 0.82 and an AUC score of 0.90. This provides an accessible, high-accuracy, non-invasive tool for early-stage CVD assessment, especially for low- and middle-income countries (LMICs). Full article
(This article belongs to the Special Issue Multi-sensor Fusion in Medical Imaging, Diagnosis and Therapy)
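The mean teacher idea referenced here pairs a student network with a teacher kept as an exponential moving average (EMA) of the student's weights, plus a consistency loss on unlabeled recordings. The sketch below shows that mechanism on toy data; the model size, EMA decay, and loss weighting are assumptions, not the paper's configuration.

```python
# Sketch of the mean teacher mechanism: EMA teacher plus a consistency loss.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64, 128), nn.ReLU(), nn.Linear(128, 2))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

@torch.no_grad()
def ema_update(teacher: nn.Module, student: nn.Module, decay: float = 0.99) -> None:
    for t_p, s_p in zip(teacher.parameters(), student.parameters()):
        t_p.mul_(decay).add_(s_p, alpha=1.0 - decay)

def train_step(x_lab, y_lab, x_unlab, consistency_weight: float = 1.0) -> float:
    sup = F.cross_entropy(student(x_lab), y_lab)
    # Consistency: student predictions should match the (frozen) teacher's on unlabeled data.
    with torch.no_grad():
        teacher_probs = F.softmax(teacher(x_unlab), dim=1)
    cons = F.mse_loss(F.softmax(student(x_unlab), dim=1), teacher_probs)
    loss = sup + consistency_weight * cons
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
    return loss.item()

# One toy step on random 64x64 "spectrogram" patches.
loss = train_step(torch.randn(8, 64, 64), torch.randint(0, 2, (8,)), torch.randn(16, 64, 64))
```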
