A Deep-Learning Approach to Heart Sound Classification Based on Combined Time-Frequency Representations
Abstract
1. Introduction
2. Methodology
2.1. Heart Sound Datasets
2.2. Preprocessing
2.3. Time-Frequency Analysis
2.3.1. Short-Time Fourier Transform
2.3.2. Mel Spectrogram
2.3.3. Synchrosqueezed Wavelet Transform
2.3.4. Combination of Time-Frequency Representations
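No implementation details are reproduced in this outline, so the following is only a minimal sketch of how the three representations named in Sections 2.3.1–2.3.4 could be computed and stacked as image channels. The libraries (librosa, ssqueezepy, scikit-image), the 2 kHz sample rate, the FFT/Mel parameters, and the 224 × 224 output size are assumptions for illustration, not the authors' settings.

```python
# Minimal sketch (assumed parameters, not the authors' pipeline): build a
# three-channel image from STFT, Mel, and synchrosqueezed-wavelet representations.
import numpy as np
import librosa
from skimage.transform import resize
from ssqueezepy import ssq_cwt  # assumed implementation of the synchrosqueezed CWT

def tfr_image(pcg, sr=2000, size=(224, 224)):
    """Stack W (SSWT), S (STFT), and M (Mel) magnitudes as one image."""
    # S: short-time Fourier transform magnitude in dB.
    s = librosa.amplitude_to_db(np.abs(librosa.stft(pcg, n_fft=256, hop_length=64)))
    # M: Mel spectrogram in dB.
    m = librosa.power_to_db(
        librosa.feature.melspectrogram(y=pcg, sr=sr, n_fft=256, hop_length=64, n_mels=64))
    # W: synchrosqueezed continuous wavelet transform magnitude (log-compressed).
    tx, *_ = ssq_cwt(pcg, fs=sr)
    w = np.log1p(np.abs(tx))

    def channel(x):
        x = resize(x, size, anti_aliasing=True)             # common image size
        return (x - x.min()) / (x.max() - x.min() + 1e-12)  # scale to [0, 1]

    # WSM-style combination: one representation per channel.
    return np.stack([channel(w), channel(s), channel(m)], axis=-1)
```

Under the same reading, the two-representation inputs reported in the results tables (WS, WM) would stack only the corresponding pair of channels.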
2.4. Machine-Learning Classifiers
Image Data Generation and Network Implementation
- optimizer: stochastic gradient descent (SGD);
- learning rate: 0.0008;
- loss function: categorical cross-entropy;
- batch size: 64 samples;
- training: up to 150 epochs, with early stopping triggered by the validation error rate (a minimal sketch of this setup follows the list).
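As a point of reference, the hyperparameters above map onto the Keras-style setup sketched below. This is not the authors' code: build_model(), the early-stopping patience, the monitored quantity, and the data arrays are illustrative assumptions.

```python
# Hedged sketch of the training configuration listed above (TensorFlow/Keras).
# build_model(), the patience value, and the data arrays are assumptions.
import tensorflow as tf

model = build_model()  # any of the CNN architectures compared in Section 3
model.compile(
    optimizer=tf.keras.optimizers.SGD(learning_rate=0.0008),  # SGD, lr = 0.0008
    loss="categorical_crossentropy",                          # categorical cross-entropy
    metrics=["accuracy"],
)

early_stop = tf.keras.callbacks.EarlyStopping(
    monitor="val_loss", patience=10, restore_best_weights=True)  # patience assumed

model.fit(
    x_train, y_train,
    validation_data=(x_val, y_val),
    batch_size=64,    # 64 samples per batch
    epochs=150,       # upper bound; early stopping may end training sooner
    callbacks=[early_stop],
)
```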
3. Results
3.1. Performance Metrics
1. True positive (TP): the number of diseased PCGs that the classifier correctly predicts as diseased.
2. False negative (FN): the number of diseased PCGs that the classifier wrongly predicts as healthy.
3. True negative (TN): the number of PCGs for which both the actual and predicted labels are healthy (i.e., normal).
4. False positive (FP): the number of healthy PCGs that the classifier incorrectly predicts as diseased. The sketch after this list shows how these four counts yield the metrics reported in Section 3.2.
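From the four counts above, the reported metrics follow their standard definitions. The helper below is a minimal sketch of those formulas, not the authors' evaluation code.

```python
# Standard classification metrics computed from the TP, FN, TN, FP counts
# defined above (illustrative helper).
def classification_metrics(tp, fn, tn, fp):
    accuracy    = (tp + tn) / (tp + tn + fp + fn)
    precision   = tp / (tp + fp)             # positive predictive value
    sensitivity = tp / (tp + fn)             # recall / true positive rate
    specificity = tn / (tn + fp)             # true negative rate
    f1_score    = 2 * precision * sensitivity / (precision + sensitivity)
    return {"accuracy": accuracy, "precision": precision,
            "sensitivity": sensitivity, "specificity": specificity,
            "F1-score": f1_score}
```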
3.2. Experimental Results
3.3. Network Complexity
3.4. Discussion
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- OECD. Health at a Glance. 2023. Available online: https://www.oecd.org/health/health-at-a-glance/ (accessed on 1 February 2025).
- World Health Organization. World Health Statistics 2024: Monitoring Health for the SDGs, Sustainable Development Goals; Technical Report; World Health Organization: Geneva, Switzerland, 2024. [Google Scholar]
- Mahnke, C.B. Automated heartsound analysis/Computer-aided auscultation: A cardiologist’s perspective and suggestions for future development. In Proceedings of the 31st Annual International Conference of the IEEE Engineering in Medicine and Biology Society: Engineering the Future of Biomedicine, EMBC 2009, Minneapolis, MN, USA, 3–6 September 2009; pp. 3115–3118. [Google Scholar] [CrossRef]
- Abbas, A.K.; Bassam, R. Phonocardiography signal processing. Synth. Lect. Biomed. Eng. 2009, 4, 1–194. [Google Scholar]
- Arnott, P.; Pfeiffer, G.; Tavel, M. Spectral analysis of heart sounds: Relationships between some physical characteristics and frequency spectra of first and second heart sounds in normals and hypertensives. J. Biomed. Eng. 1984, 6, 121–128. [Google Scholar] [PubMed]
- Choi, S.; Jiang, Z. Cardiac sound murmurs classification with autoregressive spectral analysis and multi-support vector machine technique. Comput. Biol. Med. 2010, 40, 8–20. [Google Scholar] [PubMed]
- Dwivedi, A.K.; Imtiaz, S.A.; Rodriguez-Villegas, E. Algorithms for automatic analysis and classification of heart sounds–a systematic review. IEEE Access 2018, 7, 8316–8345. [Google Scholar]
- Chen, W.; Sun, Q.; Chen, X.; Xie, G.; Wu, H.; Xu, C. Deep learning methods for heart sounds classification: A systematic review. Entropy 2021, 23, 667. [Google Scholar] [CrossRef]
- Zhu, B.; Zhou, Z.; Yu, S.; Liang, X.; Xie, Y.; Sun, Q. Review of phonocardiogram signal analysis: Insights from the PhysioNet/CinC challenge 2016 database. Electronics 2024, 13, 3222. [Google Scholar] [CrossRef]
- Sun, S.; Wang, H.; Jiang, Z.; Fang, Y.; Tao, T. Segmentation-based heart sound feature extraction combined with classifier models for a VSD diagnosis system. Expert Syst. Appl. 2014, 41, 1769–1780. [Google Scholar]
- Giordano, N.; Knaflitz, M. A novel method for measuring the timing of heart sound components through digital phonocardiography. Sensors 2019, 19, 1868. [Google Scholar] [CrossRef]
- Zhang, W.; Han, J.; Deng, S. Abnormal heart sound detection using temporal quasi-periodic features and long short-term memory without segmentation. Biomed. Signal Process. Control 2019, 53, 101560. [Google Scholar]
- Arslan, Ö. Automated detection of heart valve disorders with time-frequency and deep features on PCG signals. Biomed. Signal Process. Control 2022, 78, 103929. [Google Scholar]
- Langley, P.; Murray, A. Heart sound classification from unsegmented phonocardiograms. Physiol. Meas. 2017, 38, 1658. [Google Scholar] [PubMed]
- Her, H.L.; Chiu, H.W. Using time-frequency features to recognize abnormal heart sounds. In Proceedings of the 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 11–14 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 1145–1147. [Google Scholar]
- Fang, Y.; Leng, H.; Wang, W.; Liu, D. Multi-level feature encoding algorithm based on FBPSI for heart sound classification. Sci. Rep. 2024, 14, 29132. [Google Scholar]
- Tiwari, S.; Jain, A.; Sharma, A.K.; Almustafa, K.M. Phonocardiogram signal based multi-class cardiac diagnostic decision support system. IEEE Access 2021, 9, 110710–110722. [Google Scholar]
- Delgado-Trejos, E.; Quiceno-Manrique, A.; Godino-Llorente, J.; Blanco-Velasco, M.; Castellanos-Dominguez, G. Digital auscultation analysis for heart murmur detection. Ann. Biomed. Eng. 2009, 37, 337–353. [Google Scholar] [PubMed]
- Soeta, Y.; Bito, Y. Detection of features of prosthetic cardiac valve sound by spectrogram analysis. Appl. Acoust. 2015, 89, 28–33. [Google Scholar]
- Bozkurt, B.; Germanakis, I.; Stylianou, Y. A study of time-frequency features for CNN-based automatic heart sound classification for pathology detection. Comput. Biol. Med. 2018, 100, 132–143. [Google Scholar]
- Demir, F.; Şengür, A.; Bajaj, V.; Polat, K. Towards the classification of heart sounds based on convolutional deep neural network. Health Inf. Sci. Syst. 2019, 7, 1–9. [Google Scholar]
- Huang, N.E.; Shen, Z.; Long, S.R.; Wu, M.C.; Shih, H.H.; Zheng, Q.; Yen, N.C.; Tung, C.C.; Liu, H.H. The empirical mode decomposition and the Hilbert spectrum for nonlinear and non-stationary time series analysis. Proc. R. Soc. London. Ser. A Math. Phys. Eng. Sci. 1998, 454, 903–995. [Google Scholar]
- Obaidat, M. Phonocardiogram signal analysis: Techniques and performance comparison. J. Med Eng. Technol. 1993, 17, 221–227. [Google Scholar]
- Bentley, P.; Grant, P.; McDonnell, J. Time-frequency and time-scale techniques for the classification of native and bioprosthetic heart valve sounds. IEEE Trans. Biomed. Eng. 1998, 45, 125–128. [Google Scholar]
- Amit, G.; Gavriely, N.; Intrator, N. Cluster analysis and classification of heart sounds. Biomed. Signal Process. Control 2009, 4, 26–36. [Google Scholar] [CrossRef]
- Kay, E.; Agarwal, A. DropConnected neural networks trained on time-frequency and inter-beat features for classifying heart sounds. Physiol. Meas. 2017, 38, 1645. [Google Scholar] [CrossRef]
- Fahad, H.; Ghani Khan, M.U.; Saba, T.; Rehman, A.; Iqbal, S. Microscopic abnormality classification of cardiac murmurs using ANFIS and HMM. Microsc. Res. Tech. 2018, 81, 449–457. [Google Scholar] [CrossRef] [PubMed]
- Hamidi, M.; Ghassemian, H.; Imani, M. Classification of heart sound signal using curve fitting and fractal dimension. Biomed. Signal Process. Control 2018, 39, 351–359. [Google Scholar] [CrossRef]
- Zeng, W.; Lin, Z.; Yuan, C.; Wang, Q.; Liu, F.; Wang, Y. Detection of heart valve disorders from PCG signals using TQWT, FA-MVEMD, Shannon energy envelope and deterministic learning. Artif. Intell. Rev. 2021, 54, 6063–6100. [Google Scholar] [CrossRef]
- Khan, S.I.; Qaisar, S.M.; Pachori, R.B. Automated classification of valvular heart diseases using FBSE-EWT and PSR based geometrical features. Biomed. Signal Process. Control 2022, 73, 103445. [Google Scholar] [CrossRef]
- Jang, Y.; Jung, J.; Hong, Y.; Lee, J.; Jeong, H.; Shim, H.; Chang, H.J. Fully Convolutional Hybrid Fusion Network with Heterogeneous Representations for Identification of S1 and S2 from Phonocardiogram. IEEE J. Biomed. Health Inform. 2024, 28, 7151–7163. [Google Scholar] [CrossRef]
- Clifford, G.D.; Liu, C.; Moody, B.E.; Roig, J.M.; Schmidt, S.E.; Li, Q.; Silva, I.; Mark, R.G. Recent advances in heart sound analysis. Physiol. Meas. 2017, 38, E10–E25. [Google Scholar] [CrossRef]
- Potes, C.; Parvaneh, S.; Rahman, A.; Conroy, B. Ensemble of feature-based and deep learning-based classifiers for detection of abnormal heart sounds. In Proceedings of the 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 11–14 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 621–624. [Google Scholar]
- Zabihi, M.; Rad, A.B.; Kiranyaz, S.; Gabbouj, M.; Katsaggelos, A.K. Heart sound anomaly and quality detection using ensemble of neural networks without segmentation. In Proceedings of the 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 11–14 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 613–616. [Google Scholar]
- Kay, E.; Agarwal, A. Dropconnected neural network trained with diverse features for classifying heart sounds. In Proceedings of the 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 11–14 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 617–620. [Google Scholar]
- Zheng, Y.; Guo, X.; Ding, X. A novel hybrid energy fraction and entropy-based approach for systolic heart murmurs identification. Expert Syst. Appl. 2015, 42, 2710–2721. [Google Scholar] [CrossRef]
- Zhang, X.; Durand, L.; Senhadji, L.; Lee, H.C.; Coatrieux, J.L. Time-frequency scaling transformation of the phonocardiogram based of the matching pursuit method. IEEE Trans. Biomed. Eng. 1998, 45, 972–979. [Google Scholar] [CrossRef]
- Ibarra-Hernández, R.F.; Bertin, N.; Alonso-Arévalo, M.A.; Guillén-Ramírez, H.A. A benchmark of heart sound classification systems based on sparse decompositions. In Proceedings of the 14th International Symposium on Medical Information Processing and Analysis, Mazatlan, Mexico, 24–26 October 2018; SPIE: Bellingham, WA, USA, 2018; Volume 10975, pp. 26–38. [Google Scholar]
- Had, A.; Sabri, K.; Aoutoul, M. Detection of heart valves closure instants in phonocardiogram signals. Wirel. Pers. Commun. 2020, 112, 1569–1585. [Google Scholar]
- Sun, H.; Chen, W.; Gong, J. An improved empirical mode decomposition-wavelet algorithm for phonocardiogram signal denoising and its application in the first and second heart sound extraction. In Proceedings of the 2013 6th International Conference on Biomedical Engineering and Informatics, Hangzhou, China, 16–18 December 2013; IEEE: Piscataway, NJ, USA, 2013; pp. 187–191. [Google Scholar]
- Homsi, M.N.; Medina, N.; Hernandez, M.; Quintero, N.; Perpiñan, G.; Quintana, A.; Warrick, P. Automatic heart sound recording classification using a nested set of ensemble algorithms. In Proceedings of the 2016 Computing in Cardiology Conference (CinC), Vancouver, BC, Canada, 11–14 September 2016; IEEE: Piscataway, NJ, USA, 2016; pp. 817–820. [Google Scholar]
- Tang, H.; Dai, Z.; Jiang, Y.; Li, T.; Liu, C. PCG classification using multidomain features and SVM classifier. Biomed Res. Int. 2018, 2018, 4205027. [Google Scholar]
- Li, F.; Tang, H.; Shang, S.; Mathiak, K.; Cong, F. Classification of heart sounds using convolutional neural network. Appl. Sci. 2020, 10, 3956. [Google Scholar] [CrossRef]
- Zhang, W.; Han, J.; Deng, S. Heart sound classification based on scaled spectrogram and partial least squares regression. Biomed. Signal Process. Control 2017, 32, 20–28. [Google Scholar]
- Nogueira, D.M.; Zarmehri, M.N.; Ferreira, C.A.; Jorge, A.M.; Antunes, L. Heart sounds classification using images from wavelet transformation. In Proceedings of the EPIA Conference on Artificial Intelligence, Madeira, Portugal, 3–6 September 2019; Springer: Berlin/Heidelberg, Germany, 2019; pp. 311–322. [Google Scholar]
- Ismail, S.; Ismail, B.; Siddiqi, I.; Akram, U. PCG classification through spectrogram using transfer learning. Biomed. Signal Process. Control 2023, 79, 104075. [Google Scholar] [CrossRef]
- Saraçoğlu, R. Hidden Markov model-based classification of heart valve disease with PCA for dimension reduction. Eng. Appl. Artif. Intell. 2012, 25, 1523–1528. [Google Scholar]
- Oliveira, J.; Renna, F.; Coimbra, M. A Subject-Driven Unsupervised Hidden Semi-Markov Model and Gaussian Mixture Model for Heart Sound Segmentation. IEEE J. Sel. Top. Signal Process. 2019, 13, 323–331. [Google Scholar] [CrossRef]
- Quiceno-Manrique, A.F.; Godino-Llorente, J.I.; Blanco-Velasco, M.; Castellanos-Dominguez, G. Selection of Dynamic Features Based on Time–Frequency Representations for Heart Murmur Detection from Phonocardiographic Signals. Ann. Biomed. Eng. 2010, 38, 118–137. [Google Scholar]
- Khan, F.A.; Abid, A.; Khan, M.S. Automatic heart sound classification from segmented/unsegmented phonocardiogram signals using time and frequency features. Physiol. Meas. 2020, 41, 055006. [Google Scholar]
- Wang, M.; Guo, B.; Hu, Y.; Zhao, Z.; Liu, C.; Tang, H. Transfer Learning Models for Detecting Six Categories of Phonocardiogram Recordings. J. Cardiovasc. Dev. Dis. 2022, 9, 86. [Google Scholar] [CrossRef]
- Mahmood, A.; Dhahri, H.; Alhajla, M.; Almaslukh, A. Enhanced Classification of Phonocardiograms using Modified Deep Learning. IEEE Access 2024, 12, 178909–178916. [Google Scholar]
- Raza, A.; Mehmood, A.; Ullah, S.; Ahmad, M.; Choi, G.S.; On, B.W. Heartbeat sound signal classification using deep learning. Sensors 2019, 19, 4819. [Google Scholar] [CrossRef] [PubMed]
- Alkhodari, M.; Fraiwan, L. Convolutional and recurrent neural networks for the detection of valvular heart diseases in phonocardiogram recordings. Comput. Methods Programs Biomed. 2021, 200, 105940. [Google Scholar]
- Deperlioglu, O.; Kose, U.; Gupta, D.; Khanna, A.; Sangaiah, A.K. Diagnosis of heart diseases by a secure internet of health things system based on autoencoder deep neural network. Comput. Commun. 2020, 162, 31–50. [Google Scholar]
- Abburi, R.; Hatai, I.; Jaros, R.; Martinek, R.; Babu, T.A.; Babu, S.A.; Samanta, S. Adopting artificial intelligence algorithms for remote fetal heart rate monitoring and classification using wearable fetal phonocardiography. Appl. Soft Comput. 2024, 165, 112049. [Google Scholar]
- Jamil, S.; Roy, A.M. An efficient and robust Phonocardiography (PCG)-based Valvular Heart Diseases (VHD) detection framework using Vision Transformer (ViT). Comput. Biol. Med. 2023, 158, 106734. [Google Scholar]
- Clifford, G.D.; Liu, C.; Moody, B.; Springer, D.; Silva, I.; Li, Q.; Mark, R.G. Classification of normal/abnormal heart sound recordings: The PhysioNet/Computing in Cardiology Challenge 2016. Comput. Cardiol. 2016, 43, 609–612. [Google Scholar] [CrossRef]
- Reyna, M.; Kiarashi, Y.; Elola, A.; Oliveira, J.; Renna, F.; Gu, A.; Perez Alday, E.A.; Sadr, N.; Mattos, S.; Coimbra, M.; et al. Heart Murmur Detection from Phonocardiogram Recordings: The George B. Moody PhysioNet Challenge 2022 (version 1.0.0). PhysioNet 2023, 2, e0000324. [Google Scholar] [CrossRef]
- Oliveira, J.; Renna, F.; Costa, P.D.; Nogueira, M.; Oliveira, C.; Ferreira, C.; Jorge, A.; Mattos, S.; Hatem, T.; Tavares, T.; et al. The CirCor DigiScope dataset: From murmur detection to murmur classification. IEEE J. Biomed. Health Informatics 2021, 26, 2524–2535. [Google Scholar]
- Yaseen; Son, G.Y.; Kwon, S. Classification of heart sound signal using multiple features. Appl. Sci. 2018, 8, 2344. [Google Scholar] [CrossRef]
- Mallat, S. A Wavelet Tour of Signal Processing: The Sparse Way; Academic Press: Cambridge, MA, USA, 2008. [Google Scholar]
- Smith, J.O. Spectral Audio Signal Processing; W3K. 2011. Available online: https://ccrma.stanford.edu/~jos/sasp/ (accessed on 1 February 2025).
- Hartmann, W.M. Signals, Sound, and Sensation; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2004. [Google Scholar]
- Abdollahpur, M.; Ghaffari, A.; Ghiasi, S.; Mollakazemi, M.J. Detection of pathological heart sounds. Physiol. Meas. 2017, 38, 1616. [Google Scholar]
- Daubechies, I.; Lu, J.; Wu, H.T. Synchrosqueezed wavelet transforms: An empirical mode decomposition-like tool. Appl. Comput. Harmon. Anal. 2011, 30, 243–261. [Google Scholar]
- Ibarra-Hernández, R.F.; Alonso-Arévalo, M.A.; Cruz-Gutiérrez, A.; Licona-Chávez, A.L.; Villarreal-Reyes, S. Design and evaluation of a parametric model for cardiac sounds. Comput. Biol. Med. 2017, 89, 170–180. [Google Scholar] [PubMed]
- Auger, F.; Flandrin, P.; Lin, Y.T.; McLaughlin, S.; Meignen, S.; Oberlin, T.; Wu, H.T. Time-frequency reassignment and synchrosqueezing: An overview. IEEE Signal Process. Mag. 2013, 30, 32–41. [Google Scholar]
- Lilly, J.M.; Olhede, S.C. Generalized Morse wavelets as a superfamily of analytic wavelets. IEEE Trans. Signal Process. 2012, 60, 6036–6041. [Google Scholar]
- LeCun, Y.; Bengio, Y.; Hinton, G. Deep learning. Nature 2015, 521, 436–444. [Google Scholar] [CrossRef]
- Rosebrock, A. Deep Learning for Computer Vision with Python—Starter. Pyimageresearch. 2017, p. 332. Available online: https://pyimagesearch.com/deep-learning-computer-vision-python-book/ (accessed on 1 February 2025).
- Aggarwal, C.C. Neural Networks and Deep Learning; Springer: Berlin/Heidelberg, Germany, 2018; p. 512. [Google Scholar] [CrossRef]
- Krizhevsky, A.; Sutskever, I.; Hinton, G.E. Imagenet classification with deep convolutional neural networks. Commun. ACM 2017, 60, 84–90. [Google Scholar]
- Simonyan, K.; Zisserman, A. Very deep convolutional networks for large-scale image recognition. In Proceedings of the 3rd International Conference on Learning Representations, ICLR 2015—Conference Track Proceedings, San Diego, CA, USA, 7–9 May 2015; pp. 1–14. [Google Scholar]
- He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 770–778. [Google Scholar]
- Ullah, A.; Anwar, S.M.; Bilal, M.; Mehmood, R.M. Classification of arrhythmia by using deep learning with 2-D ECG spectral image representation. Remote. Sens. 2020, 12, 1685. [Google Scholar] [CrossRef]
- He, H.; Garcia, E.A. Learning from imbalanced data. IEEE Trans. Knowl. Data Eng. 2009, 21, 1263–1284. [Google Scholar]
- Manning, C.D. An Introduction to Information Retrieval; Cambridge University Press: Cambridge, UK, 2009. [Google Scholar]
- Liu, C.; Springer, D.; Li, Q.; Moody, B.; Juan, R.A.; Chorro, F.J.; Castells, F.; Roig, J.M.; Silva, I.; Johnson, A.E.; et al. An open access database for the evaluation of heart sound algorithms. Physiol. Meas. 2016, 37, 2181–2213. [Google Scholar]
Dataset | Subjects | Recordings | Abnormal Sounds (%) | Normal Sounds (%) | Unfit for Classification (%) |
---|---|---|---|---|---|
A | 121 | 409 | 67.5 | 28.4 | 4.2 |
B | 106 | 490 | 14.9 | 60.2 | 24.9 |
C | 31 | 31 | 64.5 | 22.6 | 12.9 |
D | 38 | 55 | 47.3 | 47.3 | 5.5 |
E | 356 | 2054 | 7.1 | 86.7 | 6.2 |
F | 112 | 114 | 27.2 | 68.4 | 4.4 |
Total | 764 | 3153 | 18.1 | 73.0 | 8.8 |
Type | Class | Number of Recordings |
---|---|---|
Normal | N | 200 |
Abnormal | AS | 200 |
Abnormal | MR | 200 |
Abnormal | MS | 200 |
Abnormal | MVP | 200 |
Total | | 1000 |
AlexNet | S | M | W | WSM | WS | WM |
---|---|---|---|---|---|---|
Accuracy | 85.89 | 85.07 | 80.59 | 99.96 | 99.95 | 98.87 |
Precision | 84.15 | 82.88 | 79.11 | 99.99 | 99.99 | 98.44 |
Sensitivity | 80.68 | 79.31 | 75.39 | 99.99 | 99.81 | 99.12 |
Specificity | 84.81 | 83.62 | 80.09 | 99.99 | 99.99 | 98.43 |
F1-score | 82.38 | 81.06 | 77.21 | 99.95 | 99.99 | 98.77 |
VGG16 | S | M | W | WSM | WS | WM |
---|---|---|---|---|---|---|
Accuracy | 75.53 | 75.22 | 74.72 | 98.92 | 98.74 | 84.96 |
Precision | 75.45 | 76.15 | 75.71 | 99.62 | 99.31 | 85.78 |
Sensitivity | 72.94 | 74.21 | 73.62 | 99.41 | 99.21 | 85.19 |
Specificity | 76.27 | 76.76 | 76.37 | 99.61 | 99.31 | 85.88 |
F1-score | 74.17 | 75.17 | 74.65 | 99.51 | 99.26 | 85.48 |
Ullah [76] | S | M | W | WSM | WS | WM |
---|---|---|---|---|---|---|
Accuracy | 72.01 | 66.51 | 67.71 | 99.82 | 99.85 | 70.31 |
Precision | 70.73 | 86.25 | 73.68 | 99.81 | 99.99 | 83.59 |
Sensitivity | 73.92 | 38.13 | 64.52 | 99.61 | 99.51 | 42.45 |
Specificity | 69.41 | 93.92 | 76.96 | 99.82 | 99.99 | 91.66 |
F1-score | 72.29 | 52.88 | 68.79 | 99.71 | 99.72 | 56.31 |
Proposed Network | S | M | W | WSM | WS | WM |
---|---|---|---|---|---|---|
Accuracy | 81.65 | 79.88 | 74.04 | 99.99 | 99.99 | 99.04 |
Precision | 84.36 | 81.11 | 76.76 | 99.99 | 99.98 | 99.03 |
Sensitivity | 77.65 | 77.87 | 68.90 | 99.98 | 99.99 | 99.05 |
Specificity | 85.65 | 81.88 | 79.15 | 99.99 | 99.98 | 99.03 |
F1-score | 80.87 | 79.46 | 72.61 | 99.99 | 99.99 | 99.04 |
ResNet50 | S | M | W | WSM | WS | WM |
---|---|---|---|---|---|---|
Accuracy | 80.25 | 80.72 | 76.46 | 99.97 | 99.98 | 98.93 |
Precision | 80.98 | 82.75 | 77.56 | 99.99 | 99.99 | 98.86 |
Sensitivity | 79.06 | 77.81 | 74.44 | 99.94 | 99.96 | 99.00 |
Specificity | 81.42 | 83.65 | 78.46 | 99.99 | 99.99 | 98.86 |
F1-score | 80.01 | 80.21 | 75.97 | 99.97 | 99.98 | 98.93 |
Architecture | Total Parameters | Trainable Parameters | Non-Trainable Parameters |
---|---|---|---|
AlexNet | 46.7 M | 46.7 M | 2752 |
VGG-16 | 134.3 M | 134.3 M | 0 |
Proposed network | 43.6 M | 43.6 M | 896 |
ResNet50 | 23.6 M | 23.5 M | 53,120 |
Ullah | 3.8 M | 3.8 M | 0 |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).