Search Results (4)

Search Parameters:
Keywords = computer-aided auscultation

22 pages, 4739 KiB  
Article
Abnormal Heart Sound Classification and Model Interpretability: A Transfer Learning Approach with Deep Learning
by Milan Marocchi, Leigh Abbott, Yue Rong, Sven Nordholm and Girish Dwivedi
J. Vasc. Dis. 2023, 2(4), 438-459; https://doi.org/10.3390/jvd2040034 - 4 Dec 2023
Cited by 4 | Viewed by 2820
Abstract
Physician detection of heart sound abnormality is complicated by the inherent difficulty of detecting critical abnormalities in the presence of noise. Computer-aided heart auscultation provides a promising alternative for more accurate detection, with recent deep learning approaches exceeding expert accuracy. Although combining phonocardiogram (PCG) data with electrocardiogram (ECG) data provides more information to an abnormal heart sound classifier, the scarcity of labelled datasets containing this combination impedes training. This paper explores fine-tuning deep convolutional neural networks such as ResNet, VGG, and Inception-v3 on images of spectrograms, mel-spectrograms, and scalograms. By fine-tuning deep pre-trained models on image representations of ECG and PCG, we achieve 91.25% accuracy on the training-a dataset of the PhysioNet Computing in Cardiology Challenge 2016, compared to a previous result of 81.48%. Interpretation of the model’s learned features is also provided, with the results indicative of clinical significance.
(This article belongs to the Section Cardiovascular Diseases)
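
As a rough illustration of the transfer-learning idea described in this abstract, the sketch below fine-tunes an ImageNet-pretrained ResNet-50 on spectrogram images with PyTorch. The directory layout, model choice, and hyperparameters are assumptions for demonstration, not the authors' actual setup.

```python
# Minimal sketch of fine-tuning a pre-trained CNN on spectrogram images,
# in the spirit of the transfer-learning approach described above.
# Directory layout and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

transform = transforms.Compose([
    transforms.Resize((224, 224)),               # input size expected by ResNet
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406],  # ImageNet statistics
                         [0.229, 0.224, 0.225]),
])

# Assumed layout: spectrograms/train/{normal,abnormal}/*.png
train_set = datasets.ImageFolder("spectrograms/train", transform=transform)
loader = DataLoader(train_set, batch_size=32, shuffle=True)

model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
model.fc = nn.Linear(model.fc.in_features, 2)    # normal vs. abnormal head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
```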

21 pages, 1337 KiB  
Review
Practicing Digital Gastroenterology through Phonoenterography Leveraging Artificial Intelligence: Future Perspectives Using Microwave Systems
by Renisha Redij, Avneet Kaur, Pratyusha Muddaloor, Arshia K. Sethi, Keirthana Aedma, Anjali Rajagopal, Keerthy Gopalakrishnan, Ashima Yadav, Devanshi N. Damani, Victor G. Chedid, Xiao Jing Wang, Christopher A. Aakre, Alexander J. Ryu and Shivaram P. Arunachalam
Sensors 2023, 23(4), 2302; https://doi.org/10.3390/s23042302 - 18 Feb 2023
Cited by 10 | Viewed by 7608
Abstract
Production of bowel sounds, established in the 1900s, has limited application in existing patient-care regimes and diagnostic modalities. We review the physiology of bowel sound production, the developments in recording technologies and the clinical application in various scenarios, to understand the potential of a bowel sound recording and analysis device, the phonoenterogram, in future gastroenterological practice. Bowel sound production depends on, but is not entirely limited to, the type of food consumed, the amount of air ingested and the type of intestinal contractions. Technologies for recording, extraction and analysis of these sounds include wavelet-based filtering, the autoregressive moving average model, multivariate empirical mode decomposition, the radial basis function network, two-dimensional positional mapping, neural network models and acoustic biosensor techniques. Prior studies have evaluated the application of bowel sounds in conditions such as intestinal obstruction, acute appendicitis, large bowel disorders such as inflammatory bowel disease and bowel polyps, ascites, post-operative ileus, sepsis, irritable bowel syndrome, diabetes mellitus, neurodegenerative disorders such as Parkinson’s disease and neonatal conditions such as hypertrophic pyloric stenosis. Recording and analysis of bowel sounds using artificial intelligence is crucial for creating an accessible, inexpensive and safe device with a broad range of clinical applications. Microwave-based digital phonoenterography has huge potential for impacting GI practice and patient care.
(This article belongs to the Special Issue Microwave and Antenna System in Medical Applications)
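
Of the recording and analysis techniques listed in this review, wavelet-based filtering is perhaps the simplest to sketch. The snippet below shows a generic wavelet-denoising pass over a hypothetical bowel-sound recording; the file name, wavelet, decomposition level, and threshold rule are illustrative assumptions, not drawn from the review.

```python
# Illustrative wavelet-denoising sketch for a (mono) bowel-sound recording,
# one of the filtering approaches mentioned in the review.
# The wavelet, level, and threshold rule are assumptions for demonstration.
import numpy as np
import pywt
from scipy.io import wavfile

fs, x = wavfile.read("bowel_sounds.wav")        # hypothetical mono recording
x = x.astype(np.float64)

coeffs = pywt.wavedec(x, "db4", level=6)        # multilevel discrete wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745  # noise estimate from finest detail level
thresh = sigma * np.sqrt(2 * np.log(len(x)))    # universal threshold

denoised_coeffs = [coeffs[0]] + [
    pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]
]
x_denoised = pywt.waverec(denoised_coeffs, "db4")
```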

13 pages, 2916 KiB  
Article
New Methods for the Acoustic-Signal Segmentation of the Temporomandibular Joint
by Marcin Kajor, Dariusz Kucharski, Justyna Grochala and Jolanta E. Loster
J. Clin. Med. 2022, 11(10), 2706; https://doi.org/10.3390/jcm11102706 - 11 May 2022
Cited by 3 | Viewed by 2450
Abstract
(1) Background: The stethoscope is one of the main accessory tools in the diagnosis of temporomandibular joint disorders (TMD). However, the clinical auscultation of the masticatory system still lacks computer-aided support, which would decrease the time needed for each diagnosis. This can be achieved with digital signal processing and classification algorithms. The segmentation of acoustic signals is usually the first step in many sound processing methodologies. We postulate that it is possible to implement the automatic segmentation of the acoustic signals of the temporomandibular joint (TMJ), which can contribute to the development of advanced TMD classification algorithms. (2) Methods: In this paper, we compare two different methods for the segmentation of TMJ sounds used in the diagnosis of the masticatory system. The first method is based solely on digital signal processing (DSP) and includes filtering and envelope calculation. The second method takes advantage of a deep learning approach based on a U-Net neural network combined with a long short-term memory (LSTM) architecture. (3) Results: Both methods were validated against our own TMJ sound database, created from signals recorded with an electronic stethoscope during a clinical diagnostic trial of the TMJ. The Dice score of the DSP method was 0.86 and its sensitivity was 0.91; for the deep learning approach, the Dice score was 0.85 and the sensitivity was 0.98. (4) Conclusions: The presented results indicate that, with the use of signal processing and deep learning, it is possible to automatically segment TMJ sounds into sections of diagnostic value. Such methods can provide representative data for the development of TMD classification algorithms.
(This article belongs to the Collection Digital Dentistry: Advances and Challenges)
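
The DSP-based segmentation method described here (filtering followed by envelope calculation) can be approximated along the following lines; the band-pass cutoffs, smoothing window, and activity threshold are illustrative assumptions rather than the authors' parameters.

```python
# Rough sketch of envelope-based segmentation of a joint-sound recording:
# band-pass filtering, Hilbert envelope, smoothing, and thresholding.
# All parameters here are illustrative, not taken from the paper.
import numpy as np
from scipy.io import wavfile
from scipy.signal import butter, sosfiltfilt, hilbert

fs, x = wavfile.read("tmj_recording.wav")       # hypothetical mono recording, fs > 2 kHz
x = x.astype(np.float64)

sos = butter(4, [50, 1000], btype="bandpass", fs=fs, output="sos")
filtered = sosfiltfilt(sos, x)

envelope = np.abs(hilbert(filtered))            # instantaneous amplitude
win = int(0.02 * fs)                            # ~20 ms moving average
smooth = np.convolve(envelope, np.ones(win) / win, mode="same")

mask = smooth > 3 * np.median(smooth)           # crude activity threshold
edges = np.flatnonzero(np.diff(mask.astype(int)))
if len(edges) % 2:                              # drop a dangling edge if activity
    edges = edges[:-1]                          # starts or ends mid-recording
segments = edges.reshape(-1, 2)                 # (start, end) sample indices
print("detected segments:", segments)
```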

20 pages, 22193 KiB  
Article
Phonocardiogram Signal Processing for Automatic Diagnosis of Congenital Heart Disorders through Fusion of Temporal and Cepstral Features
by Sumair Aziz, Muhammad Umar Khan, Majed Alhaisoni, Tallha Akram and Muhammad Altaf
Sensors 2020, 20(13), 3790; https://doi.org/10.3390/s20133790 - 6 Jul 2020
Cited by 90 | Viewed by 7923
Abstract
Congenital heart disease (CHD) is a heart disorder associated with devastating indications that result in increased mortality, increased morbidity, increased healthcare expenditure, and decreased quality of life. Ventricular Septal Defects (VSDs) and Atrial Septal Defects (ASDs) are the most common types of CHD. With early diagnosis, CHDs can be controlled before reaching a serious phase. The phonocardiogram (PCG), or heart sound auscultation, is a simple and non-invasive technique that may reveal obvious variations of different CHDs. Diagnosis based on heart sounds is difficult and requires a high level of medical training and skill, owing to human hearing limitations and the non-stationary nature of PCGs. An automated computer-aided system may boost the diagnostic objectivity and consistency of PCG signals in the detection of CHDs. The objective of this research was to assess the effects of various pattern recognition modalities for the design of an automated system that effectively differentiates normal, ASD, and VSD categories using short-term PCG time series. The proposed model adopts three-stage processing: pre-processing, feature extraction, and classification. Empirical mode decomposition (EMD) was used to denoise the raw PCG signals acquired from subjects. One-dimensional local ternary patterns (1D-LTPs) and Mel-frequency cepstral coefficients (MFCCs) were extracted from the denoised PCG signals for a precise representation of data from the different classes. In the final stage, the fused feature vector of 1D-LTPs and MFCCs was fed to a support vector machine (SVM) classifier using 10-fold cross-validation. The PCG signals were acquired from subjects admitted to local hospitals and classified by applying various experiments. The proposed methodology achieves a mean accuracy of 95.24% in classifying ASD, VSD, and normal subjects. The proposed model can be put into practice and serve as a second opinion for cardiologists by providing more objective and faster interpretations of PCG signals.
(This article belongs to the Special Issue Signal Processing Using Non-invasive Physiological Sensors)
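
A simplified sketch of the MFCC-plus-SVM stage of this pipeline is shown below, using librosa and scikit-learn; the EMD denoising and 1D-LTP features are omitted, and the file list, sampling rate, and SVM settings are assumptions for illustration.

```python
# Simplified sketch of the MFCC + SVM stage of the pipeline described above
# (EMD denoising and the 1D-LTP features are omitted for brevity).
# File list, labels, and hyperparameters are illustrative assumptions.
import numpy as np
import librosa
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mfcc_features(path, sr=2000, n_mfcc=13):
    """Load a PCG recording and return mean MFCCs as a fixed-length vector."""
    y, sr = librosa.load(path, sr=sr)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)

# Hypothetical dataset of (file path, class label) pairs.
dataset = [
    ("pcg_001.wav", "normal"),
    ("pcg_002.wav", "ASD"),
    ("pcg_003.wav", "VSD"),
    # ... more labelled recordings; 10-fold CV needs at least 10 per class
]

X = np.vstack([mfcc_features(path) for path, _ in dataset])
y = np.array([label for _, label in dataset])

clf = SVC(kernel="rbf", C=10, gamma="scale")
scores = cross_val_score(clf, X, y, cv=10)      # 10-fold cross-validation
print("mean accuracy:", scores.mean())
```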
