
Search Results (5)

Search Parameters:
Authors = Junaidi Abdullah

9 pages, 1717 KiB  
Proceeding Paper
Generative AI Respiratory and Cardiac Sound Separation Using Variational Autoencoders (VAEs)
by Arshad Jamal, R. Kanesaraj Ramasamy and Junaidi Abdullah
Comput. Sci. Math. Forum 2025, 10(1), 9; https://doi.org/10.3390/cmsf2025010009 - 1 Jul 2025
Viewed by 262
Abstract
The separation of respiratory and cardiac sounds is a significant challenge in biomedical signal processing due to their overlapping frequency and time characteristics. Traditional methods struggle with accurate extraction in noisy or diverse clinical environments. This study explores the application of machine learning, particularly convolutional neural networks (CNNs), to overcome these obstacles. Advanced machine learning models, denoising algorithms, and domain adaptation strategies address challenges such as frequency overlap, external noise, and limited labeled datasets. This study presents a robust methodology for detecting heart and lung diseases from audio signals using advanced preprocessing, feature extraction, and deep learning models. The approach integrates adaptive filtering and bandpass filtering as denoising techniques and variational autoencoders (VAEs) for feature extraction. The extracted features are input into a CNN, which classifies audio signals into different heart and lung conditions. The results highlight the potential of this combined approach for early and accurate disease detection, contributing to the development of reliable diagnostic tools for healthcare.
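The bandpass-denoising step this abstract describes can be sketched with a simple FFT-mask filter. The cutoff frequencies, signal names, and synthetic tones below are illustrative assumptions for demonstration, not values from the paper:

```python
import numpy as np

def bandpass_fft(signal, fs, low, high):
    """Zero out FFT bins outside [low, high] Hz -- a minimal bandpass denoiser."""
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    mask = (freqs >= low) & (freqs <= high)
    return np.fft.irfft(spectrum * mask, n=len(signal))

fs = 4000                                    # assumed sampling rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
cardiac = np.sin(2 * np.pi * 50 * t)         # synthetic heart tone at 50 Hz
respiratory = np.sin(2 * np.pi * 600 * t)    # synthetic breath tone at 600 Hz
mixed = cardiac + respiratory

# Separate the mixture by band: heart sounds sit lower than lung sounds.
heart_est = bandpass_fft(mixed, fs, 20, 150)
lung_est = bandpass_fft(mixed, fs, 200, 1000)
```

In the paper's full pipeline the filtered signals would then feed a VAE encoder for feature extraction; this sketch covers only the band-separation front end.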

8 pages, 1216 KiB  
Proceeding Paper
Enhanced Lung Disease Detection Using Double Denoising and 1D Convolutional Neural Networks on Respiratory Sound Analysis
by Reshma Sreejith, R. Kanesaraj Ramasamy, Wan-Noorshahida Mohd-Isa and Junaidi Abdullah
Comput. Sci. Math. Forum 2025, 10(1), 7; https://doi.org/10.3390/cmsf2025010007 - 24 Jun 2025
Viewed by 308
Abstract
The accurate and early detection of respiratory diseases is vital for effective diagnosis and treatment. This study presents a new approach for classifying lung sounds using a double denoising method combined with a 1D Convolutional Neural Network (CNN). The preprocessing uses Fast Fourier Transform to clean up sounds and High-Pass Filtering to improve the quality of breathing sounds by eliminating noise and low-frequency interruptions. The Short-Time Fourier Transform (STFT) extracts features that capture localised frequency variations, crucial for distinguishing normal and abnormal respiratory sounds. These features are input into the 1D CNN, which classifies recordings into classes such as bronchiectasis, pneumonia, asthma, COPD, URTI, and healthy. The dual denoising method enhances signal clarity and classification performance. The model achieved 96% validation accuracy, highlighting its reliability in detecting respiratory conditions. The results emphasise the effectiveness of combining signal augmentation with deep learning for automated respiratory sound analysis, with future research focusing on dataset expansion and model refinement for clinical use.
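The STFT feature-extraction stage described above can be sketched in a few lines. The frame length, hop size, and test tone here are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def stft_magnitude(x, frame_len=256, hop=128):
    """Hann-windowed magnitude STFT: one row per frame, one column per frequency bin."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(x) - frame_len) // hop
    frames = np.stack([x[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    return np.abs(np.fft.rfft(frames, axis=1))

fs = 1024
t = np.arange(4096) / fs
features = stft_magnitude(np.sin(2 * np.pi * 64 * t))  # 64 Hz test tone
```

Each row of `features` is a localized spectrum; stacked rows form the time-frequency map a 1D CNN would consume.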

11 pages, 3172 KiB  
Communication
Detection on Cell Cancer Using the Deep Transfer Learning and Histogram Based Image Focus Quality Assessment
by Md Roman Bhuiyan and Junaidi Abdullah
Sensors 2022, 22(18), 7007; https://doi.org/10.3390/s22187007 - 16 Sep 2022
Cited by 4 | Viewed by 2387
Abstract
In recent years, the number of studies using whole-slide imaging (WSIs) of histopathology slides has expanded significantly. For the development and validation of artificial intelligence (AI) systems, glass slides from retrospective cohorts including patient follow-up data have been digitized. It has become crucial to determine that the quality of such resources meets the minimum requirements for the development of AI in the future. The need for automated quality control is one of the obstacles preventing the clinical implementation of digital pathology work processes. As a consequence of the inaccuracy of scanners in determining the focus of the image, the resulting visual blur can render the scanned slide useless. Moreover, when scanned at a resolution of 20× or higher, the resulting picture size of a scanned slide is often enormous. Therefore, for digital pathology to be clinically relevant, computational algorithms must be used to rapidly and reliably measure the picture's focus quality and decide if an image requires re-scanning. We propose a metric for evaluating the quality of digital pathology images that uses a sum of even-derivative filter bases to generate a human visual-system-like kernel, which is described as the inverse of the lens' point spread function. This kernel is then used for a digital pathology image to change high-frequency image data degraded by the scanner's optics and assess the patch-level focus quality. Through several studies, we demonstrate that our technique correlates with ground-truth z-level data better than previous methods, and is computationally efficient. Using deep learning techniques, our suggested system is able to identify positive and negative cancer cells in images. We further expand our technique to create a local slide-level focus quality heatmap, which can be utilized for automated slide quality control, and we illustrate our method's value in clinical scan quality control by comparing it to subjective slide quality ratings. The proposed method, GoogleNet, VGGNet, and ResNet had accuracy values of 98.5%, 94.5%, 94.0%, and 95.0%, respectively.
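The patch-level focus scoring described above can be illustrated with a simpler stand-in: the variance of a discrete Laplacian response, a common sharpness proxy. This is not the paper's even-derivative kernel, only a minimal sketch of the idea that high-frequency content indicates focus:

```python
import numpy as np

def focus_score(patch):
    """Variance of a discrete Laplacian response: larger means sharper patch."""
    lap = (-4.0 * patch[1:-1, 1:-1]
           + patch[:-2, 1:-1] + patch[2:, 1:-1]
           + patch[1:-1, :-2] + patch[1:-1, 2:])
    return float(lap.var())

# A checkerboard has maximal pixel-level detail; a 2x2 mean filter destroys it.
yy, xx = np.indices((64, 64))
sharp = ((yy + xx) % 2).astype(float)
blurred = 0.25 * (sharp[:-1, :-1] + sharp[1:, :-1]
                  + sharp[:-1, 1:] + sharp[1:, 1:])
```

Tiling this score across a slide yields exactly the kind of patch-level focus heatmap the abstract describes.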
(This article belongs to the Section Sensing and Imaging)

16 pages, 614 KiB  
Article
Deep Dilated Convolutional Neural Network for Crowd Density Image Classification with Dataset Augmentation for Hajj Pilgrimage
by Roman Bhuiyan, Junaidi Abdullah, Noramiza Hashim, Fahmid Al Farid, Wan Noorshahida Mohd Isa, Jia Uddin and Norra Abdullah
Sensors 2022, 22(14), 5102; https://doi.org/10.3390/s22145102 - 7 Jul 2022
Cited by 9 | Viewed by 2827
Abstract
Almost two million Muslim pilgrims from all around the globe visit Mecca each year to conduct Hajj. Each year, the number of pilgrims grows, creating worries about how to handle such large crowds and avoid unpleasant accidents or crowd congestion catastrophes. In this paper, we introduce a deep Hajj crowd dilated convolutional neural network (DHCDCNNet) for crowd density analysis. This research also presents an augmentation technique to create an additional dataset based on the Hajj pilgrimage scenario. We utilized a single framework to extract both high-level and low-level features. To create the additional dataset, we divide the image augmentation process into two routes. In the first route, we utilized magnitude extraction followed by the polar magnitude. In the second route, we performed a morphological operation followed by transforming the image into a skeleton. This paper presents a solution to the challenge of measuring crowd density using a surveillance camera pointed at a distance. An FCNN-based technique for crowd analysis is included in the proposed methodology, particularly for classifying crowd density. There are several obstacles in video analysis when there are a large number of pilgrims moving around the tawaf area, with densities of between 7 and 8 per square meter. The proposed DHCDCNNet method achieved accuracies of 97%, 89%, and 100% for the JHU-CROWD dataset, the UCSD dataset, and the proposed Hajj-Crowd dataset, respectively. The proposed Hajj-Crowd dataset, the UCSD dataset, and the JHU-CROWD dataset had accuracies of 98%, 97%, and 97%, respectively, using the VGGNet approach. Using the ResNet50 approach, the proposed Hajj-Crowd dataset, the UCSD dataset, and the JHU-CROWD dataset had accuracies of 99%, 91%, and 97%, respectively.
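The dilated convolutions at the core of DHCDCNNet widen a filter's receptive field without adding parameters, by spacing its taps apart. A minimal 1-D sketch (the kernel, dilation rate, and input are illustrative, not the network's actual layers):

```python
import numpy as np

def dilated_conv1d(x, kernel, dilation):
    """Valid 1-D convolution whose taps are `dilation` samples apart,
    so a k-tap kernel covers (k - 1) * dilation + 1 input samples."""
    k = len(kernel)
    span = (k - 1) * dilation + 1          # receptive field of one output sample
    return np.array([np.dot(kernel, x[i : i + span : dilation])
                     for i in range(len(x) - span + 1)])

x = np.arange(10.0)
y = dilated_conv1d(x, np.array([1.0, 1.0, 1.0]), dilation=2)  # 3 taps span 5 samples
```

Stacking such layers with growing dilation rates lets a density-estimation network see wide crowd context at full resolution, which is the design choice the abstract's single high/low-level feature framework relies on.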
(This article belongs to the Section Sensing and Imaging)

19 pages, 704 KiB  
Review
A Structured and Methodological Review on Vision-Based Hand Gesture Recognition System
by Fahmid Al Farid, Noramiza Hashim, Junaidi Abdullah, Md Roman Bhuiyan, Wan Noor Shahida Mohd Isa, Jia Uddin, Mohammad Ahsanul Haque and Mohd Nizam Husen
J. Imaging 2022, 8(6), 153; https://doi.org/10.3390/jimaging8060153 - 26 May 2022
Cited by 70 | Viewed by 13483
Abstract
Researchers have recently focused their attention on vision-based hand gesture recognition. However, due to several constraints, achieving an effective vision-driven hand gesture recognition system in real time has remained a challenge. This paper aims to uncover the limitations faced in image acquisition through the use of cameras, image segmentation and tracking, feature extraction, and gesture classification stages of vision-driven hand gesture recognition in various camera orientations. This paper surveys research on vision-based hand gesture recognition systems from 2012 to 2022, aiming to identify areas that are improving and those that need more work. We used specific keywords to find 108 articles in well-known online databases. In this article, we put together a collection of the most notable research works related to gesture recognition. We suggest different categories for gesture recognition-related research with subcategories to create a valuable resource in this domain. We summarize and analyze the methodologies in tabular form. After comparing similar types of methodologies in the gesture recognition field, we have drawn conclusions based on our findings. Our research also examined how well vision-based systems recognized hand gestures in terms of recognition accuracy. There is a wide variation in identification accuracy, from 68% to 97%, with the average being 86.6%. The limitations considered comprise multiple interpretations of gestures and complex non-rigid hand characteristics. In comparison to current research, this paper is unique in that it discusses all types of gesture recognition techniques.
(This article belongs to the Special Issue Advances in Human Action Recognition Using Deep Learning)
