Search Results (44)

Search Parameters:
Keywords = Gabor wavelet

23 pages, 6001 KiB  
Article
Quantification of Flavonoid Contents in Holy Basil Using Hyperspectral Imaging and Deep Learning Approaches
by Apichat Suratanee, Panita Chutimanukul and Kitiporn Plaimas
Appl. Sci. 2025, 15(13), 7582; https://doi.org/10.3390/app15137582 - 6 Jul 2025
Viewed by 472
Abstract
Holy basil (Ocimum tenuiflorum L.) is a medicinal herb rich in bioactive flavonoids with therapeutic properties. Traditional quantification methods rely on time-consuming and destructive extraction processes, whereas hyperspectral imaging provides a rapid, non-destructive alternative by analysing spectral signatures. However, effectively linking hyperspectral data to flavonoid levels remains a challenge for developing early detection tools before harvest. This study integrates deep learning with hyperspectral imaging to quantify flavonoid contents in 113 samples from 26 Thai holy basil cultivars collected across diverse regions of Thailand. Two deep learning architectures, ResNet1D and CNN1D, were evaluated in combination with feature extraction techniques, including wavelet transformation and Gabor-like filtering. ResNet1D with wavelet transformation achieved optimal performance, yielding an area under the receiver operating characteristic curve (AUC) of 0.8246 and an accuracy of 0.7702 for flavonoid content classification. Cross-validation demonstrated the model’s robust predictive capability in identifying antioxidant-rich samples. Samples with the highest predicted flavonoid content were identified, and cultivars exhibiting elevated levels of both flavonoids and phenolics were highlighted across various regions of Thailand. These findings demonstrate the predictive capability of hyperspectral data combined with deep learning for phytochemical assessment. This approach offers a valuable tool for non-destructive quality evaluation and supports cultivar selection for higher phytochemical content in breeding programs and agricultural applications.
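The wavelet-transformation feature step described in this abstract can be sketched in plain NumPy. This is a toy illustration on a synthetic spectrum, not the authors' pipeline: a single-level Haar transform, applied recursively, stands in for whatever wavelet family and depth they actually used.

```python
import numpy as np

def haar_dwt(signal):
    """Single-level Haar DWT: returns (approximation, detail) coefficients."""
    s = np.asarray(signal, dtype=float)
    if len(s) % 2:                      # pad to even length
        s = np.append(s, s[-1])
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def wavelet_features(spectrum, levels=3):
    """Multi-level decomposition: concatenate the final approximation
    with per-level detail-band energies as a compact feature vector."""
    feats = []
    a = np.asarray(spectrum, dtype=float)
    for _ in range(levels):
        a, d = haar_dwt(a)
        feats.append(np.sum(d ** 2))    # detail-band energy
    return np.concatenate([a, np.array(feats)])

# toy 128-band reflectance curve with one synthetic absorption peak
bands = np.linspace(0, 1, 128)
spectrum = np.exp(-(bands - 0.55) ** 2 / 0.01)
fv = wavelet_features(spectrum)
print(fv.shape)   # 16 approximation coefficients + 3 energies
```

A vector like `fv` would then be the input to a 1D network such as the ResNet1D mentioned above.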

12 pages, 2801 KiB  
Article
Multi-Algorithm Feature Extraction from Dual Sections for the Recognition of Three African Redwoods
by Jiawen Sun, Jiashun Niu, Liren Xu, Jianping Sun and Linhong Zhao
Forests 2025, 16(7), 1043; https://doi.org/10.3390/f16071043 - 21 Jun 2025
Viewed by 321
Abstract
To address the persistent challenge of low recognition accuracy in precious wood species classification, this study proposes a novel methodology for identifying Pterocarpus santalinus, Pterocarpus tinctorius (PTD), and Pterocarpus tinctorius (Zambia). This approach synergistically integrates artificial neural networks (ANNs) with advanced image feature extraction techniques, specifically Fast Fourier Transform, Gabor Transform, Wavelet Transform, and Gray-Level Co-occurrence Matrix. Features were extracted from both transverse and longitudinal wood sections. Fifteen distinct ANN models were subsequently developed: hybrid-section models combined features from different sections using a single algorithm, while multi-algorithm models aggregated features from the same section across all four algorithms. The dual-section hybrid wavelet model (LC4) demonstrated superior performance, achieving a perfect 100% recognition accuracy. High accuracies were also observed in the four-parameter combination models for longitudinal (L5) and transverse (C5) sections, yielding 97.62% and 91.67%, respectively. Notably, 92.31% of the LC4 model’s test samples exhibited an absolute error of ≤1%, highlighting its high reliability and precision. These findings confirm the efficacy of integrating image processing with neural networks for fine-grained wood identification and underscore the exceptional discriminative power of wavelet-based features in cross-sectional data fusion.
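The Gray-Level Co-occurrence Matrix features used here can be illustrated with a minimal NumPy sketch. The texture, quantization level, and single pixel offset below are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def glcm(img, levels=8, dx=1, dy=0):
    """Normalized Gray-Level Co-occurrence Matrix for one pixel offset."""
    q = (img.astype(float) / img.max() * (levels - 1)).astype(int)
    h, w = q.shape
    m = np.zeros((levels, levels))
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_stats(p):
    """Classic Haralick-style descriptors: contrast, energy, homogeneity."""
    i, j = np.indices(p.shape)
    contrast = np.sum(p * (i - j) ** 2)
    energy = np.sum(p ** 2)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return contrast, energy, homogeneity

rng = np.random.default_rng(0)
texture = rng.integers(0, 256, size=(64, 64))   # stand-in wood-section patch
p = glcm(texture)
print(glcm_stats(p))
```

Descriptors like these, computed per section and per algorithm, would form the inputs to the ANN models the abstract describes.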
(This article belongs to the Section Wood Science and Forest Products)

17 pages, 2781 KiB  
Article
Enhancing AI-Driven Diagnosis of Invasive Ductal Carcinoma with Morphologically Guided and Interpretable Deep Learning
by Suphakon Jarujunawong and Paramate Horkaew
Appl. Sci. 2025, 15(12), 6883; https://doi.org/10.3390/app15126883 - 18 Jun 2025
Viewed by 405
Abstract
Artificial intelligence is increasingly shaping the landscape of computer-aided diagnosis of breast cancer. Despite incrementally improved accuracy, pathologist supervision remains essential for verified interpretation. While prior research focused on devising deep model architecture, this study examines the pivotal role of multi-band visual-enhanced features in invasive ductal carcinoma classification using whole slide imaging. Our results showed that orientation invariant filters achieved an accuracy of 0.8125, F1-score of 0.8134, and AUC of 0.8761, while preserving cellular arrangement and tissue morphology. By utilizing spatial relationships across varying extents, the proposed fusion strategy aligns with pathological interpretation principles. While integrating Gabor wavelet responses into ResNet-50 enhanced feature association, the comparative analysis emphasized the benefits of weighted morphological fusion, further strengthening diagnostic performance. These insights underscore the crucial role of informative filters in advancing DL schemes for breast cancer screening. Future research incorporating diverse, multi-center datasets could further validate the approach and broaden its diagnostic applications.
(This article belongs to the Special Issue Novel Insights into Medical Images Processing)

22 pages, 4582 KiB  
Article
Enhanced Object Detection in Thangka Images Using Gabor, Wavelet, and Color Feature Fusion
by Yukai Xian, Yurui Lee, Te Shen, Ping Lan, Qijun Zhao and Liang Yan
Sensors 2025, 25(11), 3565; https://doi.org/10.3390/s25113565 - 5 Jun 2025
Cited by 1 | Viewed by 549
Abstract
Thangka image detection poses unique challenges due to complex iconography, densely packed small-scale elements, and stylized color–texture compositions. Existing detectors often struggle to capture both global structures and local details and rarely leverage domain-specific visual priors. To address this, we propose a frequency- and prior-enhanced detection framework based on YOLOv11, specifically tailored for Thangka images. We introduce a Learnable Lifting Wavelet Block (LLWB) to decompose features into low- and high-frequency components, while LLWB_Down and LLWB_Up enable frequency-guided multi-scale fusion. To incorporate chromatic and directional cues, we design a Color-Gabor Block (CGBlock), a dual-branch attention module based on HSV histograms and Gabor responses, and embed it via the Color-Gabor Cross Gate (C2CG) residual fusion module. Furthermore, we redesign all detection heads with decoupled branches and introduce center-ness prediction, alongside an additional shallow detection head to improve recall for ultra-small targets. Extensive experiments on a curated Thangka dataset demonstrate that our model achieves 89.5% mAP@0.5, 59.4% mAP@[0.5:0.95], and 84.7% recall, surpassing all baseline detectors while maintaining a compact size of 20.9 M parameters. Ablation studies validate the individual and synergistic contributions of each proposed component. Our method provides a robust and interpretable solution for fine-grained object detection in complex heritage images.
(This article belongs to the Section Sensing and Imaging)

41 pages, 1802 KiB  
Review
A Systematic Review of CNN Architectures, Databases, Performance Metrics, and Applications in Face Recognition
by Andisani Nemavhola, Colin Chibaya and Serestina Viriri
Information 2025, 16(2), 107; https://doi.org/10.3390/info16020107 - 5 Feb 2025
Cited by 3 | Viewed by 5427
Abstract
This study provides a comparative evaluation of face recognition databases and Convolutional Neural Network (CNN) architectures used in training and testing face recognition systems. The databases span from early datasets like Olivetti Research Laboratory (ORL) and Facial Recognition Technology (FERET) to more recent collections such as MegaFace and Ms-Celeb-1M, offering a range of sizes, subject diversity, and image quality. Older databases, such as ORL and FERET, are smaller and cleaner, while newer datasets enable large-scale training with millions of images but pose challenges like inconsistent data quality and high computational costs. The study also examines CNN architectures, including FaceNet and Visual Geometry Group 16 (VGG16), which show strong performance on large datasets like Labeled Faces in the Wild (LFW) and VGGFace, achieving accuracy rates above 98%. In contrast, earlier models like Support Vector Machine (SVM) and Gabor Wavelets perform well on smaller datasets but lack scalability for larger, more complex datasets. The analysis highlights the growing importance of multi-task learning and ensemble methods, as seen in Multi-Task Cascaded Convolutional Networks (MTCNNs). Overall, the findings emphasize the need for advanced algorithms capable of handling large-scale, real-world challenges while optimizing accuracy and computational efficiency in face recognition systems.
(This article belongs to the Special Issue Machine Learning and Data Mining for User Classification)

25 pages, 2248 KiB  
Article
SCMs: Systematic Conglomerated Models for Audio Cough Signal Classification
by Sunil Kumar Prabhakar and Dong-Ok Won
Algorithms 2024, 17(7), 302; https://doi.org/10.3390/a17070302 - 8 Jul 2024
Cited by 1 | Viewed by 1468
Abstract
A cough is a common and natural physiological response of the human body that expels air and other waste from the airways. Coughing occurs due to environmental factors, allergic responses, pollution, or disease, and a cough can be either dry or wet depending on the amount of mucus produced. Because its characteristic sound can be monitored continuously, cough sound classification has attracted considerable interest in the research community over the last decade. In this research, three systematic conglomerated models (SCMs) are proposed for audio cough signal classification. The first conglomerated technique utilizes robust models such as the Cross-Correlation Function (CCF) and Partial Cross-Correlation Function (PCCF) model, the Least Absolute Shrinkage and Selection Operator (LASSO) model, and an elastic net regularization model with Gabor dictionary analysis, together with efficient ensemble machine learning techniques. The second technique utilizes stacked conditional autoencoders (SAEs). The third technique utilizes efficient feature extraction schemes such as the Tunable Q Wavelet Transform (TQWT), sparse TQWT, the Maximal Information Coefficient (MIC), and the Distance Correlation Coefficient (DCC), along with feature selection techniques such as the Binary Tunicate Swarm Algorithm (BTSA), aggregation functions (AFs), factor analysis (FA), and explanatory factor analysis (EFA), classified with machine learning classifiers including the kernel extreme learning machine (KELM), arc-cosine ELM, and Rat Swarm Optimization (RSO)-based KELM. The techniques are evaluated on publicly available datasets, and the results show that the highest classification accuracy of 98.99% was obtained when sparse TQWT with AF was implemented with an arc-cosine ELM classifier.
(This article belongs to the Special Issue Quantum and Classical Artificial Intelligence)

22 pages, 3024 KiB  
Article
Augmenting Aquaculture Efficiency through Involutional Neural Networks and Self-Attention for Oplegnathus Punctatus Feeding Intensity Classification from Log Mel Spectrograms
by Usama Iqbal, Daoliang Li, Zhuangzhuang Du, Muhammad Akhter, Zohaib Mushtaq, Muhammad Farrukh Qureshi and Hafiz Abbad Ur Rehman
Animals 2024, 14(11), 1690; https://doi.org/10.3390/ani14111690 - 5 Jun 2024
Cited by 7 | Viewed by 1613
Abstract
Understanding the feeding dynamics of aquatic animals is crucial for aquaculture optimization and ecosystem management. This paper proposes a novel framework for analyzing fish feeding behavior based on a fusion of spectrogram-extracted features and a deep learning architecture. Raw audio waveforms are first transformed into Log Mel Spectrograms, and a fusion of features such as the Discrete Wavelet Transform, the Gabor filter, the Local Binary Pattern, and the Laplacian High Pass Filter, followed by a well-adapted deep model, is proposed to capture crucial spectral information that can help distinguish between the various forms of fish feeding behavior. The Involutional Neural Network (INN)-based deep learning model is used for classification, achieving an accuracy of up to 97% across various temporal segments. The proposed methodology is shown to be effective in accurately classifying the feeding intensities of Oplegnathus punctatus, enabling insights pertinent to aquaculture enhancement and ecosystem management. Future work may include additional feature extraction modalities and multi-modal data integration to further our understanding and contribute towards the sustainable management of marine resources.
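The Log Mel Spectrogram front end can be sketched in plain NumPy. The sample rate, FFT size, hop length, and mel-band count below are illustrative assumptions rather than the paper's settings, and a sine wave stands in for a feeding-sound clip:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def log_mel_spectrogram(x, sr=16000, n_fft=512, hop=256, n_mels=40):
    # frame the signal and apply a Hann window
    frames = [x[i:i + n_fft] * np.hanning(n_fft)
              for i in range(0, len(x) - n_fft + 1, hop)]
    power = np.abs(np.fft.rfft(frames, axis=1)) ** 2      # (frames, bins)
    # triangular mel filterbank, equally spaced on the mel scale
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fb[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return 10.0 * np.log10(power @ fb.T + 1e-10)          # dB mel bands

sr = 16000
t = np.arange(sr) / sr
audio = np.sin(2 * np.pi * 440 * t)    # 1 s stand-in audio clip
S = log_mel_spectrogram(audio, sr)
print(S.shape)
```

A 2D map like `S` is what the texture-style descriptors (Gabor, LBP, etc.) and the INN model would then operate on.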
(This article belongs to the Special Issue Animal Health and Welfare in Aquaculture)

23 pages, 7093 KiB  
Article
Synthetic Aperture Radar Image Change Detection Based on Principal Component Analysis and Two-Level Clustering
by Liangliang Li, Hongbing Ma, Xueyu Zhang, Xiaobin Zhao, Ming Lv and Zhenhong Jia
Remote Sens. 2024, 16(11), 1861; https://doi.org/10.3390/rs16111861 - 23 May 2024
Cited by 31 | Viewed by 2702
Abstract
Synthetic aperture radar (SAR) change detection provides a powerful tool for continuous, reliable, and objective observation of the Earth, supporting a wide range of applications that require regular monitoring and assessment of changes in the natural and built environment. In this paper, we introduce a novel SAR image change detection method based on principal component analysis and two-level clustering. First, two difference images of the log-ratio and mean-ratio operators are computed, then the principal component analysis fusion model is used to fuse the two difference images, and a new difference image is generated. To incorporate contextual information during the feature extraction phase, Gabor wavelets are used to obtain the representation of the difference image across multiple scales and orientations. The maximum magnitude across all orientations at each scale is then concatenated to form the Gabor feature vector. Following this, a cascading clustering algorithm is developed within this discriminative feature space by merging the first-level fuzzy c-means clustering with the second-level neighbor rule. Ultimately, the two-level combination of the changed and unchanged results produces the final change map. Five SAR datasets are used for the experiment, and the results show that our algorithm has significant advantages in SAR change detection.
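The first stage described above, difference images from the log-ratio and mean-ratio operators fused by PCA, can be illustrated on synthetic speckle-like data. The mean-ratio form used here is one common variant and may differ from the paper's exact operator:

```python
import numpy as np

def difference_images(im1, im2, eps=1e-6):
    """Log-ratio and (a common variant of) mean-ratio change measures."""
    lr = np.abs(np.log((im1 + eps) / (im2 + eps)))
    mr = 1.0 - np.minimum(im1, im2) / (np.maximum(im1, im2) + eps)
    return lr, mr

def pca_fuse(d1, d2):
    """Fuse two difference images by projecting each pixel's pair of
    values onto the first principal component of their joint distribution."""
    X = np.stack([d1.ravel(), d2.ravel()], axis=1)
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return (Xc @ vt[0]).reshape(d1.shape)

rng = np.random.default_rng(1)
before = rng.gamma(4.0, 1.0, size=(32, 32))   # speckle-like intensities
after = before.copy()
after[8:16, 8:16] *= 3.0                      # simulated change region
fused = pca_fuse(*difference_images(before, after))
print(fused.shape)
```

The fused image would then feed the Gabor feature extraction and the two-level clustering that produce the final change map.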
(This article belongs to the Section Remote Sensing Image Processing)

20 pages, 3669 KiB  
Article
Ultrasonic Detection of Aliased Signal Separation Based on Adaptive Feature Dictionary and K–SVD Algorithm for Protective Coatings of Assembled Steel Structure
by Yiyi Liu, Ruiqi Zhou, Zhigang Wang, Qiufeng Li, Chao Lu and Haitao Wang
Coatings 2023, 13(7), 1239; https://doi.org/10.3390/coatings13071239 - 11 Jul 2023
Cited by 5 | Viewed by 1402
Abstract
When using ultrasound to detect the thickness of protective coatings on assembled steel structures, the coatings are extremely thin, which can cause echo signals to overlap and impair the detection accuracy. Therefore, the study of the separation of the superimposed signals is essential for the precise measurement of the thickness of thinner coatings. A method for signal time domain feature extraction based on an adaptive feature dictionary and K–SVD is investigated. First, the wavelet transform, which is sensitive to singular signal values, is used to identify the extreme values of the signal and use them as the new signal to be processed. Then, the feature signal extracted by wavelet transform is transformed into Hankel matrix form, and the initial feature dictionary is constructed by period segmentation and random extraction. The optimized feature dictionary is subsequently obtained by enhancing the K–SVD algorithm. Finally, the time domain signal is reconstructed using the optimized feature dictionary. Simulations and experiments demonstrate that the method is more accurate in separating mixed signals and extracting signal time domain feature information than the conventional wavelet transform and Gabor dictionary-based MP algorithm, and that it is more advantageous in detecting the thickness of protective coatings.
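K-SVD alternates sparse coding with per-atom SVD updates. A minimal 1-sparse variant in NumPy conveys the core update; random toy data stands in for the echo snippets, and the paper's adaptive dictionary construction (Hankel form, period segmentation, the enhanced K-SVD) is considerably more involved:

```python
import numpy as np

def ksvd_step(Y, D):
    """One K-SVD pass with 1-sparse coding: assign each signal to its
    best-matching atom, then refit every atom as the leading singular
    vector of the signals assigned to it."""
    D = D / np.linalg.norm(D, axis=0)           # keep atoms unit-norm
    corr = D.T @ Y                              # atom-signal correlations
    assign = np.argmax(np.abs(corr), axis=0)    # best atom per signal
    X = np.zeros((D.shape[1], Y.shape[1]))
    for k in range(D.shape[1]):
        idx = np.where(assign == k)[0]
        if idx.size == 0:
            continue                            # unused atom: leave as-is
        u, s, vt = np.linalg.svd(Y[:, idx], full_matrices=False)
        D[:, k] = u[:, 0]                       # updated atom
        X[k, idx] = s[0] * vt[0]                # matching 1-sparse codes
    return D, X

rng = np.random.default_rng(3)
Y = rng.normal(size=(20, 200))   # toy "echo snippet" training matrix
D = rng.normal(size=(20, 8))     # random initial feature dictionary
for _ in range(5):
    D, X = ksvd_step(Y, D)
print(X.shape)
```

Each iteration cannot increase the reconstruction error, since the SVD refit is the best rank-1 fit to each atom's assigned signals.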
(This article belongs to the Section Functional Polymer Coatings and Films)

17 pages, 171558 KiB  
Communication
Two Filters for Acquiring the Profiles from Images Obtained from Weak-Light Background, Fluorescence Microscope, Transmission Electron Microscope, and Near-Infrared Camera
by Yinghui Huang, Ruoxi Yang, Xin Geng, Zongan Li and Ye Wu
Sensors 2023, 23(13), 6207; https://doi.org/10.3390/s23136207 - 6 Jul 2023
Cited by 3 | Viewed by 2084
Abstract
Extracting the profiles of images is critical because it can bring simplified description and draw special attention to particular areas in the images. In our work, we designed two filters via the exponential and hypotenuse functions for profile extraction. Their ability to extract the profiles from images obtained from weak-light conditions, fluorescence microscopes, transmission electron microscopes, and near-infrared cameras is proven. Moreover, they can be used to extract the nesting structures in the images. Furthermore, their performance in extracting images degraded by Gaussian noise is evaluated. We used Gaussian white noise with a mean value of 0.9 to create very noisy images. These filters are effective for extracting the edge morphology in the noisy images. For comparison, we used several well-known filters to process these noisy images, including a filter based on the Gabor wavelet, a filter based on the watershed algorithm, and the matched filter; their performance in profile extraction is either merely comparable or ineffective when dealing with extensively noisy images. Our filters have shown potential for use in the fields of pattern recognition and object tracking.
(This article belongs to the Special Issue Advanced Biomedical Optics and Imaging)

37 pages, 8265 KiB  
Article
Quantized Information in Spectral Cyberspace
by Milton A. Garcés
Entropy 2023, 25(3), 419; https://doi.org/10.3390/e25030419 - 26 Feb 2023
Cited by 2 | Viewed by 2263
Abstract
The constant-Q Gabor atom is developed for spectral power, information, and uncertainty quantification from time–frequency representations. Stable multiresolution spectral entropy algorithms are constructed with continuous wavelet and Stockwell transforms. The recommended processing and scaling method will depend on the signature of interest, the desired information, and the acceptable levels of uncertainty of signal and noise features. Selected Lamb wave signatures and information spectra from the 2022 Tonga eruption are presented as representative case studies. Resilient transformations from physical to information metrics are provided for sensor-agnostic signal processing, pattern recognition, and machine learning applications.
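Spectral entropy, the information measure at the heart of this abstract, reduces per frame or band to the Shannon entropy of a normalized power distribution. A minimal sketch (not the constant-Q Gabor construction itself): a concentrated spectrum carries low entropy, a flat one high entropy.

```python
import numpy as np

def spectral_entropy(power, eps=1e-12):
    """Shannon entropy (bits) of a power distribution across frequency."""
    p = power / (power.sum() + eps)
    return float(-np.sum(p * np.log2(p + eps)))

rng = np.random.default_rng(0)
n = 1024
t = np.arange(n)
# a pure tone concentrates energy in one bin; white noise spreads it out
tone = np.abs(np.fft.rfft(np.sin(2 * np.pi * 128 * t / n))) ** 2
noise = np.abs(np.fft.rfft(rng.normal(size=n))) ** 2
print(spectral_entropy(tone) < spectral_entropy(noise))
```

The multiresolution algorithms in the paper apply this idea over wavelet or Stockwell coefficients rather than a single FFT frame.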
(This article belongs to the Section Signal and Data Analysis)

23 pages, 2745 KiB  
Article
Cervical Cancer Diagnosis Based on Multi-Domain Features Using Deep Learning Enhanced by Handcrafted Descriptors
by Omneya Attallah
Appl. Sci. 2023, 13(3), 1916; https://doi.org/10.3390/app13031916 - 2 Feb 2023
Cited by 52 | Viewed by 4878
Abstract
Cervical cancer, among the most frequent adverse cancers in women, could be avoided through routine checks. The Pap smear check is a widespread screening methodology for the timely identification of cervical cancer, but it is susceptible to human mistakes. Artificial Intelligence-reliant computer-aided diagnostic (CAD) methods have been extensively explored to identify cervical cancer in order to enhance the conventional testing procedure. In order to attain remarkable classification results, most current CAD systems require pre-segmentation steps for the extraction of cervical cells from a Pap smear slide, which is a complicated task. Furthermore, some CAD models use only hand-crafted feature extraction methods, which cannot guarantee the sufficiency of the classification phases. In addition, if there are few data samples, such as in cervical cell datasets, the use of deep learning (DL) alone is not the perfect choice. Moreover, most existing CAD systems obtain attributes from one domain, whereas the integration of features from multiple domains usually increases performance. Hence, this article presents a CAD model based on extracting features from multiple domains, not only one. It does not require a pre-segmentation process and is thus less complex than existing methods. It employs three compact DL models to obtain high-level spatial deep features rather than utilizing an individual DL model with a large number of parameters and layers, as used in current CADs. Moreover, it retrieves several statistical and textural descriptors from multiple domains, including the spatial and time–frequency domains, instead of employing features from a single domain, to provide a clearer representation of cervical cancer features. It examines the influence of each set of handcrafted attributes on diagnostic accuracy both independently and in combination. It then examines the consequences of combining each DL feature set obtained from each CNN with the combined handcrafted features. Finally, it uses principal component analysis to merge the entire set of DL features with the combined handcrafted features to investigate the effect of merging numerous DL features with various handcrafted features on classification results. With only 35 principal components, the accuracy achieved by the quartic SVM of the proposed CAD reached 100%. The performance of the described CAD proves that combining several DL features with numerous handcrafted descriptors from multiple domains is able to boost diagnostic accuracy. Additionally, the comparative performance analysis, along with other present studies, shows the competing capacity of the proposed CAD.
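The final PCA merge of deep and handcrafted features can be sketched as follows. The feature matrices are random stand-ins and the dimensionalities are hypothetical; only the 35-component reduction is taken from the abstract:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a (samples x features) matrix onto its top principal
    components via SVD of the centered data."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T

rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(100, 512))       # stand-in CNN features
handcrafted = rng.normal(size=(100, 60))       # stand-in texture descriptors
merged = np.hstack([deep_feats, handcrafted])  # concatenate both domains
reduced = pca_reduce(merged, n_components=35)  # 35 PCs, as in the abstract
print(reduced.shape)
```

The 35-dimensional vectors would then be fed to the SVM classifier described above.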
(This article belongs to the Special Issue Artificial Intelligence (AI) in Healthcare)

21 pages, 3830 KiB  
Article
GabROP: Gabor Wavelets-Based CAD for Retinopathy of Prematurity Diagnosis via Convolutional Neural Networks
by Omneya Attallah
Diagnostics 2023, 13(2), 171; https://doi.org/10.3390/diagnostics13020171 - 4 Jan 2023
Cited by 34 | Viewed by 4417
Abstract
One of the most serious and dangerous ocular problems in premature infants is retinopathy of prematurity (ROP), a proliferative vascular disease. Ophthalmologists can use automatic computer-assisted diagnostic (CAD) tools to help them make a safe, accurate, and low-cost diagnosis of ROP. All previous CAD tools for ROP diagnosis use the original fundus images. Unfortunately, learning a discriminative representation from ROP-related fundus images is difficult. Textural analysis techniques, such as Gabor wavelets (GW), can expose significant texture information that helps artificial intelligence (AI) based models improve diagnostic accuracy. In this paper, an effective and automated CAD tool, namely GabROP, based on GW and multiple deep learning (DL) models is proposed. Initially, GabROP analyzes fundus images using GW and generates several sets of GW images. Next, these sets of images are used to train three convolutional neural network (CNN) models independently; the original fundus images are also used to train these networks. Using the discrete wavelet transform (DWT), texture features retrieved from every CNN trained with the various sets of GW images are combined to create a textural-spectral-temporal representation. Afterward, for each CNN, these features are concatenated with spatial deep features obtained from the original fundus images. Finally, the concatenated features of all three CNNs are incorporated using the discrete cosine transform (DCT) to reduce the feature dimensionality caused by the fusion process. The outcomes of GabROP show that it is accurate and efficient for ophthalmologists. Additionally, the effectiveness of GabROP is compared to recently developed ROP diagnostic techniques. Due to GabROP’s superior performance compared to competing tools, ophthalmologists may be able to identify ROP more reliably and precisely, which could result in a reduction in diagnostic effort and examination time.
(This article belongs to the Special Issue Advances in Retinopathy)

28 pages, 5448 KiB  
Article
Towards a Real-Time Oil Palm Fruit Maturity System Using Supervised Classifiers Based on Feature Analysis
by Meftah Salem M. Alfatni, Siti Khairunniza-Bejo, Mohammad Hamiruce B. Marhaban, Osama M. Ben Saaed, Aouache Mustapha and Abdul Rashid Mohamed Shariff
Agriculture 2022, 12(9), 1461; https://doi.org/10.3390/agriculture12091461 - 14 Sep 2022
Cited by 24 | Viewed by 5913
Abstract
Remote sensing sensor-based image processing techniques have been widely applied in non-destructive quality inspection systems for agricultural crops. Image processing and analysis were performed with computer vision and external grading systems using general and standard steps, such as image acquisition, pre-processing and segmentation, and extraction and classification of image characteristics. This paper describes the design and implementation of a real-time fresh fruit bunch (FFB) maturity classification system for palm oil based on unrestricted remote sensing (a CCD camera sensor) and image processing techniques, using five multivariate techniques (statistics, histograms, Gabor wavelets, GLCM and BGLAM) to extract fruit image characteristics and incorporate information on palm oil species classification of FFB and maturity testing. To optimize the proposed solution in terms of performance reporting and processing time, supervised classifiers, such as the support vector machine (SVM), K-nearest neighbor (KNN) and artificial neural network (ANN), were applied and evaluated via ROC and AUC measurements. The experimental results showed that the real-time, non-destructive palm oil FFB maturity classification system provided a significant result. Although the SVM classifier is generally a robust classifier, the ANN performed better due to the natural noise of the data. The highest precision was obtained with the ANN and BGLAM algorithms applied to the fruit texture. In particular, the robust image processing algorithm based on BGLAM feature extraction and the ANN classifier provided a high AUC test accuracy of over 93% and an image-processing time of 0.44 s for the detection of FFB palm oil species.
(This article belongs to the Special Issue Digital Innovations in Agriculture)

13 pages, 2772 KiB  
Article
Multifilters-Based Unsupervised Method for Retinal Blood Vessel Segmentation
by Nayab Muzammil, Syed Ayaz Ali Shah, Aamir Shahzad, Muhammad Amir Khan and Rania M. Ghoniem
Appl. Sci. 2022, 12(13), 6393; https://doi.org/10.3390/app12136393 - 23 Jun 2022
Cited by 21 | Viewed by 3139
Abstract
Fundus imaging is one of the crucial methods that help ophthalmologists diagnose various eye diseases in modern medicine. An accurate vessel segmentation method can be a convenient tool to foresee and analyze fatal diseases, including hypertension or diabetes, which damage the retinal vessels’ appearance. This work suggests an unsupervised approach for vessel segmentation from retinal images. The proposed method includes multiple steps. Firstly, the green channel is extracted from the colored retinal image and preprocessed utilizing Contrast Limited Histogram Equalization as well as Fuzzy Histogram Based Equalization for contrast enhancement. To remove geometrical objects (macula, optic disk) and noise, top-hat morphological operations are used. On the resulting enhanced image, a matched filter and a Gabor wavelet filter are applied, and the outputs from both are added to extract vessel pixels. The resulting image, with the blood vessels now noticeable, is binarized using the human visual system (HVS). A final image of segmented blood vessels is obtained by applying post-processing. The suggested method is assessed on two public datasets (DRIVE and STARE) and showed comparable results with regard to sensitivity, specificity and accuracy. The results achieved for sensitivity, specificity and accuracy on the DRIVE database are 0.7271, 0.9798 and 0.9573, and on the STARE database 0.7164, 0.9760, and 0.9560, respectively, in less than 3.17 s on average per image.
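The Gabor wavelet filtering step can be illustrated with a small orientation bank in NumPy/SciPy. The synthetic image, kernel parameters, and simple mean-plus-std threshold are illustrative stand-ins for the paper's matched filter combination and HVS-based binarization:

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(ksize=15, sigma=3.0, theta=0.0, lam=8.0, gamma=0.5):
    """Real-valued Gabor kernel: a cosine carrier under a Gaussian envelope,
    elongated along the orientation theta."""
    half = ksize // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1].astype(float)
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    g = np.exp(-(xr**2 + gamma**2 * yr**2) / (2 * sigma**2))
    return g * np.cos(2 * np.pi * xr / lam)

def vessel_response(img, n_orient=12):
    """Maximum response over an orientation bank, so vessels at any
    angle light up."""
    responses = [convolve(img, gabor_kernel(theta=np.pi * k / n_orient))
                 for k in range(n_orient)]
    return np.max(responses, axis=0)

rng = np.random.default_rng(2)
green = rng.random((64, 64))            # stand-in for the green channel
green[:, 30:34] += 2.0                  # a synthetic vertical "vessel"
resp = vessel_response(green)
mask = resp > resp.mean() + resp.std()  # crude threshold stand-in
print(mask.shape)
```

In the paper, this response is summed with a matched-filter response before binarization rather than thresholded directly.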
