Search Results (54)

Search Parameters:
Keywords = Haralick

18 pages, 4523 KB  
Article
Remote Sensing of Nematode Stress in Coffee: UAV-Based Multispectral and Thermal Imaging Approaches
by Daniele de Brum, Gabriel Araújo e Silva Ferraz, Luana Mendes dos Santos, Felipe Augusto Fernandes, Marco Antonio Zanella, Patrícia Ferreira Ponciano Ferraz, Willian César Terra, Vicente Paulo Campos, Thieres George Freire da Silva, Ênio Farias de França e Silva and Alexsandro Oliveira da Silva
AgriEngineering 2026, 8(1), 22; https://doi.org/10.3390/agriengineering8010022 - 8 Jan 2026
Abstract
Early and non-destructive detection of plant-parasitic nematodes is critical for implementing site-specific management in coffee production systems. This study evaluated the potential of unmanned aerial vehicle (UAV) multispectral and thermal imaging, combined with textural analysis, to detect Meloidogyne exigua infestation in Coffea arabica (Topázio variety). Field surveys were conducted in two contrasting seasons (dry and rainy), and nematode incidence was identified and quantified by counting root galls. Vegetation indices (NDVI, GNDVI, NGRDI, NDRE, OSAVI), individual spectral bands, canopy temperature, and Haralick texture features were extracted from UAV-derived imagery and correlated with gall counts. Under the conditions of this experiment, strong correlations were observed between gall number and the red spectral band in both seasons (R > 0.60), while GNDVI (dry season) and NGRDI (rainy season) showed strong negative correlations with gall density. Thermal imaging revealed moderate positive correlations with infestation levels during the dry season, indicating potential for early stress detection when foliar symptoms were absent. Texture metrics from the red and green bands further improved detection capacity, particularly with a 3 × 3 pixel window at 135°. These results demonstrate that UAV-based multispectral and thermal imaging, enhanced by texture analysis, can provide reliable early indicators of nematode infestation in coffee. Full article
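The texture step described above can be sketched with standard tooling. This is a minimal illustration, not the authors' pipeline: it assumes scikit-image's graycomatrix/graycoprops as the GLCM implementation, uses random placeholder canopy patches and gall counts, and keeps only the distance-1, 135° offset mentioned in the abstract.

```python
# Minimal sketch (assumptions noted above): GLCM contrast from red-band patches at a
# 135-degree offset, correlated against root-gall counts. Inputs are placeholders.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.util import img_as_ubyte
from scipy.stats import pearsonr

def glcm_contrast(patch, angle=3 * np.pi / 4):
    """GLCM contrast of a small canopy patch, distance 1, 135-degree direction."""
    p = img_as_ubyte((patch - patch.min()) / (np.ptp(patch) + 1e-9))
    glcm = graycomatrix(p, distances=[1], angles=[angle],
                        levels=256, symmetric=True, normed=True)
    return graycoprops(glcm, "contrast")[0, 0]

rng = np.random.default_rng(0)
patches = [rng.random((9, 9)) for _ in range(30)]   # per-plant red-band patches
galls = rng.integers(0, 200, size=30)               # per-plant gall counts

contrast = np.array([glcm_contrast(p) for p in patches])
r, p_value = pearsonr(contrast, galls)
print(f"red-band GLCM contrast vs. gall count: R={r:.2f} (p={p_value:.3f})")
```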

23 pages, 6739 KB  
Article
SPX-GNN: An Explainable Graph Neural Network for Harnessing Long-Range Dependencies in Tuberculosis Classifications in Chest X-Ray Images
by Muhammed Ali Pala and Muhammet Burhan Navdar
Diagnostics 2025, 15(24), 3236; https://doi.org/10.3390/diagnostics15243236 - 18 Dec 2025
Viewed by 463
Abstract
Background/Objectives: Traditional medical image analysis methods often suffer from locality bias, limiting their ability to model long-range contextual relationships between spatially distributed anatomical structures. To overcome this challenge, this study proposes SPX-GNN (Superpixel Explainable Graph Neural Network). This novel method reformulates image analysis as a structural graph learning problem, capturing both local anomalies and global topological patterns in a holistic manner. Methods: The proposed framework decomposes images into semantically coherent superpixel regions, converting them into graph nodes that preserve topological relationships. Each node is enriched with a comprehensive feature vector encoding complementary diagnostic clues, including colour (CIELAB), texture (LBP and Haralick), shape (Hu moments), and spatial location. A Graph Neural Network is then employed to learn the relational dependencies between these enriched nodes. The method was rigorously evaluated using 5-fold stratified cross-validation on a public dataset comprising 4200 chest X-ray images. Results: SPX-GNN demonstrated exceptional performance in tuberculosis classification, achieving a mean accuracy of 99.82%, an F1-score of 99.45%, and a ROC-AUC of 100.00%. Furthermore, an integrated Explainable Artificial Intelligence module addresses the black box problem by generating semantic importance maps, which illuminate the decision mechanism and enhance clinical reliability. Conclusions: SPX-GNN offers a novel approach that successfully combines high diagnostic accuracy with methodological transparency. By providing a robust and interpretable workflow, this study presents a promising solution for medical imaging tasks where structural information is critical, paving the way for more reliable clinical decision support systems. Full article
(This article belongs to the Section Machine Learning and Artificial Intelligence in Diagnostics)
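As a rough illustration of the graph-construction idea, the sketch below decomposes an image into SLIC superpixels and builds a node-feature matrix plus an edge list that a graph neural network library could consume. The feature choices (mean CIELAB colour, an LBP histogram, a normalised centroid) only approximate the richer descriptor set named in the abstract, and the random input image is a placeholder.

```python
# Hedged sketch: superpixel nodes with simple per-region features and adjacency edges.
import numpy as np
from skimage.segmentation import slic
from skimage.color import gray2rgb, rgb2lab
from skimage.feature import local_binary_pattern
from skimage.util import img_as_ubyte

def image_to_graph(gray, n_segments=300):
    rgb = gray2rgb(gray)
    labels = slic(rgb, n_segments=n_segments, compactness=10, start_label=0)
    lab = rgb2lab(rgb)
    lbp = local_binary_pattern(img_as_ubyte(gray), P=8, R=1, method="uniform")

    feats = []
    for s in range(labels.max() + 1):
        m = labels == s
        hist, _ = np.histogram(lbp[m], bins=10, range=(0, 10), density=True)
        ys, xs = np.nonzero(m)
        feats.append(np.concatenate([lab[m].mean(axis=0),         # mean CIELAB colour
                                     hist,                        # LBP texture histogram
                                     [ys.mean() / gray.shape[0],  # normalised centroid
                                      xs.mean() / gray.shape[1]]]))
    x = np.vstack(feats)

    # Edges: superpixels that touch horizontally or vertically are connected.
    edges = set()
    for a, b in [(labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])]:
        diff = a != b
        edges.update(zip(a[diff], b[diff]))
    edge_index = np.array(sorted(edges)).T   # shape (2, num_edges); feed to a GNN library
    return x, edge_index

x, edge_index = image_to_graph(np.random.rand(128, 128))
print(x.shape, edge_index.shape)
```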

23 pages, 15283 KB  
Article
Quality Assessment of Despeckling Filters Based on the Analysis of Ratio Images
by Rubén Darío Vásquez-Salazar, William S. Puche, Alejandro C. Frery and Luis Gómez
Remote Sens. 2025, 17(24), 4048; https://doi.org/10.3390/rs17244048 - 17 Dec 2025
Viewed by 285
Abstract
We present a quantitative and qualitative evaluation of despeckling filters based on a set of Haralick-derived features and the Jensen–Shannon Divergence obtained from ratio images. To that aim, we propose a normalized composite index, called the Texture-Divergence Measurement (TDM), that describes the statistical and structural behavior of the filtered images. Complementary qualitative analysis using Image Horizontal Visibility Graphs (IHVGs) confirms the results of the proposed metric. The combination of the proposed TDM metric and IHVG visualization provides a robust framework for assessing despeckling performance from both statistical and structural perspectives. Full article
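The evaluation idea can be sketched as follows; this is not the paper's TDM definition, only an illustration of the two ingredients it combines. The ratio image between the noisy input and the filtered output is summarised with GLCM features (any residual structure indicates removed detail) and with a Jensen–Shannon divergence against the expected speckle distribution. All inputs below are synthetic.

```python
# Rough sketch: ratio-image statistics for judging a despeckling filter.
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from scipy.spatial.distance import jensenshannon

rng = np.random.default_rng(1)
clean = rng.gamma(shape=5.0, scale=20.0, size=(256, 256))
speckle = rng.gamma(shape=4.0, scale=1.0 / 4.0, size=clean.shape)  # 4-look speckle
noisy = clean * speckle
filtered = clean * 1.02                      # stand-in for a despeckling filter output

ratio = noisy / np.maximum(filtered, 1e-9)   # ideally pure speckle, no structure

# Structural residue: GLCM descriptors of the quantised ratio image.
q = np.clip(ratio / ratio.max() * 63, 0, 63).astype(np.uint8)
glcm = graycomatrix(q, distances=[1], angles=[0], levels=64, symmetric=True, normed=True)
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]
correlation = graycoprops(glcm, "correlation")[0, 0]

# Statistical residue: JS divergence between the ratio histogram and ideal speckle.
bins = np.linspace(0, 3, 61)
h_ratio, _ = np.histogram(ratio, bins=bins, density=True)
h_ideal, _ = np.histogram(speckle, bins=bins, density=True)
jsd = jensenshannon(h_ratio + 1e-12, h_ideal + 1e-12)

print(f"homogeneity={homogeneity:.3f}  correlation={correlation:.3f}  JSD={jsd:.3f}")
```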

16 pages, 2076 KB  
Article
Mortality Prediction from Patient’s First Day PAAC Radiograph in Internal Medicine Intensive Care Unit Using Artificial Intelligence Methods
by Orhan Gok, Türker Fedai Cavus, Ahmed Cihad Genc, Selcuk Yaylaci and Lacin Tatli Ayhan
Diagnostics 2025, 15(24), 3138; https://doi.org/10.3390/diagnostics15243138 - 10 Dec 2025
Viewed by 350
Abstract
Introduction: This study aims to predict mortality using chest radiographs obtained on the first day of intensive care admission, thereby contributing to better planning of doctors’ treatment strategies and more efficient use of limited resources through early and accurate predictions. Methods: We retrospectively analyzed 510 ICU patients. After data augmentation, a total of 3019 chest radiographs were used for model training and validation, while an independent, non-augmented test set of 100 patients (100 images) was reserved for final evaluation. Seventy-four (74) radiomic features were extracted from the images and analyzed using machine learning algorithms. Model performances were evaluated using the area under the ROC curve (AUC), sensitivity, and specificity metrics. Results: A total of 3019 data samples were included in the study. Through feature selection methods, the initial 74 features were gradually reduced to 10. The Subspace KNN algorithm demonstrated the highest prediction accuracy, achieving AUC 0.88, sensitivity 0.80, and specificity 0.87. Conclusions: Machine learning algorithms such as Subspace KNN and features obtained from PAAC radiographs, such as GLCM Contrast, Kurtosis, Cobb angle, Haralick, Bilateral Infiltrates, Cardiomegaly, Skewness, Unilateral Effusion, Median Intensity, and Intensity Range, are promising tools for mortality prediction in patients hospitalized in the internal medicine intensive care unit. These tools can be integrated into clinical decision support systems to provide benefits in patient management. Full article
(This article belongs to the Special Issue 3rd Edition: AI/ML-Based Medical Image Processing and Analysis)
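A hedged sketch of the modelling step: "Subspace KNN" is approximated here with scikit-learn's BaggingClassifier trained on random feature subspaces of a KNN base learner, which is the usual way to reproduce that MATLAB ensemble; the 74 radiomic features and mortality labels are simulated stand-ins.

```python
# Random-subspace KNN approximation on synthetic radiomic features.
from sklearn.datasets import make_classification
from sklearn.ensemble import BaggingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=600, n_features=74, n_informative=10,
                           weights=[0.7, 0.3], random_state=0)

subspace_knn = make_pipeline(
    StandardScaler(),
    BaggingClassifier(KNeighborsClassifier(n_neighbors=7),
                      n_estimators=30,
                      max_features=0.5,   # each learner sees a random feature subset
                      bootstrap=False,    # subspace method: subsample features, not rows
                      random_state=0),
)
auc = cross_val_score(subspace_knn, X, y, cv=5, scoring="roc_auc")
print(f"5-fold ROC-AUC: {auc.mean():.2f} +/- {auc.std():.2f}")
```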

22 pages, 1171 KB  
Article
Feature Extraction and Comparative Analysis of Firing Pin, Breech Face, and Annulus Impressions from Ballistic Cartridge Images
by Sangita Baruah, R. Suresh, Rajesh Babu Govindarajulu, Chandan Jyoti Kumar, Bibhakar Chanda, Lakshya Dugar and Manob Jyoti Saikia
Forensic Sci. 2025, 5(4), 62; https://doi.org/10.3390/forensicsci5040062 - 12 Nov 2025
Viewed by 2614
Abstract
Background/Objectives: Toolmark analysis on cartridge cases offers critical insights in forensic ballistics, as the impressions left on cartridge cases by firearm components—such as the firing pin, breech face, and annulus—carry distinctive patterns and act as unique identifiers that can be used for firearm linkage. This study aims to develop a systematic and interpretable feature extraction pipeline for these regions to support future automation and comparison studies in forensic cartridge case analysis. Methods: A dataset of 20 high-resolution cartridge case images was prepared, and each region of interest (firing pin impression, breech face, and annulus) was manually annotated using the LabelMe tool. ImageJ and Python-based scripts were employed for feature extraction, capturing geometric descriptors (area, perimeter, circularity, and eccentricity) and texture-based features (Local Binary Patterns and Haralick statistics). In total, 61 quantitative features were derived from the annotated regions. Similarity between cartridge cases was evaluated using Euclidean distance metrics after normalization. Results: The extracted and calibrated region-wise geometric and texture features demonstrated distinct variation patterns across firing pin, breech face, and annulus regions. Pairwise similarity analysis revealed measurable intra-class differences, indicating the discriminative potential of the extracted features even within cartridges likely fired from the same firearm. Conclusions: This study provides a foundational, region-wise quantitative framework for analysing cartridge case impressions. The extracted dataset and similarity outcomes establish a baseline for subsequent research on firearm identification and model-based classification in forensic ballistics. Full article
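A minimal sketch of the feature-and-similarity pipeline, under stated assumptions: region masks, images, and the exact descriptor list are placeholders, and scikit-image stands in for the ImageJ/Python tooling named in the abstract. It computes a few of the geometric descriptors (area, perimeter, circularity, eccentricity), LBP and GLCM texture values, then min-max normalises and compares cartridges by Euclidean distance.

```python
# Hedged sketch: region-wise geometric + texture features and pairwise Euclidean distances.
import numpy as np
from skimage.measure import regionprops, label
from skimage.feature import local_binary_pattern, graycomatrix, graycoprops
from scipy.spatial.distance import cdist

def region_features(gray_u8, mask):
    props = regionprops(label(mask))[0]
    circularity = 4 * np.pi * props.area / (props.perimeter ** 2 + 1e-9)
    geometric = [props.area, props.perimeter, circularity, props.eccentricity]

    lbp = local_binary_pattern(gray_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp[mask], bins=10, range=(0, 10), density=True)

    glcm = graycomatrix(np.where(mask, gray_u8, 0).astype(np.uint8),
                        distances=[1], angles=[0], levels=256,
                        symmetric=True, normed=True)
    texture = [graycoprops(glcm, p)[0, 0]
               for p in ("contrast", "homogeneity", "energy", "correlation")]
    return np.concatenate([geometric, lbp_hist, texture])

def pairwise_similarity(feature_rows):
    F = np.vstack(feature_rows)
    F = (F - F.min(axis=0)) / (np.ptp(F, axis=0) + 1e-9)   # min-max normalisation
    return cdist(F, F, metric="euclidean")                  # lower = more similar

rng = np.random.default_rng(2)
cases = [(rng.random((200, 200)) * 255).astype(np.uint8) for _ in range(4)]
mask = np.zeros((200, 200), dtype=bool); mask[60:140, 60:140] = True
print(pairwise_similarity([region_features(c, mask) for c in cases]).round(2))
```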

16 pages, 1247 KB  
Article
Non-Invasive Retinal Pathology Assessment Using Haralick-Based Vascular Texture and Global Fundus Color Distribution Analysis
by Ouafa Sijilmassi
J. Imaging 2025, 11(9), 321; https://doi.org/10.3390/jimaging11090321 - 19 Sep 2025
Viewed by 542
Abstract
This study analyzes retinal fundus images to distinguish healthy retinas from those affected by diabetic retinopathy (DR) and glaucoma using a dual-framework approach: vascular texture analysis and global color distribution analysis. The texture-based approach involved segmenting the retinal vasculature and extracting eight Haralick texture features from the Gray-Level Co-occurrence Matrix. Significant differences in features such as energy, contrast, correlation, and entropy were found between healthy and pathological retinas. Pathological retinas exhibited lower textural complexity and higher uniformity, which correlates with vascular thinning and structural changes observed in DR and glaucoma. In parallel, the global color distribution of the full fundus area was analyzed without segmentation. RGB intensity histograms were calculated for each channel and averaged across groups. Statistical tests revealed significant differences, particularly in the green and blue channels. The Mahalanobis distance quantified the separability of the groups per channel. These results indicate that pathological changes in retinal tissue can also lead to detectable chromatic shifts in the fundus. The findings underscore the potential of both vascular texture and color features as non-invasive biomarkers for early retinal disease detection and classification. Full article
(This article belongs to the Special Issue Emerging Technologies for Less Invasive Diagnostic Imaging)
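The colour-distribution half of the analysis can be sketched as below, with assumed details: a per-image green-channel intensity histogram per group, and a Mahalanobis distance between group means under a pooled, regularised covariance as the separability measure. The fundus images are random placeholders.

```python
# Sketch: per-channel histograms and Mahalanobis separability between groups.
import numpy as np
from scipy.spatial.distance import mahalanobis

def channel_hist(img_u8, channel=1, bins=32):
    h, _ = np.histogram(img_u8[..., channel], bins=bins, range=(0, 256), density=True)
    return h

rng = np.random.default_rng(3)
healthy = np.stack([channel_hist((rng.random((64, 64, 3)) * 255).astype(np.uint8))
                    for _ in range(40)])
disease = np.stack([channel_hist((rng.random((64, 64, 3)) * 180).astype(np.uint8))
                    for _ in range(40)])

pooled = np.cov(np.vstack([healthy, disease]).T) + 1e-6 * np.eye(healthy.shape[1])
d = mahalanobis(healthy.mean(axis=0), disease.mean(axis=0), np.linalg.inv(pooled))
print(f"green-channel Mahalanobis separability: {d:.2f}")
```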

23 pages, 5770 KB  
Article
Assessment of Influencing Factors and Robustness of Computable Image Texture Features in Digital Images
by Diego Andrade, Howard C. Gifford and Mini Das
Tomography 2025, 11(8), 87; https://doi.org/10.3390/tomography11080087 - 31 Jul 2025
Viewed by 831
Abstract
Background/Objectives: There is significant interest in using texture features to extract hidden image-based information. In medical imaging applications using radiomics, AI, or personalized medicine, the quest is to extract patient or disease specific information while being insensitive to other system or processing variables. While we use digital breast tomosynthesis (DBT) to show these effects, our results would be generally applicable to a wider range of other imaging modalities and applications. Methods: We examine factors in texture estimation methods, such as quantization, pixel distance offset, and region of interest (ROI) size, that influence the magnitudes of these readily computable and widely used image texture features (specifically Haralick’s gray level co-occurrence matrix (GLCM) textural features). Results: Our results indicate that quantization is the most influential of these parameters, as it controls the size of the GLCM and range of values. We propose a new multi-resolution normalization (by either fixing ROI size or pixel offset) that can significantly reduce quantization magnitude disparities. We show reduction in mean differences in feature values by orders of magnitude; for example, reducing it to 7.34% between quantizations of 8–128, while preserving trends. Conclusions: When combining images from multiple vendors in a common analysis, large variations in texture magnitudes can arise due to differences in post-processing methods like filters. We show that significant changes in GLCM magnitude variations may arise simply due to the filter type or strength. These trends can also vary based on estimation variables (like offset distance or ROI) that can further complicate analysis and robustness. We show pathways to reduce sensitivity to such variations due to estimation methods while increasing the desired sensitivity to patient-specific information such as breast density. Finally, we show that our results obtained from simulated DBT images are consistent with what we see when applied to clinical DBT images. Full article
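A short worked illustration of the quantisation effect highlighted above (synthetic ROI, assumed min-max normalisation): the same texture re-quantised to 8, 32, and 128 grey levels yields GLCM contrast values of very different magnitude, which is exactly the disparity the proposed normalisation targets.

```python
# Quantisation sensitivity of a GLCM feature on one synthetic ROI.
import numpy as np
from skimage.feature import graycomatrix, graycoprops

rng = np.random.default_rng(4)
roi = rng.normal(0.5, 0.15, size=(64, 64)).clip(0, 1)   # stand-in ROI

for levels in (8, 32, 128):
    q = np.floor(roi * (levels - 1)).astype(np.uint8)    # re-quantise to `levels` bins
    glcm = graycomatrix(q, distances=[1], angles=[0],
                        levels=levels, symmetric=True, normed=True)
    contrast = graycoprops(glcm, "contrast")[0, 0]
    print(f"{levels:>3} grey levels -> GLCM contrast {contrast:10.2f}")
```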

16 pages, 2228 KB  
Article
Potential Use of a New Energy Vision (NEV) Camera for Diagnostic Support of Carpal Tunnel Syndrome: Development of a Decision-Making Algorithm to Differentiate Carpal Tunnel-Affected Hands from Controls
by Dror Robinson, Mohammad Khatib, Mohammad Eissa and Mustafa Yassin
Diagnostics 2025, 15(11), 1417; https://doi.org/10.3390/diagnostics15111417 - 3 Jun 2025
Viewed by 990
Abstract
Introduction: Carpal Tunnel Syndrome (CTS) is a prevalent neuropathy requiring accurate, non-invasive diagnostics to minimize patient burden. This study evaluates the New Energy Vision (NEV) camera, an RGB-based multispectral imaging tool, to detect CTS through skin texture and color analysis, developing a machine learning algorithm to distinguish CTS-affected hands from controls. Methods: A two-part observational study included 103 participants (50 controls, 53 CTS patients) in Part 1, using NEV camera images to train a Support Vector Machine (SVM) classifier. Part 2 compared median nerve-damaged (MED) and ulnar nerve-normal (ULN) palm areas in 32 CTS patients. Validations included nerve conduction tests (NCT), Semmes–Weinstein monofilament testing (SWMT), and Boston Carpal Tunnel Questionnaire (BCTQ). Results: The SVM classifier achieved 93.33% accuracy (confusion matrix: [[14, 1], [1, 14]]), with 81.79% cross-validation accuracy. Part 2 identified significant differences (p < 0.05) in color proportions (e.g., red_proportion) and Haralick texture features between MED and ULN areas, corroborated by BCTQ and SWMT. Conclusions: The NEV camera, leveraging multispectral imaging, offers a promising non-invasive CTS diagnostic tool using detection of nerve-related skin changes. Further validation is needed for clinical adoption. Full article
(This article belongs to the Special Issue New Trends in Musculoskeletal Imaging)
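A hedged sketch of the classification step only: colour-proportion and GLCM-style features per palm image, an RBF SVM with cross-validation, and a confusion matrix. Feature values and labels below are synthetic, and the feature names are illustrative rather than the authors' exact set.

```python
# SVM on synthetic colour/texture features for a CTS-vs-control split.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_predict, cross_val_score
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(5)
y = np.array([0] * 50 + [1] * 53)                  # 50 controls + 53 CTS patients
X = np.column_stack([
    rng.normal(0.30 + 0.05 * y, 0.04),             # e.g. red_proportion
    rng.normal(0.55 - 0.03 * y, 0.05),             # e.g. GLCM homogeneity
    rng.normal(120 + 15 * y, 10),                  # e.g. GLCM contrast
])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
print("cross-val accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
print(confusion_matrix(y, cross_val_predict(clf, X, y, cv=5)))
```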

12 pages, 693 KB  
Article
Haralick Texture Analysis for Differentiating Suspicious Prostate Lesions from Normal Tissue in Low-Field MRI
by Dang Bich Thuy Le, Ram Narayanan, Meredith Sadinski, Aleksandar Nacev, Yuling Yan and Srirama S. Venkataraman
Bioengineering 2025, 12(1), 47; https://doi.org/10.3390/bioengineering12010047 - 9 Jan 2025
Cited by 2 | Viewed by 1622
Abstract
This study evaluates the feasibility of using Haralick texture analysis on low-field, T2-weighted MRI images for detecting prostate cancer, extending current research from high-field MRI to the more accessible and cost-effective low-field MRI. A total of twenty-one patients with biopsy-proven prostate cancer (Gleason score 4+3 or higher) were included. Before transperineal biopsy guided by low-field (58–74mT) MRI, a radiologist annotated suspicious regions of interest (ROIs) on high-field (3T) MRI. Rigid image registration was performed to align corresponding regions on both high- and low-field images, ensuring an accurate propagation of annotations to the co-registered low-field images for texture feature calculations. For each cancerous ROI, a matching ROI of identical size was drawn in a non-suspicious region presumed to be normal tissue. Four Haralick texture features (Energy, Correlation, Contrast, and Homogeneity) were extracted and compared between cancerous and non-suspicious ROIs. Two extraction methods were used: the direct computation of texture measures within the ROIs and a sliding window technique generating texture maps across the prostate from which average values were derived. The results demonstrated statistically significant differences in texture features between cancerous and non-suspicious regions. Specifically, Energy and Homogeneity were elevated (p-values: <0.00001–0.004), while Contrast and Correlation were reduced (p-values: <0.00001–0.03) in cancerous ROIs. These findings suggest that Haralick texture features are both feasible and informative for differentiating abnormalities, offering promise in assisting prostate cancer detection on low-field MRI. Full article
(This article belongs to the Special Issue Advancements in Medical Imaging Technology)
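The sliding-window variant of the texture extraction can be sketched as below, with assumed window size, stride, and grey-level quantisation: the four Haralick features named in the abstract are computed per window to form texture maps, then averaged inside a "cancerous" and a "non-suspicious" ROI of a synthetic slice.

```python
# Sliding-window texture maps and ROI averages (synthetic T2-like slice).
import numpy as np
from skimage.feature import graycomatrix, graycoprops

PROPS = ("energy", "homogeneity", "contrast", "correlation")

def texture_map(img_u8, win=15, step=5):
    maps = {p: [] for p in PROPS}
    for r in range(0, img_u8.shape[0] - win, step):
        row = {p: [] for p in PROPS}
        for c in range(0, img_u8.shape[1] - win, step):
            glcm = graycomatrix(img_u8[r:r + win, c:c + win] // 4,   # 64 grey levels
                                distances=[1], angles=[0], levels=64,
                                symmetric=True, normed=True)
            for p in PROPS:
                row[p].append(graycoprops(glcm, p)[0, 0])
        for p in PROPS:
            maps[p].append(row[p])
    return {p: np.array(maps[p]) for p in PROPS}

rng = np.random.default_rng(6)
slice_u8 = (rng.random((128, 128)) * 255).astype(np.uint8)
maps = texture_map(slice_u8)
cancer_roi, normal_roi = (slice(2, 8), slice(2, 8)), (slice(12, 18), slice(12, 18))
for p in PROPS:
    print(p, maps[p][cancer_roi].mean().round(3), maps[p][normal_roi].mean().round(3))
```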

23 pages, 5776 KB  
Article
Estimating the Workability of Concrete with a Stereovision Camera during Mixing
by Teemu Ojala and Jouni Punkki
Sensors 2024, 24(14), 4472; https://doi.org/10.3390/s24144472 - 10 Jul 2024
Cited by 3 | Viewed by 2038
Abstract
The correct workability of concrete is an essential parameter for its placement and compaction. However, an absence of automatic and transparent measurement methods to estimate the workability of concrete hinders the adaptation from laborious traditional methods such as the slump test. In this paper, we developed a machine-learning framework for estimating the slump class of concrete in the mixer using a stereovision camera. Depth data from five different slump classes was transformed into Haralick texture features to train several machine-learning classifiers. The best-performing classifier achieved a multiclass classification accuracy of 0.8179 with the XGBoost algorithm. Furthermore, we found through statistical analysis that while the denoising of depth data has little effect on the accuracy, the feature extraction of mixer blades and the choice of region of interest significantly increase the accuracy and the efficiency of the classifiers. The proposed framework shows robust results, indicating that stereovision is a competitive solution to estimate the workability of concrete during concrete production. Full article
(This article belongs to the Section Sensing and Imaging)
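A sketch of the classification stage only, under clear assumptions: the depth frames are synthetic, and mahotas-style mean Haralick features (13 values per frame) stand in for the paper's exact feature extraction before a multiclass XGBoost model over five slump classes.

```python
# XGBoost on Haralick features from synthetic depth frames, five slump classes.
import numpy as np
import mahotas
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n_per_class, n_classes = 40, 5
X, y = [], []
for c in range(n_classes):
    for _ in range(n_per_class):
        depth = rng.normal(100 + 10 * c, 5 + c, size=(64, 64)).clip(0, 255).astype(np.uint8)
        X.append(mahotas.features.haralick(depth, return_mean=True))   # 13 features
        y.append(c)
X, y = np.vstack(X), np.array(y)

model = XGBClassifier(n_estimators=200, max_depth=4, learning_rate=0.1,
                      objective="multi:softprob", eval_metric="mlogloss")
print("multiclass accuracy:", cross_val_score(model, X, y, cv=5).mean().round(3))
```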

20 pages, 5700 KB  
Article
Relating Macroscopic PET Radiomics Features to Microscopic Tumor Phenotypes Using a Stochastic Mathematical Model of Cellular Metabolism and Proliferation
by Hailey S. H. Ahn, Yas Oloumi Yazdi, Brennan J. Wadsworth, Kevin L. Bennewith, Arman Rahmim and Ivan S. Klyuzhin
Cancers 2024, 16(12), 2215; https://doi.org/10.3390/cancers16122215 - 13 Jun 2024
Cited by 2 | Viewed by 1978
Abstract
Cancers can manifest large variations in tumor phenotypes due to genetic and microenvironmental factors, which has motivated the development of quantitative radiomics-based image analysis with the aim to robustly classify tumor phenotypes in vivo. Positron emission tomography (PET) imaging can be particularly helpful in elucidating the metabolic profiles of tumors. However, the relatively low resolution, high noise, and limited PET data availability make it difficult to study the relationship between the microenvironment properties and metabolic tumor phenotype as seen on the images. Most of previously proposed digital PET phantoms of tumors are static, have an over-simplified morphology, and lack the link to cellular biology that ultimately governs the tumor evolution. In this work, we propose a novel method to investigate the relationship between microscopic tumor parameters and PET image characteristics based on the computational simulation of tumor growth. We use a hybrid, multiscale, stochastic mathematical model of cellular metabolism and proliferation to generate simulated cross-sections of tumors in vascularized normal tissue on a microscopic level. The generated longitudinal tumor growth sequences are converted to PET images with realistic resolution and noise. By changing the biological parameters of the model, such as the blood vessel density and conditions for necrosis, distinct tumor phenotypes can be obtained. The simulated cellular maps were compared to real histology slides of SiHa and WiDr xenografts imaged with Hoechst 33342 and pimonidazole. As an example application of the proposed method, we simulated six tumor phenotypes that contain various amounts of hypoxic and necrotic regions induced by a lack of oxygen and glucose, including phenotypes that are distinct on the microscopic level but visually similar in PET images. We computed 22 standardized Haralick texture features for each phenotype, and identified the features that could best discriminate the phenotypes with varying image noise levels. We demonstrated that “cluster shade” and “difference entropy” are the most effective and noise-resilient features for microscopic phenotype discrimination. Longitudinal analysis of the simulated tumor growth showed that radiomics analysis can be beneficial even in small lesions with a diameter of 3.5–4 resolution units, corresponding to 8.7–10.0 mm in modern PET scanners. Certain radiomics features were shown to change non-monotonically with tumor growth, which has implications for feature selection for tracking disease progression and therapy response. Full article
(This article belongs to the Special Issue PET/CT in Cancers Outcomes Prediction)
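As a worked example of the two most discriminative features named above, the sketch below computes cluster shade and difference entropy directly from a normalised GLCM, using their standard Haralick-style definitions (the paper's exact quantisation and normalisation may differ), on a synthetic lesion patch.

```python
# Cluster shade and difference entropy from a normalised GLCM.
import numpy as np
from skimage.feature import graycomatrix

def cluster_shade_and_diff_entropy(img_u8, levels=32):
    q = (img_u8.astype(np.uint16) * levels // 256).astype(np.uint8)
    P = graycomatrix(q, distances=[1], angles=[0], levels=levels,
                     symmetric=True, normed=True)[:, :, 0, 0]
    i, j = np.meshgrid(np.arange(levels), np.arange(levels), indexing="ij")
    mu_i, mu_j = (i * P).sum(), (j * P).sum()

    cluster_shade = ((i + j - mu_i - mu_j) ** 3 * P).sum()

    # p_{x-y}(k): probability that two paired pixels differ by k grey levels.
    k = np.abs(i - j)
    p_diff = np.array([P[k == d].sum() for d in range(levels)])
    diff_entropy = -(p_diff * np.log(p_diff + 1e-12)).sum()
    return cluster_shade, diff_entropy

rng = np.random.default_rng(8)
lesion = (rng.random((32, 32)) * 255).astype(np.uint8)
print(cluster_shade_and_diff_entropy(lesion))
```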

17 pages, 39975 KB  
Article
A Hybrid Learning-Architecture for Improved Brain Tumor Recognition
by Jose Dixon, Oluwatunmise Akinniyi, Abeer Abdelhamid, Gehad A. Saleh, Md Mahmudur Rahman and Fahmi Khalifa
Algorithms 2024, 17(6), 221; https://doi.org/10.3390/a17060221 - 21 May 2024
Cited by 26 | Viewed by 4986
Abstract
The accurate classification of brain tumors is an important step for early intervention. Artificial intelligence (AI)-based diagnostic systems have been utilized in recent years to help automate the process and provide more objective and faster diagnosis. This work introduces an enhanced AI-based architecture for improved brain tumor classification. We introduce a hybrid architecture that integrates vision transformer (ViT) and deep neural networks to create an ensemble classifier, resulting in a more robust brain tumor classification framework. The analysis pipeline begins with preprocessing and data normalization, followed by extracting three types of MRI-derived information-rich features. The latter included higher-order texture and structural feature sets to harness the spatial interactions between image intensities, which were derived using Haralick features and local binary patterns. Additionally, local deeper features of the brain images are extracted using an optimized convolutional neural networks (CNN) architecture. Finally, ViT-derived features are also integrated due to their ability to handle dependencies across larger distances while being less sensitive to data augmentation. The extracted features are then weighted, fused, and fed to a machine learning classifier for the final classification of brain MRIs. The proposed weighted ensemble architecture has been evaluated on publicly available and locally collected brain MRIs of four classes using various metrics. The results showed that leveraging the benefits of individual components of the proposed architecture leads to improved performance using ablation studies. Full article
(This article belongs to the Special Issue Algorithms for Computer Aided Diagnosis)
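A deliberately simplified sketch of the fusion idea: handcrafted Haralick and LBP features are weighted and concatenated with deep embeddings before a classifier. The fusion weights are arbitrary, the "CNN/ViT" embeddings below are random placeholders, and the images and labels are synthetic; only the weighted-concatenation pattern reflects the abstract.

```python
# Weighted fusion of handcrafted texture features with placeholder deep embeddings.
import numpy as np
import mahotas
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def handcrafted(img_u8):
    haralick = mahotas.features.haralick(img_u8, return_mean=True)          # 13 features
    lbp = local_binary_pattern(img_u8, P=8, R=1, method="uniform")
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)   # 10 features
    return np.concatenate([haralick, lbp_hist])

rng = np.random.default_rng(9)
n = 120
y = rng.integers(0, 4, size=n)                       # four tumour classes
imgs = [(rng.random((64, 64)) * (150 + 25 * c)).astype(np.uint8) for c in y]
hand = np.vstack([handcrafted(im) for im in imgs])
deep = rng.normal(size=(n, 32))                      # placeholder CNN/ViT embeddings

w_hand, w_deep = 0.6, 0.4                            # fusion weights (assumed)
X = np.hstack([w_hand * hand, w_deep * deep])
clf = make_pipeline(StandardScaler(), SVC())
print("4-class accuracy:", cross_val_score(clf, X, y, cv=5).mean().round(3))
```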

23 pages, 6506 KB  
Article
Selection of the Discriming Feature Using the BEMD’s BIMF for Classification of Breast Cancer Mammography Image
by Fatima Ghazi, Aziza Benkuider, Fouad Ayoub and Khalil Ibrahimi
BioMedInformatics 2024, 4(2), 1202-1224; https://doi.org/10.3390/biomedinformatics4020066 - 9 May 2024
Cited by 2 | Viewed by 2286
Abstract
Mammogram images are useful for identifying diseases such as breast cancer, one of the deadliest cancers affecting adult women worldwide. Computational image analysis and machine learning techniques can help experts identify abnormalities in these images. In this work, we present a new system to help diagnose and analyze breast mammogram images. The system uses a method called Selection of the Most Discriminant Attributes of images preprocessed by BEMD (SMDA-BEMD), which entails picking the most pertinent traits from the set of variables that characterize the state under study. Attribute reduction is performed through a data transformation, i.e., feature extraction: Haralick attributes are computed from gray-level co-occurrence matrices (GLCM) of images decomposed by Bidimensional Empirical Multimodal Decomposition (BEMD), and the initial feature set is replaced by a new, reduced set built from these features to discriminate between healthy and pathological breast mammogram images. This decomposition splits an image into several Bidimensional Intrinsic Mode Functions (BIMFs) and a residue. The results show that mammographic images can be represented in a relatively compact feature space by selecting the most discriminating features with a supervised method, allowing healthy and pathological mammograms to be differentiated with high reliability; the findings demonstrate how successful the suggested strategy is at detecting tumors. BEMD is used as preprocessing on the mammographic images. The proposed methodology yields consistent results, establishes a discrimination threshold for healthy versus pathological mammography images, and improves the classification rate (98.6%) compared with existing state-of-the-art techniques in the field. The approach is tested and validated on mammographic medical images from the Kenitra-Morocco reproductive health reference center (CRSRKM), which contains breast mammographic images of normal and pathological cases. Full article
(This article belongs to the Special Issue Feature Papers on Methods in Biomedical Informatics)
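A hedged sketch of the SMDA step only: BEMD has no standard Python implementation, so the "BIMF modes" below are placeholder band-pass layers built from Gaussian residuals. Haralick features are extracted per mode and the most discriminant ones are selected with a supervised ANOVA F-score before classification, which mirrors the selection-then-discrimination structure of the abstract without reproducing the authors' method.

```python
# Supervised selection of the most discriminant per-mode Haralick features.
import numpy as np
import mahotas
from scipy.ndimage import gaussian_filter
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

def fake_bimfs(img, sigmas=(1, 3, 6)):
    """Placeholder for BEMD: successive Gaussian residuals as pseudo-modes."""
    modes, residue = [], img.astype(float)
    for s in sigmas:
        low = gaussian_filter(residue, s)
        modes.append(residue - low)
        residue = low
    return modes

def mode_features(img_u8):
    feats = []
    for mode in fake_bimfs(img_u8):
        m = ((mode - mode.min()) / (np.ptp(mode) + 1e-9) * 255).astype(np.uint8)
        feats.append(mahotas.features.haralick(m, return_mean=True))   # 13 per mode
    return np.concatenate(feats)

rng = np.random.default_rng(10)
y = np.array([0] * 40 + [1] * 40)                    # healthy vs. pathological
imgs = [(rng.random((64, 64)) * (180 + 40 * c)).astype(np.uint8) for c in y]
X = np.vstack([mode_features(im) for im in imgs])

pipe = make_pipeline(StandardScaler(), SelectKBest(f_classif, k=10), SVC())
print("accuracy:", cross_val_score(pipe, X, y, cv=5).mean().round(3))
```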

19 pages, 3099 KB  
Article
Effect of Texture Feature Distribution on Agriculture Field Type Classification with Multitemporal UAV RGB Images
by Chun-Han Lee, Kuang-Yu Chen and Li-yu Daisy Liu
Remote Sens. 2024, 16(7), 1221; https://doi.org/10.3390/rs16071221 - 30 Mar 2024
Cited by 9 | Viewed by 3765
Abstract
Identifying farmland use has long been an important topic in large-scale agricultural production management. This study used multi-temporal visible RGB images taken from agricultural areas in Taiwan by UAV to build a model for classifying field types. We combined color and texture features to extract more information from RGB images. The vectorized gray-level co-occurrence matrix (GLCMv), instead of the common Haralick feature, was used as texture to improve the classification accuracy. To understand whether changes in the appearance of crops at different times affect image features and classification, this study designed a labeling method that combines image acquisition times and land use type to observe it. The Extreme Gradient Boosting (XGBoost) algorithm was chosen to build the classifier, and two classical algorithms, the Support Vector Machine and Classification and Regression Tree algorithms, were used for comparison. In the testing results, the highest overall accuracy reached 82%, and the best balance accuracy across categories reached 97%. In our comparison, the color feature provides the most information about the classification model and builds the most accurate classifier. If the color feature were used with the GLCMv, the accuracy would improve by about 3%. In contrast, the Haralick feature does not improve the accuracy, indicating that the GLCM itself contains more information that can be used to improve the prediction. It also shows that with combined image acquisition times in the label, the within-group sum of squares can be reduced by 2–31%, and the accuracy can be increased by 1–2% for some categories, showing that the change of crops over time was also an important factor of image features. Full article
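The GLCMv idea described above can be sketched very simply: instead of summarising the co-occurrence matrix with Haralick statistics, the normalised GLCM itself is flattened into the feature vector and combined with colour statistics. Quantisation, offsets, and the colour feature below are assumptions, and the tile is a random placeholder.

```python
# Vectorised GLCM (GLCMv) feature for one RGB image tile.
import numpy as np
from skimage.feature import graycomatrix

def glcmv(rgb_u8, levels=16):
    gray = rgb_u8.mean(axis=2).astype(np.uint16)
    q = (gray * levels // 256).astype(np.uint8)
    glcm = graycomatrix(q, distances=[1], angles=[0, np.pi / 2],
                        levels=levels, symmetric=True, normed=True)
    color = rgb_u8.reshape(-1, 3).mean(axis=0) / 255.0        # simple colour feature
    return np.concatenate([glcm.ravel(), color])               # 16*16*2 + 3 values

rng = np.random.default_rng(11)
tile = (rng.random((64, 64, 3)) * 255).astype(np.uint8)
print(glcmv(tile).shape)    # feature vector for one tile, e.g. as XGBoost input
```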

25 pages, 10427 KB  
Article
Ensemble Learning-Based Solutions: An Approach for Evaluating Multiple Features in the Context of H&E Histological Images
by Jaqueline J. Tenguam, Leonardo H. da Costa Longo, Guilherme F. Roberto, Thaína A. A. Tosta, Paulo R. de Faria, Adriano M. Loyola, Sérgio V. Cardoso, Adriano B. Silva, Marcelo Z. do Nascimento and Leandro A. Neves
Appl. Sci. 2024, 14(3), 1084; https://doi.org/10.3390/app14031084 - 26 Jan 2024
Cited by 4 | Viewed by 2439
Abstract
In this paper, we propose an approach based on ensemble learning to classify histology tissues stained with hematoxylin and eosin. The proposal was applied to representative images of colorectal cancer, oral epithelial dysplasia, non-Hodgkin’s lymphoma, and liver tissues (the classification of gender and age from liver tissue samples). The ensemble learning considered multiple combinations of techniques that are commonly used to develop computer-aided diagnosis methods in medical imaging. The feature extraction was defined with different descriptors, exploring the deep learning and handcrafted methods. The deep-learned features were obtained using five different convolutional neural network architectures. The handcrafted features were representatives of fractal techniques (multidimensional and multiscale approaches), Haralick descriptors, and local binary patterns. A two-stage feature selection process (ranking with metaheuristics) was defined to obtain the main combinations of descriptors and, consequently, techniques. Each combination was tested through a rigorous ensemble process, exploring heterogeneous classifiers, such as Random Forest, Support Vector Machine, K-Nearest Neighbors, Logistic Regression, and Naive Bayes. The ensemble learning presented here provided accuracy rates from 90.72% to 100.00% and offered relevant information about the combinations of techniques in multiple histological images and the main features present in the top-performing solutions, using smaller sets of descriptors (limited to a maximum of 53), which involved each ensemble process and solutions that have not yet been explored. The developed methodology, i.e., making the knowledge of each ensemble learning comprehensible to specialists, complements the main contributions of this study to supporting the development of computer-aided diagnosis systems for histological images. Full article
(This article belongs to the Special Issue Computer-Aided Image Processing and Analysis)
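A hedged sketch of the ensemble stage: a simple ANOVA ranking stands in for the paper's two-stage (ranking plus metaheuristic) selection, capped at 53 descriptors as in the study, followed by a soft-voting ensemble of the five heterogeneous classifiers named in the abstract. Features and labels are synthetic.

```python
# Feature selection plus a heterogeneous soft-voting ensemble.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

X, y = make_classification(n_samples=400, n_features=300, n_informative=25,
                           n_classes=3, random_state=0)

ensemble = make_pipeline(
    StandardScaler(),
    SelectKBest(f_classif, k=53),          # cap the descriptor set, as in the study
    VotingClassifier(
        estimators=[("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
                    ("svm", SVC(probability=True, random_state=0)),
                    ("knn", KNeighborsClassifier()),
                    ("lr", LogisticRegression(max_iter=2000)),
                    ("nb", GaussianNB())],
        voting="soft"),
)
print("accuracy:", cross_val_score(ensemble, X, y, cv=5).mean().round(3))
```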
