Article

Non-Invasive Retinal Pathology Assessment Using Haralick-Based Vascular Texture and Global Fundus Color Distribution Analysis

Optics Department, Faculty of Optics and Optometry, Complutense University of Madrid, C/Arcos de Jalón, 118, 28037 Madrid, Spain
J. Imaging 2025, 11(9), 321; https://doi.org/10.3390/jimaging11090321
Submission received: 30 July 2025 / Revised: 12 September 2025 / Accepted: 15 September 2025 / Published: 19 September 2025
(This article belongs to the Special Issue Emerging Technologies for Less Invasive Diagnostic Imaging)

Abstract

This study analyzes retinal fundus images to distinguish healthy retinas from those affected by diabetic retinopathy (DR) and glaucoma using a dual-framework approach: vascular texture analysis and global color distribution analysis. The texture-based approach involved segmenting the retinal vasculature and extracting eight Haralick texture features from the Gray-Level Co-occurrence Matrix. Significant differences in features such as energy, contrast, correlation, and entropy were found between healthy and pathological retinas. Pathological retinas exhibited lower textural complexity and higher uniformity, which correlates with vascular thinning and structural changes observed in DR and glaucoma. In parallel, the global color distribution of the full fundus area was analyzed without segmentation. RGB intensity histograms were calculated for each channel and averaged across groups. Statistical tests revealed significant differences, particularly in the green and blue channels. The Mahalanobis distance quantified the separability of the groups per channel. These results indicate that pathological changes in retinal tissue can also lead to detectable chromatic shifts in the fundus. The findings underscore the potential of both vascular texture and color features as non-invasive biomarkers for early retinal disease detection and classification.

1. Introduction

Fundus imaging is a specialized medical imaging technique used to capture detailed images of the retinal fundus, enabling comprehensive assessment of the retina, its vasculature, the optic nerve head, and the vitreous body. Leveraging mathematical models and advanced algorithms, digital image processing facilitates the measurement, analysis, and quantification of retinal abnormalities with high precision. This approach plays a pivotal role in the early detection and diagnosis of various ophthalmic diseases, notably glaucoma and diabetic retinopathy (DR) [1,2,3].
Glaucoma is a chronic, irreversible neurodegenerative disorder of the optic nerve, primarily characterized by the apoptotic loss of retinal ganglion cells and progressive axonal degeneration, ultimately resulting in vision loss. The disease also induces structural alterations in the retinal microvasculature, including decreased vascular density and narrowing of retinal vessel diameters [4]. On the other hand, DR, one of the most common microvascular complications of diabetes mellitus, remains the leading cause of blindness among working-age adults (20–65 years) [5]. In its early stages, DR is marked by the presence of retinal capillary microaneurysms, intraretinal microvascular abnormalities, and increased vascular permeability, often culminating in dot-blot hemorrhages [6]. Both glaucoma and DR may lead to vision impairment or irreversible blindness, frequently manifesting as structural abnormalities visible in fundus images. The retinal vasculature is crucial for delivering oxygen and nutrients to the retina and supporting the transmission of visual information from the retina to the brain [7]. Therefore, morphological changes in retinal vessels serve as critical biomarkers for the onset and progression of various ocular and systemic diseases, including glaucoma and DR.
In recent years, the availability of large volumes of retinal image data has increased significantly; however, much of the embedded clinical information remains underexploited. Texture analysis has emerged as a powerful tool for detecting and segmenting anatomical structures that exhibit subtle alterations in image texture, such as damage to the retinal vasculature. By quantifying the spatial distribution of intensity values within an image, texture analysis facilitates the comparison of similar regions across different images, offering critical insights into pathological changes in retinal morphology.
In the broader field of medical image analysis, texture analysis has demonstrated its utility in the evaluation of various imaging modalities, including Magnetic Resonance Imaging (MRI), Computed Tomography (CT), ultrasound, and X-ray, across multiple organ systems such as the heart, lungs, brain, kidneys, and liver [8,9,10]. Among texture analysis techniques, the method introduced by Haralick et al. [11] based on the Gray-Level Co-occurrence Matrix (GLCM), has gained widespread adoption, particularly in biomedical imaging. The GLCM captures second-order statistical information by quantifying the spatial relationships between pairs of pixels separated by a defined distance and orientation, thereby enabling the extraction of discriminative texture features.
In the present study, Haralick texture features were extracted and analyzed to identify the most salient attributes for detecting and differentiating vascular abnormalities associated with glaucoma and DR in comparison to normal retinal conditions. Retinal blood vessels were segmented from color fundus images using an automated extraction pipeline, enabling objective quantification of vascular anomalies across control and pathological groups. Furthermore, histogram analysis was applied to the full retinal background, capturing texture information based on color distribution across the RGB channels. This approach allows for a comprehensive evaluation of retinal background characteristics, offering additional context for identifying disease-specific patterns.

Related Work

Recent advances in retinal image analysis have focused heavily on improving vascular segmentation and disease classification using both classical image processing and deep learning methods, which directly relate to the core components of this study. Traditional texture analysis remains a powerful tool for capturing spatial relationships in medical images. GLCM has been widely applied across various imaging modalities, including MRI, CT, ultrasound, and fundus imaging [8,9,12]. In retinal imaging, Haralick features have been explored for detecting vascular abnormalities; however, few studies have explicitly focused on segmented vessel structures for texture-based disease differentiation. The present research addresses this gap by concentrating on vessel-specific texture descriptors, enabling a more targeted and sensitive assessment of retinal pathology.
Several prior works have leveraged classical image processing for retinal disease detection. Ahmad et al. [1] and Kumar et al. [2] explored statistical and morphological analysis for diabetic retinopathy and glaucoma detection. Color distribution has also been highlighted as a valuable diagnostic cue in studies by Stanley et al. [13] and Zhou et al. [14], which used RGB histograms for classification. Beyond texture and color, fractal geometry has emerged as a viable alternative for structural quantification. Igalla-El Youssfi and López-Alonso [15] proposed novel fractal and multifractal metrics for quantifying vascular complexity in diabetic retinopathy and glaucoma. Their results demonstrated that changes in multifractal spectra effectively capture disease progression, reinforcing the clinical utility of handcrafted and interpretable features, especially those that describe topological or textural irregularities.
Meanwhile, deep learning continues to push the boundaries of segmentation and classification. Karaali et al. [16] introduced DR-VNet, a Dense Residual U-Net tailored to the segmentation of thin and tiny vessels. Wei et al. [17] proposed OCE-Net, integrating orientation-aware convolutions to maintain the vascular continuity critical for downstream texture analysis. Li et al. [18] applied a multi-scale residual similarity gathering (MRSG) module to generate pixel-wise adaptive filters (PA-Filters), enhancing the segmentation quality of retinal vessel images. On the classification front, Shamsan et al. [19] developed a hybrid classification method that integrates handcrafted features with deep learning representations to diagnose ocular diseases from color fundus imaging. This hybrid philosophy is echoed in the present study’s integration of vascular texture and color distribution features.
In summary, this work contributes to a body of literature that embraces both classical and modern techniques. By combining Haralick-based vascular texture with global RGB histogram analysis, a dual-feature, interpretable approach that aligns with recent trends while retaining computational simplicity and diagnostic transparency is presented.

2. Materials and Methods

2.1. Dataset

This study employed retinal fundus images from the publicly available High-Resolution Fundus (HRF) database [20], which provides high-quality color fundus images suitable for vascular and pathological analysis. The dataset comprises a total of 45 images, each with a resolution of 3504 × 2336 pixels, encoded in 24-bit color and stored in JPEG format with low compression to maintain high image quality.
The HRF dataset is divided into three equal subsets (Figure 1):
  • Healthy group: 15 images from subjects with no clinical signs of retinal disease.
  • Diabetic Retinopathy (DR) group: 15 images exhibiting vascular abnormalities consistent with DR, such as microaneurysms and hemorrhages.
  • Glaucoma group: 15 images from patients diagnosed with advanced-stage glaucoma, characterized by structural changes in the optic nerve head and retinal vasculature.
This dataset was selected due to its high resolution and well-annotated pathological categories, making it a robust resource for evaluating automated texture and vascular analysis techniques.

2.2. Features Extraction

Haralick texture features, derived from the Gray-Level Co-occurrence Matrix (GLCM), represent a well-established method for quantifying image texture. Their widespread use is attributed to both the computational simplicity of the method and the interpretability of the resulting descriptors. The concept was first introduced by Haralick et al. [21], who proposed a set of statistical features derived from the spatial relationships between pixel intensities within a grayscale image. Haralick’s approach is based on the premise that texture can be characterized by the frequency with which pairs of pixel values (grey levels) occur in specific spatial configurations.
The GLCM is constructed by computing how often a pixel with grey level i occurs in a defined spatial relationship (distance d and orientation angle θ) to a pixel with grey level j. Formally, the co-occurrence matrix element (i, j) reflects the number of times this pixel pair appears in the image under the specified conditions.
Given a grayscale image $I$ of size $N_x \times N_y$, the co-occurrence value for grey levels $i$ and $j$ at a distance $d$ and orientation $\theta$, $P_{d,\theta}(i,j)$, can be defined as [22]:

P_{d,\theta}(i,j) = \sum_{x=0}^{N_x-1} \sum_{y=0}^{N_y-1} \delta_{d,\theta,i,j}(x,y)

where

\delta_{d,\theta,i,j}(x,y) = \begin{cases} 1 & \text{if } I(x,y) = i \text{ and } I\bigl(x+\pi_x(d,\theta),\, y+\pi_y(d,\theta)\bigr) = j \\ 0 & \text{otherwise} \end{cases}
where I ( x , y ) represents the pixel value at position (x,y) in the quantized image. This matrix serves as the basis for extracting several second-order statistical features, such as contrast, correlation, energy, and homogeneity, that summarize texture characteristics across different regions of the image. These features are particularly relevant for characterizing pathological changes in retinal tissue, as they capture local structural variations that may not be apparent through intensity-based measures alone.
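The co-occurrence counting defined above can be sketched in a few lines. The following Python toy example is illustrative only (the study itself used MATLAB; the 4 × 4 two-level image and the `glcm` helper are ours, not part of the original pipeline) and builds the matrix for a horizontal offset of one pixel:

```python
import numpy as np

def glcm(img, di, dj, levels):
    """Count co-occurrences of grey-level pairs (i, j) at offset (di, dj)."""
    P = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = img.shape
    for x in range(rows):
        for y in range(cols):
            nx, ny = x + di, y + dj
            if 0 <= nx < rows and 0 <= ny < cols:
                P[img[x, y], img[nx, ny]] += 1  # one more (i, j) pair found
    return P

# Toy 4x4 image with 2 grey levels; offset (0, 1) = distance 1, angle 0 deg
img = np.array([[0, 0, 1, 1],
                [0, 0, 1, 1],
                [0, 1, 1, 1],
                [1, 1, 1, 1]])
P = glcm(img, 0, 1, levels=2)  # -> [[2, 3], [0, 7]], 12 horizontal pairs in total
```

Dividing `P` by `P.sum()` converts the raw counts into the co-occurrence probabilities from which the Haralick descriptors are computed.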
The offset (πx(d, θ), πy(d, θ)) defines the relative position of a pixel (x,y) with respect to its neighbor located at a distance d and orientation θ. In this study, the gray-level co-occurrence matrices (GLCMs) required for texture analysis were computed using MATLAB’s built-in graycomatrix function. Once the co-occurrence matrix is obtained, the corresponding Haralick texture features were extracted. The features considered in this work are those originally proposed by [21], as shown in Table 1.
where $\mu_x$, $\mu_y$ and $\sigma_x$, $\sigma_y$ are the means and standard deviations, respectively, expressed as:

\mu_x = \sum_{i=1}^{N} i \, P_x(i), \qquad \mu_y = \sum_{j=1}^{N} j \, P_y(j)

\sigma_x^2 = \sum_{i=1}^{N} (i - \mu_x)^2 \, P_x(i), \qquad \sigma_y^2 = \sum_{j=1}^{N} (j - \mu_y)^2 \, P_y(j)
In this study, second-order texture features derived from the GLCM, specifically Haralick features, were selected due to their ability to capture spatial dependencies and textural patterns within retinal images. Unlike first-order features, which rely solely on pixel intensity statistics without accounting for spatial relationships, second-order features analyze the joint probability of pairs of pixel intensities, offering a more comprehensive representation of vascular texture.
Although fractal and geometrical features can provide valuable information regarding structural complexity and vessel morphology, they often require highly precise segmentation and may be less sensitive to subtle textural variations in retinal fundus images. Therefore, the use of second-order features enables a more robust and discriminative quantification of vascular abnormalities associated with diseases such as diabetic retinopathy and glaucoma.
To identify structural abnormalities in the retinal vasculature, the vascular tree was segmented from each fundus image using a MATLAB®-based function developed by Kumar R. [23]. This method combines matched filtering with an adaptive, iterative thresholding algorithm that analyzes the intensity histogram to separate vessel structures from the background. Specifically, the thresholding component calculates the mean intensities of the foreground and background classes iteratively, updating the threshold value until convergence is achieved. This approach improves robustness under variable lighting conditions and enhances vessel detection in cases of low contrast or faint vasculature.
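The iterative thresholding component can be illustrated with a minimal Python sketch (the actual method [23] combines this step with matched filtering in MATLAB; the toy intensity values below are hypothetical). The threshold is repeatedly replaced by the midpoint of the two class means until it stabilizes:

```python
import numpy as np

def iterative_threshold(img, tol=0.5):
    """Iterative mean-based thresholding: split pixels at t, recompute t as
    the midpoint of the foreground/background means until convergence."""
    t = img.mean()
    while True:
        fg = img[img > t]
        bg = img[img <= t]
        # Guard against an empty class on degenerate inputs
        mu_fg = fg.mean() if fg.size else t
        mu_bg = bg.mean() if bg.size else t
        t_new = (mu_fg + mu_bg) / 2.0
        if abs(t_new - t) < tol:
            return t_new
        t = t_new

# Bimodal toy data: dark background around 20, bright vessel pixels around 200
img = np.array([20, 22, 18, 25, 200, 205, 198], dtype=float)
t = iterative_threshold(img)
mask = img > t  # binary vessel mask
```

On this toy vector the threshold settles between the two modes, so the three bright samples end up in the vessel mask.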
To further improve segmentation performance, particularly in challenging images, all input images underwent preprocessing using contrast-limited adaptive histogram equalization (CLAHE) to enhance local contrast, and median filtering to suppress background noise while preserving edge integrity. These preprocessing steps enhanced vessel-background separability and facilitated more accurate vascular tree extraction.
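A possible Python counterpart of this preprocessing, assuming scikit-image and SciPy as stand-ins for the MATLAB routines (the random array below is merely a placeholder for a fundus green channel), is:

```python
import numpy as np
from scipy.ndimage import median_filter
from skimage import exposure

rng = np.random.default_rng(0)
green = rng.random((64, 64))  # placeholder for a normalized fundus green channel

# CLAHE: boost local contrast while clipping the histogram to limit noise gain
eq = exposure.equalize_adapthist(green, clip_limit=0.02)

# Median filtering: suppress impulse-like background noise, preserving edges
den = median_filter(eq, size=3)
```

The CLAHE clip limit and the 3 × 3 median window are illustrative defaults, not values reported by the study.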
All segmentation and subsequent texture analyses were performed using MATLAB® software (version 2021b, The MathWorks Inc., Natick, MA, USA). Segmentation quality was first assessed via visual comparison with the ground truth masks provided in the HRF database; these inspections confirmed that the extracted vascular trees closely approximated the annotated references in most cases, supporting their suitability for downstream texture analysis. A quantitative evaluation using the Dice similarity coefficient is reported in Section 3.1.
Textural features were then extracted from the segmented vessels following the methodology proposed by [24], across eight Haralick texture metrics derived from the gray-level co-occurrence matrix (GLCM). Each image was first converted to 8-bit grayscale and quantized to 16 gray levels to reduce computational complexity while preserving texture patterns relevant for clinical analysis.
GLCMs were computed for a pixel distance (d) of 1 in four standard orientations: 0°, 45°, 90°, and 135°. The matrices were then normalized to represent co-occurrence probabilities (summing to 1), allowing for consistent comparison across images. To ensure directionally invariant feature extraction, Haralick features were calculated separately for each angle and then averaged across the four directions.
To statistically evaluate group differences between healthy and pathological retinas, the two-sample Kolmogorov–Smirnov test was applied using MATLAB’s built-in kstest2 function.
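A Python equivalent of MATLAB's kstest2 is `scipy.stats.ks_2samp`. The per-image feature values below are simulated for illustration, not taken from the study:

```python
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(2)
# Hypothetical per-image entropy values for two groups of 15 images each
healthy = rng.normal(loc=3.0, scale=0.2, size=15)
disease = rng.normal(loc=2.4, scale=0.2, size=15)

# Two-sample Kolmogorov-Smirnov test: compares the empirical CDFs
stat, p = ks_2samp(healthy, disease)
significant = p < 0.05
```

The KS statistic is the maximum vertical distance between the two empirical distribution functions, so it is sensitive to shifts in both location and shape.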

2.3. Histogram Analysis

A color image can be represented as a two-dimensional array of pixels, where each pixel consists of three components corresponding to the primary colors: Red (R), Green (G), and Blue (B). The RGB color space is widely used in digital imaging and represents an image as a numerical array of size M × N × 3, where M and N denote the height and width of the image, respectively, and the third dimension corresponds to the three color channels. Each color channel is typically encoded using 8 bits per pixel, yielding intensity values in the range [0, 255], where 0 represents the absence of a given color and 255 corresponds to its maximum intensity [25]. The numerical range may vary depending on the data type used to store the image.
A color histogram [26] characterizes the distribution of colors within an image by quantizing the color space and counting the frequency of each discrete color combination. This results in a probability distribution, where the x-axis represents the histogram bins (e.g., 0–255), and the y-axis indicates the number of pixels corresponding to each bin value. The number of bins is determined by the level of quantization and the number of distinct color values present. It is important to note that a histogram in the RGB color space is three times larger than one based solely on intensity values, due to the presence of three separate color channels. The histogram of an image I(x,y), with width w and height h, can be formally defined as follows [27,28]:
n_i = \left|\{(x, y) \mid I(x, y) = i\}\right|, \qquad i = 0, \ldots, k-1

where $k$ is the number of colors and $\sum_{i=0}^{k-1} n_i = w \cdot h$.
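In code, this amounts to counting pixel values per channel. The Python sketch below uses a small random array as a hypothetical fundus image (NumPy in place of the study's MATLAB tooling):

```python
import numpy as np

rng = np.random.default_rng(3)
fundus = rng.integers(0, 256, size=(60, 80, 3), dtype=np.uint8)  # stand-in RGB image

# One 256-bin histogram per channel; each histogram must sum to width * height
hists = {ch: np.bincount(fundus[:, :, k].ravel(), minlength=256)
         for k, ch in enumerate("RGB")}
```

Dividing each histogram by the pixel count `w * h` yields the per-channel probability distributions that are averaged across groups in the analysis below.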
All images used in this study were RGB color images with a resolution of 2336 × 3504 pixels and 8 bits per channel. To assess the normality of the data distribution, the lillietest function from MATLAB’s Statistics and Machine Learning Toolbox was employed. This function implements a variant of the Kolmogorov–Smirnov test, which evaluates whether a sample originates from a normally distributed population, based on the sample’s estimated mean and standard deviation. The results indicated that the data did not follow a normal distribution.
Subsequently, average histograms were computed by group and by color channel (R, G, B). The RGB histograms from the control and pathological groups were then compared using the non-parametric Wilcoxon-Mann-Whitney test, performed separately for each channel.
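SciPy's `ranksums` is the Python counterpart of MATLAB's ranksum. The per-image summary values below are simulated, purely to show the call pattern:

```python
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(4)
# Hypothetical mean green-channel intensity per image, 15 images per group
control = rng.normal(loc=120, scale=5, size=15)
dr_group = rng.normal(loc=105, scale=5, size=15)

# Wilcoxon rank-sum test: non-parametric, so no normality assumption needed
stat, p = ranksums(control, dr_group)
significant = p < 0.05
```

Because the Lilliefors test rejected normality, this rank-based test is the appropriate choice over a t-test here.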
To further quantify differences between the two groups, the Mahalanobis distance was calculated between the average histograms of the normal and pathological images. This metric is particularly suitable for comparing multivariate distributions, as it accounts for correlations among variables and considers each group as a point cloud in a 256-dimensional feature space (one dimension per histogram bin).
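A channel-wise Mahalanobis computation can be sketched as follows. To keep the toy covariance well-conditioned, this example uses 8-bin histograms and a pseudo-inverse of the pooled covariance; both are implementation choices of this sketch, not necessarily the paper's (which worked in 256 dimensions):

```python
import numpy as np

rng = np.random.default_rng(5)
# Hypothetical 8-bin green-channel histograms (rows = images) for two groups
control = rng.normal(loc=100, scale=10, size=(15, 8))
disease = rng.normal(loc=90, scale=10, size=(15, 8))

mu_c, mu_d = control.mean(axis=0), disease.mean(axis=0)

# Pooled covariance; pinv guards against singularity when bins outnumber images
cov = (np.cov(control, rowvar=False) + np.cov(disease, rowvar=False)) / 2.0
diff = mu_c - mu_d
d_mahal = float(np.sqrt(diff @ np.linalg.pinv(cov) @ diff))
```

Unlike a plain Euclidean distance between mean histograms, this measure down-weights directions in which the bins vary strongly within each group.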
Finally, to visualize the joint distribution of color intensities across channels, bivariate histograms were generated. These 2D histograms enable analysis of the co-distribution of two-color channels at a time, offering a more comprehensive view of RGB value relationships across the pixel data.
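A bivariate histogram for a channel pair reduces to a 2D binning of pixel values; a NumPy sketch on a hypothetical image (the 32-bin grid is an illustrative choice) is:

```python
import numpy as np

rng = np.random.default_rng(6)
fundus = rng.integers(0, 256, size=(60, 80, 3), dtype=np.uint8)  # stand-in image

r = fundus[:, :, 0].ravel().astype(float)
g = fundus[:, :, 1].ravel().astype(float)

# Joint red-green distribution over a 32 x 32 grid of intensity bins
H, r_edges, g_edges = np.histogram2d(r, g, bins=32, range=[[0, 256], [0, 256]])
```

Plotting `H` as a heat map (e.g., with a yellow-to-blue colormap) reproduces the kind of RG/RB/GB co-distribution views discussed in Section 3.2.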

3. Results and Discussion

3.1. Texture Analysis

Haralick texture features were calculated for each retinal image. A total of eight features were extracted from the Gray-Level Co-occurrence Matrix (GLCM): energy, contrast, correlation, variance, sum average, sum variance, sum entropy, and entropy. The mean and standard deviation ( μ ± σ ) for each feature, along with the Kolmogorov–Smirnov test results comparing healthy individuals with patients diagnosed with glaucoma and DR, are presented in Table 2 and Figure 2, respectively.
Table 2 highlights the texture features that demonstrated statistically significant differences between the vasculature of healthy and pathological retinae. Specifically, correlation, variance, sum average, sum variance, sum entropy, and entropy were all significantly lower in both pathological groups than in the healthy group. In contrast, energy was significantly higher in the glaucomatous and DR groups than in the control group.
Contrast quantifies local intensity variations within an image, ranging from 0 (indicating no contrast) up to a maximum value of ( N g 1 ) 2 , where Ng represents the number of gray levels determined by the image’s bit depth. This metric estimates the relative variation in luminance by correlating it with the intensity gradients present in the image. Higher contrast values correspond to greater variation among pixel intensities, reflecting more pronounced texture or edges [29]. Table 2 shows that all groups exhibited relatively similar values, suggesting general uniformity in vascular contrast across the sample.
Energy quantifies the uniformity or textural homogeneity of an image, with values ranging from 0 to 1. A value near 1 indicates a highly uniform image with minimal variation in pixel intensities, while a value close to 0 suggests high variability [30]. A high energy value implies that adjacent pixels have similar intensities, resulting in a smoother, less textured appearance [31]. In this study, pathological groups exhibited higher energy values compared to healthy retinae, indicating a more uniform distribution of grey levels. This feature reflects the degree of similarity between neighboring pixels. In the context of retinal vascularity, this suggests that the diseased retinae display more homogeneous textures, potentially due to reduced variation in vessel density or altered vascular organization. Such uniformity may be associated with structural changes in the retina. Glaucomatous eyes, for instance, are linked to a loss of structural complexity within the retinal vascularity [4]. Similarly, in DR, vessel branching tends to be reduced and vessels become narrower, leading to a more uniform and less complex vascular pattern [6]. These findings support the use of energy as a meaningful indicator of pathological changes in retinal vasculature.
Correlation measures the linear relationship between the intensity values of neighboring pixels in an image, ranging from −1 to 1, where 1 represents a perfect positive correlation, −1 a perfect negative correlation, and 0 no correlation. It provides insights into the regularity, alignment, and directionality of textures. A high correlation value indicates a strong linear dependency between adjacent pixel intensities, reflecting a more organized and coherent texture pattern. Conversely, a low correlation value suggests weaker pixel dependencies, indicating more disordered textures, suggesting disruptions in structural organization [31]. In the context of retinal imaging, correlation is closely linked to the spatial distribution and orientation of blood vessels. Structural disruptions caused by conditions such as DR or glaucoma can alter the regularity of vascular patterns, thereby reducing the correlation value. As such, changes in Haralick correlation serve as quantitative markers of retinal structural alterations, offering a way to numerically characterize the “smoothness” or “roughness” of the vasculature. This makes correlation a valuable feature for assessing disease-related changes in retinal architecture.
Variance is a statistical measure of texture heterogeneity that quantifies the dispersion of pixel intensity values within an image or region of interest. It ranges from 0 upwards, with higher values indicating greater variability in intensities, which reflects more complex and heterogeneous textures. Conversely, lower variance reflects more homogeneous textures, where pixel values are similar, a trait often associated with simpler or less structurally diverse tissues. In retinal imaging, reduced variance typically correlates with decreased vascular complexity. Conditions such as glaucoma and DR are characterized by diminished vascular density, reflected in fewer branches and capillary networks. This structural simplification leads to more uniform textures and therefore lower variance values. In DR, the observed decrease in vascular density is a manifestation of chronic microvascular damage, including capillary closure, dropout, and ischemia due to persistent hyperglycemia [32].
Entropy is a statistical metric that quantifies the degree of randomness, heterogeneity, and complexity in an image’s intensity distribution. It measures the unpredictability of pixel intensity values within a specified region, with values ranging from a minimum of 0 indicating complete uniformity to a maximum of log 2 ( N g ) . When entropy is high (close to the maximum log 2 ( N g ) ), it indicates that the distribution of gray levels is more uniform, with many different intensities present in an unpredictable manner. This high variability leads to greater visual complexity and a sense of structural disorder or heterogeneity, reflecting more chaotic textures or less predictable patterns. Lower entropy values correspond to more homogeneous textures with less variation. In the context of retinal vasculature, decreased entropy suggests reduced structural complexity. Comparative analysis between healthy individuals and patients with retinal pathology, such as glaucoma or DR, reveals that pathological retinae tend to exhibit lower entropy values. This reduction is associated with a loss of vascular complexity, including decreased capillary density, thinner vessels, and vessel dropout or occlusion. These structural changes lead to more uniform textures and diminished randomness in vascular patterns. Previous studies have shown that entropy-based analysis can enhance the detection of pathological regions in fundus images, particularly in DR, by identifying subtle textural alterations associated with microvascular damage [33].
To ensure the reliability of the texture features extracted, particularly entropy and Haralick-based metrics, it was first necessary to validate the accuracy of the vessel segmentation. The Dice Similarity Coefficient (DSC) was used to evaluate the accuracy of retinal vessel segmentation by comparing the results with the ground truth provided in the dataset. The DSC is a widely used performance metric that quantifies the similarity between two regions, in this case, the segmented retinal vessels and the corresponding ground truth annotations. The coefficient ranges from 0 to 1, where values closer to 1 indicate greater overlap and, therefore, higher segmentation accuracy.
The proposed method achieved a mean DSC of 0.86 ± 0.01, demonstrating a high degree of consistency with the ground truth. These results confirm the effectiveness and reliability of the segmentation process, which serves as a critical step for the subsequent extraction of Haralick-based texture features.
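The DSC itself is a one-line overlap ratio; the following Python sketch (toy masks, illustrative helper name) shows the computation used conceptually here:

```python
import numpy as np

def dice(pred, truth):
    """Dice similarity: 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0  # two empty masks agree fully

pred  = np.array([[1, 1, 0], [0, 1, 0]])  # toy segmentation
truth = np.array([[1, 0, 0], [0, 1, 1]])  # toy ground truth
score = dice(pred, truth)  # 2*2 / (3 + 3) = 2/3
```

A score of 1 requires the segmented vessel mask and the annotated reference to coincide pixel for pixel, which is why values near 0.86 indicate strong, though not perfect, agreement.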
In this research, Haralick texture features were employed to quantitatively analyze the retinal vasculature with the aim of distinguishing between healthy and pathological retinae. These features capture subtle variations in vascular texture, allowing for a robust differentiation of structural patterns associated with disease progression. The results demonstrate that Haralick texture analysis is both feasible and informative for identifying retinal abnormalities. However, no correction for multiple comparisons was applied, as the study was exploratory in nature and based on a limited dataset. Consequently, individual p-values should be interpreted with caution. To minimize the risk of false positives, the analysis focused on features that showed consistent trends across both pathological groups and were supported by prior clinical or biological evidence. Future research involving larger and more diverse datasets will incorporate formal correction methods for multiple hypothesis testing to enhance statistical rigor.
While Haralick features are widely used in glaucoma and diabetic retinopathy detection studies, descriptive statistics (μ ± σ) of these features across different pathological groups are seldom reported. Most works focus on leveraging these features for classification tasks and do not provide quantitative comparisons between healthy and diseased groups (e.g., GLCM-based descriptors are listed but not summarized statistically) [12,34,35,36,37,38]. Reporting these values in the present study thus represents a novel contribution, and it underlines the need for future publications in this field to include group-wise feature distributions to facilitate comparative analysis and validation across datasets.

3.2. RGB Histogram and Color Distribution Analysis

In parallel with the vascular texture analysis, a complementary study was conducted to investigate color distribution differences between healthy and pathological retinae using RGB histograms extracted from retinal fundus images of the same samples. For each image, the intensity histograms of the red (R), green (G), and blue (B) channels were computed individually. Subsequently, average histograms were calculated for each group (control, DR, glaucoma) and channel. Statistical comparisons were performed using MATLAB’s ranksum function (Wilcoxon rank-sum test). The results are illustrated in Figure 3 and Figure 4.
The analysis revealed statistically significant differences (p < 0.05) between the control group and DR group across all three RGB channels. When comparing the control group with the glaucoma group, significant differences were found in the green and blue channels. However, no significant difference was observed in the red channel between these two groups (p ≥ 0.05).
To evaluate multivariate color distribution differences between the control and pathological groups, the Mahalanobis distance was computed independently for each color channel. This analysis aimed to determine which RGB components most effectively differentiate healthy retinas from diseased ones.
The analysis follows these steps:
  • Channelwise computation: each color channel (R, G, B) was analyzed separately, with the Mahalanobis distance calculated from the histogram data for that specific channel only.
  • Distributional comparison: for each channel, the Mahalanobis distance quantified how distinct the color distributions of the control group were from those of each pathological group, accounting for both mean and covariance structures.
The results are summarized in Table 3. A higher Mahalanobis distance indicates greater statistical separation between healthy and pathological distributions within a given channel. When combining all three channels, the overall Mahalanobis distance provides an integrated measure of color texture alteration, potentially reflecting structural or vascular changes associated with retinal pathology.
To further explore color distribution patterns, bivariate histograms were generated for each image and are shown in Figure 5. These 2D histograms visualize the joint distribution of pixel values between pairs of color channels: red-green (RG), red-blue (RB), and green-blue (GB). Each plot includes a color scale, where yellow represents high-frequency regions and blue indicates low-frequency regions.
Across all groups, red remains the dominant color. However, the glaucoma and DR groups display clear shifts in the green and blue channel combinations compared to the control group, as evidenced by the broader and more diffuse yellow regions in the pathological histograms. These differences suggest changes in color composition likely tied to vascular or structural alterations.
Previous studies have demonstrated that color information is a valuable feature in the detection of various diseases [13,14]. The bivariate histogram analysis performed in this study supports the hypothesis that color distribution shifts, particularly in the green and blue channels, may serve as indicators of retinal disease, including glaucoma and DR.
This study demonstrates that combining Haralick texture features extracted from segmented retinal vasculature with global fundus color distribution analysis using RGB and bivariate histograms enables effective differentiation between healthy and pathological retinae affected by DR and glaucoma. The integration of these complementary structural and chromatic biomarkers provides a more comprehensive characterization of retinal abnormalities than approaches that analyze these features separately.
The novelty of this work lies in the dual-framework methodology applied to the same dataset, representing the first study to jointly leverage vessel-specific texture analysis and global color histogram modelling for retinal pathology assessment. Moreover, the use of multivariate statistical metrics, such as the Mahalanobis distance, to quantify group separability adds robustness and clinical relevance to the findings. This integrated approach advances the field by offering a non-invasive, image-based tool with potential for early detection, differentiation, and monitoring of retinal diseases in clinical and screening settings.
In summary, this study provides a statistical evaluation of texture and color features to assess their effectiveness in distinguishing between healthy and pathological retinae. While no machine learning methods were applied at this stage, the findings offer a solid foundation for future research involving supervised learning approaches to further validate the discriminative power of these features. Future work should also consider expanding the dataset to include a larger and more diverse population, as well as exploring the integration of complementary imaging modalities to enhance diagnostic performance.

4. Conclusions

This study demonstrated that quantitative texture analysis of the retinal vasculature, combined with global color distribution analysis of the fundus, enables effective discrimination between healthy and pathological retinae affected by DR and glaucoma. Haralick texture features extracted from segmented vascular structures captured relevant differences in structural organization. Pathological retinae showed significantly altered texture profiles, with increased energy and decreased entropy, correlation, and variance, all indicators of reduced vascular complexity and tissue heterogeneity.
Simultaneously, RGB and bivariate histogram analyses of the full fundus area revealed consistent differences in color distribution between control and pathological groups. The green and blue channels showed statistically significant variation, and the Mahalanobis distance analysis confirmed that these color-based differences can be quantitatively measured.
The integration of these two approaches, vascular texture and fundus color analysis, offers a comprehensive, non-invasive method for characterizing retinal abnormalities. Notably, this is the first study, to the author’s knowledge, to apply both Haralick texture features and color histogram modelling to the same set of retinal images for disease classification purposes.
These findings support the potential of combining structural and chromatic biomarkers in retinal fundus imaging to aid in the early detection, differentiation, and monitoring of retinal pathologies.
While the sample size is limited, this study was conceived as a proof-of-concept to demonstrate the feasibility and diagnostic potential of combining vessel-specific texture features with global fundus color distribution metrics for retinal disease classification. The results underscore the strength of this dual-framework approach in capturing structural and chromatic alterations associated with retinal pathology.
Using 45 images from the HRF database allowed for a controlled and well-annotated setting but naturally restricts generalizability. To minimize bias, the dataset was balanced across groups and uniformly preprocessed.
Future work will focus on validating and extending this methodology with larger, more diverse datasets to assess its robustness, scalability, and clinical relevance in broader diagnostic and screening contexts.
Clinical Implications:
The results support the use of non-invasive texture and color-based biomarkers in the early screening and differential diagnosis of retinal diseases. The dual analysis approach offers a potential pathway for automated decision support systems in ophthalmology, particularly in resource-limited settings where fundus imaging is more accessible than advanced imaging modalities.
Limitations:
- The sample size, though representative, may limit generalizability across broader populations.
- The study was based on 2D fundus imaging, which does not capture depth information or fine capillary detail.
- While recent advances in retinal image analysis have been dominated by deep learning approaches such as DR-VNet and OCE-Net, this study focuses on classical handcrafted features, specifically vascular texture and color histogram analysis, which offer greater interpretability and computational efficiency. Direct benchmarking against these deep learning models was beyond the current scope due to dataset size and resource constraints. Nevertheless, this method provides a complementary perspective that can be particularly valuable in settings with limited annotated data.
Future Directions:
- Integration with deep learning frameworks could enable automated feature extraction and classification at scale.
- Further validation on larger and more diverse datasets from multiple clinical centers is needed to confirm robustness and reproducibility.

Funding

This research received no external funding.

Institutional Review Board Statement

This study exclusively used publicly available and fully anonymized retinal fundus images from the High-Resolution Fundus (HRF) database [20]. As the research did not involve human subjects or the collection of new identifiable data, it was determined to be exempt from review by an Institutional Review Board. All methods and procedures were performed in accordance with the ethical guidelines outlined in the Declaration of Helsinki.

Informed Consent Statement

The fundus images were obtained from publicly available databases, and all personal identifying information was removed to protect patient privacy.

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

The author declares no conflicts of interest.

References

  1. Ahmad, A.; Mansoor, A.B.; Mumtaz, R.; Khan, M.; Mirza, S. Image processing and classification in diabetic retinopathy: A review. In Proceedings of the 2014 5th European Workshop on Visual Information Processing (EUVIP), Paris, France, 10–12 December 2014; pp. 1–6. [Google Scholar]
  2. Kumar, B.N.; Chauhan, R.; Dahiya, N. Detection of Glaucoma using image processing techniques: A review. In Proceedings of the 2016 International Conference on Microelectronics, Computing and Communications (MicroCom), Durgapur, India, 23–25 January 2016; pp. 1–6. [Google Scholar]
  3. Sarhan, A.; Rokne, J.; Alhajj, R. Glaucoma detection using image processing techniques: A literature review. Comput. Med. Imaging Graph. 2019, 78, 101657. [Google Scholar] [CrossRef] [PubMed]
  4. Chan, K.K.; Tang, F.; Tham, C.C.; Young, A.L.; Cheung, C.Y. Retinal vasculature in glaucoma: A review. BMJ Open Ophthalmol. 2017, 1, e000032. [Google Scholar] [CrossRef] [PubMed]
  5. Barber, A.J. A new view of diabetic retinopathy: A neurodegenerative disease of the eye. Prog. Neuropsychopharmacol. Biol. Psychiatry 2003, 27, 283–290. [Google Scholar] [CrossRef]
  6. Nguyen, T.T.; Wong, T.Y. Retinal vascular changes and diabetic retinopathy. Curr. Diab. Rep. 2009, 9, 277–283. [Google Scholar] [CrossRef] [PubMed]
  7. Selvam, S.; Kumar, T.; Fruttiger, M. Retinal vasculature development in health and disease. Prog. Retin. Eye Res. 2018, 63, 1–19. [Google Scholar] [CrossRef]
  8. Ahmad, R.; Mohanty, B.K. Chronic kidney disease stage identification using texture analysis of ultrasound images. Biomed. Signal Process. Control 2021, 69, 102695. [Google Scholar] [CrossRef]
  9. Tesař, L.; Shimizu, A.; Smutek, D.; Kobatake, H.; Nawano, S. Medical image analysis of 3D CT images based on extension of Haralick texture features. Comput. Med. Imaging Graph. 2008, 32, 513–520. [Google Scholar] [CrossRef]
  10. Youssef, S.M.; Korany, E.A.; Salem, R.M. Contourlet-based feature extraction for computer aided diagnosis of medical patterns. In Proceedings of the 2011 IEEE 11th International Conference on Computer and Information Technology, Pafos, Cyprus, 31 August–2 September 2011; pp. 481–486. [Google Scholar]
  11. Haralick, R.M. Statistical and structural approaches to texture. Proc. IEEE 1979, 67, 786–804. [Google Scholar] [CrossRef]
  12. Gupta, S.; Sisodia, D.S. Automated detection of diabetic retinopathy from gray-scale fundus images using GLCM and GLRLM-based textural features—A comparative study. In Intelligent Computing Techniques in Biomedical Imaging; Elsevier: Amsterdam, The Netherlands, 2025; pp. 251–259. [Google Scholar]
  13. Stanley, R.J.; Moss, R.H.; Van Stoecker, W.; Aggarwal, C. A fuzzy-based histogram analysis technique for skin lesion discrimination in dermatology clinical images. Comput. Med. Imaging Graph. 2003, 27, 387–396. [Google Scholar] [CrossRef]
  14. Zhou, J.; Zhang, Q.; Zhang, B.; Chen, X. TongueNet: A precise and fast tongue segmentation system using U-Net with a morphological processing layer. Appl. Sci. 2019, 9, 3128. [Google Scholar] [CrossRef]
  15. Igalla-El Youssfi, A.; López-Alonso, J.M. Fractal and multifractal new metrics for pathological characterization and quantification in diabetic retinopathy and glaucoma. Measurement 2025, 256, 118561. [Google Scholar] [CrossRef]
  16. Karaali, A.; Dahyot, R.; Sexton, D.J. DR-VNet: Retinal vessel segmentation via dense residual UNet. In Proceedings of the International Conference on Pattern Recognition and Artificial Intelligence, Xiamen, China, 23–25 September 2022; pp. 198–210. [Google Scholar]
  17. Wei, X.; Yang, K.; Bzdok, D.; Li, Y. Orientation and context entangled network for retinal vessel segmentation. Expert Syst. Appl. 2023, 217, 119443. [Google Scholar] [CrossRef]
  18. Li, M.; Zhou, S.; Chen, C.; Zhang, Y.; Liu, D.; Xiong, Z. Retinal vessel segmentation with pixel-wise adaptive filters. In Proceedings of the 2022 IEEE 19th International Symposium on Biomedical Imaging (ISBI), Kolkata, India, 28–31 March 2022; pp. 1–5. [Google Scholar]
  19. Shamsan, A.; Senan, E.M.; Shatnawi, H.S.A. Automatic classification of colour fundus images for prediction eye disease types based on hybrid features. Diagnostics 2023, 13, 1706. [Google Scholar] [CrossRef]
  20. Odstrcilik, J.; Kolar, R.; Budai, A.; Hornegger, J.; Jan, J.; Gazarek, J.; Kubena, T.; Cernosek, P.; Svoboda, O.; Angelopoulou, E. Retinal vessel segmentation by improved matched filtering: Evaluation on a new high-resolution fundus image database. IET Image Process. 2013, 7, 373–383. [Google Scholar] [CrossRef]
  21. Haralick, R.M.; Shanmugam, K.; Dinstein, I.H. Textural features for image classification. IEEE Trans. Syst. Man Cybern. 1973, SMC-3, 610–621. [Google Scholar] [CrossRef]
  22. Shabat, A.M.; Tapamo, J.-R. A comparative study of the use of local directional pattern for texture-based informal settlement classification. J. Appl. Res. Technol. 2017, 15, 250–258. [Google Scholar] [CrossRef]
  23. Kumar Ram, R. Segmentation of Blood Vessels in Retinal Images. Version 1.0.0. MATLAB Central File Exchange. Available online: https://www.mathworks.com/matlabcentral/fileexchange/102364-segmentation-of-blood-vessels-in-retinal-images (accessed on 30 July 2025).
  24. Monzel, R. haralickTextureFeatures. Version 1.3.1.0. MATLAB Central File Exchange. Available online: https://es.mathworks.com/matlabcentral/fileexchange/58769-haralicktexturefeatures (accessed on 30 July 2025).
  25. Plataniotis, K.N.; Venetsanopoulos, A.N. Companion image processing software. In Color Image Processing and Applications; Springer: Berlin/Heidelberg, Germany, 2000; pp. 349–352. [Google Scholar]
  26. Swain, M.J.; Ballard, D.H. Color indexing. Int. J. Comput. Vis. 1991, 7, 11–32. [Google Scholar] [CrossRef]
  27. Hu, H. Advanced Man-Machine Interaction-Fundamentals and Implementation. Ind. Robot. Int. J. 2007, 34. [Google Scholar] [CrossRef]
  28. Gonzalez, R.C. Digital Image Processing; Pearson Education India: Delhi, India, 2009. [Google Scholar]
  29. Saleem, A.; Beghdadi, A.; Boashash, B. Image fusion-based contrast enhancement. EURASIP J. Image Video Process. 2012, 2012, 10. [Google Scholar] [CrossRef]
  30. Janney, J.B.; Roslin, S.E.; Kumar, S.K. Analysis of skin lesions using machine learning techniques. In Computational Intelligence and Its Applications in Healthcare; Elsevier: Amsterdam, The Netherlands, 2020; pp. 73–90. [Google Scholar]
  31. Xue, Y.; Mohamed, K.; Van Dyke, M.; Guner, D.; Sherizadeh, T. Quantifying the Texture of Coal Images with Different Lithotypes through Gray-Level Co-Occurrence Matrix. In Proceedings of the MINEXCHANGE SME Annual Conference and Expo, Society for Mining, Metallurgy and Exploration, Phoenix, AZ, USA, 25–28 February 2024. [Google Scholar]
  32. Durbin, M.K.; An, L.; Shemonski, N.D.; Soares, M.; Santos, T.; Lopes, M.; Neves, C.; Cunha-Vaz, J. Quantification of retinal microvascular density in optical coherence tomographic angiography images in diabetic retinopathy. JAMA Ophthalmol. 2017, 135, 370–376. [Google Scholar] [CrossRef]
  33. Romero-Oraá, R.; Jiménez-García, J.; García, M.; López-Gálvez, M.I.; Oraá-Pérez, J.; Hornero, R. Entropy rate superpixel classification for automatic red lesion detection in fundus images. Entropy 2019, 21, 417. [Google Scholar] [CrossRef] [PubMed]
  34. Lee, J.; Zee, B.C.Y.; Li, Q. Detection of neovascularization based on fractal and texture analysis with interaction effects in diabetic retinopathy. PLoS ONE 2013, 8, e75699. [Google Scholar] [CrossRef] [PubMed]
  35. Zheng, Y.; Kwong, M.T.; MacCormick, I.J.; Beare, N.A.; Harding, S.P. A comprehensive texture segmentation framework for segmentation of capillary non-perfusion regions in fundus fluorescein angiograms. PLoS ONE 2014, 9, e93624. [Google Scholar] [CrossRef]
  36. Gayathri, S.; Krishna, A.K.; Gopi, V.P.; Palanisamy, P. Automated binary and multiclass classification of diabetic retinopathy using haralick and multiresolution features. IEEE Access 2020, 8, 57497–57504. [Google Scholar] [CrossRef]
  37. Patel, R.K.; Kashyap, M. Automated screening of glaucoma stages from retinal fundus images using BPS and LBP based GLCM features. Int. J. Imaging Syst. Technol. 2023, 33, 246–261. [Google Scholar] [CrossRef]
  38. Gupta, S.; Thakur, S.; Gupta, A. Comparative study of different machine learning models for automatic diabetic retinopathy detection using fundus image. Multimed. Tools Appl. 2024, 83, 34291–34322. [Google Scholar] [CrossRef]
Figure 1. Example retinal fundus images from the High-Resolution Fundus (HRF) database [20], available under the Creative Commons 4.0 Attribution License (CC BY 4.0). (A) Healthy retina. (B) Retina with glaucoma. (C) Retina with diabetic retinopathy.
Figure 2. Box plot representation of the eight Haralick texture features extracted from the segmented vasculature, shown for healthy (H), glaucomatous (G), and diabetic (DR) groups. The ‘+’ symbol represents outliers, and the red line in each subfigure indicates the median.
Figure 3. Average intensity histograms of red (R), green (G), and blue (B) channels for healthy and glaucoma groups.
Figure 4. Average intensity histograms of red (R), green (G), and blue (B) channels for healthy and DR groups.
Figure 5. Bivariate histograms of the red, green, and blue RGB values for each pixel to visualize the color distribution.
Table 1. Haralick texture features calculated from GLCMs.

| Texture Feature | Equation |
| --- | --- |
| Energy or angular second moment | $\sum_{i=1}^{N}\sum_{j=1}^{N} P(i,j)^2$ (3) |
| Contrast | $\sum_{i=1}^{N}\sum_{j=1}^{N} (i-j)^2\,P(i,j)$ (4) |
| Correlation | $\sum_{i=1}^{N}\sum_{j=1}^{N} \frac{(i-\mu_x)(j-\mu_y)}{\sigma_x \sigma_y}\,P(i,j)$ (5) |
| Variance | $\sum_{i=1}^{N}\sum_{j=1}^{N} (i-\mu)^2\,P(i,j)$ (6) |
| Sum average | $\sum_{k=2}^{2N} k\,P_{x+y}(k)$, where $P_{x+y}(k)=\sum_{i=1}^{N}\sum_{j=1}^{N} P(i,j)\big|_{i+j=k}$ (7) |
| Sum variance | $\sum_{k=2}^{2N} (k-\mu_{x+y})^2\,P_{x+y}(k)$, where $\mu_{x+y}=\sum_{k=2}^{2N} k\,P_{x+y}(k)$ (8) |
| Sum entropy | $-\sum_{k=2}^{2N} P_{x+y}(k)\log P_{x+y}(k)$ (9) |
| Entropy | $-\sum_{i=1}^{N}\sum_{j=1}^{N} P(i,j)\log P(i,j)$ (10) |
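As an illustration, the features in Table 1 can be evaluated directly from a normalized GLCM. The sketch below implements Eqs. (3), (4), and (10) in plain NumPy, assuming a single horizontal offset and quantization to 8 gray levels; these are illustrative choices, not the exact GLCM configuration used in this study:

```python
import numpy as np

def glcm(gray, levels=8):
    """Normalized gray-level co-occurrence matrix P(i, j) for the
    horizontal neighbor offset (0 degrees, distance 1)."""
    q = (gray.astype(np.float64) / 256 * levels).astype(int)  # quantize to `levels` bins
    P = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        P[i, j] += 1
    return P / P.sum()

def haralick_subset(P):
    """Energy (Eq. 3), contrast (Eq. 4), and entropy (Eq. 10) from a GLCM."""
    i, j = np.indices(P.shape)
    nz = P[P > 0]  # skip empty cells so log is defined
    return {
        "energy": float((P ** 2).sum()),
        "contrast": float(((i - j) ** 2 * P).sum()),
        "entropy": float(-(nz * np.log(nz)).sum()),
    }

rng = np.random.default_rng(3)
vessels = rng.integers(0, 256, (32, 32), dtype=np.uint8)  # stand-in for a segmented vessel map
feats = haralick_subset(glcm(vessels))
print(feats)
```

The remaining sum-based features of Table 1 follow the same pattern, accumulating $P_{x+y}(k)$ over the anti-diagonals $i+j=k$ of the matrix.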
Table 2. Results of Haralick texture features using co-occurrence matrix (comparison between healthy group and glaucomatous and DR groups).

| Texture Feature | Healthy (μ ± σ) | Glaucoma (μ ± σ) | DR (μ ± σ) | Diseased vs. Normal Controls (p-Value) |
| --- | --- | --- | --- | --- |
| Energy | 0.82 ± 0.02 | 0.86 ± 0.01 | 0.86 ± 0.02 | <0.05 |
| Contrast | 0.52 ± 0.05 | 0.55 ± 0.06 | 0.52 ± 0.07 | >0.05 |
| Correlation | 0.937 ± 0.005 | 0.912 ± 0.006 | 0.917 ± 0.006 | <0.05 |
| Variance | 6.8 ± 0.6 | 5.3 ± 0.4 | 5.3 ± 0.6 | <0.05 |
| Sum average | 3.3 ± 0.1 | 2.96 ± 0.08 | 3.0 ± 0.1 | <0.05 |
| Sum variance | 27.7 ± 2.0 | 19.1 ± 1.3 | 19.3 ± 2.2 | <0.05 |
| Sum entropy | 0.36 ± 0.02 | 0.30 ± 0.02 | 0.30 ± 0.03 | <0.05 |
| Entropy | 0.52 ± 0.03 | 0.44 ± 0.03 | 0.44 ± 0.05 | <0.05 |
Table 3. Mahalanobis distance for each color channel (R: red; G: green; B: blue).

| | R | G | B |
| --- | --- | --- | --- |
| Healthy vs. glaucoma | 3.45 | 3.50 | 4.20 |
| Healthy vs. DR | 4.26 | 3.88 | 3.56 |

Share and Cite

Sijilmassi, O. Non-Invasive Retinal Pathology Assessment Using Haralick-Based Vascular Texture and Global Fundus Color Distribution Analysis. J. Imaging 2025, 11, 321. https://doi.org/10.3390/jimaging11090321

