1. Introduction
Age-related macular degeneration (AMD) is a complex, heterogeneous retinal disorder and a common cause of vision loss in the elderly population [
1,
2]. AMD primarily affects the macula, the central region of the retina. It is clinically divided into two different types, namely wet AMD and dry AMD [
3]. Wet AMD is characterised by the presence of choroidal neovascularisation (CNV), which involves the growth of abnormal blood vessels and the presence of fluid in the central retina [
3]. Dry AMD involves outer retinal thinning and is characterised by degeneration of retinal pigment epithelial cells, and underlying choroidal capillaries [
3]. Dry AMD is the more common subtype and is linked to gradual vision loss, while wet AMD is associated with more rapid vision impairment [
3]. Notably, wet AMD can be successfully treated with intravitreal injections. As a result, early detection and treatment are critical, and prompt diagnosis has been associated with better outcomes [
4]. Early detection of areas related to CNV lesions and the distinction between subjects with wet AMD and dry AMD are consequently prioritised in terms of effort and healthcare resources.
To aid the assessment and management of AMD, various retinal vasculature imaging modalities have been developed. Optical coherence tomography angiography (OCTA) is a dye-free technology that provides non-invasive, volumetric (three-dimensional) imaging. OCTA provides detailed visualisation of blood circulation in the retinal vascular layers and, in contrast to other established fundus imaging techniques, such as fluorescein angiography (FA) and indocyanine green angiography (IGA), OCTA is both rapid and non-invasive [
5,
6,
7]. OCTA characterises both moving and static elements in the retinal and choroidal vessels, enabling the visualisation of vascular abnormalities and other vascular details that can assist in differentiating healthy vascular appearance from that associated with dry and wet AMD.
The current clinical practice for detecting CNV lesions and assessing treatment effectiveness in wet AMD involves visually evaluating the textural appearance of OCTA images [
8,
9]. However, this process remains challenging due to the substantial volume of image data per OCTA scan, individual variations in textural patterns, and the visual similarity of CNV, non-CNV, and healthy vascular regions across different patients [
10]. The textural appearance of retinal vascular layers in OCTA images for eyes with varying conditions is illustrated in
Figure 1. These images demonstrate vascular layers in eyes without vascular pathologies, with dry AMD, and with wet AMD, underscoring the intricate patterns of blood vessels across the different layers.
OCTA has become an essential modality for diagnosing and monitoring AMD, particularly for detecting and characterising CNV in wet AMD. Its depth-resolved imaging and ability to visualise microvascular flow without dye injection allow for more precise delineation of neovascular membranes. Specifically, OCTA facilitates direct visualisation and measurement of CNV area, location, vessel density, and flow patterns [
7,
11]. Unlike FA and IGA, which often obscure neovascular details due to dye leakage, OCTA offers high-resolution, dye-free images that preserve vascular integrity. This feature enables a more accurate assessment of disease progression and the effectiveness of anti-VEGF treatments in clinical follow-up [
12,
13].
OCTA imaging enables the visualisation of the vascular architecture within four depth-resolved layers: the superficial inner retina, the deep inner retinal layer, the outer retinal layer, and the choriocapillaris. Each layer contributes specific insights into AMD pathology. The superficial and deep inner retinal layers, although more commonly affected in diabetic retinopathy and glaucoma, may also show changes in AMD patients. These include reduced capillary density and flow voids, potentially linked to retinal pigment epithelium (RPE) dysfunction or secondary ischaemic processes. The outer retinal layer may display hyperintense vascular networks in cases of neovascular AMD, indicating pathological invasion of new vessels from the choroid through Bruch’s membrane. The choriocapillaris layer is particularly relevant in AMD and, among others, OCTA can detect choriocapillaris dropout and hypoperfusion that may precede or accompany RPE degeneration in dry AMD [
11,
14].
These vascular changes, including CNV branching patterns, capillary non-perfusion, and choriocapillaris signal attenuation, can be quantitatively assessed using OCTA-derived metrics including vessel density, perfusion density, and CNV area. Such quantitative analyses are key for monitoring disease progression, evaluating therapeutic outcomes, and identifying patients at increased risk of progressing from dry to wet AMD [
7,
12]. By enabling non-invasive, high-resolution visualisation of these vascular alterations, OCTA provides clinicians with an invaluable tool for both diagnosis and prognosis.
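To illustrate how such OCTA-derived metrics are computed in practice, the sketch below derives vessel density and flow area from a binary flow mask. The 3 mm field width and the mask itself are hypothetical inputs, and the function name is introduced here for illustration only.

```python
import numpy as np

def vessel_metrics(flow_mask, field_mm=3.0):
    """Vessel density (fraction of flow pixels) and flow area in mm^2 for a square scan field."""
    flow_mask = np.asarray(flow_mask, dtype=bool)
    density = flow_mask.mean()
    # Physical area of one pixel, given the en face field of view (e.g., 3 mm x 3 mm)
    pixel_area = (field_mm / flow_mask.shape[0]) * (field_mm / flow_mask.shape[1])
    area_mm2 = flow_mask.sum() * pixel_area
    return density, area_mm2
```

The same pixel-counting idea underlies CNV area measurement once a lesion mask is available.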
The texture of images contains rich details describing complex visual patterns distinguishable by brightness, size, or shape [
15]. In medical imaging, texture information relates to the macro- and micro-structural properties of images representing biomedical tissues [
16]. Clinicians are trained to interpret, establish standardised qualitative features, and link visual texture patterns to specific pathologies in medical images. Early attempts to identify ocular vascular pathologies related to AMD in OCTA image data focused on qualitative analysis approaches [
8,
9,
17,
18,
19,
20,
21,
22].
However, qualitative features often fail to fully describe texture characteristics and are limited to low-level terms like uniformity, randomness, smoothness, and coarseness, whereas human perception of textures is far richer [
16]. Differences in mapping patterns may lead to interpretation errors with undesirable consequences [
16,
23,
24,
25]. These issues stem from the complexity of human biology, anatomy, image acquisition techniques, and observer training [
16], worsened by the absence of specific diagnostic criteria for OCTA images in retinal diseases like AMD [
13]. Moreover, recognising textural information related to higher-order statistics or spectral properties is challenging for the human eye [
26].
Consequently, automating OCTA image analysis is expected to assist ophthalmologists in extracting meaningful features that may be visually challenging to distinguish. Additional benefits include reducing ophthalmologists’ workload while enhancing efficiency, consistency, and reliability in clinical diagnosis. Automation can reduce patient waiting times and dependency on subjective interpretation.
To overcome the challenges identified in the literature, this study presents two automated methods for detecting and quantifying AMD in OCTA image data. The first algorithm is built upon the local binary pattern (LBP) descriptor [27]. This descriptor extracts local texture features that are invariant to illumination changes and rotation, making it well suited to analysing subtle vascular texture patterns in OCTA images. The extracted LBP features are used directly by a supervised classifier that distinguishes between healthy, dry AMD, and wet AMD images. The second algorithm extends the first by incorporating a dimensionality reduction step using principal component analysis (PCA). The LBP features are first extracted and then transformed into a lower-dimensional space using PCA, yielding a hybrid descriptor referred to as LBP-PCA. This transformation captures the most significant variance in the texture patterns and suppresses noise and redundancy. The resulting features are subsequently used by a supervised classifier that distinguishes between healthy, dry AMD, and wet AMD images. These algorithmic pipelines are fully automated, with no requirement for manual feature selection or expert intervention, ensuring clinical scalability and reproducibility. In both cases, the trained classifiers were evaluated on three distinct diagnostic tasks: distinguishing healthy from wet AMD images, differentiating dry from wet AMD, and identifying CNV versus non-CNV lesions. The algorithms demonstrated high accuracy and robustness across OCTA image datasets obtained from two independent hospitals.
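A minimal sketch of the two pipelines is given below. The rotation-invariant uniform (riu2) LBP histogram is implemented with plain NumPy; the neighbourhood (P = 8, R = 1, axis-aligned), the PCA component count, and the SVM kernel are illustrative assumptions, not the exact configuration used in the study.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

def lbp_riu2_histogram(img):
    """10-bin rotation-invariant uniform LBP histogram (P=8, R=1, axis-aligned neighbours)."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    centre = img[1:-1, 1:-1]
    # 8 neighbours listed in circular order around the centre pixel
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    bits = np.stack([(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= centre)
                     for dy, dx in shifts]).astype(int)
    # Uniform patterns have at most two 0<->1 transitions along the circle
    transitions = np.sum(bits != np.roll(bits, -1, axis=0), axis=0)
    codes = np.where(transitions <= 2, bits.sum(axis=0), 9)  # riu2: 0..8 uniform, 9 otherwise
    hist = np.bincount(codes.ravel(), minlength=10).astype(float)
    return hist / hist.sum()

# Algorithm 1: whole LBP features -> supervised classifier
clf_whole = SVC(kernel="rbf")
# Algorithm 2: LBP features -> PCA -> supervised classifier (the hybrid LBP-PCA descriptor)
clf_reduced = make_pipeline(PCA(n_components=5), SVC(kernel="rbf"))
```

Either classifier is then trained on histograms extracted from labelled OCTA images of each vascular layer.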
Therefore, the key contributions are summarised as follows:
Development of two domain-specific texture descriptors for AMD detection in OCTA images: a supervised descriptor based on greyscale and rotation-invariant uniform local binary patterns (LBP) [27], and a hybrid descriptor (LBP-PCA) that integrates LBP with principal component analysis (PCA).
Construction of two fully automated classification algorithms for AMD detection: the first utilises the extracted LBP features directly, while the second leverages the PCA-transformed features.
The proposed techniques have contributed to the development of three diagnostic applications for OCTA-based AMD pathology detection: classifying healthy images from those with wet AMD, differentiating between dry and wet AMD images, and identifying OCTA images with CNV lesions versus non-CNV lesions.
These algorithmic pipelines are evaluated using diverse OCTA image datasets from two hospitals, the Manchester Royal Eye Hospital and the Moorfields Eye Hospital, demonstrating promising results.
Related Works
Several studies [
8,
9,
18,
19,
20,
21,
22,
28,
29,
30,
31,
32,
33] have focused on quantifying and identifying AMD in OCTA images. However, automated detection of AMD in OCTA data has received limited attention. Recent advancements in automating OCTA texture analysis for AMD generally fall into two categories: image segmentation and image classification.
The automation of OCTA image texture analysis through image segmentation aims to partition the image into disjoint regions, simplifying its complex representation into a more interpretable form by highlighting key areas such as CNV regions. Recent studies addressing automated segmentation and quantification of CNV lesions in OCTA images for AMD patients include works by Jia et al. [
6], Liu et al. [
10], Zhang et al. [
34], and Taibouni et al. [
35].
Jia et al. [
6] proposed an early automated segmentation approach for analysing OCTA images to detect and quantify CNV lesion regions. Their method utilises two-dimensional greyscale OCTA images of the deep inner and outer retinal layers, captured from a 3 mm × 3 mm field of view centred around the fovea, with imaging depth manually adjusted to highlight CNV areas. The study involved 10 eyes (five with wet AMD and five normal controls). Their segmentation begins by applying a
-pixel Gaussian filter to the deep inner retinal layer image to create a binary map of large vessel projections, removing superficial blood flow artefacts. This filtered image is subtracted from the outer retinal layer image to eliminate large vessel projections, followed by a
-pixel Gaussian filter to produce a binary outer retinal flow map devoid of residual projections. This process yields a clean map of the typically avascular outer retinal layer, enabling further analysis, such as measuring CNV area size.
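The filter-subtract-threshold scheme can be sketched as follows. This is an illustrative reconstruction rather than the authors' code: the sigma and flow-threshold values are placeholders (the paper's exact filter sizes are not reproduced here), and inputs are assumed to be greyscale en face images normalised to [0, 1].

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def outer_retinal_flow_map(deep_layer, outer_layer, sigma=2.0, thresh=0.3):
    """Sketch of the projection-removal idea: suppress large-vessel projections, then binarise."""
    large_vessels = gaussian_filter(deep_layer, sigma) > thresh   # binary map of large-vessel projections
    cleaned = np.where(large_vessels, 0.0, outer_layer)           # subtract projections from the outer layer
    return gaussian_filter(cleaned, sigma) > thresh               # binary outer retinal flow map

def cnv_area_fraction(flow_map):
    # Proportion of flow pixels in the (normally avascular) outer retina, a proxy for CNV area
    return float(np.mean(flow_map))
```

Because the healthy outer retina is avascular, any residual flow in the cleaned map is a candidate CNV signal.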
The segmentation scheme proposed by Liu et al. [
10], building upon the work of Jia et al. [
6], introduces enhancements for accurate recognition of choroidal neovascularisation (CNV) areas in greyscale OCTA images of the deep inner and outer retinal layers, captured from a 3 mm × 3 mm field of view centred around the fovea. The study involved OCTA images manually adjusted for optimal imaging depth from seven eyes of participants diagnosed with wet AMD. Their method, which assumes the CNV region occupies a large portion of the OCTA image, begins with pre-processing using a
-pixel Gaussian filter, followed by subtraction to highlight CNV regions by removing deep inner retinal blood vessels. A context-aware saliency model based on orientation, brightness, and location refines the CNV area by eliminating noise and generating a saliency map, which is further processed through nonlinear filtering, thresholding, and morphological operations to delineate the CNV boundary. The final CNV area is measured by estimating the proportion of flow pixels within the boundary [
6,
10].
Zhang et al. [
34] proposed an automated segmentation algorithm for identifying CNV lesions and quantifying the size of CNV regions in OCTA images, using two-dimensional greyscale images of the outer retinal layers captured from 3 mm × 3 mm and 6 mm × 6 mm fields of view centred around the fovea. The study involved 27 eyes from 23 AMD-diagnosed subjects, employing a semi-automated segmentation procedure with manual corrections to imaging depth levels for precise visualisation of CNV lesions. The algorithm follows six steps: inputting an OCTA image, enhancing contrast via adaptive thresholding, smoothing with a Gaussian filter, thresholding to create a binary image, applying morphological dilation to detect CNV boundaries, and estimating lesion size based on pixel proportions. Results showed reliable CNV measurements with 3 mm × 3 mm OCTA images, but challenges in accurately quantifying CNV regions with 6 mm × 6 mm images.
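The six steps can be sketched as below. The block size, sigma, and offset are illustrative assumptions, and a uniform-filter local mean stands in for whatever adaptive-thresholding variant the authors used.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, uniform_filter, binary_dilation

def cnv_lesion_fraction(octa, block=9, sigma=1.0, offset=0.0):
    """Step 1: input OCTA image; steps 2-6 follow the pipeline described by Zhang et al. [34]."""
    octa = np.asarray(octa, dtype=float)
    enhanced = octa - uniform_filter(octa, size=block)   # step 2: contrast vs. local mean (adaptive baseline)
    smoothed = gaussian_filter(enhanced, sigma)          # step 3: Gaussian smoothing
    binary = smoothed > offset                           # step 4: thresholding to a binary image
    dilated = binary_dilation(binary)                    # step 5: morphological dilation of CNV boundaries
    return float(dilated.mean())                         # step 6: lesion size as a pixel proportion
```

The returned proportion can be converted to an area once the physical field of view is known.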
Taibouni et al. [
35] developed an automated quantification and segmentation algorithm to distinguish various shapes, sizes, and locations of CNV lesions in AMD patients using two-dimensional greyscale OCTA images of the outer retinal layers, captured from a 3 mm × 3 mm field of view centred on the macula. The study included 54 eyes from 54 wet AMD-diagnosed patients, with manual adjustments to imaging depth levels ensuring optimal visualisation of CNV regions. Patients were divided into two categories based on CNV lesion topology: densely crowded networks without branching patterns (category 1) and networks with noticeable separate branching patterns (category 2). A distinct segmentation algorithm was designed for each category, involving common initial steps of contrast enhancement and median filtering to reduce noise and delineate CNV regions. Clinicians manually marked lesion locations for segmentation, and pixel-intensity-based measures, such as the BVD metric, quantified the proportion of blood vessels in CNV regions. The algorithm for category 2 patients demonstrated superior performance compared to the one for category 1.
Automated segmentation algorithms [
6,
10,
34,
35] offer notable advantages to clinicians, such as enabling rapid and accurate detection and quantification of CNV lesions in AMD patients. These approaches reduce clinicians’ efforts in interpreting complex OCTA images of retinal vascular layers. Despite their potential clinical benefits and innovative contributions, these algorithms have limitations, including the exclusion of some AMD patients from automated analysis in earlier studies [
10,
34]. In certain cases, patients were grouped based on CNV lesion topology or texture characteristics, leading to the development of distinct algorithms [
35]. This grouping often stemmed from challenges such as graders’ inability to identify CNV regions or CNV lesions being insufficiently perceptible or fully contained within OCTA images [
10,
34,
35]. Accurate detection of CNV areas in OCTA images remains essential for precise quantification over regions of interest.
Moreover, many automated segmentation methods [
6,
10,
34,
35] rely on expert clinicians to manually adjust the depth levels of OCTA imaging to capture optimal details of CNV lesion regions. While this adjustment can enhance image quality, it introduces potential bias, limiting the utility of the OCTA technique in providing automated segmentation of retinal vascular layers. These limitations hinder the clinical applicability and accuracy of most existing approaches for analysing AMD patients [
6,
10,
34,
35]. Consequently, the benefits of automated segmentation techniques and the broader adoption of OCTA imaging remain constrained by these challenges.
Image or texture classification tasks differ from image segmentation, as they aim to assign an entire image or texture region to a predefined category based on training samples. While several studies [
6,
10,
34,
35] focus on automating the segmentation of OCTA image textures for AMD, automated classification of OCTA images with AMD is underexplored. This gap arises due to the novelty of the OCTA imaging technique, the scarcity of labelled OCTA datasets for AMD, and the difficulty in obtaining healthy control samples essential for classification tasks. Despite these challenges, notable advancements in automated OCTA image classification for AMD include works by Vaghefi et al. [
36] and Wang et al. [
37].
The study by Vaghefi et al. [
36] explored the integration of various ocular vascular imaging modalities (OCTA, OCT, and colour fundus photography (CFP)) to enhance dry AMD detection accuracy compared to single-modality analysis. CFP, unlike techniques such as FA and IGA, does not require a contrast agent, using white light to capture full-colour images of the retina [
38]. The study involved 75 participants divided into three groups: young healthy (YH), old healthy (OH), and dry AMD patients. Each participant underwent multiple imaging techniques, including CFP, OCT, and OCTA, ensuring comprehensive data collection. The study used raw image data without pre-processing, with individual retinal and choroidal layers from OCTA identified and extracted automatically.
Vaghefi et al. [
36] employed deep learning to develop and evaluate Convolutional Neural Networks (CNNs)-based image classification models using single, dual, and multimodal data combinations. Three designs were tested: single-modality CNNs trained separately on CFP, OCT, and OCTA; dual-modality CNNs combining OCT + OCTA and OCT + CFP; a multimodality CNN combining OCT + OCTA + CFP. These models classified participants into YH, OH, or dry AMD groups. Results showed that single-modality CNNs using CFP and OCTA data were most effective for detecting dry AMD, while the OCT-based CNN was better at identifying ageing groups (YH and OH). Diagnostic accuracy improved with multimodal data, with the multimodality CNN achieving near-perfect accuracy (99%).
The evaluation confirmed that combining imaging modalities significantly enhances diagnostic performance for dry AMD and ageing detection. Single-modality OCTA-based CNNs achieved 91% accuracy, dual-modality models (OCT + OCTA) reached 96%, and the multimodality CNN (OCT + OCTA + CFP) yielded 99% accuracy.
Table 1 summarises the performance improvements across CNN designs, highlighting the added diagnostic value of integrating multimodal data. This underscores the potential of leveraging diverse imaging techniques for advancing retinal image analysis and understanding.
The study by Wang et al. [
37] introduces a novel automated algorithm for identifying and segmenting CNV lesions in OCTA images, specifically addressing late-stage AMD (wet AMD). The algorithm integrates classification and segmentation tasks using two CNN models, which complement each other to classify OCTA images based on the presence of CNV lesions and segment these areas if present. The process begins by classifying OCTA images as CNV-positive or CNV-free, followed by segmenting CNV areas in the identified cases. The algorithm leverages diverse and information-rich OCTA datasets, including images from various retinal layers and volumetric data, ensuring robust training. Pre-processing steps, such as depth-level adjustments, layer subtraction, and manual annotation of CNV lesions by clinicians, were employed to construct accurate ground truth data.
The datasets comprised 1676 OCTA images from 607 eyes, including 117 CNV cases and 490 non-CNV controls. These datasets underwent rigorous pre-processing to ensure accurate segmentation and classification. In training, diverse representations of OCTA images were used, while testing relied on a single OCTA image of the outer retinal layer per eye, chosen for its clarity in visualising CNV lesions. The algorithm was evaluated with distinct datasets to avoid overlap between training and testing, ensuring unbiased results. The testing set included 50 CNV and 60 non-CNV eyes, reflecting the outer retinal layer’s prominence in detecting CNV lesions. The training set had 764 images from CNV eyes and 802 from non-CNV eyes, showcasing a broad representation of conditions.
Evaluation of the algorithm demonstrated exceptional performance. The classification achieved a sensitivity of 100% and specificity of 95%, with an AUC measure of 99%, indicating near-perfect diagnostic accuracy. The segmentation tasks were equally successful, producing precise blood vessel masks corresponding to CNV lesions. The robust performance underscores the algorithm’s potential for advancing OCTA image analysis, with its dual focus on classification and segmentation paving the way for improved diagnostic workflows in clinical settings.
From an image segmentation perspective, the automated algorithm by Wang et al. [
37] demonstrated promising results in detecting CNV lesions in OCTA images, achieving a mean intersection over union (IOU) value of
. This measure reflects the overlap accuracy between manually and automatically segmented regions, with values over 0.50 indicating reliable performance. In comparison, Liu et al. [
10] achieved a mean IOU of
on the same dataset. These findings underscore the capability of automated segmentation schemes to facilitate precise lesion identification. Moreover, automated diagnostic algorithms, such as those proposed by Wang et al. [
37] and Vaghefi et al. [
36], offer significant clinical advantages by reducing clinician workload and minimising human errors, while maximising the potential of imaging modalities like OCTA in understanding conditions such as AMD.
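The IOU measure referred to above is straightforward to compute from binary segmentation masks; a minimal version (the masks here are hypothetical inputs):

```python
import numpy as np

def iou(pred_mask, true_mask):
    """Intersection over union between two binary segmentation masks."""
    pred = np.asarray(pred_mask, dtype=bool)
    true = np.asarray(true_mask, dtype=bool)
    union = np.logical_or(pred, true).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return float(np.logical_and(pred, true).sum() / union)
```

Values over 0.50 are conventionally read as reliable overlap, matching the threshold quoted above.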
Vaghefi et al. [
36] highlighted the sensitivity of OCT, OCTA, and CFP imaging techniques to various ocular conditions, revealing nuanced insights like the higher sensitivity of OCT to ageing and CFP/OCTA to vascular pathologies such as AMD. Similarly, Wang et al. [
37] demonstrated that automating CNV lesion segmentation improved patient care by streamlining tasks and identifying potential biomarkers for disease progression. However, both algorithms face challenges, including dependency on small, labelled datasets, which limits their generalisability. Rigorous evaluation using large datasets or cross-validation strategies is crucial for enhancing their robustness but is hindered by the computational demands and limited availability of labelled OCTA images due to the technique’s recent introduction.
The reliance on deep learning-based algorithms [
36,
37] introduces additional complexities, such as overfitting or underfitting due to inadequate datasets, and resource-intensive requirements like GPU hardware. While these challenges can be mitigated by combining imaging data from various modalities, as demonstrated by Vaghefi et al. [
36], or creating binary classifications, as in Wang et al.’s [
37] work, such approaches are not without limitations. For instance, heterogeneous and noisy data can obscure the identification of robust, representative features, and the required pre-processing steps, such as manually adjusting imaging depth and segmenting regions, introduce subjectivity and potential biases, complicating the results.
Despite the limitations, OCTA remains a valuable imaging modality, albeit with restricted availability of datasets for ocular conditions like AMD [
36,
37]. Current studies, including Wang et al. [
37] and Vaghefi et al. [
36], serve as proofs of concept, demonstrating the potential of automated algorithms to address clinical challenges. However, these algorithms require further validation with larger datasets to ensure reliability. The ideal solution under current constraints would involve developing algorithms that maintain high diagnostic accuracy with limited labelled data, enabling the effective utilisation of OCTA for AMD diagnosis and monitoring.
3. Results
The evaluation of the algorithms in
Section 2.2.1 and
Section 2.2.2 was conducted on the diverse OCTA image datasets outlined in
Section 2.1. This evaluation framed the problem as binary image classification on OCTA data from Manchester Royal Eye Hospital and Moorfields Eye Hospital. For Manchester, classification distinguished healthy from wet AMD. For Moorfields, it differentiated dry AMD from wet AMD. Additionally, a further binary classification on Moorfields data distinguished CNV (wet AMD plus secondary CNV) from non-CNV (dry AMD), as secondary CNV images share vascular abnormalities with wet AMD.
Let $f_l$ denote the feature vector (histogram) extracted from the OCTA image of ocular vascular layer $l$ using one of the texture descriptors defined earlier. Consequently, for all conducted experiments, binary image classifications were performed as follows:
- 1. Based on the individual layer feature vector, for a single ocular vascular layer $l$, the feature vector $F$ is directly represented by the following histogram:

$F = f_l$

- 2. Based on concatenating two layers, for two ocular vascular layers $l_1$ and $l_2$, the concatenated feature vector $F$ is provided by the following:

$F = f_{l_1} \oplus f_{l_2}$

where $\oplus$ denotes the concatenation operation.
- 3. Based on concatenating three layers, for three ocular vascular layers $l_1$, $l_2$, and $l_3$, the concatenated feature vector $F$ is as follows:

$F = f_{l_1} \oplus f_{l_2} \oplus f_{l_3}$

- 4. Based on concatenating all layers, for all four ocular vascular layers (namely, the superficial inner retina, the deep inner retina, the outer retina, and the choriocapillaris), the global feature vector $F$ is as follows:

$F = f_{l_1} \oplus f_{l_2} \oplus f_{l_3} \oplus f_{l_4}$

Therefore, the following cases summarise the various binary image classifications implemented:
- 1. Single Layer: Classification is performed individually using $F = f_l$, where $l$ is any one of the ocular vascular layers.
- 2. Two Layers: Classification is performed using $F = f_{l_1} \oplus f_{l_2}$, where $l_1$ and $l_2$ are any two ocular vascular layers.
- 3. Three Layers: Classification is performed using $F = f_{l_1} \oplus f_{l_2} \oplus f_{l_3}$, where $l_1$, $l_2$, and $l_3$ are any three ocular vascular layers.
- 4. All Layers: Classification is performed using the global feature vector $F = f_{l_1} \oplus f_{l_2} \oplus f_{l_3} \oplus f_{l_4}$ concatenating all ocular vascular layers.
There are several motivations for performing binary image classification in these ways. Analysing each OCTA image from separate ocular vascular layers may help identify the most predictive layer containing information on vascular abnormalities linked to AMD, such as CNV regions. Furthermore, the textural appearance of vascular pathologies related to AMD can be more perceptible in certain ocular vascular layers than in others. Therefore, performing binary image classification by concatenating two feature vectors extracted from two OCTA images of different ocular vascular layers, three feature vectors from three OCTA images, or all feature vectors from all OCTA images may help identify complementary relationships between features from different layers. Additionally, this approach may address the large within-class variation issue and improve the detection of AMD.
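The layer-combination cases reduce to plain vector concatenation of per-layer histograms; for instance (the feature values below are random placeholders standing in for real descriptor outputs):

```python
import numpy as np

rng = np.random.default_rng(0)
layer_names = ["superficial", "deep", "outer", "choriocapillaris"]
# Placeholder 10-bin texture histograms, one per ocular vascular layer
features = {name: rng.random(10) for name in layer_names}

def concatenate_layers(features, names):
    # The concatenation operation (the circled-plus symbol in the text) is np.concatenate
    return np.concatenate([features[n] for n in names])

f_two = concatenate_layers(features, ["outer", "choriocapillaris"])  # two-layer case
f_all = concatenate_layers(features, layer_names)                    # all-layer case
```

The resulting vectors grow linearly with the number of layers combined, which is why dimensionality reduction becomes attractive for the multi-layer cases.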
However, the Manchester Royal Eye Hospital and Moorfields Eye Hospital OCTA datasets are imbalanced. The number of eyes in the OCTA datasets for each class of eye condition (e.g., healthy, wet AMD, and dry AMD) is unequal. Since identifying all classes of eye conditions is crucial, the following evaluation strategies were conducted on the OCTA image datasets from both hospitals:
- 1.
Employ the stratified K-fold cross-validation strategy with K = 10 to divide the OCTA datasets into 10 stratified folds, ensuring each training and testing set preserves the class distribution (i.e., healthy, wet AMD, and dry AMD). The choice of K = 10 is motivated by empirical results demonstrating this value produces performance estimates that avoid high bias (e.g., overestimated performance) or high variance (e.g., significant fluctuations in performance estimates) [
58]. This resampling technique supports reliable evaluation of the proposed algorithms’ predictive performance and mitigates overfitting.
- 2.
Compute the area under the receiver operating characteristic curve (AUC) score to provide equal weight to different eye condition classes in binary classification tasks (e.g., healthy vs. wet AMD, dry AMD vs. wet AMD, and CNV vs. non-CNV).
As the evaluation involved employing the stratified K = 10-fold cross-validation strategy and computing the AUC scores, the mean AUC scores along with the standard deviations (std) were estimated. Hence, the overall performances of the algorithms are estimated based on (mean AUC scores ± std) using the two different machine learning classifiers, specifically the KNN and the SVM.
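The evaluation protocol (stratified 10-fold cross-validation with mean ± std AUC) can be sketched with scikit-learn as below; the synthetic data and classifier settings are illustrative only, not the study's feature vectors.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import roc_auc_score

def cv_mean_auc(X, y, clf, k=10, seed=42):
    """Mean and standard deviation of AUC over stratified K-fold cross-validation."""
    skf = StratifiedKFold(n_splits=k, shuffle=True, random_state=seed)
    aucs = []
    for train_idx, test_idx in skf.split(X, y):
        clf.fit(X[train_idx], y[train_idx])
        scores = clf.predict_proba(X[test_idx])[:, 1]  # class-1 probability for AUC
        aucs.append(roc_auc_score(y[test_idx], scores))
    return float(np.mean(aucs)), float(np.std(aucs))
```

Stratification preserves the class proportions of the imbalanced datasets in every fold, and the fixed seed keeps the folds reproducible.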
A hyper-parameter search was conducted for the KNN and SVM classifiers used in the two classification algorithms. This involved defining a grid of hyper-parameter values and evaluating each grid point individually using cross-validation. For the KNN classifier, the hyper-parameters were empirically explored by varying the number of nearest neighbours K over a grid of values and changing the distance metric among the Euclidean, Manhattan, and Chebyshev functions.
For the SVM classifier, the penalty parameter C was varied over a grid of values, alongside different kernel functions: linear, Radial Basis Function (RBF), and polynomial. When using the RBF and polynomial kernels, the kernel coefficient γ and the polynomial degree d were also fine-tuned. Optimal hyper-parameter combinations for each classifier were selected based on cross-validation to achieve the best classification performance.
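Such a search can be expressed with scikit-learn's GridSearchCV. The grid values below are illustrative assumptions (the exact ranges explored in the study are not reproduced here), and the synthetic data merely stand in for the OCTA feature vectors.

```python
import numpy as np
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

param_grid = [
    {"kernel": ["linear"], "C": [0.1, 1, 10]},
    {"kernel": ["rbf"], "C": [0.1, 1, 10], "gamma": [0.01, 0.1, 1]},
    {"kernel": ["poly"], "C": [0.1, 1, 10], "degree": [2, 3]},
]
# AUC-based selection via cross-validation; SVC exposes decision_function,
# so scoring="roc_auc" needs no probability calibration
search = GridSearchCV(SVC(), param_grid, scoring="roc_auc", cv=10)

# Illustrative fit on synthetic two-class data
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(3.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
search.fit(X, y)
```

After fitting, `search.best_params_` holds the selected kernel and penalty combination.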
Additionally, the parameters of the three texture descriptors were empirically fine-tuned. The motivation for these evaluations is twofold: firstly, to enable comprehensive evaluation and validation of the proposed descriptor for quantifying AMD textural appearance in OCTA images against the alternative descriptors; secondly, to identify optimal ocular vascular-specific parameters for the descriptors, facilitating rich texture representations of AMD in OCTA images.
All experiments were conducted on a personal computer (PC) running Windows 7, equipped with an Intel Core i7 3.4 GHz quad-core processor and 16 GB of RAM. The software environment consisted of Python 2.7, utilising essential libraries such as scikit-learn [
59] for machine learning model development and evaluation and OpenCV [
60] for image processing. To ensure reproducibility, a fixed random seed of 42 was used across all experiments.
Consequently, the above evaluation strategies provide accurate insight into overall performance and enhance validation for the two developed algorithms.
Table 4,
Table 5 and
Table 6 summarise the best classification results achieved by the automated OCTA image classification algorithm using whole local texture features for the healthy vs. wet AMD, dry AMD vs. wet AMD, and CNV vs. non-CNV tasks, respectively. The optimal components, including the best local texture descriptors and classifiers that improved performance, are also listed.
Table 7,
Table 8 and
Table 9 summarise the optimal classification results achieved by the automated OCTA image classification algorithm, based on the reduced local texture features proposed for the healthy vs. wet AMD, dry AMD vs. wet AMD, and CNV vs. non-CNV classification tasks, respectively. Additionally, the optimal components, including the best local texture descriptors and classifiers that enhanced performance, are detailed.
4. Discussion
This section discusses the significance of classification findings from the evaluation in
Section 3.
Table 10,
Table 11 and
Table 12 compare the performance of classification algorithms using reduced local and whole local texture features for the healthy vs. wet AMD, dry AMD vs. wet AMD, and CNV vs. non-CNV tasks, respectively.
Broadly, the classification algorithm based on reduced local texture features achieved the best results in most classification experiments on individual OCTA images of different ocular vascular layers, as shown in
Table 10,
Table 11 and
Table 12. However, performance varied across binary classification tasks. For healthy vs. wet AMD, the reduced local texture algorithm performed best on OCTA images of the superficial inner retina and choriocapillaris layers, while the whole local texture algorithm excelled on deep inner retina images. Both algorithms achieved comparable mean AUC scores on outer retina images.
For the dry AMD vs. wet AMD classification task, the algorithm based on reduced local texture features significantly outperformed the one based on whole local texture features in nearly all experiments conducted on individual OCTA images of various ocular vascular layers, except those of the choriocapillaris layer (see
Table 11). Conversely, for the CNV vs. non-CNV classification task, the algorithm based on whole local texture features achieved the best performance solely on OCTA images of the superficial inner retina layer. However, the reduced local texture features algorithm demonstrated superior results across almost all other ocular vascular layers.
When performing binary classification tasks based on layer combinations, the algorithm using whole local texture features generally showed superior performance. For instance, perfect classification performance (a mean AUC score of 1.00) was achieved by concatenating feature vectors from the OCTA images of the outer retina and choriocapillaris layers for the healthy vs. wet AMD task. However, using reduced local texture features generally yielded better performance for the dry AMD vs. wet AMD and CNV vs. non-CNV tasks.
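The layer-combination strategy amounts to concatenating the per-layer feature vectors into one descriptor per eye before classification. A minimal sketch, with illustrative layer names and feature sizes:

```python
# Sketch of layer combination: feature vectors extracted from OCTA images of
# different vascular layers are concatenated column-wise into one vector per eye.
import numpy as np

n_subjects = 4
features_per_layer = {
    "outer_retina": np.ones((n_subjects, 8)),       # placeholder texture features
    "choriocapillaris": np.zeros((n_subjects, 8)),  # placeholder texture features
}

# One combined feature matrix: rows are subjects, columns are stacked layer features.
combined = np.concatenate(list(features_per_layer.values()), axis=1)
assert combined.shape == (4, 16)
```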
When the classification relationship between local texture features and the individual classes of interest (i.e., dry AMD, wet AMD, and healthy) is influenced by the variability of those features, employing a dimensionality-reduction technique such as principal component analysis (PCA) may help establish a suitable relationship between the decorrelated or reduced local texture features and the target classes to be distinguished.
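The decorrelation property referred to above can be demonstrated on synthetic data: after a PCA transform, the resulting components are mutually uncorrelated. This is a sketch of the principle, not the paper's pipeline; feature values are synthetic.

```python
# PCA decorrelates correlated feature columns: the covariance matrix of the
# transformed scores is (numerically) diagonal.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(42)

# Synthetic "local texture" matrix: 100 images x 50 highly correlated features
# (rank 10 by construction).
base = rng.normal(size=(100, 10))
features = base @ rng.normal(size=(10, 50))

pca = PCA(n_components=10)
reduced = pca.fit_transform(features)   # 100 x 10 decorrelated components

cov = np.cov(reduced, rowvar=False)
off_diag = cov - np.diag(np.diag(cov))
assert np.allclose(off_diag, 0.0, atol=1e-8)  # off-diagonal covariance ~ 0
```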
Figure 9,
Figure 10 and
Figure 11 show bar chart plots quantifying the explained variance ratios of the individual principal components in the binary classification tasks: healthy vs. wet AMD, dry AMD vs. wet AMD, and CNV vs. non-CNV. These plots represent the explained variance ratios obtained after applying PCA to feature vectors from OCTA images of ocular vascular layers, focusing on cases where reduced local texture features improved classification performance.
Looking at the plots in
Figure 9,
Figure 10 and
Figure 11, the first K principal components typically capture most of the variance in the original data. The improved performance of the classification algorithm based on reduced local texture features suggests these problems are tied to the variability in the original features. Retaining the leading components that explain most of the variance proved effective for the image classification tasks while reducing potential redundancies in the original features.
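The component-selection idea can be made concrete by inspecting the cumulative explained variance ratio and keeping the leading components up to a threshold. The 95% threshold below is purely illustrative, not the value used in the paper, and the data are synthetic.

```python
# Choosing K from explained variance: keep the smallest number of principal
# components whose cumulative explained variance ratio reaches a threshold.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# 80 samples x 30 features where three latent factors dominate the variance.
latent = rng.normal(size=(80, 3)) * np.array([10.0, 5.0, 2.0])
features = latent @ rng.normal(size=(3, 30)) + 0.1 * rng.normal(size=(80, 30))

pca = PCA().fit(features)
cumulative = np.cumsum(pca.explained_variance_ratio_)

# Illustrative threshold: at least 95% of the variance retained.
k = int(np.searchsorted(cumulative, 0.95) + 1)
assert 1 <= k <= 4  # the three dominant factors account for nearly everything
```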
Nevertheless, the classification results achieved by the two purely OCTA data-driven classification algorithms developed in this paper should not be underestimated. Specifically, they recognise subtle texture variations in OCTA images of ocular vascular layers (superficial and deep inner retina) not typically used for AMD detection. These findings are key for diagnosing vascular pathologies related to AMD, e.g., CNV and non-CNV lesions, from OCTA images. Identifying abnormalities in the superficial and deep inner retina layers can greatly aid clinicians, especially when such anomalies are not easily observed in the outer retina and choriocapillaris layers typically used for AMD diagnosis.
Unlike the method of Wang et al. [
37], which identifies subjects with wet AMD based solely on visibly perceptible CNV lesions in OCTA images of the outer retina layer, this paper discriminates between healthy subjects and those with various AMD stages (e.g., dry AMD and wet AMD) using OCTA images from different ocular vascular layers, regardless of CNV lesion visibility. This is possible because CNV regions can appear more distinguishable in certain vascular layers than others.
From a wet AMD detection perspective, the optimal binary OCTA image classification task, enabling the most accurate discrimination of wet AMD cases, is the healthy vs. wet AMD task.
Table 13 provides an in-depth evaluation of the two classification algorithms developed in this paper for the optimal binary classification task, i.e., healthy vs. wet AMD, which significantly improved wet AMD detection accuracy in OCTA images. These algorithms are assessed using the best components, such as optimal combinations of local texture descriptors and classifiers, as summarised in
Table 4 and
Table 7. Evaluation results in
Table 13 are determined using measures including accuracy, sensitivity (recall), specificity, precision, and AUC, applying stratified 10-fold (K = 10) cross-validation. Results are presented as mean scores ± std. for each measure.
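The evaluation protocol can be sketched as follows. Data and classifier are placeholders; the point is computing the listed measures under stratified 10-fold cross-validation and reporting mean ± std. Note that specificity is not a built-in scikit-learn scorer string, so it is expressed here as recall of the negative class.

```python
# Hedged sketch of the evaluation protocol: stratified 10-fold cross-validation
# with accuracy, sensitivity, specificity, precision, and AUC.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.metrics import make_scorer, recall_score
from sklearn.model_selection import StratifiedKFold, cross_validate
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=20, random_state=42)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=42)

scoring = {
    "accuracy": "accuracy",
    "sensitivity": "recall",  # recall of the positive class == sensitivity
    "specificity": make_scorer(recall_score, pos_label=0),  # recall of negatives
    "precision": "precision",
    "auc": "roc_auc",
}
scores = cross_validate(SVC(), X, y, cv=cv, scoring=scoring)

for name in scoring:
    vals = scores[f"test_{name}"]
    print(f"{name}: {vals.mean():.3f} ± {vals.std():.3f}")
```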
While the classification algorithm of Wang et al. [
37] achieved perfect sensitivity/recall (i.e., a score of 1.00), it was tested on a single small OCTA dataset manually depth-adjusted to clearly visualise CNV lesion regions. However, its generalisation to unseen OCTA data remains uncertain, as manual depth adjustments are subjective and impractical for routine clinical use. In contrast, the classification algorithms in this paper are evaluated using cross-validation and larger, unaltered OCTA datasets.
The study by Wang et al. [
37] demonstrates a sophisticated deep learning approach that combines classification and segmentation tasks through the use of dual CNN architectures. However, their method is heavily reliant on extensive manual pre-processing steps, including depth adjustment of imaging slices, layer subtraction operations, and manual annotation of CNV lesions, all of which introduce potential biases and reduce reproducibility. Furthermore, the segmentation model depends on visual features that are enhanced or even artificially emphasised through these pre-processing procedures. While this design may yield strong performance within the confines of a controlled dataset, it poses significant limitations in terms of practical deployment, particularly in settings where expert annotation and consistent imaging quality cannot be guaranteed.
In contrast, our method avoids dependence on complex pre-processing and instead leverages a feature-driven strategy that focuses on the intrinsic textural and vascular characteristics of the OCTA images. The features employed are carefully designed to be robust to variations in imaging quality and to capture diagnostically relevant patterns without requiring intensive manual input or image manipulation. This not only enhances the reproducibility and interpretability of the results but also improves the feasibility of clinical deployment in resource-constrained environments. Additionally, our evaluation is conducted using a statistically rigorous validation framework that enhances the robustness and generalisability of the proposed method.
While both classification algorithms in this paper demonstrated effective results, the algorithm based on reduced local texture features generally outperformed in most binary classification experiments on individual OCTA images of different ocular vascular layers, as shown in
Table 10 and
Table 13. Conversely, the algorithm using whole local texture features excelled in binary classification experiments conducted via layer combination (see
Table 10 and
Table 13).
A key contribution of this study lies in the targeted integration of established techniques, specifically local texture descriptors and PCA, within the context of OCTA image analysis for AMD classification. Rather than introducing a completely new algorithm, this research focuses on designing an effective, interpretable, and resource-efficient pipeline that can realistically be applied in clinical practice. By performing classification based on the entire OCTA image, we avoid the need for lesion segmentation. This reduces the dependency on detailed manual annotations while directly addressing the challenge of overlapping healthy and pathological textures.
Our findings indicate that reducing the dimensionality of texture features through PCA not only lowers computational requirements but also improves classification accuracy. This improvement is likely due to the removal of inter-class redundancies that often occur in full OCTA images. The results support the importance of domain-informed feature engineering, particularly in medical imaging scenarios where sample sizes are limited and deep learning approaches may not be feasible or easily interpretable.
Although local texture descriptors and PCA are not novel techniques on their own, their deliberate and problem-specific adaptation provides a practical contribution. This work demonstrates how existing methods, when carefully tailored to suit the characteristics of medical imaging data, can yield a scalable solution to a complex classification problem.
Furthermore, to evaluate the generalisability of the proposed feature-based frameworks, we employed two classic classification methods: SVM and KNN, which represent distinct decision strategies. These were deliberately chosen for their widespread use in texture-based analysis and their effectiveness in low-data medical imaging contexts. The consistent performance observed across both classifiers highlights the robustness of the extracted features. While this study prioritises feature design over classifier complexity, future work could incorporate additional classification schemes, such as Random Forests or Naïve Bayes, to further assess the adaptability and scalability of the proposed methods.
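The classifier comparison described above can be sketched as evaluating SVM and KNN on the same feature matrix under identical cross-validation folds. Data here are synthetic stand-ins for the reduced texture features; feature scaling is included because both classifiers are sensitive to it.

```python
# Illustrative SVM vs. KNN comparison on a shared feature matrix under the
# same stratified folds (synthetic data, not the paper's OCTA features).
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=200, n_features=30, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)

results = {}
for name, clf in [("SVM", SVC(kernel="rbf")),
                  ("KNN", KNeighborsClassifier(n_neighbors=5))]:
    model = make_pipeline(StandardScaler(), clf)  # scaling matters for both
    aucs = cross_val_score(model, X, y, cv=cv, scoring="roc_auc")
    results[name] = float(aucs.mean())
    print(f"{name}: mean AUC {aucs.mean():.3f}")
```

Consistent scores across the two classifiers, as reported in the text, would indicate that the discriminative signal lies in the features rather than the decision strategy.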
5. Conclusions
To conclude, the diagnostic techniques developed in this paper have effectively achieved their intended objectives. However, there are several opportunities for further refinement, comprehensive evaluation, and wider application. These opportunities are summarised as follows:
The automated diagnostic classification algorithms demonstrated excellent performance in differentiating healthy subjects from wet AMD patients across various OCTA images. Consequently, extending these algorithms to distinguish healthy individuals from patients with other ocular disorders, such as diabetic retinopathy (DR) or glaucoma, would be a valuable direction. Achieving this will require the acquisition of carefully curated OCTA image data representing DR or glaucoma conditions.
Evaluating automated diagnostic classification algorithms on more complex tasks offers significant potential. For example, it is clinically valuable to quantify and differentiate variations among wet AMD patients, such as distinguishing between those with active CNV lesions requiring treatment and those with inactive CNV lesions suitable for observation. Achieving this will require carefully curated OCTA image data representing these wet AMD variations. Additionally, the OCTA image datasets used in this research are smaller than ideal. Collecting a much larger dataset would enable comprehensive assessments of the automated diagnostic algorithms developed in this study, ensuring robust performance evaluation and validation.
Finally, the trend towards automated image texture analysis for medical image diagnostics, such as classification or segmentation, emphasises deeper, more complex architectures. Exploring deep CNN models within this context is promising but requires curated OCTA datasets representing various eye conditions, including healthy eyes and AMD. Notably, data augmentation techniques, while increasing dataset size, are typically unsuitable here as they may distort OCTA textures, leading to misleading results. In addition, future research may benefit from investigating novel deep learning techniques, such as capsule networks with residual pose routing [
61], which have shown promise in capturing spatial hierarchies and part-whole relationships that could enhance the model’s ability to interpret complex OCTA textures.