Article

Retinal OCT Images: Graph-Based Layer Segmentation and Clinical Validation †

by
Priyanka Roy
1,2,3,*,
Mohana Kuppuswamy Parthasarathy
2,4 and
Vasudevan Lakshminarayanan
2,3,*
1
Department of Biomedical Engineering, University of Illinois Chicago, Chicago, IL 60607, USA
2
School of Optometry and Vision Science, University of Waterloo, Waterloo, ON N2L 3G1, Canada
3
Department of Systems Design Engineering, University of Waterloo, Waterloo, ON N2L 3G1, Canada
4
Department of Psychology, University of Nevada, Reno, NV 89557, USA
*
Authors to whom correspondence should be addressed.
Partial results from this study were presented at the SPIE Photonics West Meeting, San Francisco, CA, USA, 29–31 January 2018 and at the SPIE Medical Imaging Conference, Houston, TX, USA, 11–13 February 2018. The work described is from the Master’s thesis of P.R., University of Waterloo.
Appl. Sci. 2025, 15(16), 8783; https://doi.org/10.3390/app15168783
Submission received: 30 December 2024 / Revised: 31 July 2025 / Accepted: 6 August 2025 / Published: 8 August 2025
(This article belongs to the Section Optics and Lasers)

Abstract

Spectral-domain Optical Coherence Tomography (SD-OCT) is a critical tool in ophthalmology, providing high-resolution cross-sectional images of the retina. Accurate segmentation of sub-retinal layers is essential for diagnosing and monitoring retinal diseases. While manual segmentation by clinicians is the gold standard, it is subjective, time-intensive, and impractical for large-scale use. This study introduces an automated segmentation algorithm based on graph theory, utilizing a shortest-path graph-search technique to delineate seven intra-retinal boundaries. The algorithm incorporates region of interest (ROI) selection to enhance efficiency, achieving a mean computation time of 0.93 s on standard systems, which is suitable for real-time clinical applications. Image denoising was evaluated using Gaussian and wavelet-based filters. While wavelet-based denoising improved accuracy to some extent, it came at the cost of increased computation time (~10 s per image). The intra-retinal layer thicknesses computed by the segmentation algorithm were consistent with previous studies and demonstrated high accuracy with respect to manual segmentation, indicating clinical relevance. Future research will explore integrating machine learning to improve robustness across diverse retinal pathologies, enhancing the algorithm’s applicability in clinical settings.

1. Introduction

Optical Coherence Tomography (OCT) is a fast, non-invasive medical imaging method that provides detailed cross-sectional images of the retina [1,2]. Properties of the ocular media, such as its high spectral transmissibility, combined with the interferometric sensitivity of the technique, make OCT a high-resolution method for imaging the retinal cross-section [3,4,5]. This capability makes OCT a critical tool in ophthalmology, enabling detailed examination of the retina’s microstructure and the early detection of pathological changes [6,7,8].
The early detection and monitoring of several ophthalmic and neurodegenerative pathologies, such as glaucoma, diabetic retinopathy (DR), age-related macular degeneration (AMD), and macular edema, require detailed investigation of the individual sub-retinal layers [6,7,8]. OCT images are segmented to precisely outline these layer boundaries, allowing for their measurement and visualization. This process is crucial for understanding how these diseases develop and respond to treatment.
Despite its advantages, the increasing volume of volumetric imaging data presents significant challenges for manual segmentation. Manual segmentation is labor-intensive, highly subjective, and carries substantial time requirements and overhead costs, creating a significant bottleneck in clinical workflows [8,9]. Additionally, the built-in segmentation algorithms of OCT imaging devices perform inconsistently between software versions and across manufacturers [10]. Furthermore, their performance varies considerably with image quality, and they often struggle with poor contrast and heavy speckle noise, particularly in pathological retinas [10,11].
These limitations underscore the pressing need for the development of fast, accurate, robust, and optimized automated segmentation algorithms [10,11]. Such algorithms would enhance the clinical utility of OCT by providing consistent and reliable segmentation results, independent of operator expertise. Automated segmentation has the potential to revolutionize clinical practice by significantly reducing the time and effort required for image analysis, thereby improving workflow efficiency and diagnostic accuracy [12,13].
The primary motivation for this study was to develop a robust segmentation algorithm to automate the process of intra-retinal layer segmentation in OCT images. One of the primary challenges for automated segmentation algorithms is the heavy speckle noise and low optical contrast of retinal OCT scans [8,14,15,16,17]. These factors complicate the accurate delineation of the retinal layers, necessitating effective image denoising techniques. Hence, this study also aimed to explore the impact of advanced denoising techniques on segmentation accuracy. Simple Gaussian filters, commonly used for denoising, can over-smooth the image, blurring edges and adversely affecting boundary detection accuracy [18]. Advanced denoising techniques, such as wavelet-domain thresholding preceded by bilateral filtering, offer a more sophisticated approach that retains boundary details while effectively reducing noise. We sought to determine whether the finer boundary detail restored by wavelet-based denoising improves the overall efficiency and clinical performance of the automated segmentation algorithm compared with traditional low-pass smoothing [19].
A validation study was also performed to assess the efficacy of the results obtained from the automated graph-based segmentation algorithm. This study compared retinal layer thicknesses obtained from the automated segmentation approach with those derived from manual segmentation by expert clinicians. The comparison focused on evaluating the consistency and reliability of the automated method, highlighting its potential to provide accurate and reproducible results suitable for clinical use.
In summary, the development of a robust automated segmentation algorithm for OCT images addresses several critical needs in ophthalmology. By leveraging a fundamental graph-based delineation approach and an efficient denoising technique, the proposed algorithm aims to provide accurate and robust segmentation of retinal layers. This research contributes to the ongoing efforts to enhance the diagnostic capabilities of OCT, ultimately improving patient outcomes through timely and precise detection of retinal pathologies.

2. Materials and Methods

2.1. Image Dataset

A dataset (n = 25) comprising de-identified macular SD-OCT images of healthy adult retinas was utilized to validate the segmentation algorithm. These anonymized images, captured in JPEG format, were provided by the Medical Research Foundation, a unit of Sankara Nethralaya in Chennai, India. Each image measured 6 mm transversally and was obtained using a Cirrus HD-OCT device (Carl Zeiss Meditec, Dublin, CA, USA). Figure 1 illustrates a sample SD-OCT image from the test dataset.

2.2. Preprocessing

The SD-OCT images were a mixture of both foveal and non-foveal slices. A sliding window method was incorporated to enable the user to select a region of interest within the image slice [8,20]. The user-defined ROI was the specific slice or region within the image selected for further processing. By focusing only on the pixels within the ROI, the algorithm optimized computational resources, reducing time and memory requirements.
To facilitate further processing and segmentation, the resized images were converted to grayscale, as shown in Figure 2. This conversion enhanced the differences between pixel intensities, simplifying the calculation of gradients and the construction of the graph for segmentation.
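As an illustration of this preprocessing step, the following is a minimal Python sketch (using OpenCV) of grayscale conversion and ROI cropping; the file name and ROI coordinates are hypothetical and are not taken from the study.

```python
import cv2

def preprocess(image_path, roi_rows, roi_cols):
    """Return the grayscale ROI of an SD-OCT B-scan.

    roi_rows, roi_cols: (start, stop) pixel ranges chosen by the user,
    e.g. via a sliding-window selection tool.
    """
    img = cv2.imread(image_path)                  # JPEG B-scan as provided
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)  # single intensity channel
    r0, r1 = roi_rows
    c0, c1 = roi_cols
    return gray[r0:r1, c0:c1]                     # restrict processing to the ROI

# Example call with hypothetical coordinates:
# roi = preprocess("sample_bscan.jpg", roi_rows=(100, 400), roi_cols=(0, 512))
```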

2.2.1. Gaussian Filter-Based Denoising

Initial denoising was performed by convolving the SD-OCT images with low-pass Gaussian filters [8]. This filtering removed the high-frequency components from the image, smoothing it before the edge kernels and graph weights were computed, thereby saving processing time and memory. However, this approach risked over-smoothing the images, potentially blurring essential edge details needed for accurate segmentation [14,18].
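A minimal sketch of such a Gaussian low-pass denoising step is shown below; the kernel size and standard deviation are illustrative assumptions, not values reported in this study.

```python
import cv2

def gaussian_denoise(gray_roi, ksize=5, sigma=1.5):
    # Convolution with a Gaussian kernel suppresses high-frequency speckle noise;
    # too large a sigma risks blurring the layer boundaries.
    return cv2.GaussianBlur(gray_roi, (ksize, ksize), sigma)
```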

2.2.2. Wavelet-Based Denoising

To address the limitations of Gaussian low-pass filtering, an advanced image denoising technique involving a bilateral filter followed by wavelet thresholding was also implemented, based on a pilot study with a partial dataset previously presented by the authors at an SPIE conference [19]. This advanced denoising technique, developed in a previous study [18], aimed to retain boundary details while effectively reducing noise. Gaussian white noise was removed from the images using a combination of Gaussian and bilateral filters. The method noise (MN) was defined as the difference between the noisy image (I) and the image after Gaussian and bilateral filtering (I_GF), expressed as:
MN = I − I_GF    (1)
The noisy image (I) could be represented as the sum of the original image (I_o) and an added white Gaussian noise (GN):
I = I_o + GN    (2)
Wavelet thresholding was then applied based on the value of the computed method noise, reconstructing the images and retaining edge details.
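The following is a rough Python sketch of this bilateral-filter plus method-noise wavelet-thresholding idea; the filter parameters, wavelet family (db4), decomposition depth, and threshold rule are assumptions for illustration and not the exact settings used in the study.

```python
import cv2
import numpy as np
import pywt

def wavelet_denoise(gray_roi, wavelet="db4", level=2):
    img = gray_roi.astype(np.float32)

    # 1. Edge-preserving smoothing: Gaussian followed by bilateral filtering.
    smooth = cv2.GaussianBlur(img, (5, 5), 1.5)
    smooth = cv2.bilateralFilter(smooth, d=9, sigmaColor=75, sigmaSpace=75)

    # 2. Method noise: what the filters removed from the original image (Equation (1)).
    method_noise = img - smooth

    # 3. Soft-threshold the wavelet detail coefficients of the method noise
    #    to recover edge detail while discarding residual speckle.
    coeffs = pywt.wavedec2(method_noise, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1][-1])) / 0.6745      # noise estimate
    thr = sigma * np.sqrt(2 * np.log(method_noise.size))    # universal threshold
    coeffs = [coeffs[0]] + [
        tuple(pywt.threshold(c, thr, mode="soft") for c in detail)
        for detail in coeffs[1:]
    ]
    detail = pywt.waverec2(coeffs, wavelet)[: img.shape[0], : img.shape[1]]

    # 4. Add the recovered edge detail back onto the filtered image.
    return smooth + detail
```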

2.3. Retinal Layer Segmentation and Thickness Computation

A graph G = (V, E) was constructed for each SD-OCT image. Each pixel within the image was represented by a node V [8,20]. An edge E defined the connection between a pair of nodes (V_i, V_j). Each edge was assigned a weight W_ij based on the normalized vertical image gradients (g_i and g_j) for the pair of nodes [8,21]. The vertical gradient adjacency matrices were computed based on the transition from bright to dark layers or dark to bright layers and normalized to values between 0 and 1. These gradient values were determined using [1; −1] and [−1; 1] edge maps [8]. Finally, the vertical image gradients were used in the computation of the edge weights within the image graphs. The edge weight W_ij was calculated as follows:
W_ij = 2 − (g_i + g_j) + W_min    (3)
where W_min is a small positive constant representing the hypothetical minimum weight within the graph.
A sparse adjacency matrix was generated from the assigned edge weights. Converting the images to grayscale simplified the computation to a single intensity channel and emphasized the differences between pixel intensities, aiding the adjacency matrix computation.
Potential graph cuts within the ROI were defined by the edges E_ij between pairs of nodes. The lowest-weighted graph cut, min(E_ij), was used to detect and trace a layer boundary between the nodes (V_i, V_j), maximizing the similarity between all nodes along the boundary [2,8,15,21,22,23,24,25]. This defined a graph cut that was used to delineate each of the retinal layers within the OCT images.
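A condensed Python sketch of this construction is given below: vertical gradients from a [1; −1] (or [−1; 1]) edge map are normalized, edge weights follow Equation (3), and a boundary is traced as the lowest-weight (shortest) path across the ROI using Dijkstra's algorithm. The three-neighbor right-hand connectivity, the corner-to-corner endpoints, and the w_min value are simplifying assumptions for illustration, not the authors' exact implementation.

```python
import numpy as np
from scipy.ndimage import correlate
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def segment_boundary(gray_roi, w_min=1e-5, dark_to_bright=True):
    img = gray_roi.astype(np.float64)

    # Vertical gradient from a [1; -1] (or [-1; 1]) edge map, normalized to [0, 1].
    kernel = np.array([[1.0], [-1.0]]) if dark_to_bright else np.array([[-1.0], [1.0]])
    grad = correlate(img, kernel, mode="nearest")
    grad = (grad - grad.min()) / (np.ptp(grad) + 1e-12)

    rows, cols = grad.shape
    n = rows * cols
    node = lambda r, c: r * cols + c
    adj = lil_matrix((n, n))

    # Connect each pixel to its three right-hand neighbours; Equation (3) assigns
    # low weights where both nodes sit on a strong gradient.
    for r in range(rows):
        for c in range(cols - 1):
            for dr in (-1, 0, 1):
                rr = r + dr
                if 0 <= rr < rows:
                    w = 2.0 - (grad[r, c] + grad[rr, c + 1]) + w_min
                    adj[node(r, c), node(rr, c + 1)] = w

    # Shortest (lowest-weight) path across the ROI; here it runs corner to corner,
    # whereas the full method appends minimum-weight columns so the cut may start
    # and end anywhere along the image sides.
    dist, pred = dijkstra(adj.tocsr(), indices=node(0, 0), return_predecessors=True)
    path, v = [], node(0, cols - 1)
    while v != -9999:                      # -9999 marks "no predecessor" in SciPy
        path.append(v)
        v = pred[v]
    return [p // cols for p in reversed(path)]   # boundary point index per column
```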
Boundary point indices (BPIs) were determined for each delineated boundary. The range of pixels falling between two boundaries constituted an intra-retinal layer. Layer thicknesses were computed from the mean difference between the corresponding BPIs of two consecutive boundaries along the same vertical image gradient [8,26,27,28,29,30]. Partial results from the authors’ pilot study were previously presented at a conference [12].
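For illustration, a layer thickness could be computed from two boundary traces as sketched below; the axial pixel-to-micron conversion factor is a hypothetical scan parameter, not a value from the paper.

```python
import numpy as np

def layer_thickness(upper_bpi, lower_bpi, microns_per_pixel=2.0):
    # Mean per-column difference between consecutive boundary point indices,
    # converted from pixels to microns.
    diff = np.asarray(lower_bpi) - np.asarray(upper_bpi)
    return float(diff.mean() * microns_per_pixel), float(diff.std() * microns_per_pixel)
```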
Figure 3 schematically represents the methodological workflow used to develop the segmentation algorithm. The flowchart details the steps from image preprocessing and denoising to graph construction, boundary detection, and layer thickness computation, culminating in the segmented image with computed layer thicknesses.

2.4. Efficacy of the Algorithm: Comparative Analysis

The layer thickness values obtained from our segmentation algorithm were compared to normalized layer thickness values reported in the literature [26]. Normalization ensured a fair comparison between the corresponding layers’ thickness values from both studies. Statistical differences in layer thicknesses were assessed to determine which denoising technique yielded better accuracy for the graph-based segmentation algorithm. To this end, the layer thickness values obtained by our segmentation algorithm after Gaussian filtering and after wavelet-based denoising were each compared to the normalized layer thickness values reported in the literature, in order to determine which denoising approach worked best with our graph-based segmentation approach in terms of accuracy and computation time.
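As reported in Section 3, the layer-wise comparison used one-sample t-tests against the published reference means; a minimal sketch is shown below (the reference value in the example call is illustrative).

```python
import numpy as np
from scipy import stats

def compare_to_reference(layer_thicknesses, reference_mean):
    # layer_thicknesses: per-image thicknesses for one layer (microns);
    # reference_mean: normalized mean thickness reported in the literature.
    t_stat, p_value = stats.ttest_1samp(layer_thicknesses, popmean=reference_mean)
    return t_stat, p_value

# e.g. compare_to_reference(np.array([24.8, 25.3, 25.1]), reference_mean=25.88)
```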

2.5. Validation Study

The graph-based segmentation algorithm was applied to a de-identified dataset comprising 25 macular OCT images of healthy subjects. The algorithm segmented seven retinal boundaries and determined the thickness of six layers. Gaussian filters were used for initial denoising. Manual segmentation was performed by an expert clinician using a custom-designed graphical user interface (GUI). The layer thickness values obtained from manual segmentation were used to validate the efficacy of the automated segmentation by comparing the mean differences between the thickness values obtained from automated and manual segmentation, respectively, for each layer across the entire dataset. There were three primary steps in the validation study, described in detail in the subsequent sections.

2.5.1. Custom-Built Graphical User Interface to Facilitate Manual Segmentation

The manual segmentation was conducted by a clinician with many years of OCT segmentation experience. To facilitate the manual segmentation and simplify the comparison with automated segmentation, a dedicated graphical user interface (GUI) was used for the study, based on previously developed software [20]. Figure 4 illustrates the different options provided by the graphical user interface to help the clinician segment the retinal layer boundaries manually.
The clinician could choose, from the list, the boundary that they wanted to re-segment or correct manually. They could either re-segment the entire selected boundary by clicking the “manual” button on the GUI or correct only the portion of the boundary that appeared incorrect by clicking the “semi auto” button. Once the re-segmentation of the whole image was completed to the satisfaction of the clinician, the “Exit” button was pressed to save the new segmentation file. The program then re-computed the layer thicknesses from the newly segmented boundaries using exactly the same technique used to compute the layer thicknesses in the automated segmentation step.

2.5.2. Layer Thickness Comparison

In order to validate the automated segmentation against the manual segmentation as the gold standard, the intra-retinal layer thickness values computed by both were compared. The mean difference between the thickness from manual segmentation (MS) and that from automated segmentation (AS) gave the mean error, which was determined by the following equation:
Mean error = (Mean thickness_MS − Mean thickness_AS) ± SD    (4)
Furthermore, the accuracy of the segmentation of the automated algorithm with respect to manual segmentation for each intra-retinal layer was also determined from the equation below:
Accuracy = 100 − (|Mean error| / Mean thickness_MS) × 100    (5)
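Equations (4) and (5) translate directly into a short helper; the sketch below follows the sign convention described in Section 3.3.2 and uses the absolute mean error, consistent with the accuracy values in Table 4 (variable names are ours).

```python
import numpy as np

def validation_metrics(manual_thickness, automated_thickness):
    # Per-image thickness values (microns) for one retinal layer.
    diff = np.asarray(manual_thickness) - np.asarray(automated_thickness)
    mean_error, sd = diff.mean(), diff.std()                                 # Equation (4)
    accuracy = 100.0 - abs(mean_error) / np.mean(manual_thickness) * 100.0   # Equation (5)
    return mean_error, sd, accuracy
```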

2.5.3. Computational Efficiency

The computational time for each denoising and segmentation method was recorded and analyzed to evaluate the efficiency of the algorithm. The balance between accuracy and computational load was assessed to determine the practical applicability of the denoising techniques in clinical settings.
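Per-image computation time can be recorded with a simple wrapper such as the one below; the helper names in the commented example refer to the hypothetical sketches above, not to the authors' code.

```python
import time

def timed(fn, *args, **kwargs):
    start = time.perf_counter()
    result = fn(*args, **kwargs)
    elapsed = time.perf_counter() - start   # seconds per image
    return result, elapsed

# e.g. roi, t_pre = timed(preprocess, "sample_bscan.jpg", (100, 400), (0, 512))
```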

3. Results

3.1. Retinal Layer Segmentation and Thickness Computation

The graph-based automated segmentation algorithm successfully delineated seven boundaries within the SD-OCT images, corresponding to six distinct retinal layers. These layers, segmented from both foveal and non-foveal slices, lie below the internal limiting membrane (ILM), the topmost boundary, and comprise the retinal nerve fiber layer and ganglion cell layer (RNFL + GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer and inner segment (ONL + IS), and the outer segment and retinal pigmented epithelium (OS + RPE). Figure 5 illustrates these segmented layers, with each layer’s abbreviation and full name provided in the legend for clarity. Table 1 quantifies the mean thicknesses (±standard deviation) of each of the retinal layers (in microns) segmented by the algorithm for the foveal and non-foveal slices, respectively, across the entire dataset.

3.2. Gaussian Filter-Based Denoising Versus Wavelet-Based Denoising and Their Impacts on the Segmentation Results

Figure 6a,b depicts a sample SD-OCT image from the test dataset after being denoised using a simple Gaussian filter and the corresponding segmentation results—zoomed into the user-defined ROI—produced by the graph-based algorithm, respectively. The algorithm was able to segment the retinal layers effectively, providing clear delineation of each boundary.
The results of denoising using a bilateral filter followed by wavelet reconstruction for a sample SD-OCT image are shown in Figure 7a and Figure 7b, respectively. The corresponding segmentation result, illustrated in Figure 7c, demonstrates the effectiveness of this advanced denoising technique in preserving edge details and enhancing boundary detection.
The graph-based algorithm computed the mean thicknesses of the six segmented layers for each SD-OCT image within the test dataset. Table 2 quantifies the retinal layer thicknesses (in microns) of the segmented images and the mean computation time per image over the entire dataset, when executed on a specific computer (64-bit Windows 10 OS with an Intel Core i5 processor, 8 GB RAM, and a 1 GB Radeon graphics card), after denoising by the Gaussian filtering and wavelet-based denoising techniques, respectively.

3.3. Performance Evaluation

The performance of the retinal layer segmentation by our algorithm was evaluated by two methods: first, through a comparative analysis with layer thicknesses reported by a previously published study, and second, by comparing the segmentation results with those of the manual segmentation conducted as part of this study using the custom-built GUI. Both are described in further detail in the subsequent sections.

3.3.1. Segmentation Accuracy with Respect to Previous Studies

The automated segmentation algorithm effectively segmented the retinal layers, with the ILM being the topmost boundary, followed by RNFL + GCL, IPL, INL, OPL, ONL + IS, and OS + RPE. The mean layer thicknesses, in microns, computed by the algorithm for the foveal SD-OCT scans after Gaussian filtering were compared to the normalized layer thickness values reported by a previously published study. This comparison validated the retinal layer thicknesses determined by our algorithm against those computed in already published work. Although the data being compared may have been collected from different demographics and devices, the values were averaged across the datasets, omitting individual-level detail, and normalized to reflect similarity between the retinal layers being compared.
For normalization, multiple consecutive layers were combined based on the layers segmented by the current graph-based algorithm. The mean of the thickness values reported by the previous study across nine macular sectors was determined for each of the combined layers and compared with the corresponding mean layer thickness values from the current study using one-sample t-tests for individual layers. Table 3 summarizes the statistical results from this comparative analysis, showing that the differences between the mean (±SD) thicknesses of each retinal layer and those reported by the previous study were statistically insignificant (one-sample t-test, p > 0.05), except for layer 4 (p = 0.04). The overall mean retinal thickness computed by the algorithm did not vary significantly (p = 0.17) from the reference values, confirming the accuracy of the segmentation by the newly developed algorithm.
To evaluate the impact of the two different denoising techniques implemented in the study, another comparative analysis was performed between the thicknesses computed by our algorithm when denoised by either Gaussian filtering or the wavelet-based denoising technique to that of the results from the previous study. Figure 8 summarizes the results from the comparative analysis of the mean layer thicknesses computed by this novel segmentation algorithm (after simple Gaussian filter-based denoising and after advanced denoising using wavelet-based thresholding) and the normalized layer thickness values from a previously reported study.
The comparison of the reference values with the current segmentation algorithm, when images were denoised using a simple Gaussian filter, shows no statistically significant differences between the mean (±SD) thicknesses of retinal layers 1, 2, 3, 5, and 6 (p > 0.05); only layer 4 differed significantly (p = 0.04). When the reference layer thicknesses were compared to those computed after wavelet-based denoising, the differences were non-significant (p > 0.05) for all six layers, including layer 4. The overall mean retinal thickness computed by the algorithm did not vary significantly from the reference regardless of which denoising technique was implemented before segmentation (pgauss = 0.17 and pwavelet = 0.66).

3.3.2. Segmentation Accuracy with Respect to Manual Segmentation

The six intra-retinal layers segmented by the graph-based automated algorithm were manually re-segmented, or their erroneous regions corrected, by the expert clinician using the graphical user interface described above. Figure 9 illustrates the manual segmentation of six intra-retinal layers of a sample macular SD-OCT image from the test dataset.
Table 4 summarizes the mean thicknesses from automated segmentation and manual segmentation, respectively, for each of the six segmented layers. The mean error is reported as the difference between the layer thickness from manual segmentation and that from automated segmentation. A negative (−) sign denotes that the layer thickness from automated segmentation is larger than that from manual segmentation; a positive value signifies that the thickness from manual segmentation is greater than that from automated segmentation. The accuracy of the segmentation of each layer was determined using Equation (5), as previously described.
The automated graph-based segmentation algorithm demonstrated impressive speed and accuracy in segmenting the retinal layers and computing their mean thicknesses. The algorithm took approximately 5 s to produce the final output, including the segmented image and the layer thicknesses. In contrast, manual segmentation followed by thickness computation using the graphical user interface took about 10 min. Table 5 shows the average computation time for segmenting the macular SD-OCT images using automated and manual techniques.

4. Discussion

The application of the novel graph-based algorithm for segmenting retinal layers in spectral-domain Optical Coherence Tomography (SD-OCT) images represents a significant advancement in the field of ophthalmic imaging. This study highlights the algorithm’s efficacy, especially when different denoising techniques are applied prior to segmentation. The comparative analysis between wavelet-based denoising and Gaussian filter-based denoising reveals important insights into the trade-offs between accuracy and computational efficiency.

4.1. Denoising Techniques and Segmentation Accuracy

The results demonstrate that wavelet-based denoising improves the accuracy of retinal layer thickness computation across all layers, including those where Gaussian-filtered images showed discrepancies. However, this increased accuracy comes at the cost of significantly longer computation times. The wavelet-based approach requires approximately 10 s for denoising before segmentation can commence, compared to the much quicker Gaussian filtering.
This study also notes that variations in normative retinal layer thickness values can be attributed to factors such as different OCT devices, regions of the retina scanned, and demographic variables like ethnicity, age, and sex. These factors must be considered when comparing results across different studies and populations. While wavelet-based denoising provides marginally better accuracy, the simplicity and speed of Gaussian filtering make it a viable option for many clinical applications, particularly where time and computational resources are limited.
The overall segmentation results indicate that the graph-based algorithm performs efficiently even without advanced denoising techniques. This finding is consistent with previous literature on graph-based image segmentation algorithms, reinforcing the utility of Gaussian filters for preprocessing SD-OCT images.

4.2. Algorithm Performance

The algorithm employs a shortest-path graph-search technique to accurately delineate seven intra-retinal boundaries, effectively segmenting six retinal layers. These include the retinal nerve fiber layer and ganglion cell layer (RNFL + GCL), inner plexiform layer (IPL), inner nuclear layer (INL), outer plexiform layer (OPL), outer nuclear layer and inner segment (ONL + IS), and the outer segment and retinal pigmented epithelium (OS + RPE). The algorithm’s effectiveness on both foveal and non-foveal slices of the retina makes it more useful in clinical settings, allowing for thorough examination of macular SD-OCT scans.
One of the key features of the algorithm is its region of interest (ROI) selection tool, which focuses processing power on relevant image areas, thus saving time and computational resources. This feature, combined with the algorithm’s robustness in noisy conditions facilitated by simple Gaussian filters, underscores its efficiency and practicality in clinical settings.
The layer thickness values computed by the algorithm provide critical markers for retinal health, assisting clinicians in diagnosing pathologies. Accurate segmentation of retinal layers can reveal abnormalities indicative of diseases such as macular degeneration and glaucoma, which typically alter the thickness of specific retinal layers.

4.3. Validation Against Manual Segmentation

The validation study compared the automated segmentation results with manual segmentation (MS) performed by an experienced clinician. The mean error between the two methods served as a metric for accuracy, with lower errors indicating higher accuracy. Negative mean error values for layers such as RNFL + GCL, OPL, and OS + RPE suggested that the automated algorithm tended to overestimate thickness compared to manual segmentation, whereas positive values for other layers indicated the opposite.
The algorithm’s accuracy varied across different layers, with the inner plexiform layer (IPL) showing the lowest accuracy (74.58%) and the combined outer nuclear layer and inner segment (ONL + IS) displaying the highest accuracy (98.90%). These findings underscore the potential of the automated algorithm to reliably segment retinal layers and provide accurate measurements that can be used for clinical diagnostics.

5. Conclusions

The graph-based automated segmentation algorithm developed in this study demonstrates high-speed, accurate delineation of seven intra-retinal boundaries, segmenting six retinal layers in SD-OCT images. The comparative analysis of denoising techniques highlights the balance between accuracy and computational efficiency, with Gaussian filtering emerging as a practical option for clinical use despite the slight accuracy improvement offered by wavelet-based denoising.
The algorithm significantly reduces the time and costs associated with manual segmentation of OCT images, making it a valuable tool for routine clinical practice. Its ability to accurately compute layer thicknesses enables the detection of retinal pathologies, potentially improving patient outcomes through timely diagnosis and monitoring.
Future modifications could further enhance the algorithm’s applicability to images with pathological features, broadening its utility in ophthalmic diagnostics. Overall, the validated automated graph-based segmentation algorithm represents a robust, efficient solution for the segmentation of intra-retinal layers in SD-OCT images, paving the way for its integration into clinical workflows and research settings.
Future research should focus on refining the algorithm to enhance its performance in pathological cases, where retinal layers may be disrupted or altered due to disease. Integrating machine learning techniques could improve the algorithm’s adaptability and accuracy, particularly in complex cases. Additionally, expanding the dataset to include diverse populations and a broader range of retinal conditions will help validate the algorithm’s robustness and generalizability. Collaboration with clinical practitioners will be essential to ensure that the algorithm meets the practical needs of ophthalmologists and contributes to improved patient care.

Author Contributions

Conceptualization, V.L. and P.R.; methodology, P.R.; software, P.R.; validation, M.K.P.; formal analysis, P.R.; investigation, P.R.; resources, V.L.; writing—original draft preparation, P.R.; writing—review and editing, V.L.; visualization, P.R.; supervision, V.L.; project administration, V.L.; funding acquisition, V.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by an NSERC Discovery Grant (V.L.).

Institutional Review Board Statement

Ethical review and approval were waived for this study since de-identified retinal OCT images used in this research were acquired externally from the Medical Research Foundation, Sankara Nethralaya, Chennai, India. The authors and researchers did not have access to any of the clinical data or identifiers associated with the dataset.

Informed Consent Statement

Not applicable.

Data Availability Statement

An open-source dataset previously constructed and published by the authors [31] is publicly available and includes the subset of SD-OCT images of normal healthy retinas used in this study.

Acknowledgments

The authors would like to acknowledge the Medical Research Foundation, Sankara Nethralaya, Chennai, India, for sharing the retinal SD-OCT images, and specially acknowledge the contribution of Janarthanam Jothi Balaji to the data curation.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Abbreviations

The following abbreviations are used in this manuscript:
OCT: Optical Coherence Tomography
SD-OCT: Spectral-Domain Optical Coherence Tomography
DR: Diabetic Retinopathy
AMD: Age-related Macular Degeneration
ROI: Region of Interest
BPI: Boundary Point Indices
GUI: Graphical User Interface
ILM: Internal Limiting Membrane
RNFL: Retinal Nerve Fiber Layer
GCL: Ganglion Cell Layer
IPL: Inner Plexiform Layer
INL: Inner Nuclear Layer
OPL: Outer Plexiform Layer
ONL: Outer Nuclear Layer
IS: Inner Segment
OS: Outer Segment
RPE: Retinal Pigment Epithelium
RAM: Random Access Memory

References

  1. Fercher, A.F.; Drexler, W.; Hitzenberger, C.K.; Lasser, T. Optical Coherence Tomography—Principles and Applications. Rep. Prog. Phys. 2003, 66, 239–303. [Google Scholar] [CrossRef]
  2. Danesh, H.; Kafieh, R.; Rabbani, H.; Hajizadeh, F. Segmentation of Choroidal Boundary in Enhanced Depth Imaging OCTs Using a Multiresolution Texture Based Modeling in Graph Cuts. Comput. Math. Methods Med. 2014, 2014, 1–9. [Google Scholar] [CrossRef]
  3. Huang, D.; Swanson, E.A.; Lin, C.P.; Schuman, J.S.; Stinson, W.G.; Chang, W.; Hee, M.R.; Flotte, T.; Gregory, K.; Puliafito, C.A.; et al. Optical Coherence Tomography. Science 1991, 254, 1178–1181. [Google Scholar] [CrossRef]
  4. Hee, M.R.; Izatt, J.A.; Swanson, E.A.; Huang, D.; Schuman, J.S.; Lin, C.P.; Puliafito, C.A.; Fujimoto, J.G. Optical Coherence Tomography of the Human Retina. Arch. Ophthalmol. 1995, 113, 325–332. [Google Scholar] [CrossRef]
  5. Schuman, J.S. Spectral Domain Optical Coherence Tomography for Glaucoma (an AOS Thesis). Trans. Am. Ophthalmol. Soc. 2008, 106, 426–458. [Google Scholar]
  6. Shi, F.; Chen, X.; Zhao, H.; Zhu, W.; Xiang, D.; Gao, E.; Sonka, M.; Chen, H. Automated 3-D Retinal Layer Segmentation of Macular Optical Coherence Tomography Images with Serous Pigment Epithelial Detachments. IEEE Trans. Med. Imaging 2015, 34, 441–452. [Google Scholar] [CrossRef]
  7. Srinivasan, P.P.; Kim, L.A.; Mettu, P.S.; Cousins, S.W.; Comer, G.M.; Izatt, J.A.; Farsiu, S. Fully Automated Detection of Diabetic Macular Edema and Dry Age-Related Macular Degeneration from Optical Coherence Tomography Images. Biomed. Opt. Express 2014, 5, 3568–3577. [Google Scholar] [CrossRef]
  8. Chiu, S.J.; Li, X.T.; Nicholas, P.; Toth, C.A.; Izatt, J.A.; Farsiu, S. Automatic Segmentation of Seven Retinal Layers in SDOCT Images Congruent with Expert Manual Segmentation. Opt. Express 2010, 18, 19413–19428. [Google Scholar] [CrossRef] [PubMed]
  9. Srinivasan, P.P.; Heflin, S.J.; Izatt, J.A.; Arshavsky, V.Y.; Farsiu, S. Automatic segmentation of up to ten layer boundaries in SD-OCT images of the mouse retina with and without missing layers due to pathology. Biomed. Opt. Express 2014, 5, 348–365. [Google Scholar] [CrossRef]
  10. Tian, J.; Varga, B.; Somfai, G.M.; Lee, W.-H.; Smiddy, W.E.; DeBuc, D.C. Real-Time Automatic Segmentation of Optical Coherence Tomography Volume Data of the Macular Region. PLoS ONE 2015, 10, e0133908. [Google Scholar] [CrossRef] [PubMed]
  11. DeBuc, D.C. A Review of Algorithms for Segmentation of Retinal Image Data Using Optical Coherence Tomography. In Image Segmentation; Ho, P.-G., Ed.; Intech Publishers: London, UK, 2011; pp. 15–54. [Google Scholar]
  12. Roy, P.; Lakshminarayanan, V.; Parthasarathy, M.K.; Zelek, J.S.; Gholami, P. Automated Intraretinal Layer Segmentation of Optical Coherence Tomography Images Using Graph-Theoretical Methods. In Proceedings of the Optical Coherence Tomography and Coherence Domain Optical Methods in Biomedicine XXII, San Francisco, CA, USA, 29–31 January 2018. [Google Scholar] [CrossRef]
  13. Roy, P. Automated Segmentation of Retinal Optical Coherence Tomography Images. Master’s Thesis, University of Waterloo, Waterloo, Canada, 2018. [Google Scholar]
  14. Vijaya, G.; Vasudevan, V. A Simple Algorithm for Image Denoising Based on MS Segmentation. Int. J. Comput. Appl. 2010, 2, 9–15. [Google Scholar] [CrossRef]
  15. Mishra, A.; Wong, A.; Bizheva, K.; Clausi, D.A. Intra-Retinal Layer Segmentation in Optical Coherence Tomography Images. Opt. Express 2009, 17, 23719–23728. [Google Scholar] [CrossRef] [PubMed]
  16. Garvin, M.K.; Abramoff, M.D.; Wu, X.; Russell, S.R.; Burns, T.L.; Sonka, M. Automated 3-D Intraretinal Layer Segmentation of Macular Spectral-Domain Optical Coherence Tomography Images. IEEE Trans. Med. Imaging 2009, 28, 1436–1447. [Google Scholar] [CrossRef]
  17. Rabbani, H.; Kafieh, R.; Kermani, S. A Review of Algorithms for Segmentation of Optical Coherence Tomography from Retina. J. Med. Signals Sens. 2013, 3, 45–60. [Google Scholar] [CrossRef]
  18. Kumar, B.K.S. Image Denoising Based On Gaussian/Bilateral Filter and Its Method Noise Thresholding. Signal Image Video Process. 2013, 7, 1159–1172. [Google Scholar] [CrossRef]
  19. Roy, P.; Parthasarathy, M.K.; Zelek, J.; Lakshminarayanan, V. Comparison of Gaussian Filter Versus Wavelet-Based Denoising on Graph-Based Segmentation of Retinal OCT Images. In Proceedings of the Biomedical Applications in Molecular, Structural, and Functional Imaging, Houston, TX, USA, 11–13 February 2018. [Google Scholar]
  20. Teng, P. Caserel—An Open Source Software for Computer-Aided Segmentation of Retinal Layers in Optical Coherence Tomography Images. 2013. Available online: https://github.com/pangyuteng/caserel (accessed on 29 December 2017).
  21. Felzenszwalb, P.F.; Huttenlocher, D.P. Efficient Graph-Based Image Segmentation. Int. J. Comput. Vis. 2004, 59, 167–181. [Google Scholar] [CrossRef]
  22. Chen, X.; Niemeijer, M.; Zhang, L.; Lee, K.; Abramoff, M.D.; Sonka, M. Three-Dimensional Segmentation of Fluid-Associated Abnormalities in Retinal OCT: Probability Constrained Graph-Search-Graph-Cut. IEEE Trans. Med. Imaging 2012, 31, 1521–1531. [Google Scholar] [CrossRef]
  23. Dufour, P.A.; Ceklic, L.; Abdillahi, H.; Schroder, S.; De Dzanet, S.; Wolf-Schnurrbusch, U.; Kowal, J. Graph-Based Multi-Surface Segmentation of OCT Data Using Trained Hard and Soft Constraints. IEEE Trans. Med. Imaging 2013, 32, 531–543. [Google Scholar] [CrossRef]
  24. Li, K.; Wu, X.; Chen, D.; Sonka, M. Optimal Surface Segmentation in Volumetric Images—A Graph-Theoretic Approach. IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 119–134. [Google Scholar] [CrossRef]
  25. Fabijańska, A. Graph Based Image Segmentation. Automatyka/Akademia Górniczo-Hutnicza im. Stanisława Staszica w Krakowie 2011, 15, 93–103. [Google Scholar]
  26. Kafieh, R.; Rabbani, H.; Hajizadeh, F.; Abramoff, M.D.; Sonka, M. Thickness Mapping of Eleven Retinal Layers Segmented Using the Diffusion Maps Method in Normal Eyes. J. Ophthalmol. 2015, 2015, 259123. [Google Scholar] [CrossRef]
  27. Agrawal, P.; Karule, P.T. Measurement of retinal thickness for detection of Glaucoma. In Proceedings of the 2014 International Conference on Green Computing Communication and Electrical Engineering (ICGCCEE), Coimbatore, India, 6–8 March 2014. [Google Scholar]
  28. Chan, A.; Duker, J.S.; Ko, T.H.; Fujimoto, J.G.; Schuman, J.S. Normal Macular Thickness Measurements in Healthy Eyes Using Stratus Optical Coherence Tomography. Arch. Ophthalmol. 2006, 124, 193–198. [Google Scholar] [CrossRef]
  29. Bagci, A.M.; Shahidi, M.; Ansari, R.; Blair, M.; Blair, N.P.; Zelkha, R. Thickness Profiles of Retinal Layers by Optical Coherence Tomography Image Segmentation. Arch. Ophthalmol. 2008, 146, 679–687.e1. [Google Scholar] [CrossRef]
  30. Koozekanani, D.; Boyer, K.; Roberts, C. Retinal Thickness Measurements from Optical Coherence Tomography Using a Markov Boundary Model. IEEE Trans. Med. Imaging 2001, 20, 900–916. [Google Scholar] [CrossRef]
  31. Gholami, P.; Roy, P.; Parthasarathy, M.K.; Lakshminarayanan, V. OCTID: Optical Coherence Tomography Image Database. Comput. Electr. Eng. 2020, 81, 106532. [Google Scholar] [CrossRef]
Figure 1. A sample raw macular SD-OCT image from the test dataset, as captured by the Cirrus HD-OCT imaging device, showing a cross-section of a healthy adult retina.
Figure 2. The sample SD-OCT image shown in Figure 1, converted to grayscale by the algorithm.
Figure 3. A schematic representation of the shortest-path-based graph-search algorithm for segmenting intra-retinal layers in SD-OCT images.
Figure 4. An illustration of the graphical user interface [20] used to facilitate the manual segmentations.
Figure 5. The foveal slice of a sample macular SD-OCT image from the test dataset, illustrating the layers and boundaries segmented by the algorithm.
Figure 6. An illustration of (a) a macular SD-OCT scan of a healthy retina and (b) the image segmented by the graph-based algorithm after denoising with a Gaussian filter, showing 7 delineated boundaries.
Figure 7. An illustration of (a) a macular SD-OCT image of a healthy retina after bilateral filtering, (b) the image after wavelet-based detail thresholding, and (c) the segmented image showing 7 delineated boundaries.
Figure 8. Histograms illustrating the mean thickness values from Kafieh et al. [26] (green bars) and those computed by the present segmentation algorithm when the images were denoised using Gaussian filter (blue bars) and wavelet reconstruction (yellow bars), respectively, prior to segmentation. The error bars indicate ± standard deviation of the mean. * indicates statistically significant difference.
Figure 9. A sample macular SD-OCT image from the test dataset illustrating the manual segmentation by an expert clinician.
Table 1. Mean thicknesses of the six layers segmented by the algorithm in the foveal slices of the SD-OCT images across the test dataset.

Layer | Segmented Intra-Retinal Layer (as Shown in Figure 1) | Thickness = Mean ± SD (in Microns)
1 | RNFL + GCL | 25.02 ± 3.16
2 | IPL | 5.40 ± 2.79
3 | INL | 5.94 ± 1.10
4 | OPL | 8.45 ± 0.96
5 | ONL + IS | 16.24 ± 1.76
6 | OS + RPE | 12.67 ± 6.04
Table 2. Mean thicknesses of the six layers between the seven boundaries segmented by the algorithm in the macular SD-OCT images over the entire dataset, when the images were denoised using Gaussian filter and wavelet reconstruction, respectively, prior to segmentation.

Layer | Segmented Intra-Retinal Layer (as Shown in Figure 1) | Gaussian Filter-Based Denoising: Thickness = Mean ± SD (in Microns) | Wavelet-Based Denoising: Thickness = Mean ± SD (in Microns)
1 | RNFL + GCL | 25.16 ± 2.09 | 25.29 ± 2.21
2 | IPL | 5.38 ± 1.23 | 5.49 ± 1.18
3 | INL | 5.29 ± 0.15 | 6.15 ± 1.89
4 | OPL | 9.01 ± 2.85 | 8.19 ± 2.51
5 | ONL + IS | 16.95 ± 2.24 | 14.88 ± 2.62
6 | OS + RPE | 14.92 ± 2.59 | 14.96 ± 2.62
Mean computation time per image (in Seconds): 1.0535 (Gaussian filter-based denoising); 11.2736 (wavelet-based denoising).
Table 3. Statistical results showing the difference between the mean thickness values computed by the present algorithm and that of the normalized mean thicknesses from a previous study.

Current Study: Retinal Layer | Current Study: Mean Thickness ± SD (in µm) | Published Results [26]: Retinal Layer | Published Results [26]: Mean Thickness ± SD (in µm), Across 9 Macular Sectors | p-Value | Std. Error of Diff.
Layer 1 | 25.02 ± 0.36 | Layers (1 + 2) | 25.88 ± 4.48 | 0.36 | 0.94
Layer 2 | 5.40 ± 2.79 | Layer 3 | 5.44 ± 1.74 | 0.92 | 0.43
Layer 3 | 5.94 ± 1.10 | Layer 4 | 6.00 ± 2.59 | 0.91 | 0.53
Layer 4 | 8.45 ± 0.96 | Layer 5 | 7.67 ± 1.93 | 0.04 | 0.40
Layer 5 | 16.24 ± 1.76 | Layers (6 + 7) | 15.00 ± 4.27 | 0.15 | 0.87
Layer 6 | 12.67 ± 6.04 | Layers (8 + 9 + 10 + 11) | 13.22 ± 3.34 | 0.52 | 0.86
Table 4. Results showing the layer-wise mean thickness values computed by the algorithm for manual and automated segmentations, the mean error, and accuracy of segmentation for each layer across 25 SD-OCT images.

Retinal Layer | Manual Segmentation: Mean Thickness ± SD (in µm) | Automated Segmentation: Mean Thickness ± SD (in µm) | Mean Error | Accuracy (%)
RNFL + GCL | 22.76 ± 2.52 | 25.02 ± 3.16 | −2.26 ± 1.59 | 90.07
IPL | 7.24 ± 2.81 | 5.40 ± 2.79 | 1.84 ± 1.64 | 74.58
INL | 6.46 ± 1.12 | 5.94 ± 1.10 | 0.52 ± 0.69 | 91.95
OPL | 8.23 ± 0.93 | 8.45 ± 0.96 | −0.22 ± 0.67 | 97.33
ONL + IS | 16.42 ± 1.58 | 16.24 ± 1.76 | 0.18 ± 0.74 | 98.90
OS + RPE | 12.18 ± 4.68 | 12.67 ± 6.04 | −0.49 ± 1.57 | 95.98
Table 5. Average time taken by the algorithm to segment the foveal and non-foveal image slices of SD-OCT images across the test dataset.

Segmentation Technique | Mean Computation Time (in Seconds)
Automated | 4.93
Manual | 578.05

