Article

Performance of the Deep Neural Network Ciloctunet, Integrated with Open-Source Software for Ciliary Muscle Segmentation in Anterior Segment OCT Images, Is on Par with Experienced Examiners

1 Institute for Ophthalmic Research, University of Tuebingen, 72076 Tuebingen, Germany
2 University Eye Hospital Tuebingen, 72076 Tuebingen, Germany
* Author to whom correspondence should be addressed.
Diagnostics 2022, 12(12), 3055; https://doi.org/10.3390/diagnostics12123055
Submission received: 29 October 2022 / Revised: 24 November 2022 / Accepted: 27 November 2022 / Published: 6 December 2022
(This article belongs to the Special Issue Artificial Intelligence in Eye Disease – Volume 2)

Abstract

Anterior segment optical coherence tomography (AS-OCT), being non-invasive and well-tolerated, is the method of choice for an in vivo investigation of ciliary muscle morphology and function. The analysis requires the segmentation of the ciliary muscle, which is, when performed manually, both time-consuming and prone to examiner bias. Here, we present a convolutional neural network trained for the automatic segmentation of the ciliary muscle in AS-OCT images. Ciloctunet is based on the Freiburg U-net and was trained and validated using 1244 manually segmented OCT images from two previous studies. An accuracy of 97.5% for the validation dataset was achieved. Ciloctunet’s performance was evaluated by replicating the findings of a third study with 180 images as the test data. The replication demonstrated that Ciloctunet performed on par with two experienced examiners. The intersection-over-union index (0.84) of the ciliary muscle thickness profiles between Ciloctunet and an experienced examiner was the same as between the two examiners. The mean absolute error between the ciliary muscle thickness profiles of Ciloctunet and the two examiners (35.16 µm and 45.86 µm) was comparable to the one between the examiners (34.99 µm). A statistically significant effect of the segmentation type on the derived biometric parameters was found for the ciliary muscle area but not for the selective thickness reading (“perpendicular axis”). Both the inter-rater and the intra-rater reliability of Ciloctunet were good to excellent. Ciloctunet avoids time-consuming manual segmentation, thus enabling the analysis of large numbers of images of ample study cohorts while avoiding possible examiner biases. Ciloctunet is available as open-source.

1. Introduction

Anterior segment optical coherence tomography (AS-OCT) has become the method of choice for an in vivo investigation of the ciliary muscle, mostly because it is non-invasive and well-tolerated, in contrast to alternative methods like ultrasound biomicroscopy (UBM) or magnetic resonance imaging (MRI). For a review of AS-OCT, its application, and a comparison with UBM and MRI, see [1,2]. Several groups, including ours, utilized AS-OCT to investigate different aspects of the ciliary muscle’s morphology and function, e.g., the changes in the ciliary muscle’s thickness during accommodation [3,4,5,6,7,8,9,10], the ciliary muscle’s thickness [11,12,13], movement during contraction in emmetropes and myopes [10,12,13], the relation between the ciliary muscle’s thickness and refractive error [14,15,16,17], or the ciliary muscle’s thickness and lens tension during accommodation [18]. Furthermore, the association between the axial length and ciliary muscle’s length [19], the age-related effects on the ciliary muscle’s morphology [6,16], and the impact of the prolonged nearwork on the ciliary muscle’s morphology in myopic and emmetropic eyes [20] have previously been examined. The anatomy is commonly analyzed by measuring the ciliary muscle’s thickness at a single position only [7,18,21], at equidistant steps posterior to the scleral spur [3,4,5,6,8,9,11,14,15,17,22], or proportionally to the length of the muscle [16,19,23]. Only a few studies have used narrower reading steps [8,24] or determined continuous thickness profiles [10]. Alternatively, the cross-sectional area of the ciliary muscle was assessed [22,25,26,27,28,29,30]. To facilitate the comparison of the results of different studies, suggestions have been made to harmonize the analysis of the ciliary muscle [31]. However, most methods have in common that they require either a manual placement of at least one landmark or the manual segmentation of the entire ciliary muscle within the OCT image. This is often performed using the built-in calipers of the device manufacturer’s software or image editing software, which is tedious, time-consuming, and prone to examiner bias. Custom software has been developed to ease and partly automate this task [22,23], but without being made publicly available. Our group has recently released open-source software for the semi-automated segmentation of the ciliary muscle in OCT images and the automated analysis of the biometric parameters [32], which has been employed successfully in previously published studies [10,12,20]. The software leverages manually placed guiding landmarks to find the largest brightness gradients along the ciliary muscle’s borders for fitting polynomial splines. It supports the examiner in the segmentation workflow and provides a batch processing mode to automate the extraction of the biometric parameters. However, the processing of large amounts of OCT images still requires a considerable amount of time: an experienced examiner needs about one hour to segment 10–20 images.
To eliminate the need for manual interventions for the segmentation and to avoid examiner bias, we trained a convolutional neural network based on the Freiburg U-Net’s architecture [33], using 1244 segmentations from two previous studies. The performance of the trained network Ciloctunet was evaluated by comparing the ciliary muscle’s biometric parameters of the OCT images of a third study segmented by the network with those resulting from segmentations originally done by two examiners. Furthermore, the results of the third study were replicated [10].

2. Materials and Methods

2.1. Training, Validation, and Test Image Data

2.1.1. Imaging Protocol

The OCT images used for the training, validation, and testing of the deep neural network were taken from three previous studies: an analysis of the morphological changes in the ciliary muscle during accommodation (0D and 3D) in 15 near-emmetropic volunteers (dataset A, 180 images) [10]; a comparison of the morphological changes in the ciliary muscle of 18 emmetropic and 20 myopic volunteers for different accommodative demands (0D, 2.5D, 3D, and 4D) (dataset B, 769 images) [12]; and an investigation of the effect of prolonged nearwork on the ciliary muscle's morphology in 18 myopic and 17 emmetropic volunteers (0.25 D, 4 D; pre-/post-nearwork) (dataset C, 475 images) [20]. The participants of study C also took part in study B.
In all studies, the temporal ciliary muscle of the right eye was imaged with an anterior segment OCT (Visante AS-OCT, Carl Zeiss Meditec AG, Jena, Germany). The right eye was chosen because of the space constraints of the experimental setup. The detailed experimental setup is described in [10,12]. The acquired DICOM images were then segmented by at least one experienced examiner using CilOCT, an open-source software implementing a semi-automated segmentation algorithm based on fitting polynomial splines to brightness gradients [10,32]. Subsequently, multiple parameters of the segmented images, the perpendicular axis (PA), the ciliary muscle area (CMA), the ciliary muscle thickness (CMT) profile, and the coordinates of the scleral spur (SP) and the ciliary muscle apex (CA), were automatically determined and exported [10,20]. All the settings used for the semi-automated segmentations are stored as an XML file, which allows for a reliable reproduction of the segmentation. For the full methodological protocols, we direct readers to the original articles. The studies referred to in this work followed the tenets of the Declaration of Helsinki and were approved by the Institutional Review Board of the Medical Faculty of the University of Tuebingen (376/2017BO2).

2.1.2. Image Preparation

The exported raw DICOM images of studies B and C were rotated and resized to 1280 × 512 pixels according to [22]. Subsequently, the images were segmented with built-in functions of the CilOCT software [32] using previously created XML segmentation files, and converted to feature (PNG, 640 × 480 pixels, 8 bit grayscale) and corresponding label images (PNG, 640 × 480 pixels, 8 bit palette RGB), representing the Ground truth with 13 segmentation classes (Table 1 and Figure 1). The downscaling is performed to allow a complete image to fit into the GPU memory. However, the convolutional network’s architecture does not employ a fully connected layer [34] and is therefore size-independent, i.e., the later inference using the trained network accepts original-sized images.
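
As a rough illustration of this preparation step (not the authors' CilOCT pipeline; the rotation step is omitted, and the file names and palette passed in are placeholders), the conversion from a raw DICOM frame to a feature and a label PNG could look as follows in Python, assuming pydicom and Pillow are available:

import numpy as np
import pydicom
from PIL import Image

def prepare_feature_image(dicom_path, out_path="feature.png"):
    # Read the raw DICOM frame, normalize it to 0-255, and rescale it to the
    # geometry used in the study (1280 x 512, then downscaled to 640 x 480).
    frame = pydicom.dcmread(dicom_path).pixel_array.astype(np.float32)
    span = max(float(frame.max() - frame.min()), 1e-6)
    frame = 255.0 * (frame - frame.min()) / span
    img = Image.fromarray(frame.astype(np.uint8)).resize((1280, 512))
    img.resize((640, 480)).convert("L").save(out_path)

def prepare_label_image(class_map, palette, out_path="label.png"):
    # class_map: 2D array of class indices (0-12); palette: list of 13 (R, G, B) tuples.
    lbl = Image.fromarray(class_map.astype(np.uint8))      # class indices stored as 8-bit values
    lbl.putpalette([c for rgb in palette for c in rgb])    # attach palette -> 'P' mode PNG
    lbl.resize((640, 480), resample=Image.NEAREST).save(out_path)  # NEAREST keeps indices intact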

2.1.3. Training and Validation Data

The OCT images (1244) of studies B and C were combined and split by subject into a training (75%) and a validation (25%) dataset, resulting in 936 images in the training dataset and 308 images in the validation dataset. The different recording conditions of the studies (the number of repeated measures, emmetropes vs. myopes, accommodative demand, near vs. far accommodation, and pre- vs. post-nearwork) were kept balanced between both datasets, whereby the images of one subject were assigned to either the training or the validation dataset. Subsequently, the images of both datasets were mirrored vertically to double the number of images available for training and validation, and to enable the network to learn the segmentation of the OCT images of the left eye's ciliary muscle (training: 1872; validation: 616 images). The images of the training dataset were further augmented by blurring with ImageMagick [35] using five different radii (1–5 px) of a Gaussian blur. Blurring allows for the simulation of the poor image quality resulting from suboptimal recording conditions. After the data augmentation, 11,232 images for training and 616 images for validation were available. To remove any ordering bias, the image files were shuffled by renaming them to their SHA-1 hashes. Figure 2 depicts the workflow for creating training and validation data.
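
The subject-wise split, the mirroring/blurring augmentation, and the hash-based shuffling can be sketched in Python as below; this is a simplified illustration only (the original augmentation used ImageMagick, and the function names are placeholders):

import hashlib
import random
from pathlib import Path
from PIL import Image, ImageFilter

def split_by_subject(subject_ids, train_fraction=0.75, seed=0):
    # Assign whole subjects to either the training or the validation set,
    # so that no subject contributes images to both datasets.
    ids = sorted(set(subject_ids))
    random.Random(seed).shuffle(ids)
    cut = int(round(train_fraction * len(ids)))
    return set(ids[:cut]), set(ids[cut:])

def augment_training_image(path):
    # Mirror about the vertical axis (simulating a left eye) and add
    # Gaussian-blurred copies with radii of 1-5 px (simulating poor image quality).
    img = Image.open(path)
    variants = [img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)]
    variants += [img.filter(ImageFilter.GaussianBlur(radius)) for radius in range(1, 6)]
    return variants

def shuffled_file_name(path):
    # Rename a file to the SHA-1 hash of its content to remove any ordering information.
    digest = hashlib.sha1(Path(path).read_bytes()).hexdigest()
    return digest + Path(path).suffix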

2.2. Network Architecture

Ciloctunet uses the Freiburg U-Net network's [33] architecture with some modifications: between the convolution and activation (ReLU) layer pairs and pooling layers, batch normalization layers were inserted to allow for faster training and improved regularization and to avoid overfitting [36], in place of the dropout layers used in the original architecture. Furthermore, the classification comprises 13 different classes (Table 1), compared to two classes (foreground/background) in the U-Net. Since the frequencies of the pixels belonging to a certain class are highly unbalanced, e.g., they are much higher for the background pixels compared to the pixels representing the boundaries of the ciliary muscle, the SoftMax loss layer was replaced by an Infogain multinomial logistic loss layer. This allows for the individual weighting of the loss for each class, thus penalizing the misclassifications of the underrepresented classes. The Infogain loss is mathematically formulated as in Equation (1), where E is the loss, N is the number of images, K is the number of classes, l_n is the Ground truth class of the nth sample, and p_{n,k} is the probability of the nth sample being classified to the kth class, satisfying ∑_{k=1}^{K} p_{n,k} = 1 and p_{n,k} ≥ 0 [37].
E = -\frac{1}{N} \sum_{n=1}^{N} \sum_{k=1}^{K} H_{l_n,k} \log\left(p_{n,k}\right)        (1)
H_{l_n,k} is the Infogain weight for the nth sample with the Ground truth l_n to be classified to class k [37]. The Infogain matrix with the weights is calculated using a custom Python script for each image separately, based on the relative proportions of the number of pixels belonging to the different classes within an image during the training phase.
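
For illustration only, a per-image Infogain matrix and the loss of Equation (1) could be computed as in the following sketch; the exact weighting convention of the custom script is not specified in the text, so the inverse-frequency weighting and normalization used here are assumptions:

import numpy as np

def infogain_matrix(label_image, num_classes=13, eps=1e-6):
    # Per-image diagonal Infogain matrix H: classes covering few pixels receive
    # large weights, so their misclassification is penalized more strongly.
    counts = np.bincount(label_image.ravel(), minlength=num_classes).astype(np.float64)
    freqs = counts / counts.sum()
    weights = 1.0 / (freqs + eps)       # inverse-frequency weighting (assumed convention)
    weights /= weights.sum()            # normalization (assumed convention)
    return np.diag(weights)

def infogain_loss(probs, labels, H):
    # Equation (1): E = -(1/N) * sum_n sum_k H[l_n, k] * log(p[n, k]).
    # probs: (N, K) class probabilities; labels: (N,) ground-truth class indices.
    log_p = np.log(np.clip(probs, 1e-12, 1.0))
    return -np.mean(np.sum(H[labels] * log_p, axis=1))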

2.3. Network Training

The network was implemented with the Caffe 1.0 deep learning framework [38]. The training was performed using the Nvidia Deep Learning GPU Training System (DIGITS) version 5 with Python 2.7 on an Ubuntu 18.04 LTS system with two GeForce GTX 1080 Ti (12 GB) graphics cards. The training used the RMSprop optimizer with a decay value of 0.99, a learning rate of 1e-05, and a batch size of two. The weights were initialized using the MSRA weight filler [39].
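
Purely for illustration, the RMSprop update rule applied by the solver, with the decay and learning rate listed above, can be written as the following sketch (the actual optimization was performed internally by Caffe/DIGITS):

import numpy as np

def rmsprop_step(w, grad, cache, lr=1e-5, decay=0.99, eps=1e-8):
    # cache: running average of the squared gradients (same shape as w).
    cache = decay * cache + (1.0 - decay) * grad ** 2
    w = w - lr * grad / (np.sqrt(cache) + eps)
    return w, cache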

2.4. Testing

For testing, the 180 OCT images acquired in study A were used, comprising six images per subject (session 1: images 1–3; session 2: images 4–6) for near and distance vision, respectively. The semi-automated segmentations of the two examiners were compared with those performed by the network, which had not “seen” the images before. For this purpose, the CilOCT software was extended to use the trained network as an alternative to the semi-automated segmentation process, for single images as well as for batch execution. The integration into the software is based on JavaCV (version 1.5.3), the Java bindings of OpenCV (version 4.3.0) [40], and the OpenCV Deep Neural Network (DNN) module.
Ciloctunet outputs the segmentation results as a two-dimensional matrix of the pixels’ probabilities of belonging to a particular segmentation class. Pixels with probabilities lower than five are discarded and those belonging to the segmentation classes representing the borders are skeletonized using a Java implementation of the Zhang–Suen thinning algorithm [41,42]. The remaining pixels are clustered using DBSCAN (ε = 30, minimum points = 20) [43] and the pixels which are not part of a cluster are removed. Both skeletonization and clustering help to discard possible spurious segmentation results (i.e., isolated wrongly classified pixels, Figure 3) and simplify the later fitting of the polynomial splines applied in the CilOCT software. The fitted splines not only allow for the segmentation of the ciliary muscle but are also used to determine the borders of the different types of tissues. These borders are subsequently used to correct for image distortion caused by the different refractive indices of the corresponding tissue.
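
A Python re-implementation of this post-processing is sketched below; this is an assumption-level illustration (the actual integration uses JavaCV/OpenCV and a Java thinning implementation), the probability threshold is left as a parameter, and scikit-image's skeletonization stands in for the Zhang–Suen thinning:

import numpy as np
from skimage.morphology import skeletonize
from sklearn.cluster import DBSCAN

def postprocess_border_map(prob_map, threshold):
    # Threshold the per-class probability map, thin the border to single-pixel
    # width, and keep only pixels that fall into a sufficiently dense cluster.
    mask = prob_map >= threshold
    thinned = skeletonize(mask)
    coords = np.column_stack(np.nonzero(thinned))
    if coords.shape[0] == 0:
        return thinned
    labels = DBSCAN(eps=30, min_samples=20).fit_predict(coords)   # parameters as in the text
    cleaned = np.zeros_like(thinned, dtype=bool)
    kept = coords[labels >= 0]                                    # DBSCAN noise is labeled -1
    cleaned[kept[:, 0], kept[:, 1]] = True
    return cleaned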

2.5. Statistical Analysis

The accuracy of the segmentation was evaluated with the intersection-over-union (IoU) metric [44] between the ciliary muscle area (CMA, class 8, Table 1) resulting from the segmentation performed by the network and the two examiners (SW and TS). Furthermore, descriptive statistics of the differences between the Cartesian coordinates of the anatomical landmarks ciliary muscle apex and the scleral spur resulting from the particular segmentations were calculated. Both the IoU calculation and descriptive statistics were performed before the clustering and skeletonization.
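
As a sketch only (not the evaluation code used in the study), the IoU between two binary ciliary-muscle-area masks reduces to a few lines:

import numpy as np

def mask_iou(mask_a, mask_b):
    # Jaccard index of two boolean masks of equal shape.
    intersection = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return intersection / union if union > 0 else 1.0
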
Based on the segmentation, the biometric parameters PA and CMA were extracted after the distortion correction as described in [10]. Two linear mixed-effects models with the fixed effects segmenter (CNN, SW, and TS), session (1, 2), and accommodative state (far and near), their interactions, and the participant as a random effect were fitted by restricted maximum likelihood estimation (REML) to assess the significance of the effects in explaining the variations of the dependent variables PA and CMA, respectively. The variance inflation factors (VIF) of the predictors were calculated and verified to fall well below the common threshold value, indicating no collinearity between them [45]. The residuals were confirmed visually to follow a normal distribution and the homogeneity of the variances was ensured using the Brown–Forsythe test [46,47].
Paired-sample t-tests were conducted to compare the biometric parameters PA and CMA derived from the segmentations performed by the two examiners and the network, and the limits of agreement (LoA) were calculated according to the Bland–Altman method [48]. Additionally, two-way mixed intra-class correlation coefficients (ICC) with an average measure and absolute agreement between the segmentations were calculated [49].
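
An equivalent analysis could be sketched in Python as follows; the study itself used SAS JMP and SPSS, so the column names ("PA", "segmenter", "subject", "image", ...) and the pairing of rows by image are assumptions made for illustration:

import pandas as pd
import statsmodels.formula.api as smf
from scipy import stats

def fit_models(df: pd.DataFrame):
    # Linear mixed-effects model: fixed effects and their interactions,
    # participant as a random effect, fitted by REML.
    model = smf.mixedlm("PA ~ segmenter * session * state", data=df, groups=df["subject"])
    result = model.fit(reml=True)

    # Paired t-test and Bland-Altman limits of agreement between two segmenters
    # (assumes the two subsets are aligned image by image after sorting).
    a = df.loc[df["segmenter"] == "CNN"].sort_values("image")["PA"].to_numpy()
    b = df.loc[df["segmenter"] == "SW"].sort_values("image")["PA"].to_numpy()
    t_stat, p_value = stats.ttest_rel(a, b)
    diff = a - b
    loa = (diff.mean() - 1.96 * diff.std(ddof=1), diff.mean() + 1.96 * diff.std(ddof=1))
    return result, (t_stat, p_value), loa
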
The similarity of the CMT profiles between the manual and network segmentations was evaluated by calculating a modified IoU metric according to Equation (2), defined as the ratio of the summed minimum and maximum CMT values up to a −4.5 mm distance from the scleral spur of the two segmentations (Ciloctunet vs. SW, Ciloctunet vs. TS, SW vs. TS) of a particular OCT image. The IoU results of the single OCT images were then averaged. In the case of a perfect alignment, the IoU would be 1.0.
\mathrm{CMT\ IoU}_{seg1/seg2} = \frac{\sum_{x} \min\left(\mathrm{CMT}_{seg1}(x),\ \mathrm{CMT}_{seg2}(x)\right)}{\sum_{x} \max\left(\mathrm{CMT}_{seg1}(x),\ \mathrm{CMT}_{seg2}(x)\right)}        (2)
Furthermore, the mean and standard deviation of the mean absolute error (MAE) [50] (in pixels) between the CMT profiles derived from the different segmentations were calculated.
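
Both profile-level agreement measures (Equation (2) and the MAE) can be expressed compactly; the sketch below assumes two CMT profiles sampled at the same positions relative to the scleral spur:

import numpy as np

def cmt_profile_agreement(cmt_a, cmt_b):
    # Modified IoU (Equation (2)): ratio of the summed point-wise minima to the
    # summed point-wise maxima of two aligned CMT profiles, plus their MAE.
    cmt_a = np.asarray(cmt_a, dtype=float)
    cmt_b = np.asarray(cmt_b, dtype=float)
    iou = np.minimum(cmt_a, cmt_b).sum() / np.maximum(cmt_a, cmt_b).sum()
    mae = np.mean(np.abs(cmt_a - cmt_b))
    return iou, mae
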
To test the applicability of the segmentation performed by Ciloctunet, paired-sample t-tests were conducted to compare the averaged biometric parameters PA and CMA between the near and far accommodation. The results were contrasted with those reported by Wagner et al. [10].
All statistical analyses were performed using SAS JMP 15.1 (SAS Institute Inc., Cary, NC, USA). The ICCs were calculated using IBM SPSS Statistics 26.0 (IBM Corp., Armonk, NY, USA).

3. Results

3.1. Training Performance

The training was stopped after 30 epochs and about 18.5 h. At that time, an accuracy of 97.5% for the validation dataset was reached. During the training, after about 11 epochs (validation accuracy of 96.4%), a slight increase in the validation loss was observed, whereas the training loss continued to decrease.

3.2. Segmentation Accuracy

Figure 3 exemplifies the result of the segmentation of an OCT image of the ciliary muscle performed by Ciloctunet. The colors correspond to the segmentation classes as listed in Table 1. The image contains several spurious and incorrect segmentations. However, most of them are located in areas not used for further processing, which only requires the borders of the ciliary muscle (red, green, and blue), as well as the borders between different tissues or between air and tissue (yellow, white, and cyan). Most incorrect classifications of these borders are removed by skeletonization and clustering.
Dataset A, used for testing the trained network, comprises 180 images, of which some were discarded from segmentation by the examiners due to bad image quality (SW: 3; TS: 1). The segmentations performed by Ciloctunet and the first examiner (SW) showed a median IoU of 0.84 (Q25: 0.78, Q75: 0.87; n = 177), comparable to the median IoU of 0.84 (Q25: 0.81, Q75: 0.86; n = 176) between the segmentations of the first (SW) and the second (TS) examiner.
The median IoU between the segmentations of the second examiner (TS) and Ciloctunet was 0.80 (Q25: 0.77, Q75: 0.83; n = 179).

3.3. Ciliary Muscle Apex and Scleral Spur Coordinates

The distributions of the differences between the x/y-coordinates of the anatomical landmarks ciliary muscle apex and scleral spur derived from the different segmenters are visualized in Figure 4. The results of the descriptive statistics of the mean and absolute differences of the x- and y-coordinates, as well as of the Euclidean distances, are given in Table 2.

3.4. Effect of Segmenter on Biometric Parameters

Both models’ residuals follow a normal distribution and their variances are homoskedastic. The linear mixed-effects model with the dependent variable PA (n = 380, R2adj. = 0.73) revealed a statistically significant effect for the accommodative demand (distance), but not for the segmenter, the session, or their interactions (Table 3). The linear mixed-effects model with the dependent variable CMA (n = 380, R2adj. = 0.54) revealed a statistically significant effect for the accommodative demand (distance) and the segmenter, but not for the session, or any interactions (Table 3).
A post hoc comparison using a t-test indicated a statistically significant mean difference of 53.25 (95% CI: [38.70, 67.80]) µm between the least-square means (±SE) of the far (647.58 ± 24.51 µm) and near (700.83 ± 24.57 µm) conditions (t (356.0616) = 7.198, p < 0.0001). The top of Figure 5 depicts the differences in the least-square means of the PA with the effects segmenter, accommodative demand (distance), and session.
A post hoc comparison using a t-test indicated a statistically significant mean difference of 0.0953 (95% CI: [0.0557, 0.1350]) mm2 between the least-square means of the far (1.2080 ± 0.0410 mm2) and near (1.3033 ± 0.0413 mm2) conditions (t (356.1653) = 4.7269, p < 0.0001).
A Tukey HSD post hoc test revealed statistically significant differences of 0.0654 (95% CI: [0.0072, 0.1236]) mm2 (p < 0.0001) between the least-square means of the segmentation performed by the neural network (mean ± SE: 1.3168 ± 0.0424 mm2) and examiner SW (mean ± SE: 1.2514 ± 0.0423 mm2), and of 0.1181 (95% CI: [0.0601, 0.1761]) mm2 (p = 0.0232) between the neural network and examiner TS (1.1987 ± 0.0423 mm2), but not between the examiners SW and TS (p = 0.0807). The bottom of Figure 5 shows the least-squares means differences in the CMA concerning the interaction of the accommodative demand, segmenter, and session.

3.5. Repeatability Analysis of the Biometric Parameters

Paired samples t-tests between the biometric parameters PA and CMA derived from the segmentation performed by Ciloctunet and the examiners SW and TS revealed statistically significant differences only for the CMA but not for the PA (Table 4).
The CMA calculated from the segmentation by Ciloctunet differed from those of SW and TS by −0.08 ± 0.19 mm2 and −0.14 ± 0.19 mm2, respectively. No statistically significant difference was found between the biometric parameters derived from the Ciloctunet segmentation of the first (OCT images 1–3) and the second session (images 4–6).
The inter-rater reliability between Ciloctunet and the two examiners was good (with outliers) to excellent (without outliers) for the PA, and moderate (with outliers) to good (without outliers) for the CMA, according to the classification of the ICC of [49]. The intra-rater reliability between the first and the second session segmented by Ciloctunet was moderate (with outliers) to excellent (without outliers) for the PA and good for the CMA (Table 4).

3.6. Comparison of Ciliary Muscle Thickness Profiles

Figure 6 depicts the averaged CMT profiles with standard deviations derived from the segmentation of the OCT images of dataset A performed by the two examiners SW and TS, as well as by Ciloctunet. The CMT profiles resulting from the segmentation performed by Ciloctunet are slightly thicker, whereas those of TS are slightly thinner than those of SW. The average MAE (±SD) between Ciloctunet and SW is 35.16 ± 12.84 µm, between Ciloctunet and TS 45.86 ± 17.97 µm, and between SW and TS 34.99 ± 15.71 µm. Accordingly, the mean (±SD) IoU between Ciloctunet and SW (0.89 ± 0.04) is higher than that between Ciloctunet and TS (0.86 ± 0.05) and equal to that between SW and TS (0.89 ± 0.05).

3.7. Replication of the Results of Study A by Comparison of Biometric Parameters Derived from Ciloctunet Segmentations during Near and Far Accommodation

Results for the parameters PA and CMA of up to six OCT images per subject (n = 13) and condition (far and near) were averaged after a ciliary muscle segmentation with Ciloctunet. The paired-samples t-tests revealed statistically significant differences for the averaged PA (−52.64 ± 39.98 µm, t (12) = 4.7472, p = 0.0005) and for the CMA (−0.10 ± 0.06 mm2, t (12) = 6.1804, p < 0.0001) between near (PA: 702.58 ± 87.51 µm; CMA: 1.36 ± 0.17 mm2) and far accommodation (PA: 649.94 ± 59.91 µm; CMA: 1.27 ± 0.16 mm2).

4. Discussion

The deep neural network Ciloctunet, which leverages the Freiburg U-Net convolutional network architecture [33], was trained to perform an automated segmentation of the ciliary muscle in the AS-OCT images using data from two previously published studies [12,20]. The Freiburg U-Net architecture was chosen since it aims to lower the number of required samples to train the network by using annotated data more efficiently. In contrast to most other application areas of the Freiburg U-Net, which focus on the segmentation of areas, the derivation of the ciliary muscle’s biometric parameters and the prior distortion correction require the segmentation of the muscle’s borders (Figure 1). Therefore, the SoftMax loss layer of the net was replaced by an Infogain loss layer, which weights the loss according to the ratio of pixels belonging to the different segmentation classes, thus addressing the problem of class imbalance, which could otherwise result in a high accuracy simply by classifying everything as the background. Furthermore, the Infogain loss has been shown to achieve a better performance than the cross-entropy loss [37]. Using other loss functions, such as the Dice coefficient, which works similarly to the IoU metric and allows for dealing with class imbalances [51], a focal loss, as suggested by [52], or a combination of both [53], could further improve the performance and will be evaluated in the future. An alternative approach could use network architectures tailored to edge or contour detection, like the holistically nested edge detection (HED) network [54,55]. However, in this study, the Freiburg U-Net led to better results than HED. Future work might evaluate different network architectures, like the DeconvNet, SegNet, DeepLabv3+, Criss-Cross Network (CCNet), or Context Encoding Network (EncNet), for further improving the accuracy of the segmentation. Cabeza-Gil et al. recently published a comparison of several CNN architectures (U-Net and LinkNet, both with different backbone structures, like MobileNetv2, Vgg19, and EfficientNetb4) for the segmentation of the ciliary muscle in OCT images [53] and found the U-Net to have the highest performance compared to the others [53].
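
As a pointer to the Dice-based alternative mentioned above, a soft Dice loss for a single class can be written as in the following sketch; this is an assumption-level illustration and not part of Ciloctunet:

import numpy as np

def soft_dice_loss(probs, target, eps=1e-6):
    # 1 - Dice coefficient: probs are per-pixel probabilities for one class,
    # target is the binary ground truth of the same shape.
    p = probs.ravel().astype(float)
    t = target.ravel().astype(float)
    return 1.0 - (2.0 * (p * t).sum() + eps) / (p.sum() + t.sum() + eps)
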
Ciloctunet was trained for 30 epochs, though after 11 epochs the validation loss stopped decreasing, indicating that the network started to overfit the data. However, since the increase in the loss was minor, we decided against early stopping [56]. Overfitting could be avoided by increasing the number of training images, for instance by including the segmentation data of other examiners. This would probably also improve the generalization and increase the accuracy, which could alternatively be achieved by a further augmentation of the training dataset. Currently, the OCT images are augmented by mirroring, which not only increases the generalization but also allows for the segmentation of images taken from the left eye, and by Gaussian blurring with different kernel sizes, which simulates low OCT image quality. Additionally, warping of the images using elastic deformation [57,58] or modifying the image contrast either globally or locally could be applied. Other augmentation methods could also be beneficial [59].
Ciloctunet leverages 13 different segmentation classes (Table 1), whereby only a subset of them representing the tissue borders is used for the subsequent processing (distortion correction, polynomial spline fit, and the calculation of the biometric parameters). The additional segmentation classes were provided as an aid for the training of the network, since the Infogain loss penalizes overlapping areas.
The comparison of the segmentation results based on the parameter ciliary muscle area (class 8, Table 1) showed a good to very good IoU of 0.84, similar to the IoU of 0.84 between the examiner SW and examiner TS. The lower IoU of 0.80 between the segmentations of Ciloctunet and TS indicates that Ciloctunet probably resembles the characteristics of examiner SW, who performed the segmentation of the datasets B and C, which were used as Ground truth for the training. When evaluating this outcome, one has to take into consideration that the IoU metric is calculated without the removal of spurious segmentation results (Figure 3) using skeletonization and clustering, which is performed before a further analysis.
Two important anatomical landmarks of the ciliary muscle, the scleral spur and the ciliary muscle apex, were analyzed separately by comparing the differences in the absolute coordinates between the pairs of segmentations of the two examiners and Ciloctunet (Table 2). Interestingly, the variability of the differences is higher along the x-axis than along the y-axis for both the scleral spur and the ciliary muscle apex, whereby the variability of the differences for the scleral spur is, in general, smaller (Figure 4). The median Euclidean distance between the scleral spur coordinates derived from the segmentation of Ciloctunet and examiner SW is 67.44 µm with an interquartile range (IQR) of 87.65 µm. This corresponds to the values between the two examiners SW and TS, with a slightly lower median Euclidean distance of 60.94 µm and IQR of 83.40 µm. The median Euclidean distance between Ciloctunet and examiner TS is 99.92 (IQR: 127.09) µm. A previous study investigating the variability of the ciliary muscle’s segmentation in the OCT images of six subjects [60] reported an average inter-examiner difference in the scleral spur coordinates (presumably the Euclidean distance) of 122 µm and an intra-examiner standard deviation of 29 µm. Assuming a normal distribution, this corresponds to an IQR of 39.12 µm (=2 * 0.6745 * SD) [61,62], whereby the coordinates of the scleral spur were averaged over 10 images per subject. Ref. [63] trained a convolutional neural network to mark the position of the scleral spur in the AS-OCT images of 921 eyes and reported a CNN prediction error of the absolute coordinates (Euclidean distance) compared to the results of an experienced examiner of 73.08 µm with a standard deviation of 52.06 µm, which corresponds to an IQR of 70.23 µm, assuming a normal distribution. The reported inter-grader difference was 97.34 µm with a standard deviation of 73.29 (IQR: 98.87) µm.
The evaluation of the possible effects of the segmenter, distance, and session on the biometric parameters PA and CMA using linear mixed-effect models revealed an expected statistically significant effect of the accommodative demand for both parameters. A statistically significant effect of the segmenter was only present for the CMA, but not for the PA. These results indicate that for the PA, the segmenter is interchangeable, whereas for the CMA, Ciloctunet constantly overestimates and the examiner TS constantly underestimates the area of the ciliary muscle compared to examiner SW (Figure 5). Nevertheless, both segmenters as well as Ciloctunet detected the difference in the CMA between near and far accommodation. The comparison of the morphological changes in the ciliary muscle during near and far accommodation based on the segmentation of Ciloctunet resulted in statistically significant differences in both the PA and CMA (Figure 5), in the same range as those reported by [10]. The PA increased by about 53 µm from the far to near condition (Wagner et al.: ~43 µm), the CMA by about 0.1 mm2 (Wagner et al.: ~0.1 mm2).
The mean difference between the PA derived from the segmentation of Ciloctunet and the two examiners was 5.35 µm (Ciloctunet–SW) and −3.80 µm (Ciloctunet–TS), respectively, which is smaller than the mean difference of −9.60 µm between the two examiners (SW–TS) and considerably lower than those reported by [60] for the comparable parameter CMTMAX, derived from segmentation of two examiners (relaxed ciliary muscle: 20 µm, accommodated ciliary muscle: 25 µm). Cabeza-Gil et al. report a mean difference of 1.2 µm with a standard deviation of about 23.72 µm between CMTMAX derived from CNN-based segmentations and those performed by a human expert [53], therefore slightly better than the difference between CNN and human examiners found in this study.
The variability expressed as the standard deviation of the parameter PA between the first and second session of the segmentations performed by Ciloctunet is 84.07 µm, thus about the same as that reported by [60] (54 to 77 µm), taking into consideration that the standard deviation decreases with the square root of the number of samples (n = 6 subjects × 10 images). Table 5 summarizes the inter-examiner as well as the CNN-examiner differences as reported by several studies.
The average mean absolute error (MAE) of the CMT profiles between the segmentations of Ciloctunet and examiner SW is 35.16 µm and between Ciloctunet and examiner TS is 45.86 µm. Both are in the same range as the averaged MAE of 34.99 µm between the two examiners and about 2–3 times the axial resolution of 18 µm of the Zeiss Visante AS-OCT. Converted to pixels, this corresponds to a difference of approximately 4–5 pixels. Accordingly, the comparison of the CMT profiles shows high IoU values, indicating a high agreement of the CMT profiles derived from the different segmenters.
Interestingly, a statistically significant difference between the segmentations of Ciloctunet and the examiners was found only for the CMA and not for the PA (Table 4). This is probably explained by the summation of slight differences in the segmentation of the muscles’ boundaries along the extent of the ciliary muscle. Dividing the mean CMA difference of 0.08 mm2 between Ciloctunet and examiner SW by the length of 4 mm (taken from the scleral spur) used for calculating the CMA [10] results in an approximate difference of 20 µm or about 2.6 px per mm. This corresponds to a slight increase of two pixels in the ciliary muscle thickness (distance between the upper and lower boundaries). Therefore, although statistically significant, the difference is not clinically relevant. The summation of the differences seems to render selective thickness measurements, like the CMTMAX and perpendicular axis, or continuous ciliary muscle thickness profiles to be favorable over the ciliary muscle area for comparisons, given that the segmentations are not performed by only a single examiner. The application of Ciloctunet, currently trained using the segmentation of a single examiner as the Ground truth, avoids these differences. Furthermore, it also avoids a possible training effect in segmenting over time, which was observed by [10].
The analysis of the biometric parameters only uses some segmentation classes for the optical distortion correction, namely the boundaries of the ciliary muscle, the air-scleral border, and the borders to the anterior segment. While the definition of the ciliary muscle boundaries does not require each segmentation class (Table 1), they are needed for the distortion correction caused by the refractive indices of the different tissues. Furthermore, they could be used for other applications like the measurement of the scleral thickness [64], the scleral curvature [65], the segmentation of the angle recess and the trabecular iris space area, or the determination of the iridocorneal angle [1], which is used for the automatic detection of the angle closure [66,67].

5. Conclusions

By leveraging existing datasets from previous studies for training, validation, and testing, Ciloctunet not only proved the feasibility of the automated segmentation of the ciliary muscle in AS-OCT images, like a similar approach published recently [53], but also demonstrated performance on par with experienced examiners. Thereby, Ciloctunet enables the analysis of large numbers of images from large study cohorts while avoiding time-consuming manual segmentation and possible examiner biases. To the best of our knowledge, Ciloctunet is the first open-source solution for the fully automated segmentation of the ciliary muscle in AS-OCT images, which, being integrated into the open-source software CilOCT, leverages well-established workflows. Ciloctunet is available for download at https://github.com/strator1/Ciloctunet, accessed on 1 October 2022.

Author Contributions

Both authors, T.S. and S.W. contributed equally. Conceptualization, T.S. and S.W.; methodology, T.S.; software, T.S.; validation, S.W.; formal analysis, T.S. and S.W.; investigation, S.W.; data curation, T.S. and S.W.; writing—original draft preparation, T.S.; writing—review and editing, T.S. and S.W.; visualization, T.S.; funding acquisition, T.S. All authors have read and agreed to the published version of the manuscript.

Funding

Training of the deep neural network was performed on hardware acquired within the program Experiment! (93798) of the Volkswagen Foundation, granted to TS.

Institutional Review Board Statement

Ethical review and approval were waived for this study because it reused data obtained in previous studies, all of which had been approved by the local institutional review board.

Informed Consent Statement

Patient consent was waived because this study reused data obtained in previous studies, in which patient consent had been obtained.

Data Availability Statement

The Ciloctunet model structure, as well as the trained model, are available at https://github.com/strator1/Ciloctunet under an open-source license (GPLv3). For easy application, Ciloctunet was integrated into CilOCT, a software for the semi-automated segmentation and analysis of the ciliary muscle in OCT images, available at https://github.com/strator1/CilOCT, accessed on 1 October 2022.

Acknowledgments

We thank Anne Kurtenbach and Simon Clark for proofreading the manuscript and Eberhart Zrenner for support and critical discussion.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Lee, R.; Ahmed, I.I.K. Anterior Segment Optical Coherence Tomography. Tech. Ophthalmol. 2006, 4, 120–127.
2. Ang, M.; Baskaran, M.; Werkmeister, R.M.; Chua, J.; Schmidl, D.; Aranha dos Santos, V.; Garhöfer, G.; Mehta, J.S.; Schmetterer, L. Anterior segment optical coherence tomography. Prog. Retin. Eye Res. 2018, 66, 132–156.
3. Richdale, K.; Bailey, M.D.; Sinnott, L.T.; Kao, C.-Y.; Zadnik, K.; Bullimore, M.A. The Effect of Phenylephrine on the Ciliary Muscle and Accommodation. Optom. Vis. Sci. 2012, 89, 1507–1511.
4. Lewis, H.A.; Kao, C.-Y.; Sinnott, L.T.; Bailey, M.D. Changes in Ciliary Muscle Thickness During Accommodation in Children. Optom. Vis. Sci. 2012, 89, 727–737.
5. Lossing, L.A.; Sinnott, L.T.; Kao, C.-Y.; Richdale, K.; Bailey, M.D. Measuring changes in ciliary muscle thickness with accommodation in young adults. Optom. Vis. Sci. 2012, 89, 719–726.
6. Richdale, K.; Sinnott, L.T.; Bullimore, M.A.; Wassenaar, P.A.; Schmalbrock, P.; Kao, C.-Y.; Patz, S.; Mutti, D.O.; Glasser, A.; Zadnik, K. Quantification of Age-Related and per Diopter Accommodative Changes of the Lens and Ciliary Muscle in the Emmetropic Human Eye. Investig. Ophthalmol. Vis. Sci. 2013, 54, 1095.
7. Ruggeri, M.; de Freitas, C.; Williams, S.; Hernandez, V.M.; Cabot, F.; Yesilirmak, N.; Alawa, K.; Chang, Y.-C.; Yoo, S.H.; Gregori, G.; et al. Quantification of the ciliary muscle and crystalline lens interaction during accommodation with synchronous OCT imaging. Biomed. Opt. Express 2016, 7, 1351–1364.
8. Ruggeri, M.; Hernandez, V.; de Freitas, C.; Manns, F.; Parel, J.-M. Biometry of the ciliary muscle during dynamic accommodation assessed with OCT. In Ophthalmic Technologies XXIV; Manns, F., Söderberg, P.G., Ho, A., Eds.; SPIE: Bellingham, WA, USA, 2014; Volume 8930, p. 89300W.
9. Shao, Y.; Tao, A.; Jiang, H.; Shen, M.; Zhong, J.; Lu, F.; Wang, J. Simultaneous real-time imaging of the ocular anterior segment including the ciliary muscle during accommodation. Biomed. Opt. Express 2013, 4, 466–480.
10. Wagner, S.; Zrenner, E.; Strasser, T. Ciliary muscle thickness profiles derived from optical coherence tomography images. Biomed. Opt. Express 2018, 9, 5100.
11. Buckhurst, H.; Gilmartin, B.; Cubbidge, R.P.; Nagra, M.; Logan, N.S. Ocular biometric correlates of ciliary muscle thickness in human myopia. Ophthalmic Physiol. Opt. 2013, 33, 294–304.
12. Wagner, S.; Zrenner, E.; Strasser, T. Emmetropes and myopes differ little in their accommodation dynamics but strongly in their ciliary muscle morphology. Vision Res. 2019, 163, 42–51.
13. Jeon, S.; Lee, W.K.; Lee, K.; Moon, N.J. Diminished ciliary muscle movement on accommodation in myopia. Exp. Eye Res. 2012, 105, 9–14.
14. Bailey, M.D.; Sinnott, L.T.; Mutti, D.O. Ciliary Body Thickness and Refractive Error in Children. Investig. Ophthalmol. Vis. Sci. 2008, 49, 4353.
15. Pucker, A.D.; Sinnott, L.T.; Kao, C.-Y.; Bailey, M.D. Region-Specific Relationships Between Refractive Error and Ciliary Muscle Thickness in Children. Investig. Ophthalmol. Vis. Sci. 2013, 54, 4710.
16. Sheppard, A.L.; Davies, L.N. The Effect of Ageing on In Vivo Human Ciliary Muscle Morphology and Contractility. Investig. Ophthalmol. Vis. Sci. 2011, 52, 1809.
17. Kuchem, M.K.; Sinnott, L.T.; Kao, C.-Y.; Bailey, M.D. Ciliary Muscle Thickness in Anisometropia. Optom. Vis. Sci. 2013, 90, 1312–1320.
18. Schultz, K.E.; Sinnott, L.T.; Mutti, D.O.; Bailey, M.D. Accommodative Fluctuations, Lens Tension, and Ciliary Body Thickness in Children. Optom. Vis. Sci. 2009, 86, 677–684.
19. Sheppard, A.L.; Davies, L.N. In vivo analysis of ciliary muscle morphologic changes with accommodation and axial ametropia. Investig. Ophthalmol. Vis. Sci. 2010, 51, 6882–6889.
20. Wagner, S.; Schaeffel, F.; Zrenner, E.; Straßer, T. Prolonged nearwork affects the ciliary muscle morphology. Exp. Eye Res. 2019, 186, 107741.
21. Muftuoglu, O.; Hosal, B.M.; Zilelioglu, G. Ciliary body thickness in unilateral high axial myopia. Eye 2009, 23, 1176–1181.
22. Kao, C.-Y.; Richdale, K.; Sinnott, L.T.; Grillott, L.E.; Bailey, M.D. Semiautomatic extraction algorithm for images of the ciliary muscle. Optom. Vis. Sci. 2011, 88, 275–289.
23. Laughton, D.S.; Coldrick, B.J.; Sheppard, A.L.; Davies, L.N. A program to analyse optical coherence tomography images of the ciliary muscle. Contact Lens Anterior Eye 2015, 38, 402–408.
24. Kaphle, D.; Schmid, K.L.; Davies, L.N.; Suheimat, M.; Atchison, D.A. Ciliary Muscle Dimension Changes With Accommodation Vary in Myopia and Emmetropia. Investig. Ophthalmol. Vis. Sci. 2022, 63, 24.
25. Monsálvez-Romín, D.; Domínguez-Vicent, A.; Esteve-Taboada, J.J.; Montés-Micó, R.; Ferrer-Blasco, T. Multisectorial changes in the ciliary muscle during accommodation measured with high-resolution optical coherence tomography. Arq. Bras. Oftalmol. 2019, 82, 207–213.
26. Domínguez-Vicent, A.; Monsálvez-Romín, D.; Esteve-Taboada, J.J.; Montés-Micó, R.; Ferrer-Blasco, T. Effect of age in the ciliary muscle during accommodation: Sectorial analysis. J. Optom. 2018, 12, 14–21.
27. Esteve-Taboada, J.J.; Domínguez-Vicent, A.; Monsálvez-Romín, D.; Del Águila-Carrasco, A.J.; Montés-Micó, R. Non-invasive measurements of the dynamic changes in the ciliary muscle, crystalline lens morphology, and anterior chamber during accommodation with a high-resolution OCT. Graefe’s Arch. Clin. Exp. Ophthalmol. 2017, 255, 1385–1394.
28. Fernández-Vigo, J.I.; Shi, H.; Kudsieh, B.; Arriola-Villalobos, P.; De-Pablo Gómez-de-Liaño, L.; García-Feijóo, J.; Fernández-Vigo, J.Á. Ciliary muscle dimensions by swept-source optical coherence tomography and correlation study in a large population. Acta Ophthalmol. 2020, 98, aos.14304.
29. Moulakaki, A.I.; Monsálvez-Romín, D.; Domínguez-Vicent, A.; Esteve-Taboada, J.J.; Montés-Micó, R. Semiautomatic procedure to assess changes in the eye accommodative system. Int. Ophthalmol. 2018, 38, 2451–2462.
30. Shi, J.; Zhao, J.; Zhao, F.; Naidu, R.; Zhou, X. Ciliary muscle morphology and accommodative lag in hyperopic anisometropic children. Int. Ophthalmol. 2020, 40, 917–924.
31. Bailey, M.D. How should we measure the ciliary muscle? Investig. Ophthalmol. Vis. Sci. 2011, 52, 1817–1818.
32. Straßer, T.; Wagner, S.; Zrenner, E. Review of the application of the open-source software CilOCT for semi-automatic segmentation and analysis of the ciliary muscle in OCT images. PLoS ONE 2020, 15, e0234330.
33. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Medical Image Computing and Computer-Assisted Intervention—MICCAI 2015, Proceedings of the 18th International Conference, Munich, Germany, 5–9 October 2015; Navab, N., Hornegger, J., Wells, W.M., Frangi, A.F., Eds.; Lecture Notes in Computer Science; Springer International Publishing: Cham, Switzerland, 2015; Volume 9351, pp. 234–241. ISBN 978-3-319-24573-7.
34. Shelhamer, E.; Long, J.; Darrell, T. Fully Convolutional Networks for Semantic Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 640–651.
35. Team, T.I.D. ImageMagick. Available online: https://imagemagick.org (accessed on 12 June 2020).
36. Ioffe, S.; Szegedy, C. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In Proceedings of the ICML’15: 32nd International Conference on International Conference on Machine Learning, Lille, France, 6–11 July 2015; Volume 37, pp. 448–456.
37. Yu, N.; Shen, X.; Lin, Z.; Mech, R.; Barnes, C. Learning to Detect Multiple Photographic Defects. In Proceedings of the 2018 IEEE Winter Conference on Applications of Computer Vision (WACV), Lake Tahoe, NV, USA, 12–15 March 2018; Volume 2018, pp. 1387–1396.
38. Jia, Y.; Shelhamer, E.; Donahue, J.; Karayev, S.; Long, J.; Girshick, R.; Guadarrama, S.; Darrell, T. Caffe. In Proceedings of the ACM International Conference on Multimedia—MM ’14, Orlando, FL, USA, 3–7 November 2014; ACM Press: New York, NY, USA, 2014; pp. 675–678.
39. He, K.; Zhang, X.; Ren, S.; Sun, J. Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 1026–1034.
40. Bradski, G. The OpenCV Library. Dr. Dobb’s J. Softw. Tools 2000, 25, 120–123.
41. Chen, Y.-S.; Hsu, W.-H. A modified fast parallel algorithm for thinning digital patterns. Pattern Recognit. Lett. 1988, 7, 99–106.
42. Reza, N. Zhang-Suen Thinning Algorithm, Java Implementation. Available online: https://nayefreza.wordpress.com/2013/05/11/zhang-suen-thinning-algorithm-java-implementation/ (accessed on 24 June 2020).
43. Ester, M.; Kriegel, H.-P.; Sander, J.; Xu, X. A density-based algorithm for discovering clusters in large spatial databases with noise. In Proceedings of the KDD’96: Second International Conference on Knowledge Discovery and Data Mining, Portland, OR, USA, 2–4 August 1996; Association for the Advancement of Artificial Intelligence: Menlo Park, CA, USA, 1996; pp. 226–231.
44. Jaccard, P. Lois de distribution florale dans la zone alpine. Bull. la Société vaudoise des Sci. Nat. 1902, 38, 72.
45. Hair, J.F.J.; Anderson, R.E.; Tatham, R.L.; Black, W.C. Multivariate Data Analysis, 3rd ed.; Macmillan: New York, NY, USA, 1995.
46. Crosby, J.M.; Twohig, M.P.; Phelps, B.I.; Fornoff, A.; Boie, I.; Mazur-Mosiewicz, A.; Dean, R.S.; Mazur-Mosiewicz, A.; Dean, R.S.; Allen, R.L.; et al. Homoscedasticity. In Encyclopedia of Child Behavior and Development; Springer: Boston, MA, USA, 2011; p. 752.
47. Santos Nobre, J.; da Motta Singer, J. Residual Analysis for Linear Mixed Models. Biom. J. 2007, 49, 863–875.
48. Bland, J.M.; Altman, D.G. Statistical methods for assessing agreement between two methods of clinical measurement. Lancet 1986, 1, 307–310.
49. Koo, T.K.; Li, M.Y. A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research. J. Chiropr. Med. 2016, 15, 155–163.
50. Willmott, C.; Matsuura, K. Advantages of the mean absolute error (MAE) over the root mean square error (RMSE) in assessing average model performance. Clim. Res. 2005, 30, 79–82.
51. Milletari, F.; Navab, N.; Ahmadi, S.-A. V-Net: Fully Convolutional Neural Networks for Volumetric Medical Image Segmentation. In Proceedings of the IEEE 2016 Fourth International Conference on 3D Vision (3DV), Stanford, CA, USA, 25–28 October 2016; pp. 565–571.
52. Lin, T.-Y.; Goyal, P.; Girshick, R.; He, K.; Dollár, P. Focal Loss for Dense Object Detection. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 42, 318–327.
53. Cabeza-Gil, I.; Ruggeri, M.; Chang, Y.-C.; Calvo, B.; Manns, F. Automated segmentation of the ciliary muscle in OCT images using fully convolutional networks. Biomed. Opt. Express 2022, 13, 2810.
54. Xie, S.; Tu, Z. Holistically-Nested Edge Detection. Int. J. Comput. Vis. 2015, 125, 3–18.
55. Hui, B.; Xu, Z.; Luo, H.; Chang, Z. Contour detection using an improved holistically-nested edge detection network. In Proceedings of the Global Intelligence Industry Conference (GIIC 2018), Beijing, China, 21–23 May 2018; Lv, Y., Ed.; SPIE: Bellingham, WA, USA, 2018; Volume 10835, p. 2.
56. Prechelt, L. Early Stopping—But When? In Neural Networks: Tricks of the Trade. Lecture Notes in Computer Science; Orr, G., Müller, K., Eds.; Springer: Berlin/Heidelberg, Germany, 1998; pp. 55–69. ISBN 978-3-540-65311-0.
57. Simard, P.Y.; Steinkraus, D.; Platt, J.C. Best practices for convolutional neural networks applied to visual document analysis. In Proceedings of the Seventh International Conference on Document Analysis and Recognition, Edinburgh, UK, 3–6 August 2003; IEEE Computer Society: Washington, DC, USA, 2003; Volume 1, pp. 958–963.
58. Wong, S.C.; Gatt, A.; Stamatescu, V.; McDonnell, M.D. Understanding Data Augmentation for Classification: When to Warp? In Proceedings of the IEEE 2016 International Conference on Digital Image Computing: Techniques and Applications (DICTA), Gold Coast, Australia, 30 November–2 December 2016; pp. 1–6.
59. Shorten, C.; Khoshgoftaar, T.M. A survey on Image Data Augmentation for Deep Learning. J. Big Data 2019, 6, 60.
60. Chang, Y.-C.; Liu, K.; Cabot, F.; Yoo, S.H.; Ruggeri, M.; Ho, A.; Parel, J.-M.; Manns, F. Variability of manual ciliary muscle segmentation in optical coherence tomography images. Biomed. Opt. Express 2018, 9, 791.
61. Bland, M. Estimating Mean and Standard Deviation from the Sample Size, Three Quartiles, Minimum, and Maximum. Int. J. Stat. Med. Res. 2015, 4, 57–64.
62. Wan, X.; Wang, W.; Liu, J.; Tong, T. Estimating the sample mean and standard deviation from the sample size, median, range and/or interquartile range. BMC Med. Res. Methodol. 2014, 14, 135.
63. Xu, B.Y.; Chiang, M.; Pardeshi, A.A.; Moghimi, S.; Varma, R. Deep Neural Network for Scleral Spur Detection in Anterior Segment OCT Images: The Chinese American Eye Study. Transl. Vis. Sci. Technol. 2020, 9, 18.
64. Buckhurst, H.D.; Gilmartin, B.; Cubbidge, R.P.; Logan, N.S. Measurement of Scleral Thickness in Humans Using Anterior Segment Optical Coherent Tomography. PLoS ONE 2015, 10, e0132902.
65. Choi, H.J.; Lee, S.-M.; Lee, J.Y.; Lee, S.Y.; Kim, M.K.; Wee, W.R. Measurement of anterior scleral curvature using anterior segment OCT. Optom. Vis. Sci. 2014, 91, 793–802.
66. Fu, H.; Baskaran, M.; Xu, Y.; Lin, S.; Wong, D.W.K.; Liu, J.; Tun, T.A.; Mahesh, M.; Perera, S.A.; Aung, T. A Deep Learning System for Automated Angle-Closure Detection in Anterior Segment Optical Coherence Tomography Images. Am. J. Ophthalmol. 2019, 203, 37–45.
67. Xu, B.Y.; Chiang, M.; Chaudhary, S.; Kulkarni, S.; Pardeshi, A.A.; Varma, R. Deep Learning Classifiers for Automated Detection of Gonioscopic Angle Closure Based on Anterior Segment OCT Images. Am. J. Ophthalmol. 2019, 208, 273–280.
Figure 1. (a) OCT image of the ciliary muscle (here with eye wearing a soft contact lens) (feature image), (b) the semi-automated segmentation (red: ciliary muscle borders, orange: scleral-conjunctival boundary, turquoise: contact lens, yellow and orange: anterior chamber borders), (c) the label image based on image (b) with the 13 segmentation classes (Table 1) used as Ground truth for training the network.
Figure 1. (a) OCT image of the ciliary muscle (here with eye wearing a soft contact lens) (feature image), (b) the semi-automated segmentation (red: ciliary muscle borders, orange: scleral-conjunctival boundary, turquoise: contact lens, yellow and orange: anterior chamber borders), (c) the label image based on image (b) with the 13 segmentation classes (Table 1) used as Ground truth for training the network.
Diagnostics 12 03055 g001
Figure 2. Workflow of the preparation of training and validation data. DICOM files are converted to feature and label (Ground truth) images using previously created segmentation files. All images are mirrored vertically, and the training data are additionally augmented using Gaussian blur of five different radii.
Figure 2. Workflow of the preparation of training and validation data. DICOM files are converted to feature and label (Ground truth) images using previously created segmentation files. All images are mirrored vertically, and the training data are additionally augmented using Gaussian blur of five different radii.
Diagnostics 12 03055 g002
Figure 3. An exemplary result of the segmentation of an OCT image of the ciliary muscle performed by Ciloctunet. The colors correspond to the segmentation classes listed in Table 1. There are several wrong classifications, however, most of them only affect areas that are not used for further processing.
Figure 3. An exemplary result of the segmentation of an OCT image of the ciliary muscle performed by Ciloctunet. The colors correspond to the segmentation classes listed in Table 1. There are several wrong classifications, however, most of them only affect areas that are not used for further processing.
Diagnostics 12 03055 g003
Figure 4. Distribution of the differences of the x/y-coordinates of the ciliary muscle apex and the scleral spur between different pairs of segmentations and the corresponding 95% density ellipses (red: Ciloctunet vs. SW; blue: Ciloctunet vs. TS; green: SW vs. TS). The density plots depict the shape of the distribution of the x- and y-coordinate differences for the respective pairs of segmentations.
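A 95% density ellipse of this kind can be derived from the sample covariance of the paired x/y-differences under an approximate bivariate-normal assumption. The following Python sketch (illustrative, not the analysis code used in this study) returns the ellipse centre, semi-axis lengths, and orientation.

```python
# Sketch: 95% density ellipse parameters for paired x/y landmark differences,
# assuming an approximately bivariate-normal distribution.
import numpy as np
from scipy.stats import chi2

def density_ellipse(dx: np.ndarray, dy: np.ndarray, coverage: float = 0.95):
    """Return centre, semi-axis lengths, and rotation angle (radians)."""
    pts = np.column_stack([dx, dy])
    centre = pts.mean(axis=0)
    cov = np.cov(pts, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)        # eigenvalues in ascending order
    scale = np.sqrt(chi2.ppf(coverage, df=2))     # ≈ 2.45 for 95% coverage, 2 dof
    semi_axes = scale * np.sqrt(eigvals)          # semi-minor, semi-major
    angle = np.arctan2(eigvecs[1, -1], eigvecs[0, -1])  # orientation of major axis
    return centre, semi_axes, angle
```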
Figure 5. Least-squares means plots of the results of the linear mixed-effects models for the dependent variables perpendicular axis (PA, top) and ciliary muscle area (CMA, bottom) for the fixed effects accommodative demand, segmenter, and session. In both models, the accommodative demand was found to be statistically significant. Only for the CMA did the segmenter also have a statistically significant effect.
Figure 6. Averaged CMT profiles of dataset A during near and far accommodation derived from the different segmenters SW (green), TS (blue), and Ciloctunet (red). The shaded areas denote one standard deviation.
Table 1. Segmentation classes and their corresponding colors, as used in the labeled images (Ground truth).

#    Segmentation Class               Color        RGB Value
1    Background                       Black        0, 0, 0
2    Inner scleral border             White        255, 255, 255
3    Upper iris border                Cyan         0, 255, 255
4    Outer conjunctiva border         Yellow       255, 255, 0
5    Outer ciliary muscle border      Red          255, 0, 0
6    Inner ciliary muscle border      Green        0, 255, 0
7    Vertical ciliary muscle border   Blue         0, 0, 255
8    Ciliary muscle area              Dark gray    160, 160, 160
9    Upper contact lens border 1      Magenta      240, 0, 240
10   Lower contact lens border 1      Light gray   240, 240, 240
11   Contact lens area 1              Purple       160, 0, 160
12   Anterior chamber area            Sienna       240, 0, 15
13   Air area                         Lavender     209, 209, 255

1 Refers to an optionally present soft contact lens.
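For illustration, a label image encoded with the colors of Table 1 could be converted into per-pixel class indices as sketched below. This is hypothetical Python code, not the original Ciloctunet preprocessing; the 13 classes of Table 1 are mapped to indices 0–12.

```python
# Sketch: map the Table 1 label colors to class indices when turning an RGB
# label image into a per-pixel class map for training.
import numpy as np

CLASS_COLORS = {  # RGB value -> class index (Table 1 classes 1-13 -> 0-12)
    (0, 0, 0): 0,        # background
    (255, 255, 255): 1,  # inner scleral border
    (0, 255, 255): 2,    # upper iris border
    (255, 255, 0): 3,    # outer conjunctiva border
    (255, 0, 0): 4,      # outer ciliary muscle border
    (0, 255, 0): 5,      # inner ciliary muscle border
    (0, 0, 255): 6,      # vertical ciliary muscle border
    (160, 160, 160): 7,  # ciliary muscle area
    (240, 0, 240): 8,    # upper contact lens border
    (240, 240, 240): 9,  # lower contact lens border
    (160, 0, 160): 10,   # contact lens area
    (240, 0, 15): 11,    # anterior chamber area
    (209, 209, 255): 12, # air area
}

def rgb_label_to_classes(label_rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) uint8 label image to an (H, W) array of class indices."""
    classes = np.zeros(label_rgb.shape[:2], dtype=np.uint8)
    for rgb, idx in CLASS_COLORS.items():
        mask = np.all(label_rgb == np.array(rgb, dtype=np.uint8), axis=-1)
        classes[mask] = idx
    return classes
```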
Table 2. Descriptive statistics of the differences of the respective x/y-coordinates of the ciliary muscle apex and the scleral spur between different pairs of segmentations.

                                                 Ciloctunet–SW            Ciloctunet–TS            TS–SW
Ciliary muscle apex (n = 130)
Difference x, mean ± SD (µm)                     −21.82 ± 139.27          30.89 ± 148.86           46.15 ± 142.43
Difference y, mean ± SD (µm)                     −3.66 ± 31.08            −1.03 ± 37.26            2.27 ± 38.00
Absolute difference x, median [Q25, Q75] (µm)    85.16 [44.53, 147.66]    101.17 [55.66, 153.71]   60.16 [24.61, 110.94]
Absolute difference y, median [Q25, Q75] (µm)    18.75 [9.37, 33.40]      21.68 [11.72, 36.91]     20.51 [8.79, 38.38]
Euclidean distance, median [Q25, Q75] (µm)       89.38 [54.72, 152.11]    111.17 [65.69, 156.24]   71.44 [34.62, 127.54]
Scleral spur (n = 131)
Difference x, mean ± SD (µm)                     51.68 ± 80.61            98.63 ± 103.32           48.99 ± 92.32
Difference y, mean ± SD (µm)                     −20.16 ± 23.10           −33.13 ± 31.87           −14.11 ± 29.62
Absolute difference x, median [Q25, Q75] (µm)    65.23 [27.34, 114.06]    94.53 [35.16, 157.03]    58.59 [25.39, 112.50]
Absolute difference y, median [Q25, Q75] (µm)    19.92 [8.20, 36.62]      31.05 [15.82, 48.05]     15.23 [4.98, 30.47]
Euclidean distance, median [Q25, Q75] (µm)       67.44 [29.61, 117.26]    99.92 [45.94, 173.03]    60.94 [31.83, 115.23]
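The statistics summarized in Table 2 can be derived from paired landmark coordinates as sketched below (illustrative Python code; array names and input handling are assumptions).

```python
# Sketch: per-image coordinate differences and Euclidean distances between the
# landmark positions (e.g., scleral spur) of two segmentations, in micrometres.
import numpy as np

def landmark_differences(a_xy: np.ndarray, b_xy: np.ndarray) -> dict:
    """a_xy, b_xy: (n, 2) arrays of paired (x, y) coordinates in µm."""
    diff = a_xy - b_xy
    abs_diff = np.abs(diff)
    dist = np.linalg.norm(diff, axis=1)          # per-image Euclidean distance
    quartiles = lambda v: np.percentile(v, [25, 50, 75])
    return {
        "diff_x_mean_sd": (diff[:, 0].mean(), diff[:, 0].std(ddof=1)),
        "diff_y_mean_sd": (diff[:, 1].mean(), diff[:, 1].std(ddof=1)),
        "abs_x_q25_median_q75": quartiles(abs_diff[:, 0]),
        "abs_y_q25_median_q75": quartiles(abs_diff[:, 1]),
        "euclidean_q25_median_q75": quartiles(dist),
    }
```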
Table 3. Results of the linear mixed-effects models with the dependent variables perpendicular axis (PA) and ciliary muscle area (CMA).

                                    PA (n = 380, R²adj. = 0.73)           CMA (n = 380, R²adj. = 0.54)
Fixed Effect                        F-Statistic              p-Value      F-Statistic              p-Value
Distance                            F(1, 356.17) = 71.2319   <0.0001      F(1, 356.47) = 27.6147   <0.0001
Segmenter                           F(2, 356.00) = 0.9343    0.3938       F(2, 355.99) = 30.6360   <0.0001
Session                             F(1, 356.28) = 0.0003    0.9852       F(1, 356.77) = 2.0181    0.1563
Distance × Segmenter                F(2, 356.00) = 0.5168    0.5969       F(2, 356.00) = 2.3787    0.0941
Distance × Session                  F(1, 356.27) = 1.6976    0.1934       F(1, 356.75) = 1.3051    0.2541
Segmenter × Session                 F(2, 356.00) = 0.6444    0.5256       F(2, 355.99) = 0.8143    0.4438
Distance × Segmenter × Session      F(2, 356.00) = 0.0712    0.9313       F(2, 356.01) = 0.3828    0.6823
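The following Python sketch fits a model with the same fixed-effects structure (distance, segmenter, session, and their interactions) and a per-subject random intercept using statsmodels. It is illustrative only: the column names are assumptions, and statsmodels reports coefficient-level Wald tests rather than the Type III F-tests with Satterthwaite-approximated denominator degrees of freedom shown in Table 3.

```python
# Sketch: linear mixed-effects model with the fixed effects of Table 3 and a
# random intercept per subject (column names are hypothetical).
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("biometric_parameters.csv")  # hypothetical input file

model = smf.mixedlm(
    "pa ~ C(distance) * C(segmenter) * C(session)",
    data=df,
    groups=df["subject"],
)
result = model.fit()
print(result.summary())
```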
Table 4. Results of the paired samples t-tests between the biometric parameters perpendicular axis (PA) and ciliary muscle area (CMA), derived from the segmentations performed by the two examiners SW and TS, and Ciloctunet.

Comparison                  Mean Diff. ± SD   df    t-Value a   p-Value   Limits of Agreement   ICC b,d [95% CI]    ICC b,c,d [95% CI]
Perpendicular axis (µm)
Ciloctunet–SW               5.35 ± 67.64      119   0.8668      0.3878    [−127.21, 137.92]     0.85 [0.79, 0.90]   0.93 [0.90, 0.95]
Ciloctunet–TS               −3.80 ± 77.10     122   −0.5473     0.0585    [−154.93, 147.32]     0.82 [0.74, 0.87]   0.90 [0.86, 0.93]
SW–TS                       −9.60 ± 57.11     124   −1.8793     0.0626    [−121.54, 102.34]     0.91 [0.88, 0.94]   0.91 [0.88, 0.94]
Ciloctunet, Session 1–2     13.23 ± 84.07     49    1.1132      0.2711    [−151.54, 178.01]     0.70 [0.47, 0.83]   0.93 [0.87, 0.96]
Ciliary muscle area (mm²)
Ciloctunet–SW               −0.08 ± 0.19      119   −4.8045     <0.0001   [−0.45, 0.29]         0.65 [0.45, 0.77]   0.71 [0.44, 0.83]
Ciloctunet–TS               −0.14 ± 0.19      122   −7.8459     <0.0001   [−0.51, 0.24]         0.58 [0.19, 0.76]   0.58 [0.19, 0.76]
SW–TS                       −0.06 ± 0.21      124   −3.2455     0.0015    [−0.47, 0.35]         0.63 [0.47, 0.74]   0.67 [0.53, 0.77]
Ciloctunet, Session 1–2     0.00 ± 0.17       49    −0.1058     0.9162    [−0.34, 0.33]         0.75 [0.57, 0.86]   0.75 [0.57, 0.86]
a paired-samples t-test, all differences were normally distributed, b ICC: two-way mixed, average measure, absolute agreement, c ICC with two outliers excluded, d ICC classification: <0.5: poor; <0.75: moderate; <0.9: good; ≥0.9: excellent [49].
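For one pair of segmenters, the agreement statistics of Table 4 (paired t-test and Bland–Altman 95% limits of agreement) can be computed as sketched below (illustrative Python code; variable names are assumptions). The ICC would additionally require a dedicated routine, e.g., pingouin.intraclass_corr(), from whose output the row matching a two-way model with absolute agreement and average measures has to be selected.

```python
# Sketch: paired t-test and Bland-Altman 95% limits of agreement for one pair
# of segmenters (e.g., Ciloctunet vs. SW), measurements in µm or mm².
import numpy as np
from scipy import stats

def paired_agreement(a: np.ndarray, b: np.ndarray) -> dict:
    """a, b: paired measurements of the same images by two segmenters."""
    diff = a - b
    t_stat, p_val = stats.ttest_rel(a, b)
    mean_diff, sd_diff = diff.mean(), diff.std(ddof=1)
    loa = (mean_diff - 1.96 * sd_diff, mean_diff + 1.96 * sd_diff)  # 95% limits of agreement
    return {"mean_diff": mean_diff, "sd_diff": sd_diff,
            "df": len(a) - 1, "t": t_stat, "p": p_val, "loa": loa}
```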
Table 5. Overview of inter-examiner and CNN-examiner differences between absolute coordinates of the scleral spur and the biometric parameter ciliary muscle thickness (CMTMAX, PA) derived from segmented OCT images as reported by different studies.

Euclidean distance between absolute scleral spur coordinates (µm)
                                Inter-Examiner Difference            CNN-Examiner Difference
                                Mean/Median    SD       IQR          Mean/Median    SD       IQR
Current study                   60.94                   83.40        SW: 67.44               87.65
                                                                     TS: 99.92               83.40
Chang et al. (2018) [60]        122
Xu et al. (2020) [63]           97.34          73.29    98.87 1      73.08          52.06    70.23 1

Pointwise ciliary muscle thickness difference: CMTMAX/PA mean (µm)
                                Inter-Examiner Difference            CNN-Examiner Difference
                                Mean                SD               Mean           SD
Current study                   −9.60               57.11            SW: 5.35       67.64
                                                                     TS: −3.80      77.10
Chang et al. (2018) [60]        Relaxed: 20         69.57 2,3
                                Accommodated: 25    34.79 2,3
Cabeza-Gil et al. (2022) [53]                                        1.22           3.72 4

1 IQR calculated from the standard deviation according to [61,62], assuming a normal distribution (IQR = 2 × 0.6745 × standard deviation); 2 corrected for averaging of 10 images by a factor of the square root of 10; 3 estimated from Bland–Altman plot; 4 calculated from Bland–Altman limits of agreement.
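The conversion described in footnote 1, estimating the IQR from a standard deviation under a normality assumption, corresponds to the following illustrative Python snippet.

```python
# Footnote 1 conversion: IQR = 2 * z(0.75) * SD with z(0.75) ≈ 0.6745 under
# a normal distribution.
from scipy.stats import norm

def iqr_from_sd(sd: float) -> float:
    return 2.0 * norm.ppf(0.75) * sd

print(round(iqr_from_sd(73.29), 2))  # ≈ 98.87, as in the Xu et al. row of Table 5
print(round(iqr_from_sd(52.06), 2))  # ≈ 70.23
```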