
CellsDeepNet: A Novel Deep Learning-Based Web Application for the Automated Morphometric Analysis of Corneal Endothelial Cells

1 Computer Technologies Engineering Department, Information Technology College, Imam Ja’afar Al-Sadiq University, Baghdad 10064, Iraq
2 Department of Information Technology, College of Computer and Information Sciences, Majmaah University, Al Majmaah 11952, Saudi Arabia
3 Computer Science Department, Al-Ma’aref University College, Ramadi, Anbar 31001, Iraq
4 School of Electrical Engineering and Computer Science, University of Bradford, Bradford BD7 1DP, UK
5 Division of Medicine, Weill Cornell Medicine-Qatar, Doha 24144, Qatar
6 Institute of Cardiovascular Medicine, University of Manchester, Manchester M13 9NT, UK
7 Manchester Royal Infirmary, Central Manchester Hospital Foundation Trust, Manchester M13 9NT, UK
8 Information Systems Department, College of Computer Science and Information Technology, University of Anbar, Ramadi 31001, Iraq
9 Faculty of Applied Computing and Technology, Noroff University College, 4612 Kristiansand, Norway
* Author to whom correspondence should be addressed.
Mathematics 2022, 10(3), 320; https://doi.org/10.3390/math10030320
Submission received: 21 December 2021 / Revised: 14 January 2022 / Accepted: 18 January 2022 / Published: 20 January 2022
(This article belongs to the Special Issue Computer Graphics, Image Processing and Artificial Intelligence)

Abstract

The quantification of corneal endothelial cell (CEC) morphology using manual and semi-automatic software enables an objective assessment of corneal endothelial pathology. However, the procedure is tedious, subjective, and not widely applied in clinical practice. We have developed the CellsDeepNet system to automatically segment and analyse CEC morphology. The CellsDeepNet system uses Contrast-Limited Adaptive Histogram Equalization (CLAHE) to improve the contrast of the CEC images and reduce the effects of non-uniform image illumination, the 2D Double-Density Dual-Tree Complex Wavelet Transform (2DDD-TCWT) to reduce noise, a Butterworth Bandpass filter to enhance the CEC edges, and a moving average filter to adjust the brightness level. An improved version of U-Net was used to detect the boundaries of the CECs, regardless of the CEC size. CEC morphology was measured as mean cell density (MCD, cell/mm2), mean cell area (MCA, μm2), mean cell perimeter (MCP, μm), polymegathism (coefficient of CEC size variation), and pleomorphism (percentage of hexagonality coefficient). The CellsDeepNet system correlated highly significantly with the manual estimations for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p < 0.0001 for all the extracted clinical features. The Bland–Altman plots showed excellent agreement. The percentage difference between the manual and automated estimations was superior for the CellsDeepNet system compared to the CEAS system and other state-of-the-art CEC segmentation systems on three large and challenging corneal endothelium image datasets captured using two different ophthalmic devices.

1. Introduction

In Vivo Confocal Microscopy (IVCM) is a rapid, non-invasive imaging method used to capture high-resolution images from all corneal layers [1]. The acquired images are useful for extracting important clinical information and quantifying morphological alterations in the human cornea to provide insights into a wide range of corneal endothelial cell pathologies and infections. The corneal endothelium is a monolayer of hexagonal corneal endothelial cells (CECs) that lines the posterior surface of the cornea [2]. These CECs are vital for corneal transparency by maintaining an optimal state of corneal stromal hydration [3,4,5]. The corneal endothelial layer comprises 2300 to 2500 cells/mm2, which are of uniform size with hexagonal forms and a honeycomb appearance [6]. The density, size, and shape of the CECs can be affected by ageing and a range of ocular and systemic pathologies as well as intraocular surgery [2]. CEC density and morphology can be used to define the functional ability of a donor cornea before corneal transplantation [7,8].
Ideally, several morphological features define the health status of the corneal endothelium, including the Mean Cell Density (MCD) (cell/mm2), Mean Cell Area (MCA) (μm2), Mean Cell Perimeter (MCP), polymegathism, and pleomorphism. However, it has not previously been possible to accurately quantify these features [9]. Ophthalmologists and researchers have relied on manual annotation to quantify endothelial cell density alone, which is very time-consuming and highly subjective [10]. Assessment has also relied on a particular Region of Interest (ROI) containing 20–30 cells, extrapolated to the whole cornea [7]. Therefore, the availability of an automatic image analysis system for accurate detection of the CEC boundaries and rapid geometric analysis of CEC morphology is essential to enable CEC pathology assessment in clinical settings [11]. Given the increasing clinical burden of diseases of the corneal endothelium, there is a need for rapid automated CEC quantification.
Designing and implementing a segmentation system for CECs has proven difficult due to the poor image quality of the corneal endothelium [12]. Most of the early approaches applied simple methods (e.g., thresholding, Gaussian filtering, shape-dependent filters, and morphological operations) [13,14,15,16]. However, these approaches were limited by the low contrast and uneven lighting of the input images, and relatively good results could only be obtained using high-quality images. Ruggeri et al. [7] proposed an automated estimation algorithm by applying a 2D-Discrete Fourier Transform (2D-DFT) to extract the spatial frequencies embedded in 100 corneal endothelium images captured by an Inverse Phase Contrast Microscope (IPCM). The overall difference between the manual and automatic analysis of the endothelial cell density (ECD) was 14 cells/mm2, with an execution time of 1 to 2 s per image. In 2015, Scarpa and Ruggeri [17] developed a CEC segmentation procedure using a genetic algorithm that they tested on a small database of 15 images acquired using specular microscopy and demonstrated a mean difference in ECD between manual and automated estimations of 4%, with a maximum difference of less than 7%. Sharif et al. [18] developed a new model for detecting the CEC boundaries based on Snake and Particle Swarm Optimization (S-PSO), initially applying an image enhancement algorithm using the 2D-DFT and a Bandpass filter to improve the quality and decrease the noise present in 11 corneal endothelium images, and showed a mean difference between the manual and automated ECD of 5%. The automatic segmentation of CEC boundaries using the watershed approach and its variants is another applied solution. A marker-driven watershed segmentation algorithm by Vincent and Masters [19] applied the watershed algorithm to the distance map [20] generated from the input image, and a seeded stochastic watershed algorithm was developed by Selig et al. [21]. A slightly different algorithm was presented by Al-Fahdawi et al. [2], using a combination of the watershed algorithm and the Voronoi tessellation approach to precisely trace the endothelial cell boundaries. However, the watershed approach and its variants are still prone to either under- or over-segmentation, especially in poor-quality images containing large endothelial cells.
Recently, several machine learning and deep learning approaches have been employed for the precise segmentation of corneal endothelial cell boundaries. Fabijańska [12] developed an effective endothelium cell segmentation system using a Feed-Forward Neural Network (F-FNN), trained to classify the image pixels into two classes (i.e., cell body or cell boundary). Nurzynska [22] proposed training a Convolutional Neural Network (CNN) to precisely distinguish between three different regions: cell body, cell boundary, and cell centre. The performance of this approach was assessed on the “Alizarine” dataset [23], and a precision of 93%, a DICE of 0.94, and a Hausdorff distance of 0.14 pixels were achieved. Fabijańska [24] employed the U-Net on a dataset of 30 CEC images captured using specular microscopy to produce an edge probability map that was binarized and skeletonized to produce one-pixel-wide borders. Relatively good results were obtained, with a DICE of 0.85, an AUROC of 0.92, and difference errors in ECD, CV, and HEC of 5.2%, 11.9%, and 6.2%, respectively. However, most of the existing approaches require user intervention to manually remove incorrect borders or connect discontinuous ones. Furthermore, a major limitation of most of the existing works is the use of small datasets, which cannot reveal how the trained network generalizes to a larger dataset of real-world CEC images unseen in the training set. Finally, a key concern of deep learning segmentation methods (e.g., CNN) is the extensive time needed to train a model to precisely trace the endothelial cell boundaries.
Segmentation systems can be utilized via a desktop or web application. Generally, desktop applications have limitations, especially for use in the clinic in relation to hardware requirements, the need for a specific operating system, and regular individual updates. Alternatively, web applications are more flexible and suitable for image segmentation systems, as the image processing and output are implemented on the server-side, and all users can easily access the web application. This paper provides a novel web application for the fully automated segmentation of CECs using a deep learning approach based on an improved version of U-Net architecture that correctly and accurately distinguishes cell boundaries and achieves precise segmentation. The main contributions of the current work are summarized below:
i.
Development of CellsDeepNet, a novel, fully automated, and real-time web application for objective quantification of CEC morphology for CEC pathology assessment.
ii.
A new pre-processing procedure for reducing the noise and enhancing image quality is proposed to make cell boundaries more visible. Firstly, the contrast of the corneal endothelium image is improved by transforming the values using CLAHE. Secondly, novel image denoising and smoothing algorithms are proposed based on 2DDD-TCWT and Butterworth Bandpass filter to minimize the noise and enhance the edges of the corneal endothelial cell boundaries. This is followed by applying the brightness level adjustment step using the moving average filter and the CLAHE to reduce the effects of non-uniform illumination.
iii.
An improved version of the U-Net architecture was applied to precisely identify all CEC boundaries in the enhanced image regardless of the endothelial cell size. An effective training methodology supported by well-established training techniques (e.g., data augmentation, the dropout technique, etc.) is utilized to assess different U-Net structures (e.g., the number of layers, the number of filters per layer, etc.), to prevent overfitting and enhance the generalization capability of the final trained model.
iv.
The performance and generalization capability of the proposed CellsDeepNet system was verified in corneal endothelium images captured using corneal confocal microscopy (CCM) by the Heidelberg Retina Tomograph 3 with the Rostock Cornea Module (HRT 3 RCM) and a specular microscope. The results demonstrated that the measurement of the CEC morphology by the CellsDeepNet system highly correlates to the manual annotation and outperforms current state-of-the-art automated approaches on precise endothelial cell segmentation.
The remainder of this paper is structured as follows: Section 2 includes descriptions of the proposed CellsDeepNet system. The experimental results are presented in Section 3. Section 4 presents the conclusions and future research directions.

2. The Proposed CellsDeepNet System

The CellsDeepNet system is an entirely automatic system that needs no user interaction to precisely identify the CEC boundaries. As demonstrated in Figure 1, the CellsDeepNet system comprises a client-side and a server-side. On the client-side, images of human CECs are acquired from patients using CCM. Then, the user/ophthalmologist goes to the CellsDeepNet login page to log in with their account. The CellsDeepNet web application allows the ophthalmologist to create a unique record for each patient that can be edited, viewed, and removed at any time. Once the patient’s record is created, all images are uploaded and sent to the server-side, where the core CellsDeepNet system produces the final segmented images and extracts useful endothelial cell parameters. On the server-side, the proposed CellsDeepNet system comprises two stages: (a) the CEC segmentation stage and (b) the clinical features quantification stage. The CEC segmentation stage is composed of (i) a pre-processing step to improve the CEC image quality and decrease the noise level and (ii) a cell boundary detection step to precisely identify the CEC borders. In the clinical features quantification stage, a set of valuable clinical features is extracted as described previously [2].

2.1. Pre-Processing Step

The reliable calculation of the CEC parameters needs an accurate detection of the CEC boundaries in a large number of cells. However, CCM images suffer from various artefacts including blurriness, noise, non-uniform illumination, and low contrast (Figure 2) caused by: (i) saccadic eye movement causing a motion, blurring, or displacement effect; (ii) the spherical form of the cornea layer that can result in a non-uniform light distribution in various regions with darker areas in the periphery making the borders of cells unclear; and (iii) variation in the pressure level applied between the surface of the cornea and the CCM Tomocap.
In this study, an effective and reliable pre-processing procedure is developed to address the problems mentioned above, as shown in Figure 3. The main steps of the proposed image preprocessing procedure can be summarized as follows:
Step 1 (Contrast Enhancement): an adaptive contrast enhancement method is applied using the CLAHE method to improve the contrast of the CEC image [25]. The CLAHE method is based on Adaptive Histogram Equalization (AHE), which enhances the contrast in small image regions, called tiles, rather than the whole image; neighbouring tiles are then fused using bilinear interpolation to reduce artificially induced borders. Herein, an optimized bilinear interpolation function is employed as in [26]. The main steps of the applied CLAHE method can be outlined as follows:
(1)
Divide the input image of size (M × M) pixels into non-overlapping tiles of size (8 × 8) pixels.
(2)
Estimate the histogram of each tile in the input image.
(3)
Compute the clipping threshold value to redistribute pixels of each tile.
(4)
The histogram equalization is estimated for each tile of redistributed pixels.
(5)
The centre pixel of each tile is computed, and then all pixels within a tile are remapped using an optimized bilinear interpolation function to eliminate boundary artefacts.
In this work, the optimal value of the clip limit is set to 0.9. The contrast in the homogeneous regions can be restricted to avoid the over-enhancement of noise and decrease the edge-shadowing influence in the output image, as shown in Figure 4b.
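As a point of reference, this contrast-enhancement step can be reproduced with scikit-image's CLAHE implementation. This is a minimal sketch, assuming the 8 × 8 tiling and the 0.9 clip limit stated above; the function and parameter names belong to scikit-image, not to the authors' code:

```python
from skimage import exposure, io, util

# Load a corneal endothelium image as a greyscale float array in [0, 1].
image = util.img_as_float(io.imread("cec_image.png", as_gray=True))

# CLAHE: the image is tiled, each tile's histogram is clipped and
# equalized, and tiles are blended by bilinear interpolation to
# suppress artificially induced tile borders.
enhanced = exposure.equalize_adapthist(
    image,
    kernel_size=(8, 8),   # 8 x 8 pixel tiles, as in sub-step (1)
    clip_limit=0.9,       # normalized clip limit reported in the paper
)
```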
Step 2 (Image Denoising): the 2DDD-TCWT approach is applied as a powerful image denoising technique to reduce noise while avoiding the destruction of fine details (e.g., edges and curves) in the enhanced endothelial cell image, as displayed in Figure 4c. For the TCWT-based denoising, we employed the ‘db4’ wavelet family. Many image denoising algorithms based on the wavelet framework have been proposed, but they suffer from weaknesses such as a lack of directionality, shift variance, oscillations, and aliasing [27]. In this work, to overcome these drawbacks, an efficient and robust image denoising approach using the 2DDD-TCWT and a shrinkage operation is proposed to reduce the noise and recover the fine details in the endothelial cell image. In the wavelet domain, the coefficients with large absolute values correspond to the important information in the image, while noise and very fine feature representations of the image are commonly encoded by the coefficients with small absolute values. Hence, eliminating the coefficients with small absolute values and then rebuilding the image will produce an image with a lesser amount of noise. As described in [28], the major steps of image denoising based on the wavelet coefficient shrinkage approach can be summarized as follows:
(1)
The forward 2DDD-TCWT is applied to the input CEC image to determine the wavelet sub-bands coefficients.
(2)
The level of the noise variance in the input image is evaluated.
(3)
The non-linear shrinkage function is applied to compute the threshold value.
(4)
The soft thresholding technique is applied based on the threshold value computed in step 3.
(5)
Finally, the denoised image is obtained by applying the inverse 2DDD-TCWT.
In this work, a threshold value of T = 11 is used for the soft thresholding function in steps 3 and 4, as described in [29].
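For illustration, the shrinkage pipeline of Step 2 can be sketched with PyWavelets. PyWavelets does not ship a 2D double-density dual-tree transform, so the sketch substitutes an ordinary 'db4' decomposition to show the same threshold-and-reconstruct logic; the fixed threshold T = 11 is the value quoted above and assumes 8-bit intensity values:

```python
import numpy as np
import pywt

def wavelet_soft_denoise(image: np.ndarray, threshold: float = 11.0,
                         wavelet: str = "db4", levels: int = 3) -> np.ndarray:
    """Forward transform, soft-threshold the detail sub-bands, inverse
    transform; a stand-in for the 2DDD-TCWT shrinkage step."""
    coeffs = pywt.wavedec2(image, wavelet, level=levels)
    approx, details = coeffs[0], coeffs[1:]
    # Small coefficients mostly encode noise; large ones carry edges
    # and cell borders, so only the detail sub-bands are shrunk.
    shrunk = [
        tuple(pywt.threshold(band, threshold, mode="soft") for band in level)
        for level in details
    ]
    return pywt.waverec2([approx] + shrunk, wavelet)
```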
Step 3 (Image Smoothing): the denoised image is then smoothed using the Butterworth Bandpass filter to enhance the edges and curves in the endothelial cell image, as shown in Figure 4d. The Butterworth Bandpass filter is computed mathematically by multiplying the transfer functions of a low-pass and a high-pass filter, where the low-pass filter has the higher cut-off frequency [30]:
$$H_{LP}(u,v) = \frac{1}{1 + \left[ D(u,v)/D_L \right]^{2n}}$$

$$H_{HP}(u,v) = 1 - \frac{1}{1 + \left[ D(u,v)/D_H \right]^{2n}}$$

$$H_{BP}(u,v) = H_{LP}(u,v) \times H_{HP}(u,v)$$
where $D_L$ and $D_H$ are the cut-off frequencies of the low-pass and high-pass filters, set to 22 and 40, respectively; $n = 3$ is the filter order, and $D(u,v)$ is the distance from the origin.
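An illustrative NumPy implementation of Step 3, built directly from the three transfer functions above. Since the text states that the low-pass filter takes the higher cut-off frequency, this sketch assigns the larger radius (40) to the low-pass term and the smaller radius (22) to the high-pass term so that the product passes the band between the two radii; this assignment is our reading, not the authors' code:

```python
import numpy as np

def butterworth_bandpass(image: np.ndarray, d_low: float = 22.0,
                         d_high: float = 40.0, order: int = 3) -> np.ndarray:
    """Frequency-domain Butterworth band-pass passing d_low < D(u,v) < d_high."""
    rows, cols = image.shape
    # D(u, v): distance of each frequency sample from the centred origin.
    u = np.arange(rows) - rows / 2
    v = np.arange(cols) - cols / 2
    D = np.sqrt(u[:, None] ** 2 + v[None, :] ** 2)
    H_lp = 1.0 / (1.0 + (D / d_high) ** (2 * order))        # passes D < 40
    H_hp = 1.0 - 1.0 / (1.0 + (D / d_low) ** (2 * order))   # passes D > 22
    H_bp = H_lp * H_hp                                      # band-pass product
    # Filter in the frequency domain, then return to the spatial domain.
    F = np.fft.fftshift(np.fft.fft2(image))
    return np.real(np.fft.ifft2(np.fft.ifftshift(F * H_bp)))
```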
Step 4 (Brightness Adjustment): the brightness level is adjusted using the moving average filter, and the CLAHE method is applied as described in Step 1 (Contrast Enhancement) to reduce the effects of the non-uniform illumination of the image resulting from the previous step, as shown in Figure 4e. In this study, the moving average filter replaces each pixel with a weighted average of the pixel values in a square of size (5 × 5) pixels centred at that pixel, instead of forming a simple average. Let $f_{i,j}$, for $i, j = 1, 2, \dots, n$, denote the pixel values in the image and $g_{i,j}$ denote the output image. A linear filter of size $(2m+1) \times (2m+1)$, with specified weights $W_{k,l}$, for $k, l = -m, \dots, m$, can be computed as follows:

$$g_{i,j} = \sum_{k=-m}^{m} \sum_{l=-m}^{m} W_{k,l} \, f_{i+k,\,j+l}, \quad \text{for } i, j = (m+1), \dots, (n-m)$$
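Step 4's weighted moving average reduces to a single 2D convolution. A minimal SciPy sketch, assuming uniform weights over the 5 × 5 window, since the paper does not list the weights $W_{k,l}$:

```python
import numpy as np
from scipy.ndimage import convolve

def moving_average(image: np.ndarray, size: int = 5) -> np.ndarray:
    """Replace each pixel with a weighted average over a (size x size)
    square; uniform weights here, any other weighting just changes the kernel."""
    weights = np.full((size, size), 1.0 / size**2)
    return convolve(image, weights, mode="nearest")
```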

2.2. Endothelial Cell Boundary Detection Step

Once the enhanced image is obtained, an effective segmentation algorithm using an improved version of the U-Net is applied to classify the pixels of the input image accurately and automatically into either endothelial cell body or cell boundary. In the next sub-sections, the network architecture of the improved version of U-Net, along with the training methodology followed, is explained in detail.

2.2.1. Network Architecture

Many perspectives concerning the U-Net architecture, along with possible output generalization, were investigated. The U-Net is mainly based on the CNN and was originally developed for biomedical image segmentation tasks. The main structure of the standard U-Net is composed of two main paths: a contracting path (called the encoder) and an expanding path (called the decoder). The encoder is a typical CNN and consists of the iterative application of convolutional layers, each followed by a rectified linear unit (ReLU) function and a max-pooling procedure. Through this contracting path, the size of the spatial data is reduced while more discriminative features are produced. On the other hand, the decoder integrates features and spatial data through a sequence of up-convolutional layers and concatenations with the high-resolution, discriminative features obtained from the encoder path. Thus, the U-Net can efficiently produce a pixel-wise probability map of an image instead of classifying it as a whole. Compared to a typical CNN, the main structure of the U-Net was designed to work with fewer training images and to yield more precise segmentation.
The architecture of the proposed U-Net employed for detecting the boundaries of the CECs is depicted in Figure 5. It was derived from the original U-Net described in [31]. Unlike the original U-Net architecture, a set of modifications and enhancements was applied to produce precise endothelial cell segmentation. Firstly, the depth of the original U-Net was reduced by eliminating one layer from each path (i.e., the contracting and expanding paths) with the corresponding convolution operations. Secondly, the filter size was changed to (5 × 5) pixels instead of (3 × 3) pixels, with a zero-padding of 2 pixels implemented at each layer to avoid a rapid decrease in the amount of spatial data while moving toward the deepest layers. Thirdly, the number of trainable filters (feature maps) was changed to minimize the complexity of the final trained model and prevent overfitting. Therefore, the number of trainable filters varies from 32 filters in the input layer to 128 filters in the deepest resolution layer. Finally, the dropout regularization technique proposed by Srivastava et al. [32] was applied between every two consecutive convolutional layers at the same level to avoid overfitting the training set and to decrease complex co-adaptations of neurons by reducing the inter-dependencies between them. In this work, dropout is applied in each iteration of the training process by randomly dropping neurons with a probability of 0.5. In each path of the proposed network architecture, there are eight repeated applications of (5 × 5) convolutional filters, each followed by a ReLU function; a schematic code sketch is given below.
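The following PyTorch sketch is a schematic reading of the description above, assuming a three-level encoder/decoder with 32, 64, and 128 filters, 5 × 5 convolutions with padding 2, dropout of 0.5 between paired convolutions, and 2 × 2 pooling/up-sampling; the exact layer counts per level are our interpretation, not the authors' released code:

```python
import torch
import torch.nn as nn

def double_conv(in_ch: int, out_ch: int) -> nn.Sequential:
    """Two 5x5 conv + ReLU blocks with dropout in between, per the text."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=5, padding=2), nn.ReLU(inplace=True),
        nn.Dropout2d(p=0.5),
        nn.Conv2d(out_ch, out_ch, kernel_size=5, padding=2), nn.ReLU(inplace=True),
    )

class CellsDeepNetUNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Contracting path: 32 -> 64 -> 128 feature maps.
        self.enc1, self.enc2 = double_conv(1, 32), double_conv(32, 64)
        self.bottom = double_conv(64, 128)
        self.pool = nn.MaxPool2d(2, stride=2)
        # Expanding path: up-sample, halve the filters, merge encoder features.
        self.up2 = nn.ConvTranspose2d(128, 64, kernel_size=2, stride=2)
        self.dec2 = double_conv(128, 64)
        self.up1 = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
        self.dec1 = double_conv(64, 32)
        self.head = nn.Conv2d(32, 2, kernel_size=1)  # cell body vs. boundary

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        e1 = self.enc1(x)                 # input height/width divisible by 4
        e2 = self.enc2(self.pool(e1))
        b = self.bottom(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(b), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)              # logits; softmax applied in the loss
```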
The convolutional layers in the odd sequence are followed by the dropout method with a parameter value of 0.5 to prevent overfitting and learn more discriminative features. Furthermore, in the contracting path, a (2 × 2) max-pooling process with a stride of 2 pixels is implemented for down-sampling after each convolutional layer in the even sequence, at which point the number of trainable filters is doubled. The main structure of the layers in the decoder path is identical to that in the encoder path, except that the max-pooling procedures are replaced with up-sampling procedures of the feature maps by a factor of 2. In the decoder path, the number of trainable filters is also halved, and the generated feature representations are merged with the analogous feature representations from the encoder path. A loss function based on cross-entropy along with Softmax activation was employed over the last convolutional layer in the decoder path to produce the probability distribution of the two classes (i.e., cell body and cell boundary). Specifically, the loss function (https://en.wikipedia.org/wiki/Cross_entropy) (accessed on 10 December 2021) was defined as follows:
$$C(l, s) = - w_{l_i} \sum_{i=0}^{B} \left[ l_i \log(s_i) + (1 - l_i) \log(1 - s_i) \right]$$
where $s_i$ is the assigned pixel score, $l_i$ the reference pixel label, $B$ the total number of samples in one batch, and $w_{l_i}$ the pixel weight. Cross-entropy is one of the most commonly used loss functions to measure how well DNNs perform. The loss takes non-negative values, with values close to 0 indicating an optimal model. Thus, the main aim is to obtain a DNN model with a loss value as close to 0 as possible. To obtain the final segmented image, the edge probability map produced by the CellsDeepNet model was binarized and skeletonized to generate the one-pixel-wide borders of the endothelial cells. In this step, the hysteresis thresholding method was utilized to generate the binary image from the obtained edge probability map [33]. Using the hysteresis thresholding method, all pixels with an intensity value above the upper threshold ($T_{up}$) are defined as cell boundary pixels. Furthermore, all pixels neighbouring these boundary pixels with intensity values higher than a lower threshold ($T_{low}$) are also defined as cell boundary pixels. Herein, 8-connectivity was employed to detect the areas connected to each boundary point. Finally, the skeletonized image (i.e., one-pixel-wide borders) was obtained by iteratively applying a thinning operation to the binary image produced by the hysteresis thresholding method.
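This post-processing of the edge probability map can be reproduced with scikit-image. The threshold values below are illustrative, as the paper does not report $T_{up}$ and $T_{low}$, and scikit-image's helper is a close stand-in for the 8-connected region growing described above:

```python
import numpy as np
from skimage.filters import apply_hysteresis_threshold
from skimage.morphology import skeletonize

def probability_map_to_borders(prob_map: np.ndarray,
                               t_low: float = 0.3,
                               t_high: float = 0.7) -> np.ndarray:
    """Binarize the edge probability map with hysteresis thresholding
    (weak edge pixels are kept only if connected to strong ones), then
    thin the result to one-pixel-wide cell borders."""
    binary = apply_hysteresis_threshold(prob_map, t_low, t_high)
    return skeletonize(binary)
```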

2.2.2. The Training Methodology

The corneal endothelium images and their corresponding manually segmented images (gold standard) were utilized to train the adopted model with the Stochastic Gradient Descent (SGD) technique with an adaptable learning rate. Herein, all of the experiments were conducted by employing 60% randomly chosen images for the training set, while the remaining 40% was split equally between the validation and testing sets. As described in [34], the suggested training methodology begins with training a specific network architecture using the training set, using the validation set to assess the generalization capability of the network through the learning process, and storing the weights of the model that performs best on it with the smallest validation error rate, as displayed in Figure 6. The proposed CellsDeepNet model was trained for 100 epochs with an initial learning rate of 0.01, a high momentum value (0.99), a weight decay parameter value of 0.0005, and a mini-batch size of 100. In this study, an early stopping procedure was adopted during the learning process to determine the number of epochs: the learning process is stopped as soon as the validation accuracy rate starts to decline, because the model is beginning to overfit the training data. For the remaining hyper-parameters (e.g., learning rate, mini-batch size, etc.), the values most commonly used in the literature were employed. The major steps of the suggested training methodology can be outlined as follows (a code sketch follows the list):
(1)
Divide the dataset into 3 different sets: training set, validation set, and test set.
(2)
Choose an initial network architecture and a combination of training parameters.
(3)
Train the selected network architecture in step 2 using the training set.
(4)
Use the validation set to assess the performance of the selected network architecture through the training progression.
(5)
Repeat steps 3 through 4 by employing N = 100 epochs.
(6)
Choose the best-trained model with the smallest validation error.
(7)
Report the performance of the best model using the testing set.
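A minimal PyTorch sketch of this methodology, using the stated hyper-parameters (SGD, learning rate 0.01, momentum 0.99, weight decay 0.0005, 100 epochs) and keeping the weights with the lowest validation loss; the data loaders and the exact early-stopping criterion are assumptions:

```python
import copy
import torch
import torch.nn as nn

def train(model, train_loader, val_loader, epochs: int = 100):
    """Train with SGD, track the best-performing weights on the
    validation set (steps 3-6), and restore them at the end."""
    opt = torch.optim.SGD(model.parameters(), lr=0.01,
                          momentum=0.99, weight_decay=0.0005)
    loss_fn = nn.CrossEntropyLoss()
    best_val, best_state = float("inf"), None
    for epoch in range(epochs):
        model.train()
        for images, labels in train_loader:   # mini-batches of size 100
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            opt.step()
        model.eval()
        with torch.no_grad():                 # validation pass per epoch
            val_loss = sum(loss_fn(model(x), y).item()
                           for x, y in val_loader) / len(val_loader)
        if val_loss < best_val:               # store the best weights
            best_val, best_state = val_loss, copy.deepcopy(model.state_dict())
        # An early-stopping check on declining validation accuracy
        # would be inserted here.
    model.load_state_dict(best_state)
    return model
```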
In this work, the training and prediction procedures were implemented in the image-to-image setting instead of the image-to-patch setting employed in the original U-Net, to reduce the system complexity, increase its throughput, and reduce the data redundancy caused by overlapping patches. Figure 7 displays the loss and accuracy plots during the training process on the training and validation sets of the MCCM and ECA datasets.
The data augmentation procedure is fundamental to the learning process of deep learning networks to prevent overfitting and enhance the generalization capability of the DNN. Given the nature of the corneal endothelium images, only eight random image patches of the same size were cropped from each corneal endothelium image and resized to the size of the original image. In addition, horizontally and vertically flipped versions were also generated. Thus, ten images were produced from each corneal endothelium image. Figure 8 shows the ten image patches produced from a single CEC image using the employed data augmentation procedure. In this work, other elastic deformations and transformations (e.g., rotation) were avoided for two reasons. Firstly, applying image rotation can produce new noise types that do not exist in the CEC images. Secondly, the rotation process loses the corners of the CEC image, which must be filled either by reflecting the image or by colouring the missing regions black.
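The ten-fold augmentation can be sketched in a few lines of NumPy and scikit-image. The patch size is an assumption, as the paper states only that eight equally sized random patches are cropped and resized back to the original dimensions:

```python
import numpy as np
from skimage.transform import resize

def augment(image: np.ndarray, patch_frac: float = 0.75,
            n_patches: int = 8, rng=None) -> list:
    """Eight random crops resized to the original size, plus horizontal
    and vertical flips: ten training images per input image."""
    rng = rng or np.random.default_rng()
    h, w = image.shape
    ph, pw = int(h * patch_frac), int(w * patch_frac)
    out = []
    for _ in range(n_patches):
        top = rng.integers(0, h - ph + 1)
        left = rng.integers(0, w - pw + 1)
        out.append(resize(image[top:top + ph, left:left + pw], (h, w)))
    out.append(image[:, ::-1].copy())   # horizontal flip
    out.append(image[::-1, :].copy())   # vertical flip
    return out
```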

2.3. Clinical Features Quantification Stage

Five morphological measures are computed automatically from the final segmented CEC image, including mean cell density (MCD, cell/mm2), mean cell area (MCA, µm2), mean cell perimeter (MCP, µm), polymegathism, and pleomorphism. Using our web application, all these computed morphological measures can be reported in a PDF file that also contains coloured figures, patient information, a table listing all the extracted features, and a histogram distribution of the cell pleomorphism. The ophthalmologist can crop a selected ROI from the segmented CEC image, and the following morphological measures are computed as in [2] (a code sketch follows the list):
  • MCD is computed by dividing the number of CECs ($C_{number}$) in the selected ROI (otherwise the entire segmented image) by the total size ($S$) of the selected ROI (otherwise the entire segmented image) as follows:

$$MCD = \frac{C_{number}}{S}$$

    Herein, to accurately estimate the MCD, only the cells on two adjacent borders of the selected ROI are included, while the cells on the other borders are discarded.
  • Polymegathism, also called the Coefficient of Variation (CV), is utilized to quantify the variation in endothelial cell area. An increased standard deviation (STD) of the cell area can result in an imprecise estimate of the MCD; therefore, increased polymegathism results in an imprecise estimation of the MCA [6]. Polymegathism is computed as follows:

$$Polymegathism = \frac{STD_{cell\ area}}{MCA} \times 100$$

    where $STD_{cell\ area}$ is the standard deviation of the cell area.
  • Pleomorphism, also called the Hexagonality Coefficient (HC), is computed by dividing the number of CECs with a roughly hexagonal form (neighbouring six cells), $C_{hexagonal}$, by the overall number of CECs in the selected ROI (otherwise the entire segmented image), $C_{image}$, as follows:

$$Pleomorphism = \frac{C_{hexagonal}}{C_{image}} \times 100$$
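The first four measures can be sketched directly from a binary border image with scikit-image's region properties; the micrometre-per-pixel scale is device-specific, and the per-cell neighbour counting needed for pleomorphism is omitted for brevity:

```python
import numpy as np
from skimage import measure

def cec_features(borders: np.ndarray, um_per_pixel: float):
    """Compute MCD, MCA, MCP, and polymegathism from a boolean image
    in which True marks the one-pixel-wide cell borders."""
    cells = measure.label(~borders, connectivity=1)   # cell bodies
    props = measure.regionprops(cells)
    areas = np.array([p.area for p in props]) * um_per_pixel**2      # um^2
    perims = np.array([p.perimeter for p in props]) * um_per_pixel   # um
    roi_area_mm2 = borders.size * (um_per_pixel / 1000.0) ** 2       # mm^2
    mcd = len(props) / roi_area_mm2          # cells per mm^2
    mca, mcp = areas.mean(), perims.mean()
    polymegathism = areas.std() / mca * 100  # CV of the cell area, %
    return mcd, mca, mcp, polymegathism
```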
Some of the generated figures are displayed in Figure 9. Figure 9a shows a colour-coded map of endothelial cell pleomorphism, where all CECs that neighbour the same number of cells are coded with the same colour; the CECs coloured orange correspond to six-sided cells. Figure 9b shows the histogram distribution plot of the pleomorphism parameter.

3. Experimental Results

In this work, several comprehensive experiments were carried out to measure the performance of the CellsDeepNet system in both detecting the endothelial cell boundaries and extracting useful clinical features. Firstly, the corneal endothelium image datasets employed in the conducted experiments are briefly described. This is followed by a comprehensive assessment and a comparison with the most highly developed current methods. In this work, the partial correlation was employed to measure the strength of the relationship between the manual and automatic calculations of the five clinical features. The cut-off of p-values was set to (p < 0.0001). Furthermore, Bland–Altman plots were used to confirm the agreement between the manual and automatic endothelial cell parameters using the proposed CellsDeepNet system. A Bland–Altman plot consists of a plot of the difference between paired evaluations of two parameters (e.g., the manual and automated estimations) against the average of these two estimations, with ±2 SD lines (limits of agreement) parallel to the mean-difference line.
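For reference, a Bland–Altman plot as described above takes only a few lines of Matplotlib; `manual` and `automated` are assumed to be paired per-image estimates of one clinical parameter:

```python
import numpy as np
import matplotlib.pyplot as plt

def bland_altman(manual: np.ndarray, automated: np.ndarray, label: str):
    """Plot the paired differences against the paired means, with the
    mean-difference line and the +/- 2 SD limits of agreement."""
    mean = (manual + automated) / 2.0
    diff = manual - automated
    md, sd = diff.mean(), diff.std()
    plt.scatter(mean, diff, s=12)
    plt.axhline(md, color="k")                           # mean difference
    plt.axhline(md + 2 * sd, color="k", linestyle="--")  # upper limit
    plt.axhline(md - 2 * sd, color="k", linestyle="--")  # lower limit
    plt.xlabel(f"Mean of manual and automated {label}")
    plt.ylabel("Manual - automated difference")
    plt.show()
```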

3.1. Datasets Description

The performance of the developed CellsDeepNet system was assessed using two different challenging datasets, termed the Manchester Corneal Confocal Microscopy (MCCM) dataset [35] and Endothelial Cell Alizarine (ECA) dataset [23].
  • Manchester Corneal Confocal Microscopy Dataset: The MCCM dataset contains a total of 1010 images of CECs acquired using the Heidelberg Retina Tomograph III Rostock Cornea Module (Heidelberg Engineering GmbH, Heidelberg, Germany). This device uses a 670 nm red-wavelength diode laser, which is a class I laser and therefore does not pose any ocular safety hazard. A 63× objective lens with a numerical aperture of 0.9 and a working distance relative to the applanating cap (TomoCap, Heidelberg Engineering GmbH, Heidelberg, Germany) of 0.0 to 3.0 mm was used. The images produced using this lens are (400 μm × 400 μm), with a (15° × 15°) field of view and a 10 μm/pixel transverse optical resolution. The cornea was locally anaesthetized by instilling 1 drop of 0.4% benoxinate hydrochloride (Chauvin Pharmaceuticals, Chefaro, UK), and Viscotears (Carbomer 980, 0.2%, Novartis, UK) was used as the coupling agent between the cornea and the TomoCap as well as between the TomoCap and the objective lens. The patients were asked to place their chin on the chin rest and press their forehead against the forehead support. They were also asked to fixate with the eye not being examined on an outer fixation light to enable examination of the central cornea. Images of the endothelial cells were captured using the “section” mode. Multiple images were taken from the endothelium immediately posterior to the posterior stroma. On CCM, endothelial cells are identified as polygonal shapes with bright cell bodies and dark borders. During image acquisition, 2–3 representative sharp images were selected by filtering out blurred or out-of-focus images and images with pressure lines or dark shadows caused by the pressure applied between the TomoCap and the cornea. Some samples of unprocessed corneal endothelium images are presented in Figure 2. It is worth noting that the corneal endothelium images used in this dataset are very challenging, poor-quality images compared to those used in previous works. A major challenge in carrying out this research using this dataset was the unavailability of ground-truth images and of reference value measurements for all five clinical parameters. To obtain a manual version of this database, a freely available application named the GNU Image Manipulation Program (GIMP) was utilized by an expert ophthalmologist from the University of Manchester to manually detect endothelial cell boundaries and then generate a binary image from particular ROIs to serve as a ground-truth image in the segmentation evaluation and as a manual estimation of the reference values of the clinical parameters, as shown in Figure 10.
  • Endothelial Cell Alizarine Dataset: This dataset is composed of 30 corneal endothelium images captured by an IPCM (CK 40, Chroma Technology Corp, Windham, VT, USA) at 200× magnification with an analogue camera (SSC-DC50AP, Sony, Tokyo, Japan) [23]. These images were taken from 30 porcine eyes stained with alizarine and stored in JPEG format at a resolution of (576 × 768) pixels. The ground-truth images, representing the borders of the endothelial cells traced manually by an expert ophthalmologist, are also provided. Each image contains approximately 232 detected cells on average (ranging from 188 to 388 cells), with an average cell area of 272.76 pixels. This dataset is freely available at http://bioimlab.dei.unipd.it, accessed on 10 December 2021.

3.2. Experiments on MCCM Dataset

Using the MCCM dataset, several experiments were carried out to measure the performance of the CellsDeepNet system. Firstly, the performance of the CellsDeepNet system was validated against the gold standard images (i.e., binary images) produced using the GIMP software, as shown in Figure 10c. In this assessment procedure, seven quantitative performance measurements were computed: the Probabilistic Rand Index (PRI) [36], Gradient Magnitude Similarity Deviation (GMSD) [37], Structural SIMilarity (SSIM) Index [38], Variation of Information (VoI) [39], Normalized Absolute Error (NAE), Mean Square Error (MSE) [40], and Global Consistency Error (GCE) [41]. These seven quantitative metrics are commonly used in previous works to measure the effectiveness and reliability of segmentation algorithms. In this work, the performance of the developed CellsDeepNet system was compared with our previous fully automated Corneal Endothelium Analysis System (CEAS), described in [2], as well as with the original U-Net structure described in [31]. Initially, the impact of the image enhancement procedure was assessed by training the improved U-Net from scratch on the enhanced CEC images instead of directly using the raw CEC images, guiding the learning process by encouraging the U-Net to learn only discriminative features. As shown in Table 1, we demonstrate that the developed CellsDeepNet system can accurately identify the boundaries of the CECs regardless of endothelial cell size.
The total average of the employed seven quantitative metrics was significantly increased in comparison with the direct usage of the raw CEC images. Moreover, the time required to obtain the final trained model was about 20 min compared to approximately one hour and 45 min using the raw CEC images. The overall average of these seven quantitative metrics was calculated for all the CCM images in the MCCM dataset and compared in Figure 11. Of note, better results were obtained from both CellsDeepNet and CEAS systems compared to the original U-Net in terms of all seven quantitative measurements.
Although the CEAS system achieved a better GMSD (0.0212) than the 0.0862 produced by the proposed CellsDeepNet system, it produced worse results in terms of the other six quantitative measures. There were no substantial differences between the automatically segmented images produced by the developed CellsDeepNet system and the manual images. In the second experiment, the effectiveness and robustness of the CellsDeepNet system were assessed by comparing the automatic estimations of the five clinical parameters with reference values computed by applying the definitions of these clinical features to the binary images produced using the GIMP software. Initially, the outputs of both the CellsDeepNet and CEAS systems were obtained for all corneal endothelium images in the MCCM dataset. Then, for each image, the identical ROI with the largest region of visible CECs was chosen and used to automatically compute the values of the five clinical parameters. Next, the automatic estimations of the clinical parameters were compared with the reference values.
Table 2 shows the overall average, STD, MAX, and MIN of each clinical feature for both the manually and automatically segmented CEC images, in addition to the difference and the percentage difference between them. Although the CEAS system achieved good agreement with the reference values, a higher agreement was achieved with the proposed CellsDeepNet system. The average percentage differences between the manual and automated estimates computed using the CEAS and CellsDeepNet systems were 1.56% vs. 0.66%, 2.52% vs. 0.003%, 6.5% vs. 2.4%, 5.6% vs. 2.9%, and 6.8% vs. 1.7% for MCD, MCA, MCP, polymegathism, and pleomorphism, respectively. There were highly significant correlations between the manual and automated estimates for MCD (r = 0.94), MCA (r = 0.99), MCP (r = 0.99), polymegathism (r = 0.92), and pleomorphism (r = 0.86), with p < 0.0001 for all the extracted clinical features, as shown in Figure 12. As shown in Figure 13, the MCD lies between 966 and 2925 (cell/mm2), the MCA between 266 and 1328 (μm2), the MCP between 60 and 137 (μm), polymegathism between 31 and 85%, and pleomorphism between 18 and 53%.

3.3. Experiments on ECA Dataset

To prove the generalization capability of the developed CellsDeepNet system, the seven quantitative performance measurements were computed from the automatically segmented images and the ground-truth images of the ECA dataset. The performance comparison between the three segmentation systems (U-Net, CellsDeepNet, and the CEAS system) on the ECA dataset was tested using the seven quantitative metrics (Figure 14). Even though the performance of the original U-Net improved slightly on the ECA dataset, it was inferior to both the CellsDeepNet and CEAS systems. The results obtained by the CellsDeepNet system were better than those of the CEAS system for all seven quantitative measurements. The automated estimations of all five CEC parameters were compared to the reference values from the same ROI (Table 3). The average percentage differences between the manual and automatic estimations calculated using the CEAS and CellsDeepNet systems were 0.91% vs. 0.85%, 1.84% vs. 0.73%, 1.66% vs. 2.24%, 4.07% vs. 0.50%, and 8.52% vs. 0.39% for MCD, MCA, MCP, polymegathism, and pleomorphism, respectively. There were highly significant correlations between the manual and automated estimations for MCD (r = 0.98), MCA (r = 0.98), MCP (r = 0.86), polymegathism (r = 0.91), and pleomorphism (r = 0.76), with the cut-off of p-values set to (p < 0.0001), as shown in Figure 15.
The Bland–Altman plots confirm the excellent agreement between manual and automatic endothelial cell parameters using the CellsDeepNet system. As shown in Figure 16, one can see that the MCD is located between 2637 and 3185 (cell/mm2); MCA is located between 250 and 306 (μm2); MCP is located between 58 and 90 (μm); polymegathism is located between 27 and 38%; and pleomorphism is located between 36 and 46%.
Figure 17 shows the overlaid segmented image of the manual and automated outputs to visually demonstrate the endothelial cell segmentation fidelity of the proposed CellsDeepNet system. In this figure, the yellow lines refer to the ground-truth segmented image and the purple lines refer to the automatic output of the CellsDeepNet system, whereas the white lines indicate the common borders of the endothelial cells.

3.4. Comparison Study

In this section, the performance of the developed CellsDeepNet system is compared against six current state-of-the-art CEC segmentation systems using the ECA dataset, as illustrated in Table 4. The first system was established by Ruggeri et al. [23], in which the luminosity and contrast of the corneal endothelium image were corrected using a parabolic correction and a sigmoid point transformation, respectively. The boundaries of the endothelial cells were detected by training a multi-layer F-FNN to classify each pixel in the input image as either cell body or cell boundary. The second system, proposed by Scarpa and Ruggeri [17], used a genetic algorithm combining information about the shape of the CECs and the intensity of the pixels to detect the endothelial cell boundaries. The third, developed by Poletti and Ruggeri [42], uses three different kernels specially designed to compute three endothelial cell signatures (i.e., vertex, side, or body). These extracted signatures were utilized as feature vectors to train an SVM classifier to correctly detect the boundaries of the CECs. In this comparison study, only MCD, polymegathism, and pleomorphism were compared, owing to the inability of these systems to compute the other CEC parameters. Compared to the method of Ruggeri et al. [23], the CellsDeepNet system achieved a slightly worse Diff % for MCD (0.28% vs. 0.85%), but it was better for polymegathism (3.02% vs. 0.50%) and pleomorphism (1.03% vs. 0.39%). Compared to the Poletti and Ruggeri system [42], the CellsDeepNet system achieved a slightly worse Diff % for MCD (0.82% vs. 0.85%), but it was better for polymegathism (3.95% vs. 0.50%) and pleomorphism (3.13% vs. 0.39%). Compared to the method of Scarpa and Ruggeri [17], the CellsDeepNet system achieved better outcomes for MCD (3.11% vs. 0.85%) as well as for polymegathism (14.78% vs. 0.50%) and pleomorphism (7.32% vs. 0.39%).
The performance of the CellsDeepNet system was also compared with some of the state-of-the-art deep learning-based CEC segmentation systems (e.g., Fabijańska [24], Zhang et al. [43], Nurzynska [22], and U-Net [31]) by conducting several comprehensive experiments using the same dataset. In these experiments, six measures were computed as evaluation metrics: the Dice Coefficient (DI), Jaccard coefficient (JA), F1 Score (F1), Specificity (SP), Sensitivity (SE), and Modified Hausdorff Distance (MHD). The first three measures mainly reveal the overall agreement between the manual and automatic segmentations. As shown in Table 5, the proposed CellsDeepNet system achieved the best overall performance on the ECA dataset in terms of the six adopted evaluation metrics. It is worth mentioning that the CEC segmentation system developed by Fabijańska [24] achieved a proportional difference of 5.2% for MCD, 6.2% for pleomorphism, and 11.93% for polymegathism, while no clinical feature with a proportional difference greater than 2.5% was observed using the proposed CellsDeepNet system.
Further experiments were carried out by testing the performance of the developed CellsDeepNet system on the dataset provided by Selig et al. [21]. This dataset is composed of 52 CEC images acquired from 23 patients using in vivo confocal corneal microscopy. Using the automated cell segmentation results provided by Selig et al. [21], the seven adopted quantitative performance measurements were computed and compared with those of the proposed CellsDeepNet system, as shown in Figure 18. The results obtained by the proposed CellsDeepNet system were better than those of the Selig et al. [21] system for all seven quantitative measurements.
Figure 19 displays a comparison between the automatically segmented CEC images produced by the developed CellsDeepNet system and the Selig et al. [21] system. It is clear that the Selig et al. [21] system suffers from over-segmentation, especially when detecting endothelial cells of large size. The incorrectly segmented endothelial cell boundaries are filled in red.

3.5. Discussion

This study has shown a quantitatively better performance using the proposed CellsDeepNet system compared to the U-Net and CEAS systems on two large and challenging corneal endothelium image datasets captured using two different devices. For instance, using the MCCM dataset, the mean difference between the manual and automated estimates was lower than 1%, 0.05%, 2.5%, 3%, and 2% for MCD, MCA, MCP, polymegathism, and pleomorphism, respectively, with no clinical feature showing a proportional difference greater than 3% between the manual and automated estimates. The proposed CellsDeepNet system provides an accurate estimation of all five clinical features of the detected CECs, with more than 95% of the data points falling between the ±2 SD lines, as shown in Figure 13. In this experiment, the MCD lies between 966 and 2925 (cell/mm2); the MCA between 266 and 1328 (μm2); the MCP between 60 and 137 (μm); polymegathism between 31 and 85%; and pleomorphism between 18 and 53%.
Using the ECA dataset derived from a specular microscope, the device most commonly used in ophthalmology clinics, the mean differences between the manual and automated estimates were less than 1%, 1%, 2.5%, 0.5%, and 0.5% for MCD, MCA, MCP, polymegathism, and pleomorphism, respectively, with no clinical feature showing a proportional difference greater than 2.5% between the manual and automated estimates. Better results were obtained using all the employed systems on the ECA dataset, as its images were of higher quality compared to the images in the MCCM dataset, which were acquired using CCM.
Furthermore, the developed CellsDeepNet system, based on the modified U-Net architecture, significantly reduced the time necessary to obtain the final trained model to 20 min, compared with the original U-Net model, which takes approximately one hour and 45 min. The proposed CellsDeepNet web application has a number of advantages. It is more flexible and suitable for researchers and ophthalmologists, as they do not need specific hardware, given that the processing to produce the final segmented image and quantitative output is implemented on the server-side. In addition, clinicians can create a unique record for each patient that can be edited, viewed, and removed at any time. The selection of an ROI from the final segmented image allows for the inspection of the segmentation results and the automatic extraction of more accurate and useful clinical parameters from the clearest ROI in the segmented image. The results obtained using the CellsDeepNet system are encouraging, especially as they have been obtained from two large image datasets of corneal endothelial cells from two different microscopes. However, there are some limitations, including over-segmentation, especially when poor-quality images containing large CECs are fed into the CellsDeepNet system. The accuracy of the CellsDeepNet system may be enhanced by extending the training set with a larger and more heterogeneous dataset of corneal endothelial images.

4. Conclusions and Future Work

We have developed the CellsDeepNet system, an improved, fully automated, and fast corneal endothelial cell analysis system. It requires no user intervention for segmenting and extracting different corneal endothelial cell parameters. Two different datasets of corneal endothelium images acquired using a specular microscope and corneal confocal microscope along with their manually traced ground-truth images were employed to validate the performance of the developed CellsDeepNet system. An efficient and reliable pre-processing procedure was deployed to eliminate the noise, improve the image quality, and make the boundaries of CECs more visible, and an improved version of the U-Net architecture was used to produce a binary segmented image of the endothelial cell boundaries to enhance the quantification of clinically useful parameters. Our results demonstrate the effectiveness, reliability, and superior outcomes of the CellsDeepNet system compared to the current highly developed approaches in terms of segmentation precision and the extraction of clinically useful endothelial cell parameters. This markedly increases the clinical utility of this image analysis system, especially as it is web-based to allow rapid (<3 s/image) corneal endothelial cell quantification to identify pathology and assess disease progression over time. We are not presenting the final word in accurately detecting the endothelial cell boundaries and extracting useful clinical features. We are currently in the process of testing the accuracy of the proposed CellsDeepNet system using a larger and more challenging dataset, in which the corneal endothelium images are captured using different microscope devices.

Author Contributions

Conceptualization, A.S.A.-W., A.A., S.A.-F. and R.Q.; methodology, A.S.A.-W., A.A. and S.A.-F.; software, G.P. and R.A.M.; validation, A.S.A.-W., S.A.-F. and R.Q.; formal analysis, M.A.M. and S.K.; investigation, A.S.A.-W., S.A.-F., G.P. and R.A.M.; resources, M.A.M. and S.K.; data curation, A.S.A.-W., G.P. and R.A.M.; writing—original draft preparation, A.S.A.-W., S.A.-F., R.Q., G.P. and R.A.M.; writing—review and editing, A.S.A.-W., S.A.-F., R.Q., G.P. and R.A.M.; visualization, M.A.M. and S.K.; project administration, A.S.A.-W., A.A. and S.A.-F. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The CellsDeepNet system, a web application accessible at https://sentizer.com/ accessed on 10 December 2021, provides a rapid and objective quantification of CEC morphology for CEC pathology assessment suitable in a clinical setting.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Al-Fahdawi, S.; Qahwaji, R.; Al-Waisy, A.S.; Ipson, S. An Automatic Corneal Subbasal Nerve Registration System Using FFT and Phase Correlation Techniques for an Accurate DPN Diagnosis. In Proceedings of the 2015 IEEE International Conference on Computer and Information Technology; Ubiquitous Computing and Communications; Dependable, Autonomic and Secure Computing; Pervasive Intelligence and Computing, Liverpool, UK, 26–28 October 2015; pp. 1035–1041.
  2. Al-Fahdawi, S.; Qahwaji, R.; Al-Waisy, A.S.; Ipson, S.; Ferdousi, M.; Malik, R.A.; Brahma, A. A fully automated cell segmentation and morphometric parameter system for quantifying corneal endothelial cell morphology. Comput. Methods Programs Biomed. 2018, 160, 11–23.
  3. Al-Fahdawi, S.; Qahwaji, R.; Al-Waisy, A.S.; Ipson, S.; Malik, R.A.; Brahma, A.; Chen, X. A fully automatic nerve segmentation and morphometric parameter quantification system for early diagnosis of diabetic neuropathy in corneal images. Comput. Methods Programs Biomed. 2016, 135, 151–166.
  4. Gavet, Y.; Pinoli, J.-C. Comparison and Supervised Learning of Segmentation Methods Dedicated to Specular Microscope Images of Corneal Endothelium. Int. J. Biomed. Imaging 2014, 2014, 704791.
  5. Khan, A.; Kamran, S.; Akhtar, N.; Ponirakis, G.; Al-Muhannadi, H.; Petropoulos, I.N.; Al-Fahdawi, S.; Qahwaji, R.; Sartaj, F.; Babu, B.; et al. Corneal Confocal Microscopy detects a Reduction in Corneal Endothelial Cells and Nerve Fibres in Patients with Acute Ischemic Stroke. Sci. Rep. 2018, 8, 17333.
  6. McCarey, B.E.; Edelhauser, H.F.; Lynn, M.J. Review of Corneal Endothelial Specular Microscopy for FDA Clinical Trials of Refractive Procedures, Surgical Devices and New Intraocular Drugs and Solutions. Cornea 2008, 27, 1–16.
  7. Ruggeri, A.; Grisan, E.; Jaroszewski, J. A new system for the automatic estimation of endothelial cell density in donor corneas. Br. J. Ophthalmol. 2005, 89, 306–311.
  8. Gain, P.; Thuret, G.; Kodjikian, L.; Gavet, Y.; Turc, P.H.; Theillere, C.; Acquart, S.; Le Petit, J.C.; Maugery, J.; Campos, L. Automated tri-image analysis of stored corneal endothelium. Br. J. Ophthalmol. 2002, 86, 801–808.
  9. Doughty, M.J.; Aakre, B.M. Further analysis of assessments of the coefficient of variation of corneal endothelial cell areas from specular microscopic images. Clin. Exp. Optom. 2008, 91, 438–446.
  10. Foracchia, M.; Ruggeri, A. Cell contour detection in corneal endothelium in-vivo microscopy. In Proceedings of the 22nd Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Chicago, IL, USA, 23–28 July 2000; Volume 2, pp. 1033–1035.
  11. Foracchia, M.; Ruggeri, A. Corneal Endothelium Cell Field Analysis by means of Interacting Bayesian Shape Models. In Proceedings of the 2007 29th Annual International Conference of the IEEE Engineering in Medicine and Biology Society, Lyon, France, 22–26 August 2007; pp. 6035–6038.
  12. Fabijańska, A. Corneal Endothelium Image Segmentation Using Feedforward Neural Network. In Proceedings of the Federated Conference on Computer Science and Information Systems, Prague, Czech Republic, 3–6 September 2017; Volume 11, pp. 629–637.
  13. Nadachi, R.; Nunokawa, K. Automated Corneal Endothelial Cell Analysis. In Proceedings of the Fifth Annual IEEE Symposium on Computer-Based Medical Systems, Durham, NC, USA, 14–17 June 1992; pp. 450–457.
  14. Mahzoun, M.; Okazaki, K.; Mitsumoto, H.; Kawai, H.; Sato, Y.; Tamura, S.; Kani, K. Detection and Complement of Hexagonal Borders in Corneal Endothelial Cell Image. Med. Imaging Technol. 1996, 14, 56–69.
  15. Sanchez-Marin, F.J. Automatic segmentation of contours of corneal cells. Comput. Biol. Med. 1999, 29, 243–258.
  16. Ayala, G.; Diaz, M.E.; Martinez-Costa, L. Granulometric moments and corneal endothelium status. Pattern Recognit. 2001, 34, 1219–1227.
  17. Scarpa, F.; Ruggeri, A. Segmentation of Corneal Endothelial Cells Contour by Means of a Genetic Algorithm. In Proceedings of the Ophthalmic Medical Image Analysis Second International Workshop, OMIA 2015, Held in Conjunction with MICCAI 2015, Munich, Germany, 9 October 2015; pp. 25–32. [Google Scholar]
  18. Sharif, M.; Qahwaji, R.; Shahamatnia, E.; Alzubaidi, R.; Ipson, S.; Brahma, A. An efficient intelligent analysis system for confocal corneal endothelium images. Comput. Methods Programs Biomed. 2015, 122, 421–436. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  19. Vincent, L.M.; Masters, B.R. Morphological image processing and network analysis of cornea endothelial cell images. In Proceedings of the SPIE, San Diego, CA, USA, 1 July 1992; Volume 1769, pp. 212–226. [Google Scholar] [CrossRef]
20. Gavet, Y.; Pinoli, J.-C. Visual Perception Based Automatic Recognition of Cell Mosaics in Human Corneal Endothelium Microscopy Images. Image Anal. Stereol. 2008, 27, 53–61. [Google Scholar] [CrossRef] [Green Version]
  21. Selig, B.; Vermeer, K.A.; Rieger, B.; Hillenaar, T.; Hendriks, C.L.L. Fully automatic evaluation of the corneal endothelium from in vivo confocal microscopy. BMC Med. Imaging 2015, 15, 13. [Google Scholar] [CrossRef] [PubMed] [Green Version]
22. Nurzynska, K. Deep Learning as a Tool for Automatic Segmentation of Corneal Endothelium Images. Symmetry 2018, 10, 60. [Google Scholar] [CrossRef] [Green Version]
  23. Ruggeri, A.; Scarpa, F.; de Luca, M.; Meltendorf, C.; Schroeter, J. A system for the automatic estimation of morphometric parameters of corneal endothelium in alizarine red-stained images. Br. J. Ophthalmol. 2010, 94, 643–647. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  24. Fabijańska, A. Segmentation of corneal endothelium images using a U-Net-based convolutional neural network. Artif. Intell. Med. 2018, 88, 1–13. [Google Scholar] [CrossRef]
25. Sasi, N.M.; Jayasree, V.K. Contrast Limited Adaptive Histogram Equalization for Qualitative Enhancement of Myocardial Perfusion Images. Engineering 2013, 5, 326–331. [Google Scholar] [CrossRef] [Green Version]
  26. Miao, Y. Application of the CLAHE algorithm based on optimized bilinear interpolation in near infrared vein image enhancement. In Proceedings of the 2nd International Conference on Computer Science and Application Engineering, Hohhot, China, 22–24 October 2018; pp. 1–6. [Google Scholar] [CrossRef]
  27. Raj, V.N.P.; Venkateswarlu, T.P. Denoising of Medical Images Using Dual Tree Complex Wavelet Transform. Procedia Technol. 2012, 4, 238–244. [Google Scholar] [CrossRef] [Green Version]
  28. Fodor, I.K.; Kamath, C. Denoising Through Wavelet Shrinkage: An Empirical Study. J. Electron. Imaging 2001, 12, 151–161. [Google Scholar] [CrossRef]
  29. Naimi, H.; Adamou-Mitiche, A.B.H.; Mitiche, L. Medical image denoising using dual tree complex thresholding wavelet transform and Wiener filter. J. King Saud Univ. Comput. Inf. Sci. 2015, 27, 40–45. [Google Scholar] [CrossRef] [Green Version]
30. Govind, D.; Ginley, B.; Lutnick, B.; Tomaszewski, J.E.; Sarder, P. Glomerular Detection and Segmentation from Multimodal Microscopy Images Using a Butterworth Band-Pass Filter. In Proceedings of the SPIE Medical Imaging 2018: Digital Pathology, Houston, TX, USA, 2018; Volume 10581. [Google Scholar]
  31. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. arXiv 2015, arXiv:1505.04597. [Google Scholar]
  32. Agarwal, A.; Negahban, S.; Wainwright, M.J. Noisy matrix decomposition via convex relaxation: Optimal rates in high dimensions. Ann. Stat. 2012, 40, 1171–1197. [Google Scholar] [CrossRef] [Green Version]
  33. Sornam, M.; Kavitha, M.S.; Nivetha, M. Hysteresis thresholding based edge detectors for inscriptional image enhancement. In Proceedings of the IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Chennai, India, 15–17 December 2016; pp. 1–4. [Google Scholar] [CrossRef]
  34. Al-Waisy, A.S.; Qahwaji, R.; Ipson, S.; Al-Fahdawi, S.; Nagem, T.A.M. A multi-biometric iris recognition system based on a deep learning approach. Pattern Anal. Appl. 2017, 21, 783–802. [Google Scholar] [CrossRef] [Green Version]
  35. Tavakoli, M.; Malik, R.A. Corneal Confocal Microscopy: A Novel Non-invasive Technique to Quantify Small Fibre Pathology in Peripheral Neuropathies. J. Vis. Exp. 2011, 12, e2194. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  36. Kaur, J.; Agrawal, S.; Vig, R. Integration of Clustering, Optimization and Partial Differential Equation Method for Improved Image Segmentation. Int. J. Image Graph. Signal Process. 2012, 4, 26–33. [Google Scholar] [CrossRef]
  37. Xue, W.; Zhang, L.; Mou, X.; Bovik, A. Gradient Magnitude Similarity Deviation: A Highly Efficient Perceptual Image Quality Index. IEEE Trans. Image Process. 2013, 23, 684–695. [Google Scholar] [CrossRef] [PubMed] [Green Version]
38. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image Quality Assessment: From Error Visibility to Structural Similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  39. Meilă, M. Comparing Clusterings–An Information Based Distance. J. Multivar. Anal. 2007, 98, 873–895. [Google Scholar] [CrossRef] [Green Version]
  40. Mallikarjuna, K.; Prasad, K.S.; Subramanyam, M.V. Image Compression and Reconstruction using Discrete Rajan Transform Based Spectral Sparsing. Int. J. Image Graph. Signal Process. 2016, 8, 59–67. [Google Scholar] [CrossRef] [Green Version]
  41. Martin, D.; Fowlkes, C.; Tal, D.; Malik, J. A database of human segmented natural images and its application to evaluating segmentation algorithms and measuring ecological statistics. In Proceedings of the Eighth IEEE International Conference on Computer Vision. ICCV 2001, Vancouver, BC, Canada, 7–14 July 2001; Volume 2, pp. 416–423. [Google Scholar]
42. Poletti, E.; Ruggeri, A. Segmentation of Corneal Endothelial Cells Contour through Classification of Individual Component Signatures. In XIII Mediterranean Conference on Medical and Biological Engineering and Computing 2013; IFMBE Proceedings; 2014; Volume 41, pp. 658–661. [CrossRef] [Green Version]
43. Zhang, Y. A Multi-Branch Hybrid Transformer Network for Corneal Endothelial Cell Segmentation. arXiv 2021, arXiv:2106.07557. Available online: http://arxiv.org/abs/2106.07557 (accessed on 1 March 2021). [Google Scholar]
Figure 1. The block diagram of the proposed CellsDeepNet web application: (a) client-side and (b) server-side.
Figure 2. Examples of corneal endothelium images from (a) a healthy subject, (b) an obese patient, and (c) a diabetic patient, showing marked differences in the size and shape of the CECs.
Figure 3. The main steps of the proposed CEC image enhancement procedure.
Figure 4. The outputs of the CellsDeepNet system: (a) the original corneal image, (b) output of the CLAHE approach, (c) output of the 2DDD-TCWT approach, (d) output of the Butterworth Bandpass filter, (e) output of the brightness level adjustment step, (f) the final segmented endothelial cell image, (g) labelling of the endothelial cells, and (h) the traced endothelial cell boundaries superimposed on the original image.
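For clarity, the enhancement cascade of Figures 3 and 4 can be summarised in code. The following Python sketch is illustrative only: the CLAHE clip limit, tile size, Butterworth cut-off frequencies, and window sizes are assumed values rather than the tuned parameters of the system, and a median filter stands in for the 2DDD-TCWT denoising step, which is not available in standard libraries.

```python
# A minimal sketch of the Figure 3 enhancement pipeline (assumed parameters).
import cv2
import numpy as np

def butterworth_bandpass(img, low=0.05, high=0.35, order=2):
    """Frequency-domain Butterworth band-pass filter (normalised cut-offs)."""
    rows, cols = img.shape
    u = np.fft.fftfreq(rows)[:, None]
    v = np.fft.fftfreq(cols)[None, :]
    d = np.sqrt(u**2 + v**2)                                # radial frequency
    hp = 1.0 / (1.0 + (low / (d + 1e-9))**(2 * order))      # high-pass factor
    lp = 1.0 / (1.0 + (d / high)**(2 * order))              # low-pass factor
    f = np.fft.fft2(img.astype(np.float64))
    out = np.real(np.fft.ifft2(f * hp * lp))                # band-pass result
    return cv2.normalize(out, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

def enhance(img_gray):
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    step1 = clahe.apply(img_gray)             # (b) contrast enhancement
    step2 = cv2.medianBlur(step1, 3)          # (c) stand-in for 2DDD-TCWT denoising
    step3 = butterworth_bandpass(step2)       # (d) edge enhancement
    # (e) brightness adjustment: subtract a moving-average illumination estimate
    background = cv2.blur(step3, (31, 31))
    step4 = cv2.normalize(step3.astype(np.int16) - background, None,
                          0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    return step4
```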
Figure 5. The architecture of the proposed U-Net model developed for the segmentation of the CECs.
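The improved U-Net of Figure 5 follows the standard encoder-decoder layout with skip connections [31]. A minimal Keras sketch of such an architecture is given below; the depth, filter counts, dropout, and input patch size are illustrative assumptions, not the exact configuration selected by the training methodology of Figure 6.

```python
# A compact U-Net-style network for CEC boundary segmentation (assumed sizes).
from tensorflow.keras import layers, Model

def conv_block(x, filters):
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    return x

def build_unet(input_shape=(256, 256, 1), base=32):
    inputs = layers.Input(input_shape)
    # Encoder: convolution blocks followed by max-pooling
    c1 = conv_block(inputs, base);  p1 = layers.MaxPooling2D()(c1)
    c2 = conv_block(p1, base * 2);  p2 = layers.MaxPooling2D()(c2)
    c3 = conv_block(p2, base * 4);  p3 = layers.MaxPooling2D()(c3)
    b = conv_block(p3, base * 8)    # bottleneck
    # Decoder: transposed convolutions with skip connections from the encoder
    u3 = layers.Conv2DTranspose(base * 4, 2, strides=2, padding="same")(b)
    c4 = conv_block(layers.concatenate([u3, c3]), base * 4)
    u2 = layers.Conv2DTranspose(base * 2, 2, strides=2, padding="same")(c4)
    c5 = conv_block(layers.concatenate([u2, c2]), base * 2)
    u1 = layers.Conv2DTranspose(base, 2, strides=2, padding="same")(c5)
    c6 = conv_block(layers.concatenate([u1, c1]), base)
    outputs = layers.Conv2D(1, 1, activation="sigmoid")(c6)  # boundary map
    return Model(inputs, outputs)

model = build_unet()
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
```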
Figure 6. An illustration of the suggested training methodology to obtain the best network architecture.
Figure 7. The loss and accuracy plots during the training process on the training and validation sets: (a) the MCCM dataset and (b) the ECA dataset.
Figure 8. The data augmentation procedure: (a) the original CEC image, (b) horizontally flipped image, (c) vertically flipped image, and (d) the eight random image patches.
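A minimal sketch of the augmentation in Figure 8, assuming a hypothetical 128 × 128 patch size: each image/ground-truth pair is flipped horizontally and vertically, and random patches are then cropped from each variant.

```python
# Flip-and-crop augmentation for image/mask pairs (patch size is assumed).
import numpy as np

rng = np.random.default_rng(0)

def augment(image, mask, n_patches=8, patch=128):
    pairs = [(image, mask),
             (np.fliplr(image), np.fliplr(mask)),   # horizontal flip
             (np.flipud(image), np.flipud(mask))]   # vertical flip
    patches = []
    h, w = image.shape[:2]
    for img, msk in pairs:
        for _ in range(n_patches):
            y = rng.integers(0, h - patch + 1)      # random top-left corner
            x = rng.integers(0, w - patch + 1)
            patches.append((img[y:y + patch, x:x + patch],
                            msk[y:y + patch, x:x + patch]))
    return patches
```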
Figure 9. Examples of the figures generated in the clinical feature quantification stage: (a) colour-coded cell pleomorphism map, where orange indicates cells with six neighbours, and (b) the histogram distribution of the pleomorphism parameter.
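The five clinical parameters can be derived directly from a labelled cell image such as Figure 4g. The sketch below follows the stated definitions (polymegathism as the coefficient of variation of cell areas; pleomorphism as the percentage of six-sided cells); the pixel-to-micrometre scale and the precomputed per-cell hexagonality flags are assumed inputs.

```python
# Morphometric parameters from a labelled cell image (assumed inputs).
import numpy as np
from skimage.measure import regionprops

def cec_morphometry(labels, um_per_px, hex_flags):
    """labels: labelled cell image; um_per_px: image scale;
    hex_flags: hypothetical bool array, True if a cell has six neighbours."""
    props = regionprops(labels)
    areas = np.array([p.area for p in props]) * um_per_px**2     # cell areas, um^2
    perims = np.array([p.perimeter for p in props]) * um_per_px  # perimeters, um
    field_area_mm2 = labels.size * (um_per_px / 1000.0)**2       # field area, mm^2
    mcd = len(props) / field_area_mm2                  # mean cell density, cells/mm^2
    mca = areas.mean()                                 # mean cell area
    mcp = perims.mean()                                # mean cell perimeter
    polymegathism = 100.0 * areas.std() / areas.mean() # CV of cell areas, %
    pleomorphism = 100.0 * np.mean(hex_flags)          # hexagonal cells, %
    return mcd, mca, mcp, polymegathism, pleomorphism
```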
Figure 10. Ground-truth generation with the GIMP software: (a) the input image, (b) a typical sample of manually traced CEC boundaries, and (c) the resulting binarised image used as the ground-truth segmentation.
Figure 11. Comparison of the segmentation performance between the original U-Net, CellsDeepNet, and CEAS system using the MCCM dataset. The performance is better with higher values of SSIM and PRI and lower values of VoI, GMSD, GCE, MSE, and NAE.
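Three of the quality measures reported in Figure 11 (MSE, NAE, and SSIM) can be computed between a ground-truth and an automatically segmented image as in the sketch below; PRI, VoI, GMSD, and GCE follow the definitions in [37,39,41] and are omitted for brevity.

```python
# MSE, normalised absolute error, and SSIM between two segmentations.
import numpy as np
from skimage.metrics import structural_similarity

def quality(gt, seg):
    gt = gt.astype(np.float64)
    seg = seg.astype(np.float64)
    mse = np.mean((gt - seg) ** 2)                                 # mean squared error
    nae = np.sum(np.abs(gt - seg)) / (np.sum(np.abs(gt)) + 1e-12)  # normalised abs. error
    ssim = structural_similarity(gt, seg, data_range=gt.max() - gt.min())
    return mse, nae, ssim
```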
Figure 12. Correlation plots between manual and automated endothelial cell parameters for the MCCM dataset: (a) MCD, (b) MCA, (c) MCP, (d) polymegathism, and (e) pleomorphism.
Figure 13. Bland–Altman plots presenting the difference against the mean for each pair of manual and automated endothelial cell parameters: (a) MCD, (b) MCA, (c) MCP, (d) polymegathism, and (e) pleomorphism from the MCCM dataset. The dashed lines show the 95% limits of agreement, and the solid lines represent the mean differences.
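The agreement analysis behind Figures 12 and 13 (and, analogously, Figures 15 and 16) reduces to a few lines: Pearson's r for association, and the mean difference (bias) with 95% limits of agreement, bias ± 1.96 SD, for the Bland–Altman plots. A minimal sketch:

```python
# Pearson correlation and Bland-Altman statistics for paired estimates.
import numpy as np
from scipy.stats import pearsonr

def agreement(manual, auto):
    manual, auto = np.asarray(manual, float), np.asarray(auto, float)
    r, p = pearsonr(manual, auto)          # correlation between the two methods
    diff = manual - auto
    bias = diff.mean()                     # mean difference (solid line)
    loa = 1.96 * diff.std(ddof=1)          # half-width of 95% limits (dashed lines)
    return {"r": r, "p": p, "bias": bias,
            "loa_low": bias - loa, "loa_high": bias + loa}
```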
Figure 14. Comparison of the segmentation performance between the original U-Net, CellsDeepNet, and CEAS systems using the ECA dataset. The performance is better with higher values of SSIM and PRI and lower values of VoI, GMSD, GCE, MSE, and NAE.
Figure 15. Correlation plots of the manual and automated CEC parameters from the ECA dataset: (a) MCD, (b) MCA, (c) MCP, (d) polymegathism, and (e) pleomorphism.
Figure 16. Bland–Altman plots presenting the difference against the mean for each pair of manual and automated endothelial cell parameters: (a) MCD, (b) MCA, (c) MCP, (d) polymegathism, and (e) pleomorphism from the ECA dataset. The dashed lines show the 95% limits of agreement, and the solid lines represent the mean differences.
Figure 17. Differences between the manual and CellsDeepNet segmentations: (a) the ground-truth manually segmented image, (b) the CellsDeepNet segmented image, and (c) the overlap of the CEC borders detected manually and by the CellsDeepNet system.
Figure 18. Comparison of the segmentation performance between CellsDeepNet and Selig et al. [21] using the ECA dataset. The performance is better with higher values of SSIM and PRI and lower values of VoI, GMSD, GCE, MSE, and NAE.
Figure 19. Comparison between the automated segmentation outputs: (a) the original image, (b) the Selig et al. [21] system, and (c) the proposed CellsDeepNet system.
Table 1. Performance comparison study of the proposed CellsDeepNet system with and without applying the image pre-processing step.

| Quantitative Measure | MCCM: With Pre-Processing | MCCM: Without Pre-Processing | ECA: With Pre-Processing | ECA: Without Pre-Processing |
|---|---|---|---|---|
| PRI | 0.9993 | 0.4956 | 0.9837 | 0.5053 |
| SSIM | 0.9998 | 0.4102 | 0.9768 | 0.4312 |
| VoI | 0.0516 | 0.7231 | 0.1032 | 0.8901 |
| GMSD | 0.0862 | 0.8231 | 0.0105 | 0.9212 |
| MSE | 0.0566 | 0.5342 | 0.0337 | 0.5231 |
| GCE | 0.0003 | 0.5223 | 0.0006 | 0.4345 |
| NAE | 0.0444 | 0.4532 | 0.0769 | 0.5301 |
Table 2. Measures of the CEC morphology generated by manual assessment, the CEAS system, and the CellsDeepNet system from the MCCM dataset. The differences (Diff) between the manual and automated estimates are listed along with the percentage difference (Diff %).

| Parameter | Stat | Manual | CEAS | Diff | Diff % | CellsDeepNet | Diff | Diff % |
|---|---|---|---|---|---|---|---|---|
| MCD (cells/mm²) | Av. | 1898.08 | 1967.04 | 31.04 | 1.56 | 1984.88 | 13.2 | 0.66 |
| | STD | 765.21 | 755.55 | 9.66 | 1.27 | 763.17 | 2.04 | 0.26 |
| | Max | 3125 | 3226 | −101 | −3.18 | 2937 | 188 | 6.20 |
| | Min | 1232 | 1280 | −48 | −3.82 | 1230 | 2 | 0.16 |
| MCA (µm²) | Av. | 292.28 | 285 | 7.28 | 2.52 | 292.27 | 0.01 | 0.003 |
| | STD | 41.77 | 44.45 | −2.68 | −6.21 | 41.76 | 0.01 | 0.02 |
| | Max | 395 | 390 | 5 | 1.27 | 395 | 0 | 0 |
| | Min | 229 | 233 | −4 | −1.73 | 229 | 0 | 0 |
| MCP (µm) | Av. | 79.04 | 74.06 | 4.98 | 6.50 | 77.16 | 1.88 | 2.40 |
| | STD | 20.55 | 21.67 | −1.12 | −5.30 | 20.93 | −0.38 | −1.83 |
| | Max | 138 | 140 | −2 | −1.43 | 136 | 2 | 1.45 |
| | Min | 61 | 63 | −2 | −3.22 | 59 | 2 | 3.33 |
| Polymegathism (%) | Av. | 45.44 | 42.95 | 2.49 | 5.6 | 44.14 | 1.3 | 2.90 |
| | STD | 11.14 | 12.68 | −1.54 | −12.93 | 10.48 | 0.66 | 6.10 |
| | Max | 83 | 79 | 4 | 4.93 | 86 | −3 | −3.55 |
| | Min | 30 | 34 | −4 | −12.5 | 32 | −2 | −6.45 |
| Pleomorphism (%) | Av. | 34.78 | 32.5 | 2.28 | 6.77 | 34.19 | 0.59 | 1.71 |
| | STD | 6.14 | 3.63 | 2.51 | 51.38 | 5.98 | 0.16 | 2.64 |
| | Max | 53 | 56 | −3 | −5.50 | 53 | 0 | 0 |
| | Min | 14 | 17 | −3 | −19.35 | 16 | −2 | −13.33 |
Table 3. Performance comparison between the manual and automatic estimates of five clinical parameters from the ECA dataset.

| Parameter | Stat | Manual | CEAS | Diff | Diff % | CellsDeepNet | Diff | Diff % |
|---|---|---|---|---|---|---|---|---|
| MCD (cells/mm²) | Av. | 2952 | 2925 | 27 | 0.91 | 2927 | 25 | 0.85 |
| | STD | 161 | 157 | 4 | 2.51 | 158 | 3 | 1.88 |
| | Max | 3194 | 3177 | 17 | 0.53 | 3181 | 13 | 0.40 |
| | Min | 2662 | 2605 | 57 | 2.16 | 2612 | 50 | 1.89 |
| MCA (µm²) | Av. | 274 | 269 | 5 | 1.84 | 272 | 2 | 0.73 |
| | STD | 16.08 | 15.65 | 0.43 | 2.71 | 16.01 | 0.07 | 0.43 |
| | Max | 305 | 300 | 5 | 1.65 | 306 | −1 | −0.32 |
| | Min | 251 | 246 | 5 | 2.01 | 247 | 4 | 1.60 |
| MCP (µm) | Av. | 66.68 | 65.58 | 1.1 | 1.66 | 65.20 | 1.48 | 2.24 |
| | STD | 2.76 | 1.67 | 1.09 | 49.20 | 2.82 | −0.06 | −2.15 |
| | Max | 92 | 90 | 2 | 2.19 | 87 | 5 | 5.58 |
| | Min | 56 | 58 | −2 | −3.50 | 59 | −3 | −5.21 |
| Polymegathism (%) | Av. | 31.81 | 30.54 | 1.27 | 4.07 | 31.65 | 0.16 | 0.50 |
| | STD | 2.85 | 2.68 | 0.17 | 6.14 | 2.83 | 0.02 | 0.70 |
| | Max | 38 | 42 | −4 | −10 | 38 | 0 | 0 |
| | Min | 26 | 29 | −3 | −10.9 | 26 | 0 | 0 |
| Pleomorphism (%) | Av. | 40.84 | 37.5 | 3.34 | 8.52 | 40.68 | 0.16 | 0.39 |
| | STD | 1.73 | 2.52 | −0.79 | −37.17 | 1.95 | −0.22 | −11.95 |
| | Max | 46 | 44 | 2 | 4.44 | 47 | −1 | −2.15 |
| | Min | 36 | 33 | 3 | 8.69 | 36 | 0 | 0 |
Table 4. Performance comparison between the CellsDeepNet system and the existing CEC segmentation systems using three clinical parameters.

| Parameter | Stat | Scarpa and Ruggeri [17] (Diff / Diff %) | Ruggeri et al. [23] (Diff / Diff %) | Poletti and Ruggeri [42] (Diff / Diff %) | CellsDeepNet (Diff / Diff %) |
|---|---|---|---|---|---|
| MCD (cells/mm²) | Av. | 17.73 / 0.82 | −12 / 0.28 | 96.35 / 3.11 | 25 / 0.85 |
| | STD | 16.31 / 1.08 | −10 / 3.15 | 8.83 / 2.29 | 3 / 1.88 |
| | Max | 0 / 0 | −7 / 0.18 | 18 / 0.57 | 13 / 0.40 |
| | Min | 3.80 / 6.24 | 35 / 0.72 | 245 / 10.37 | 50 / 1.89 |
| Polymegathism (%) | Av. | 1.45 / 3.95 | 0.7 / 3.02 | 5.72 / 14.78 | 0.16 / 0.50 |
| | STD | 0.79 / 1.93 | 0.2 / 6.89 | 2.56 / 5.90 | 0.02 / 0.70 |
| | Max | 0 / 0 | 1.1 / 6.41 | 1.10 / 2.20 | 0 / 0 |
| | Min | 2.80 / 6.86 | −0.1 / 0.35 | 12.20 / 24.30 | 0 / 0 |
| Pleomorphism (%) | Av. | 1.83 / 3.13 | −0.5 / 1.03 | 4.21 / 7.32 | 0.16 / 0.39 |
| | STD | 1.33 / 2.27 | −0.9 / 23.37 | 2.72 / 4.57 | −0.22 / −11.95 |
| | Max | 0 / 0 | 2.2 / 5.62 | 0 / 0 | −1 / −2.15 |
| | Min | 3.80 / 6.24 | −4.2 / 7.47 | 11.80 / 16.21 | 0 / 0 |
Table 5. Performance comparison between the proposed CellsDeepNet system and other existing deep learning CEC segmentation systems using the ECA dataset.

| Model | DI | JA | F1 | SP | SE | MHD |
|---|---|---|---|---|---|---|
| Fabijańska [24] | 0.86 | - | - | - | - | - |
| Zhang et al. [43] | 0.78 | - | 0.82 | 0.94 | 0.87 | - |
| Nurzynska [22] | 0.94 | 0.94 | - | - | - | 0.14 |
| Orig. U-Net [31] | 0.77 | 0.79 | 0.81 | 0.96 | 0.81 | 0.22 |
| CellsDeepNet | 0.97 | 0.98 | 0.89 | 0.98 | 0.90 | 0.08 |
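Assuming the usual expansions of the column abbreviations in Table 5 (DI: Dice index; JA: Jaccard coefficient; SP: specificity; SE: sensitivity; MHD: modified Hausdorff distance), the overlap-based measures can be computed from the confusion counts of a binary segmentation, as in this sketch:

```python
# Dice, Jaccard, specificity, and sensitivity for binary segmentation masks.
import numpy as np

def overlap_measures(gt, seg):
    gt, seg = gt.astype(bool), seg.astype(bool)
    tp = np.sum(gt & seg)    # true positives: boundary pixels found by both
    fp = np.sum(~gt & seg)   # false positives
    fn = np.sum(gt & ~seg)   # false negatives
    tn = np.sum(~gt & ~seg)  # true negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    specificity = tn / (tn + fp)
    sensitivity = tp / (tp + fn)
    return dice, jaccard, specificity, sensitivity
```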
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
