Article

Automatic Tortuosity Estimation of Nerve Fibers and Retinal Vessels in Ophthalmic Images

1 Cixi Institute of Biomedical Engineering, Ningbo Institute of Materials Technology and Engineering, Chinese Academy of Sciences, Ningbo 315201, China
2 Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
3 Department of Computer Science and Engineering, Southern University of Science and Technology, Shenzhen 518055, China
* Authors to whom correspondence should be addressed.
These authors contributed equally to this work.
Appl. Sci. 2020, 10(14), 4788; https://doi.org/10.3390/app10144788
Submission received: 29 May 2020 / Revised: 7 July 2020 / Accepted: 9 July 2020 / Published: 12 July 2020
(This article belongs to the Special Issue Intelligent Processing on Image and Optical Information, Volume II)

Abstract: The tortuosity changes of curvilinear anatomical organs such as nerve fibers or vessels are closely related to a number of diseases. Therefore, automatic estimation and representation of tortuosity is desirable in medical images of such organs. In this paper, an automated framework for tortuosity estimation is proposed for corneal nerve and retinal vessel images. First, a weighted local phase tensor-based enhancement method is employed and the curvilinear structure is extracted from the raw image. For each curvilinear structure with a different position and orientation, the curvature is measured by exponential curvature estimation in 3D space. Then, the tortuosity of an image is calculated as the weighted average over all the curvilinear structures. Our proposed framework has been evaluated on two corneal nerve fiber datasets and one retinal vessel dataset. Experiments on these three curvilinear organ datasets demonstrate that our proposed tortuosity estimation method achieves promising performance compared with other state-of-the-art methods in terms of accuracy and generality. On our nerve fiber dataset, the method achieved an overall accuracy of 0.820, with sensitivity of 0.734 and specificity of 0.881. The proposed method also achieved Spearman correlation scores of 0.945 and 0.868 with the tortuosity grading ground truth for arteries and veins in the retinal vessel dataset, respectively. Furthermore, we manually labeled 403 corneal nerve fiber images with different levels of tortuosity, all of which are released for public access for further research.

1. Introduction

As one of the most significant biomarkers, the tortuosity of anatomical curvilinear organs is closely related to human diseases visible in medical images. Recent clinical research [1,2] has confirmed that morphological changes of curvilinear structures such as nerve fibers, retinal vessels, corneal nerves, coronary arteries, and the optic nerve are important cues for diagnosing many diseases. Tortuosity measurement can be applied across multiple imaging modalities. Tortuosity is an important way to evaluate the architecture and morphology of vasculature, and can be used for skin disease detection in optical coherence tomography angiography and photoacoustic imaging [3]. Higher vessel density and more tortuous geometry also usually indicate the location of cancer-induced angiogenesis in contrast-enhanced ultrasound imaging [4]. The human eye is a small organ full of curvilinear structures, including nerve fibers and vessels. The diagnosis of many ocular and systemic diseases [1,5,6] often relies on quantitative properties of the tortuosity of such curvilinear structures. How to measure tortuosity, and which measurement metric is best, remain major research topics.
More specifically, ophthalmologists usually assess diabetic optic neuropathy [2] and hypertensive retinopathy [7] using the tortuosity of corneal nerve fibers and retinal vessels, respectively. Figure 1 illustrates examples of nerve fibers and vessels with three levels of tortuosity in corneal confocal microscopy (CCM) and retinal fundus images. Clinically, the tortuosity of these curvilinear structures is usually graded into three to five levels, or as normal/abnormal, by ophthalmologists based on their experience. Nerve fiber and retinal vessel tortuosity are graded by clinicians with the naked eye after observing the CCM and retinal fundus images. However, such assessment is usually subjective and time consuming. Different clinicians may assign different levels to the same image, and the same doctor may give inconsistent decisions for the same image at different times [1]. Therefore, an objective, repeatable, highly adaptable, and quantitative method to assess the tortuosity level of curvilinear structures is desirable.
So far, various automatic tortuosity measurement approaches [1,2,7] follow five steps. First, the original image is usually enhanced by filters, as in vessel segmentation problems. The three most influential filters are the eigenvalue-based filter, the wavelet-transform-based filter, and the local-phase-based filter. The typical eigenvalue-based filter proposed by Frangi et al. [8] is based on the eigenvalues of the Hessian matrix. The isotropic undecimated wavelet transform has recently been used for vessel segmentation and shown good accuracy and computational efficiency [9]. The local phase is an important local feature that measures structural information of an image; it has recently been shown that this information can be used to enhance vessels more precisely and produce promising segmentation results [10]. Second, a segmentation step is used to obtain the curvilinear structure image from the original medical image [11,12,13]. Third, a centerline map is extracted from the curvilinear structure image by a skeletonization method, and all intersection points on the centerline are detected. Then, the centerline is broken into small centerline fragments at the intersection points. In some cases, curve fitting or parametric pre-processing is performed on the fragments for breakpoint connection and subsequent calculation. Finally, the tortuosity of each fragment is calculated by different metrics, which are listed in Table 1.
However, curvilinear tortuosity measurement methods usually lack a universally accepted definition or standard specification [1,2,7]. Several definitions have been proposed in the literature with the aim of automatically assessing tortuosity. Scarpa et al. [2] compute the tortuosity based on frequency information, such as the number of phase changes in the curvature. Some methods use the arc length and chord length for tortuosity estimation: the arc length is the pixel length between the two endpoints of a centerline fragment, while the chord length is the straight-line distance between the endpoints. Bracher et al. [18] assess the tortuosity by the ratio of arc length to chord length (ACR). Grisan et al. [7] multiply the sum of each ACR by the number of inflection points for tortuosity assessment. Bullitt et al. [20] provide a 3-D tortuosity analysis of vessels and evaluate three ACR-related tortuosity metrics for detecting different types of abnormality. Other methods evaluate the tortuosity based on angle and curvature. Smedby et al. [24] measure tortuosity by total curvature and distance factor calculations for femoral arteriograms; this measure is suitable for detecting centerline fragment tortuosity with high-frequency changes. Hart et al. [14] differentiate tortuous retinal vessels from non-tortuous ones using the total curvature and total squared curvature of vessel centerlines. Goh et al. [16] measure vessel curvature for diabetes-related disease detection by direction change and ACR. Still other methods use coding techniques to measure tortuosity: Bribiesca [25] encodes retinal vessels with a slope chain code and derives the tortuosity from it. Annunziata et al. [1,26] take advantage of combining results from different tortuosity metrics, and leverage multinomial regression to select the best combination.
In addition, tortuosity evaluation still faces several open problems, including human subjective errors, data variations, and metric differences. An image may be labeled by one clinician as high tortuosity and by other clinicians as mild tortuosity [1]. The analysis of curvilinear organs is also time consuming, since most analysis software still relies on manual measurement by ophthalmologists. It is worth noting that the lack of available data limits the application of deep learning methods to tortuosity grading: a recent study [27] found that the performance of end-to-end tortuosity classification of corneal nerve images with AlexNet and VGG-16 did not exceed random accuracy on the tested dataset. The performance of conventional tortuosity measurements can also be affected by data differences, image pre-processing, and the segmentation of curvilinear structures. To make tortuosity measurement more robust in clinical settings, the method should also be invariant to scaling, rotation, and translation [6]. Our previous work [28] evaluated tortuosity assessment on Retinex-based nerve fiber enhanced images. An automatic and accurate measurement method is therefore needed to evaluate the curvature of the curve. To tackle these problems, this paper introduces a new, reliable framework for tortuosity assessment, which overcomes human subjective errors and metric variations. In addition, compared with the previous work [28], we evaluate tortuosity assessment on both nerve fiber and retinal vessel images based on local phase tensor enhancement in order to verify the robustness of the method. Most of the tortuosity metrics are evaluated quantitatively on the datasets. The effectiveness of four enhancement methods on tortuosity assessment is also evaluated with our in-house and a public segmentation method, respectively. The contributions of this paper are as follows:
  • We propose an automatic curvilinear tortuosity measurement method based on exponential curvature estimation, which measures the tortuosity value directly from the original image without most of the complex pre-processing steps of traditional methods.
  • Our proposed method is robust, and has been validated quantitatively on one retinal blood vessel tortuosity dataset and two corneal nerve tortuosity datasets. As a complementary output, we have made all the tortuosity datasets available online.

2. Methods

Our proposed framework consists of two stages: the curvilinear structure enhancement by the local phase tensor filtering, and the tortuosity estimation by exponential curvature calculation.

2.1. Curvilinear Structure Enhancement

Typically, acquired ophthalmic images exhibit varying conditions of illumination and noise, primarily due to imperfect focus and the acquisition process. The imbalanced intensity dims the appearance of curvilinear structures, such as nerve fibers in CCM imagery, and thus makes it difficult to distinguish the nerve fibers from the background and to estimate their tortuosity [29]. In this paper, we introduce a local phase tensor-based filter to enhance the curvilinear structures in ophthalmic images. The local phase is a measure of the structural information of an image, and it can distinguish intrinsic features from the background with the advantage of illumination invariance [30]. Given a curvilinear structure image signal f(x), the local phase is defined as:
φ(x) = arctan( E(x) / O(x) ),
where E(x) is the response of convolving f(x) with the even filter, and O(x) is the response of convolving f(x) with the odd filter. Here, the odd filter is the Hilbert transform of the even filter. In practice, E(x) and O(x) are usually calculated with the log-Gabor filter:
E(x) = Re{ F⁻¹( LG(ω) × F(x) ) },
O(x) = Im{ F⁻¹( LG(ω) × F(x) ) },
where F(x) is the Fourier transform of the image signal, LG denotes the log-Gabor filter, and F⁻¹ is the inverse Fourier transform. Re and Im denote the real and imaginary parts of the expression, respectively. The symmetric aspect T_even of the local phase tensor is calculated as [31]:
T_even = [H(E(x))] · [H(E(x))]ᵀ,
while the asymmetric aspect T o d d is
T_odd = 0.5 × { [∇(O(x))] · [∇²(O(x))]ᵀ + [∇²(O(x))] · [∇(O(x))]ᵀ },
In these equations, ∇ and ∇² denote the gradient and Laplacian operators, respectively, and H is the Hessian operator. T_even is the product of the term H(E(x)) with its transpose. Following the conclusion in [30], the local phase tensor is:
T = Σ_{θ∈Θ} √( (T_even^θ)² + (T_odd^θ)² ) · cos(φ).
Furthermore, we also consider multiple orientations to make the tensor rotation-invariant, where Θ = {π/16, 2π/16, 3π/16, …, π}. φ is the phase parameter, which is calculated from the local contrast. Figure 2a,e are the original images, Figure 2b,f illustrate the enhanced curvilinear structure images, and Figure 2d,h are the curvature maps. Since the filters are calculated over the whole image, tortuosity values at bifurcation points are also computed. However, the values at bifurcations should not affect the tortuosity evaluation, so we break the structures at the bifurcations and remove those tortuosity values.
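To make the filtering concrete, the quadrature (even/odd) responses and the local phase from the equations above can be sketched in the frequency domain. This is a minimal illustration rather than the authors' implementation; the wavelength, bandwidth ratio, and orientation count below are assumed values:

```python
import numpy as np

def log_gabor_bank(shape, wavelength=8.0, sigma_ratio=0.55, n_orient=16):
    """Frequency-domain log-Gabor filters at n_orient orientations in [0, pi)."""
    rows, cols = shape
    U, V = np.meshgrid(np.fft.fftfreq(cols), np.fft.fftfreq(rows))
    radius = np.hypot(U, V)
    radius[0, 0] = 1.0                       # avoid log(0) at the DC term
    radial = np.exp(-np.log(radius * wavelength) ** 2
                    / (2 * np.log(sigma_ratio) ** 2))
    radial[0, 0] = 0.0                       # zero response at DC
    angle = np.arctan2(V, U)
    bank = []
    for k in range(n_orient):
        theta = k * np.pi / n_orient
        d = np.arctan2(np.sin(angle - theta), np.cos(angle - theta))
        spread = np.exp(-d ** 2 / (2 * (np.pi / n_orient) ** 2))
        bank.append(radial * spread)         # single-sided: analytic response
    return bank

def even_odd_phase(image, lg):
    """E(x), O(x), and the local phase for one oriented log-Gabor filter."""
    resp = np.fft.ifft2(np.fft.fft2(image) * lg)
    E, O = resp.real, resp.imag              # even and odd filter responses
    phi = np.arctan2(E, O)                   # local phase, phi = arctan(E/O)
    return E, O, phi
```

Because the angular spread keeps only one frequency half-plane, the inverse FFT yields a complex analytic response whose real and imaginary parts act as the even/odd filter pair.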

2.2. Curvilinear Structure Tortuosity Representation

To precisely describe the geometric variations of curvilinear structures, we assess the tortuosity by exponential curvature estimation.

2.2.1. Representation of the Curvature Orientation

The curvature estimation is calculated from parameters of position and orientation. Therefore, we define a 3D space R² × S¹, in which every 2D curvilinear structure can be represented as a group element g = (x, θ). The term x = (x, y) ∈ R² identifies planar translations, while θ ∈ S¹ is the rotation. In this space, the image processing operations rely on a rotating frame of reference, such that
{ e_ξ|_g, e_η|_g, e_θ|_g } = { cosθ e_x + sinθ e_y, cosθ e_y − sinθ e_x, e_θ }.
To describe the estimation process more clearly, we illustrate the orientation projection process in Figure 3. Take the synthetic image in Figure 3a as an example of original curvilinear structures in a 2D image. The original curvilinear structures are first processed with our enhancement method, and then projected into the 3D position-orientation space to obtain the orientation scores [32,33] (Figure 3b). For each curvilinear structure, the mapping is achieved through wavelet transform and convolution operations. The transformation formula in continuous form is as follows:
S_f(x, θ) = ∫_{R²} ψ̄( R_θ⁻¹ (x − x′) ) f(x′) dx′,
where S_f is the orientation score and ψ̄ denotes the complex conjugate of the wavelet. In an image f, each spatial location (x, y) on a curvilinear structure can be represented as a group element (x, θ) in the 3D space. Here, ψ ∈ L²(R²) is the cake wavelet, aligned with the θ-axis with rotational symmetry. Finally, the curvilinear structure in the 3D space is represented by the orientation scores for further curvature estimation.
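A discrete analogue of the orientation-score transform can be sketched by correlating the image with rotated anisotropic kernels. The elongated Gaussian below is a simple stand-in for the cake wavelets, and the kernel sizes are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def orientation_scores(image, n_orient=16, length=9, width=2):
    """S_f(x, theta): response of the image to a kernel rotated to each
    orientation theta_k = k*pi/n_orient (a stand-in for the cake wavelets)."""
    ys, xs = np.mgrid[-length:length + 1, -length:length + 1]
    # Elongated Gaussian, long axis vertical (along rows).
    base = np.exp(-xs ** 2 / (2 * width ** 2) - ys ** 2 / (2 * length ** 2))
    base -= base.mean()                      # zero mean: flat regions score ~0
    scores = np.empty((n_orient,) + image.shape)
    for k in range(n_orient):
        kern = ndimage.rotate(base, 180.0 * k / n_orient, reshape=False)
        scores[k] = ndimage.correlate(image.astype(float), kern, mode='nearest')
    return scores                            # shape (n_orient, H, W)
```

At each pixel, the arg-max over the first axis gives the dominant local orientation, which corresponds to the orientation selection used later for the curvature weighting.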

2.2.2. Exponential Curvature Estimation

Intuitively, as shown in Figure 3b, the curvature of a curvilinear structure can be regarded as the rate of angular change of a point moving along the direction of the curve. In the R² × S¹ 3D space, an exponential curve γ_c with arc length parameter t is represented by its tangent vector components and the rotation-invariant basis: γ̇_c(t) = c^ξ e_ξ|_{γ_c(t)} + c^η e_η|_{γ_c(t)} + c^θ e_θ|_{γ_c(t)}. Here the tangent vector is c = (c^ξ, c^η, c^θ)ᵀ, while the basis is denoted {e_ξ, e_η, e_θ}. In orientation scores, exponential curves appear as circular spirals, which are analogous to straight lines with respect to the curved geometry of R² × S¹. With the spatial projection, each pixel on the curve can be determined: for an exponential curve γ_c with tangent vector c, we denote by P_{R²} γ_c the projected curve. The curvature is then calculated as κ(x, θ) = c^θ sign(c^ξ) / √( |c^ξ|² + |c^η|² ).
Accurate local curvature estimation is achieved by fitting the exponential curve that best aligns with the local curvilinear structure. The optimal tangent vector c* of the best exponential curve fit is used to compute the local curvature κ, as shown in Figure 3b. We can solve for c* by calculating the eigenvectors of the Hessian matrix. Here we take the μ-normalized Hessian matrix of the previous orientation scores, whose mathematical expression is H_μ S_f = M_μ⁻¹ (H S_f)ᵀ M_μ² (H S_f) M_μ⁻¹, with
H S_f = [ ∂_ξ² S_f     ∂_ξ ∂_η S_f   ∂_θ ∂_ξ S_f
          ∂_ξ ∂_η S_f   ∂_η² S_f     ∂_θ ∂_η S_f
          ∂_ξ ∂_θ S_f   ∂_η ∂_θ S_f  ∂_θ² S_f ],
The first- and second-order derivatives of each element in the matrix are calculated with rotation-invariant partial derivatives [34]. Here, the spatial derivatives are {∂_ξ, ∂_η} = {cosθ ∂_x + sinθ ∂_y, cosθ ∂_y − sinθ ∂_x}, and the orientation derivative is ∂_θ. M_μ⁻¹ is a diagonal matrix [11].
Through the previous steps, we extract curvilinear structures from the original images with the proposed enhancement method and generate high-quality maps for further calculation. We then obtain the orientation scores and curvature of the local curvilinear structures. Figure 3 shows this process for curvilinear structures with different positions and curvatures. From the orientation and curvature of each individual curvilinear fragment, we evaluate the tortuosity over the whole map. For the tortuosity metric calculation, we consider the length and curvature of each curve and the proportion of different curvatures in the map. The tortuosity metric is defined as
κ_exp = (1/V) ∫ |κ_{c*}(x, Θ(x))| T(x) dx,
where κ_exp is our tortuosity metric, the normalized global absolute curvature of the enhanced curvilinear structure. For an enhanced curvilinear structure map T(x), κ_exp is its sum weighted by the absolute curvatures |κ_{c*}(x, Θ(x))|. The normalization factor V = ∫ T(x) dx is the total sum of T(x). For the absolute curvature weight, κ_{c*} is the local curvature obtained above.
Here Θ(x) = argmax_{θ_i = iπ/N, i ∈ {1, …, N}} S(x, θ_i) selects the angle with the maximum orientation response, and S(x, θ) is the previous orientation score. For a vessel or nerve fiber image with multiple curvilinear structures, this weighted tortuosity metric describes the curvilinear structures better, since most of them are relatively smooth while only a few are highly tortuous. To capture local curvature changes, the Θ(x)-based weighting assigns large values to high-tortuosity regions and small values to low-tortuosity regions. We then take the weighted sum rather than a plain average over the whole curvilinear structure map.
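Under the definitions above, the per-point curvature and the weighted image-level metric reduce to simple discrete sums. A minimal sketch (the array names are hypothetical):

```python
import numpy as np

def curvature(c):
    """kappa = c_theta * sign(c_xi) / sqrt(c_xi^2 + c_eta^2) for a tangent
    vector c = (c_xi, c_eta, c_theta) of the best-fit exponential curve."""
    c_xi, c_eta, c_theta = c
    return c_theta * np.sign(c_xi) / np.sqrt(c_xi ** 2 + c_eta ** 2)

def kappa_exp(curvature_map, enhanced_map):
    """Discrete kappa_exp = (1/V) * sum(|kappa| * T), with V = sum(T),
    where T is the enhanced curvilinear structure map."""
    V = enhanced_map.sum()
    return (np.abs(curvature_map) * enhanced_map).sum() / V
```

Pixels with strong enhancement responses thus dominate the score, matching the weighting intent described above.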

3. Datasets and Metrics

In this paper, we use two corneal nerve datasets and one retinal vessel dataset for tortuosity evaluation, named CCM-A, CCM-B, and RET-TORT. To validate the proposed tortuosity measure, segmentations were manually produced for all datasets. In particular, for the corneal nerve datasets (CCM-A and CCM-B), the reference fiber centerlines were manually annotated by ophthalmologists. For the retinal vessel dataset, the manually drawn vessel centerlines are provided with the original dataset. For all the tested datasets, the centerlines of the segmentation results are extracted and the intersection points on these centerlines are detected. When calculating the tortuosity degrees, the centerlines are further broken into single curvilinear segments at each intersection point.
CCM-A contains 403 CCM images. In CCM-A, two clinicians independently graded the images into four levels of tortuosity from straight to heavy, following a previously published protocol [35]. We have made this dataset available online (https://imed.nimte.ac.cn/corn.html). Level 1: 54 images with almost straight nerve fibers; Level 2: 212 images with mildly tortuous fibers; Level 3: 108 images with quite tortuous fibers; Level 4: 29 images with heavily tortuous fibers. Figure 4 illustrates nerve fiber images at the four tortuosity levels, together with their manually labeled centerlines.
CCM-B is a publicly available dataset (http://bioimlab.dei.unipd.it/) with clinical annotations for corneal nerve fibers. It is composed of 30 CCM images, which an expert grouped into three classes based on the degree of fiber tortuosity: low, mid, and high.
RET-TORT comes from the same publicly available source as CCM-B. It contains 60 fundus images in total, from which 30 artery and 30 vein images are extracted. All images were examined by a retinal specialist, and the image names are ordered by increasing tortuosity. For each image, the manually drawn vessel centerline is provided with the original dataset. Figure 5a–c show original images with low, middle, and high tortuosity in RET-TORT. In this dataset, only the most important vessel was selected for tortuosity measurement and manually labeled. We overlay the manual centerline ground truth on the original images in yellow.
For each tortuosity level, we compare the classification results with the ground truth by sensitivity (Sen), specificity (Spec), and accuracy (Acc) metrics. In order to evaluate tortuosity more accurately for different levels, as suggested in [1], we also take the weighted sensitivity (wSen), weighted specificity (wSpec), and weighted accuracy (wAcc) as metrics to demonstrate the overall performance. These metrics are defined as:
wSen = Σ_{i=1}^{L} r_i · TP_i / (TP_i + FN_i),
wSpec = Σ_{i=1}^{L} r_i · TN_i / (TN_i + FP_i),
wAcc = Σ_{i=1}^{L} r_i · (TP_i + TN_i) / (TP_i + TN_i + FP_i + FN_i),
where L is the total number of tortuosity levels and, for each level i, r_i is the percentage of level-i images over all images. TP_i is the number of correctly classified images that belong to level i. FN_i is the number of level-i images falsely classified as other levels. FP_i is the number of other-level images falsely classified as level i. TN_i is the number of other-level images correctly classified.
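The weighted metrics above can be sketched directly from their definitions (a straightforward implementation; the level labels and predictions are hypothetical inputs):

```python
import numpy as np

def weighted_metrics(y_true, y_pred, levels):
    """wSen, wSpec, wAcc: one-vs-rest scores per tortuosity level i,
    each weighted by r_i, the fraction of images at that level."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    wsen = wspec = wacc = 0.0
    for lev in levels:
        r = np.mean(y_true == lev)                       # r_i
        tp = np.sum((y_true == lev) & (y_pred == lev))
        fn = np.sum((y_true == lev) & (y_pred != lev))
        fp = np.sum((y_true != lev) & (y_pred == lev))
        tn = np.sum((y_true != lev) & (y_pred != lev))
        wsen += r * tp / (tp + fn)
        wspec += r * tn / (tn + fp)
        wacc += r * (tp + tn) / (tp + tn + fp + fn)
    return wsen, wspec, wacc
```

Since the weights r_i sum to one, a perfect classifier scores 1.0 on all three metrics.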

4. Experimental Results

In this section, our proposed tortuosity estimation method is evaluated through a series of experiments on the three tortuosity datasets. In addition, our proposed curvilinear structure enhancement method is evaluated in terms of both tortuosity grading and fiber segmentation. Furthermore, the tortuosity metrics in Table 1 are evaluated quantitatively on the retinal vessel dataset and in a clinical setting.

4.1. Tortuosity Classification

First of all, we need to grade each image into different tortuosity levels. However, the number, length, and curvature of the curvilinear structures differ from image to image, so effectively evaluating different tortuosity characteristics is a key problem [1]. As stated in the previous section, more attention should be paid to measuring the small proportion of highly tortuous segments. Therefore, the tortuosity measurement for a whole image is calculated as the weighted average:
M = ( Σ_{i=1}^{N} l_i × m_i ) / ( Σ_{i=1}^{N} l_i ),
where l_i is the length of the i-th curvilinear structure, N is the number of segments in a curvilinear structure map, and m_i is the tortuosity metric score of the i-th curvilinear structure. The tortuosity metric can be any of the metrics in Table 1 as well as our proposed Kexp.
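The image-level score M is simply a length-weighted mean of the per-segment metric values; a short sketch:

```python
def image_tortuosity(lengths, metric_scores):
    """M = sum(l_i * m_i) / sum(l_i) over the N curvilinear segments of an
    image, so long segments contribute proportionally more."""
    return sum(l * m for l, m in zip(lengths, metric_scores)) / sum(lengths)
```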
Here, we use 14 tortuosity metrics in the experiment; the 14 features are the values of the tortuosity metrics listed in Table 1. First, we extract the features of each image as a 14-dimensional vector; then the tortuosity classification is performed by a Support Vector Machine (SVM) on these image features. For instance, when four-level grading is used to measure tortuosity, the one-versus-one strategy used by the SVM for multi-class classification leads to six SVM classifiers, each trained to discriminate one class from another.
SVM is a well-established classification technique that has been proved to produce satisfactory results in many applications, in particular when the training data is not as large as deep learning requires. With only about 400 images in this study, our experience suggests the data would be too small for deep learning models, and SVM is well suited to a problem at this scale. In the experiment, we use the SVM implementation in Matlab with all parameters set to default values. The parameters may not be optimal but provide a benchmark for future studies (the goal of classification here is only to verify the validity of the new method).
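For reference, the one-versus-one decomposition mentioned above enumerates one binary classifier per unordered pair of levels (the paper itself uses Matlab's default SVM; this only sketches the pairing logic):

```python
from itertools import combinations

def ovo_pairs(levels):
    """One-versus-one multi-class strategy: L levels yield L*(L-1)/2
    binary classifiers, one per unordered pair of levels."""
    return list(combinations(levels, 2))
```

With the four tortuosity levels of CCM-A, `ovo_pairs([1, 2, 3, 4])` yields the six classifier pairs mentioned above.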

4.2. Nerve Fibers Tortuosity Grading

As shown in Table 2, we calculate the tortuosity classification results for each level and the overall tortuosity level on both fiber datasets. For both the in-house CCM-A and the public CCM-B dataset, the segmentation was done manually by ophthalmologists. For a comparative study, we compare three methods: Annunziata [1], Kexp, and Kexp+. The Annunziata method calculates the ACR, TD, SCC, AC, and WAC metrics in Table 1, yielding several features per image; these features are then fused by a weighted average strategy, and the tortuosity of the image is classified into levels by multinomial logistic ordinal regression. Kexp is our approach without the enhancement step, and Kexp+ is our full proposed method including the enhancement step.
In Table 2, the tortuosity classification comparison shows that our proposed Kexp+ on enhanced images performs better than Annunziata's method on both the CCM-A and CCM-B datasets, evaluated by the wSen, wSpec, and wAcc scores. Overall, Kexp+ achieved wAcc of 0.820 and 0.882 on the two datasets respectively, while Annunziata's approach achieved 0.790 and 0.848. Kexp+ also achieves better overall classification results with regard to wSen and wSpec. In this experiment, we further compare the results level by level. Among the different levels, we found that the level-1 and level-4 columns in CCM-A and the low-level column in CCM-B have good classification results; the more levels are divided, the worse the classification results become. For Annunziata's method, the wAcc values for level 1, level 4, and the low level are 0.858, 0.843, and 0.877, while those of Kexp+ are 0.881, 0.864, and 0.914. In the low-level group of CCM-B, two images were wrongly classified: because of the interference of pathological features, some nerve fibers could not be traced completely.

4.3. Retinal Vessels Tortuosity Grading

To evaluate the tortuosity measurement performance of the proposed method on the RET-TORT dataset, we carried out a two-step experiment. In the first experiment, we use the manual annotations of RET-TORT to show the effectiveness of the tortuosity metrics without accounting for possible segmentation mistakes. For each tortuosity metric, we calculated the tortuosity for the artery and vein images, respectively, and then computed the Spearman correlation score of each metric against the tortuosity grading ground truth. The higher the correlation score, the more accurate the metric is for vessel tortuosity measurement. From the manual segmentation column of Table 3, we found that TC achieves the best score for artery tortuosity, while our Kexp is the best for veins; some tortuosity metrics (e.g., DCI, ICM, ACR) have low correlations and introduce errors.
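The Spearman correlation used here is the Pearson correlation of ranks; for tie-free data (as with RET-TORT's strictly ordered images) it reduces to the classic formula below, sketched without any library dependency:

```python
def spearman(x, y):
    """Spearman rank correlation rho = 1 - 6*sum(d_i^2) / (n*(n^2-1)),
    where d_i is the rank difference of the i-th sample (assumes no ties)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, idx in enumerate(order):
            r[idx] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n ** 2 - 1))
```

A metric whose scores increase monotonically with the ground-truth ordering gets rho = 1; a perfectly reversed ordering gets rho = -1.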
In the second experiment, we further validated our proposed method with automated segmentation results on the RET-TORT dataset, calculating the Spearman correlation scores of the tortuosity metrics on the segmentation results and comparing their performance. The automatic segmentation pipeline is illustrated in Figure 5: we first perform local phase filter enhancement on the original vessel images (Figure 5d–f), then segment the vessels with the Infinite Perimeter Active Contour with Hybrid Region (IPACHR) model (Figure 5g–i). After that, we skeletonize the segmentation results and keep the most important vessel centerline for further tortuosity measurement (Figure 5j–l). As shown in the automated segmentation column of Table 3, all the tortuosity metrics are calculated on this centerline, and the Spearman correlation score of each metric is computed against the tortuosity grading ground truth; our Kexp is likewise computed by applying our exponential curvature metric to the centerline. From the results, we found that our Kexp achieved the best Spearman scores in the automatic segmentation group. In addition, when comparing the manual and automated segmentation groups, the Spearman scores of the automated group are lower than the manual ones, because the IPACHR segmentation model introduces some segmentation errors. The automated segmentation result of our Kexp estimation process is also only slightly lower than the manual one, with a 1.9% difference ratio. Therefore, our proposed tortuosity estimation method is more accurate and robust than the other metrics.

4.4. Clinical Evaluation

In this part, we test the ability of our proposed method to distinguish healthy subjects from pathologies in clinical practice. The dataset used is CCM-A, which contains four conditions: patients with both diabetes and dry eye (23 patients, 117 images), with dry eye only (28 patients, 124 images), with diabetes only (24 patients, 120 images), and healthy (28 patients, 123 images). The numbers of subjects and images are shown in Table 4. From the fourth column, we can see that eye disease causes changes in curvature: patients with both diabetes and dry eye have the highest Kexp score, while the healthy group has the lowest. The Kexp score of the healthy group is lower than that of the eye disease groups with p < 0.01. This indicates that our exponential curvature measurement is potentially able to distinguish the healthy group from the disease groups well.

4.5. The Effectiveness of Curvilinear Structure Enhancement for Tortuosity Grading

Figure 6 shows intuitively why the proposed enhancement approach can improve tortuosity estimation. Figure 6a is the raw image; Figure 6b,c are the 3D image after exponential curvature estimation and the 2D curvature map after projection, respectively. Similarly, Figure 6d is the image obtained from the original by our enhancement method, and Figure 6e,f are the results of exponential curvature estimation based on the enhanced image. Traditional tortuosity metrics are calculated on a pre-segmented version of the original image; in contrast, our proposed tortuosity estimation is based on the curvature map, where tortuous structures have high values while background pixels are close to zero. Comparing Figure 6c and Figure 6f, we can see that Figure 6c is greatly affected by the background noise of the original image, whereas Figure 6f has a clearer structure and background.
We also obtained tortuosity grading results after removing the enhancement step from the proposed method, referred to as Kexp in Table 2. It can be observed that the tortuosity grading results of Kexp+ are better than those of Kexp: the proposed enhancement method boosts the final accuracy by 0.023 and 0.014 on the two fiber datasets, respectively.

4.6. The Effectiveness of Curvilinear Structure Enhancement for Fiber Segmentation

In this experiment, the effect of different enhancement methods on segmentation is also evaluated, using both our group's own segmentation method and a public one. IPACHR [13] is the traditional segmentation method proposed by our group, and U-Net [36] is a popular deep learning based method that is widely used in medical image segmentation. For the enhancement methods, we compare the Weighted Symmetry Filter (WSF) [13], the multi-scale vesselness filtering proposed by Frangi et al. [8], and the proposed curvilinear structure enhancement method for fiber segmentation.
To evaluate the enhancement methods under different segmentation scenarios, two segmentation methods are compared: the previous IPACHR method and U-Net. In medical image processing, U-Net is a deep convolutional neural network for segmentation that classifies every pixel of an image into a semantic label. Here, we apply U-Net to nerve fiber segmentation with and without our enhancement method, and evaluate the IPACHR model in the same two settings. Because U-Net is a trainable model, we randomly select 80% of the CCM-A images as the training set and use the remaining 20% as the testing set. The IPACHR model is validated on the same testing set.
We compare the enhancement methods using two evaluation criteria: the false discovery rate (FDR) and the sensitivity (Sen). Both are obtained by comparing the centerline predicted by the segmentation model with the ground truth centerline. Because the segmentation method extracts a one-pixel-wide centerline, a pixel of the extracted centerline is counted as a true positive as long as it lies within three pixels of the ground truth centerline.
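Under the three-pixel tolerance described above, the two criteria can be computed directly on the centerline pixel coordinates. A brute-force sketch (O(N·M) matching, fine for single images; names are illustrative):

```python
import math


def centerline_scores(pred, gt, tol=3.0):
    """Sen and FDR for one-pixel-wide centerlines with a distance tolerance.

    pred, gt: iterables of (row, col) pixel coordinates.
    A predicted pixel is a true positive if it lies within `tol` pixels
    of any ground-truth pixel, and vice versa for recovered GT pixels.
    """
    pred, gt = list(pred), list(gt)

    def near(p, pts):
        return any(math.dist(p, q) <= tol for q in pts)

    tp_pred = sum(near(p, gt) for p in pred)   # predictions matched to GT
    tp_gt = sum(near(g, pred) for g in gt)     # GT pixels recovered
    sen = tp_gt / len(gt) if gt else 0.0
    fdr = 1.0 - tp_pred / len(pred) if pred else 0.0
    return sen, fdr
```

Matching in both directions keeps the two scores independent: a prediction slightly offset from the ground truth still counts for both Sen and FDR, while a spurious detection far from any fiber raises only the FDR.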
The segmentation performance is reported in Table 5. Our proposed method improves Sen and decreases FDR for both the IPACHR and U-Net segmentation methods. Compared with the segmentation results on the raw image without enhancement, our method increases Sen by about 0.022 and decreases FDR by about 0.037 with IPACHR. For the U-Net model, Sen increases by 0.058 and FDR decreases by 0.090. Compared with the results of WSF, the proposed method also demonstrates promising results.
The last two images of Figure 7 show the fiber segmentation results obtained by U-Net on the original and the enhanced image, respectively. In the regions marked by red circles, our enhancement method clearly improves the segmentation performance: compared with the results on the raw image, our method detects fibers much better in the presence of noise, and it is also more sensitive to small fibers.

5. Conclusions

In ophthalmic images, tortuosity changes of curvilinear organs are associated with a number of ocular diseases, so the assessment of tortuosity plays a significant role in diagnosis. However, two problems remain. First, there is no general evaluation standard. Second, most tortuosity grading methods depend heavily on the results of image pre-processing; poor pre-processing results, as well as the tortuosity calculation itself, can introduce errors.
We employed a local phase tensor-based filter to improve the visibility of curvilinear structures, and introduced a new exponential curvature estimation based tortuosity metric for curvilinear structures. The proposed method has been validated on corneal confocal microscopy and retinal fundus datasets. The comparison of tortuosity estimation results on three datasets demonstrates that the proposed method outperforms the conventional measures.

Author Contributions

Conceptualization, Y.Z.; methodology, J.Z.; validation, B.C., D.Z.; formal analysis, H.C., B.C., D.Z.; investigation, D.Z., H.C., B.C.; original draft preparation, H.C. and B.C.; writing—review and editing, H.C., J.L. and Y.Z.; supervision, J.Z. and J.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by Zhejiang Provincial Natural Science Foundation of China (LZ19F010001), Zhejiang Provincial Key Research and Development Program (2020C030360), Chinese Postdoctoral Science Foundation (2018M640578), National Natural Science Foundation of China (61906181), Ningbo 2025 Science and Technology Major Projects (2019B10033, 2019B10061). Guizhou Provincial Joint Funds LH[2017]7007, Ningbo Natural Science Foundation (2018A610055), and Zhejiang Postdoctoral Scientific Research Project (ZJ2019167).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Annunziata, R.; Kheirkhah, A.; Aggarwal, S.; Hamrah, P.; Trucco, E. A fully automated tortuosity quantification system with application to corneal nerve fibres in confocal microscopy images. Med. Image Anal. 2016, 32, 216–232.
  2. Scarpa, F.; Zheng, X.; Ohashi, Y.; Ruggeri, A. Automatic evaluation of corneal nerve tortuosity in images from in vivo confocal microscopy. Investig. Ophthalmol. Vis. Sci. 2011, 52, 6404–6408.
  3. Liu, M.; Drexler, W. Optical coherence tomography angiography and photoacoustic imaging in dermatology. Photochem. Photobiol. Sci. 2019, 18, 945–962.
  4. van Sloun, R.J.; Demi, L.; Schalk, S.G.; Caresio, C.; Mannaerts, C.; Postema, A.W.; Molinari, F.; van der Linden, H.C.; Huang, P.; Wijkstra, H.; et al. Contrast-enhanced ultrasound tractography for 3D vascular imaging of the prostate. Sci. Rep. 2018, 8, 1–8.
  5. Edwards, K.; Pritchard, N.; Vagenas, D.; Russell, A.; Malik, R.A.; Efron, N. Standardizing corneal nerve fibre length for nerve tortuosity increases its association with measures of diabetic neuropathy. Diabet. Med. 2014, 31, 1205–1209.
  6. Kim, J.; Markoulli, M. Automatic analysis of corneal nerves imaged using in vivo confocal microscopy. Clin. Exp. Optom. 2018, 101, 147–161.
  7. Grisan, E.; Foracchia, M.; Ruggeri, A. A Novel Method for the Automatic Grading of Retinal Vessel Tortuosity. IEEE Trans. Med. Imaging 2008, 27, 310–319.
  8. Frangi, A.; Niessen, W.; Vincken, K.; Viergever, M. Multiscale vessel enhancement filtering. In Proceedings of the Medical Image Computing and Computer-Assisted Intervention, Cambridge, MA, USA, 11–13 October 1998; pp. 130–137.
  9. Bankhead, P.; Scholfield, C.N.; McGeown, J.G.; Curtis, T.M. Fast retinal vessel detection and measurement using wavelets and edge location refinement. PLoS ONE 2012, 7, e32435.
  10. Läthén, G.; Jonasson, J.; Borga, M. Blood vessel segmentation using multi-scale quadrature filtering. Pattern Recognit. Lett. 2010, 31, 762–767.
  11. Zhang, J.; Dashtbozorg, B.; Bekkers, E.; Pluim, J.; Duits, R.; ter Haar Romeny, B.M. Robust Retinal Vessel Segmentation via Locally Adaptive Derivative Frames in Orientation Scores. IEEE Trans. Med. Imaging 2016, 35, 2631–2644.
  12. Zhao, Y.; Zheng, Y.; Liu, Y.; Zhao, Y.; Luo, L.; Yang, S.; Na, T.; Wang, Y.; Liu, J. Automatic 2D/3D Vessel Enhancement in Multiple Modality Images Using a Weighted Symmetry Filter. IEEE Trans. Med. Imaging 2017, 37, 438–450.
  13. Zhao, Y.; Rada, L.; Chen, K.; Harding, S.; Zheng, Y. Automated Vessel Segmentation Using Infinite Perimeter Active Contour Model with Hybrid Region Information with Application to Retinal Images. IEEE Trans. Med. Imaging 2015, 34, 1797–1807.
  14. Hart, W.E.; Goldbaum, M.H.; Kube, P.; Nelson, M. Measurement and classification of retinal vascular tortuosity. Int. J. Med. Inform. 1999, 53, 239–252.
  15. Holmes, T.; Pellegrini, M.; Miller, C.; Epplin-Zapf, T.; Larkin, S.; Luccarelli, S.; Staurenghi, G. Automated software analysis of corneal micrographs for peripheral neuropathy. Investig. Ophthalmol. Vis. Sci. 2010, 51, 4480–4491.
  16. Goh, K.; Hsu, W.; Lee, M.; Wang, H. ADRIS: An Automatic Diabetic Retinal Image Screening System; Medical Data Mining and Knowledge Discovery; Physica-Verlag: Heidelberg, Germany, 2001; pp. 181–210.
  17. Heneghan, C.; Flynn, J.; O’Keefe, M.; Cahill, M. Characterization of changes in blood vessel width and tortuosity in retinopathy of prematurity using image analysis. Med. Image Anal. 2002, 6, 407–429.
  18. Bracher, D. Changes in peripapillary tortuosity of the central retinal arteries in newborns. Graefe’s Arch. Clin. Exp. Ophthalmol. 1982, 218, 211–217.
  19. Patašius, M.; Marozas, V.; Lukoševičius, A.; Jegelevičius, D. Evaluation of tortuosity of eye blood vessels using the integral of square of derivative of curvature. In Proceedings of the 3rd European Medical & Biological Engineering Conference, IFMBE European Conference on Biomedical Engineering EMBEC’05, Prague, Czech Republic, 20–25 November 2005; Volume 11.
  20. Bullitt, E.; Gerig, G.; Pizer, S.M.; Lin, W.; Aylward, S.R. Measuring tortuosity of the intracerebral vasculature from MRA images. IEEE Trans. Med. Imaging 2003, 22, 1163–1171.
  21. Chandrinos, K.; Pilu, M.; Fisher, R.; Trahanias, P. Image Processing Techniques for the Quantification of Atherosclerotic Changes. In Proceedings of the VIII Mediterranean Conference on Medical and Biological Engineering and Computing, Limassol, Cyprus, 14–17 June 1998.
  22. Bribiesca, E.; Bribiescacontreras, G. 2D tree object representation via the slope chain code. Pattern Recognit. 2014, 47, 3242–3253.
  23. Kallinikos, P.; Berhanu, M.; O’Donnell, C.; Boulton, A.; Efron, N.; Malik, R. Corneal nerve tortuosity in diabetic patients with neuropathy. Investig. Ophthalmol. Vis. Sci. 2004, 45, 418–422.
  24. Smedby, O.; Högman, N.; Nilsson, S.; Erikson, U.; Olsson, A.; Walldius, G. Two-dimensional tortuosity of the superficial femoral artery in early atherosclerosis. J. Vasc. Res. 1993, 30, 181–191.
  25. Bribiesca, E. A measure of tortuosity based on chain coding. Pattern Recognit. 2013, 46, 716–724.
  26. Annunziata, R.; Kheirkhah, A.; Aggarwal, S.; Cavalcanti, B.; Hamrah, P.; Trucco, E. Tortuosity classification of corneal nerves images using a multiple-scale-multiple-window approach. In Proceedings of the MICCAI Workshop OMIA, Boston, MA, USA, 14 September 2014; pp. 113–120.
  27. Mehrgardt, P.; Zandavi, S.M.; Poon, S.K.; Kim, J.; Markoulli, M.; Khushi, M. U-Net Segmented Adjacent Angle Detection (USAAD) for Automatic Analysis of Corneal Nerve Structures. Data 2020, 5, 37.
  28. Zhao, Y.; Zhang, J.; Pereira, E.; Zheng, Y.; Su, P.; Xie, J.; Zhao, Y.; Shi, Y.; Qi, H.; Liu, J.; et al. Automated Tortuosity Analysis of Nerve Fibers in Corneal Confocal Microscopy. IEEE Trans. Med. Imaging 2020.
  29. Scarpa, F.; Ruggeri, A. Development of Clinically Based Corneal Nerves Tortuosity Indexes. In Proceedings of the MICCAI Workshop OMIA, Québec City, QC, Canada, 14 September 2017; pp. 219–226.
  30. Felsberg, M.; Sommer, G. The monogenic signal. IEEE Trans. Signal Process. 2001, 49, 3136–3144.
  31. Hacihaliloglu, I.; Rasoulian, A.; Abolmaesumi, P.; Rohling, R. Local Phase Tensor Features for 3D Ultrasound to Statistical Shape+Pose Spine Model Registration. IEEE Trans. Med. Imaging 2014, 33, 2167–2179.
  32. Bekkers, E.; Duits, R.; Berendschot, T.; ter Haar Romeny, B. A multi-orientation analysis approach to retinal vessel tracking. J. Math. Imaging Vis. 2014, 49, 583–610.
  33. Franken, E.; Duits, R. Crossing-preserving coherence-enhancing diffusion on invertible orientation scores. Int. J. Comput. Vis. 2009, 85, 253–278.
  34. Bekkers, E.; Zhang, J.; Duits, R.; ter Haar Romeny, B.M. Curvature Based Biomarkers for Diabetic Retinopathy via Exponential Curve Fits in SE(2). In Proceedings of the MICCAI: International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 113–120.
  35. Oliveira-Soto, L.; Efron, N. Morphology of corneal nerves using confocal microscopy. Cornea 2001, 20, 374–384.
  36. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation. In Proceedings of the MICCAI: International Conference on Medical Image Computing and Computer-Assisted Intervention, Munich, Germany, 5–9 October 2015.
Figure 1. Corneal nerve fibers (top) and retinal vessels (bottom) with low, middle and high tortuosity levels.
Figure 2. Overview of the proposed exponential curvature estimation on corneal nerve fibers and retinal vessels: (a) nerve fiber image; (b) enhanced nerve fiber image; (c) nerve fiber curvature map in 3D orientation score space with color-coded; (d) nerve fiber curvature map in the 2D space; (e) vessel image; (f) enhanced vessel image; (g) 3D vessel curvature map; (h) 2D vessel curvature map.
Figure 3. Validation of exponential curvature estimation on (a) a synthetic image with SNR = 1; (b) the lifted data representation; (c) the curvature map via best exponential curve fit; (d) the projected curvature map.
Figure 4. Nerve fiber images with four tortuosity levels, and their manual label.
Figure 5. RET-TORT dataset images and pre-processing steps; (a–c) are original vessel images with manually labeled ground truth; (d–f) are images enhanced by local phase tensor filters; (g–i) are segmentation results by IPACHR; (j–l) are skeletonizations of the segmentations.
Figure 6. Comparison results of the exponential curvature estimation (b,c) and (e,f) from original image (a) and our enhanced image (d).
Figure 7. Comparison of nerve fiber segmentation results between raw image and our enhanced image with U-Net.
Table 1. Summary of tortuosity metrics.
No.  Tortuosity Metric                        Symbol
 1   Tortuosity Density [7]                   TD
 2   Absolute Curvature [14]                  AC
 3   Squared Curvature [14]                   SQC
 4   Weighted Absolute Curvature [15]         WAC
 5   Absolute Direction Angle Change [16]     AAC
 6   Arc-Chord Length Ratio [17]              ACR
 7   Chord Length [18]                        CHD
 8   Curve Length [18]                        CUR
 9   Directional Change of a Line [19]        DCI
10   Inflection Count Metric [20]             ICM
11   Mean Direction Angle Change [21]         MAC
12   Slope Chain Coding [22]                  SCC
13   Tortuosity Coefficient [23]              TC
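Several of the metrics in Table 1 reduce to simple geometry on a sampled centerline. For instance, the Arc-Chord Length Ratio (ACR) divides the arc length of the curve by the straight-line distance between its endpoints; a sketch on a list of (x, y) samples (the function name is illustrative):

```python
import math


def arc_chord_ratio(points):
    """ACR: polyline arc length / endpoint chord length (>= 1.0).

    points: ordered list of (x, y) samples along one fiber or vessel.
    """
    arc = sum(math.dist(points[i], points[i + 1])
              for i in range(len(points) - 1))
    chord = math.dist(points[0], points[-1])
    return arc / chord
```

A perfectly straight segment gives 1.0, and the ratio grows as the curve winds, which is why ACR serves as a baseline tortuosity measure in Tables 3 and 4.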
Table 2. Comparison results of tortuosity classification on CCM-A and CCM-B.
                         CCM-A Dataset                                     CCM-B Dataset
                   Level-1  Level-2  Level-3  Level-4  Overall     Low      Mid      High     Overall
Annunziata [1]
  Sen               0.718    0.644    0.660    0.707    0.663     0.783    0.743    0.761     0.766
  Spec              0.867    0.790    0.783    0.857    0.806     0.904    0.860    0.864     0.912
  Acc               0.858    0.780    0.759    0.843    0.790     0.877    0.826    0.847     0.848
Kexp based SVM
  Sen               0.731    0.654    0.669    0.713    0.675     0.811    0.758    0.786     0.784
  Spec              0.878    0.802    0.794    0.865    0.816     0.914    0.865    0.887     0.921
  Acc               0.865    0.788    0.775    0.853    0.797     0.909    0.833    0.870     0.868
Kexp+ based SVM
  Sen               0.742    0.674    0.683    0.734    0.715     0.824    0.765    0.790     0.810
  Spec              0.898    0.819    0.811    0.881    0.852     0.929    0.873    0.891     0.929
  Acc               0.881    0.799    0.793    0.864    0.820     0.914    0.841    0.883     0.882
Table 3. Spearman Correlation scores of tortuosity metrics for retinal vessels with manual and automated segmentation.
Metric     Manual Segmentation       Automated Segmentation     Error (%)
           Arteries      Veins       Arteries      Veins
WAC        0.919         0.814       0.877         0.768        8.8
DCI        0.787         0.589       0.734         0.621        8.4
TC         0.949         0.853       0.919         0.812        7.1
CHD        0.801         0.662       0.756         0.638        6.9
AC         0.922         0.837       0.893         0.801        6.5
ICM        0.684         0.575       0.661         0.542        5.6
CUR        0.813         0.701       0.784         0.677        5.3
SCC        0.850         0.770       0.827         0.745        4.8
SQC        0.925         0.826       0.901         0.812        3.8
MAC        0.820         0.814       0.801         0.795        3.8
ACR        0.792         0.656       0.812         0.629        3.4
TD         0.890         0.760       0.912         0.753        2.9
AAC        0.838         0.695       0.841         0.677        2.1
Kexp       0.945         0.868       0.928         0.857        1.9
Error = |avg(SC_manual) − avg(SC_automated)| × 100%.
Table 4. Different metrics in control subjects and other diseases cases.
Metric         Healthy          Dry Eye          Diabetes         Dry Eye and Diabetes
AC             5.73 ± 4.89      5.56 ± 3.43      4.12 ± 2.23      4.72 ± 3.69
WAC (×10³)     0.47 ± 0.18      0.48 ± 0.17      0.50 ± 0.20      0.52 ± 0.26
AAC            14.93 ± 10.18    12.74 ± 7.95     13.51 ± 8.23     13.36 ± 9.75
ACR            0.98 ± 0.01      0.97 ± 0.01      0.97 ± 0.01      0.97 ± 0.01
CHD            73.31 ± 20.63    69.16 ± 19.06    62.02 ± 16.80    63.74 ± 21.72
CUR            74.85 ± 21.03    70.92 ± 19.48    64.11 ± 17.09    65.47 ± 21.63
DCI (×10⁴)     0.33 ± 0.26      0.37 ± 0.35      0.44 ± 0.42      0.42 ± 0.32
ICM            5.82 ± 1.64      5.60 ± 1.60      5.13 ± 1.50      5.23 ± 1.68
MAC            0.64 ± 4.16      0.80 ± 3.29      0.05 ± 3.67      0.36 ± 3.90
SQC (×10³)     0.30 ± 0.14      0.32 ± 0.14      0.36 ± 0.15      0.36 ± 0.17
SCC (×10)      0.36 ± 0.09      0.36 ± 0.07      0.35 ± 0.08      0.34 ± 0.07
TC (×10²)      0.42 ± 0.22      0.41 ± 0.13      0.43 ± 0.17      0.45 ± 0.22
TD             0.09 ± 0.03      0.10 ± 0.01      0.10 ± 0.01      0.10 ± 0.01
Kexp           0.17 ± 0.06      0.21 ± 0.04      0.24 ± 0.07      0.25 ± 0.05
Table 5. Segmentation results of different enhancement methods with IPACHR and U-Net on nerve fiber images.
Enhancement                   IPACHR                            U-Net
                              FDR             Sen               FDR             Sen
Raw                           0.394 ± 0.007   0.738 ± 0.010     0.428 ± 0.008   0.749 ± 0.013
Vesselness filtering [8]      0.375 ± 0.006   0.753 ± 0.004     0.367 ± 0.010   0.552 ± 0.019
WSF [13]                      0.363 ± 0.007   0.759 ± 0.004     0.353 ± 0.009   0.788 ± 0.004
Ours                          0.357 ± 0.010   0.760 ± 0.005     0.338 ± 0.007   0.807 ± 0.005

Citation

Chen, H.; Chen, B.; Zhang, D.; Zhang, J.; Liu, J.; Zhao, Y. Automatic Tortuosity Estimation of Nerve Fibers and Retinal Vessels in Ophthalmic Images. Appl. Sci. 2020, 10, 4788. https://doi.org/10.3390/app10144788