
Landmark-Assisted Anatomy-Sensitive Retinal Vessel Segmentation Network

Haifeng Zhang, Yunlong Qiu, Chonghui Song and Jiale Li
College of Information Science and Engineering, Northeastern University, Shenyang 110819, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Diagnostics 2023, 13(13), 2260; https://doi.org/10.3390/diagnostics13132260
Submission received: 31 May 2023 / Revised: 28 June 2023 / Accepted: 30 June 2023 / Published: 4 July 2023
(This article belongs to the Special Issue AI/ML-Based Medical Image Processing and Analysis)

Abstract

Automatic retinal vessel segmentation is important for assisting clinicians in diagnosing ophthalmic diseases. Existing deep learning methods remain limited in preserving vessel connectivity and detecting thin vessels. To this end, we propose a novel anatomy-sensitive retinal vessel segmentation framework that preserves instance connectivity and improves the segmentation accuracy of thin vessels. The framework uses TransUNet as its backbone and utilizes self-supervised extracted landmarks to guide network learning. TransUNet is designed to simultaneously benefit from the advantages of convolution in extracting local features and of multi-head attention in modeling global dependencies. In particular, we introduce contrastive-learning-based self-supervised extraction of anatomical landmarks to guide the model to focus on learning the morphological information of retinal vessels. We evaluated the proposed method on three public datasets: DRIVE, CHASE-DB1, and STARE. Our method demonstrates promising results on the DRIVE and CHASE-DB1 datasets, outperforming state-of-the-art methods by improving the F1 scores by 0.36% and 0.31%, respectively. On the STARE dataset, our method achieves results close to the best-performing methods. Visualizations of the results highlight the potential of our method in maintaining topological continuity and identifying thin blood vessels. Furthermore, we conducted a series of ablation experiments to validate the effectiveness of each module and examined the impact of image resolution on the results.

1. Introduction

Retinal vessel segmentation is an important diagnostic method for detecting hypertension, arteriosclerosis, and retinal diseases [1]. However, the retinal vascular structure is extremely complex, and the distribution of vascular pixel intensity is unbalanced. Furthermore, due to the low contrast between the blood vessel pixels and the background, the thin blood vessels located at the ends of the vascular structures are difficult to completely segment from the background. Accurate retinal vessel segmentation has always been an extremely challenging task.
In recent years, a great deal of work has focused on automatically segmenting retinal blood vessels. These methods are broadly classified into two groups: unsupervised and supervised. Unsupervised methods are suitable for image segmentation with little annotation information; commonly used algorithms include matched filtering [2], multi-threshold vessel detection [3], and mathematical morphology [4]. However, lacking the supervision of prior knowledge, unsupervised methods tend to detect false edges and achieve lower performance. In contrast, supervised methods use human-annotated data to train networks to learn the feature information hidden in images. Currently, state-of-the-art semantic segmentation methods employ deep learning for pixel-level prediction. U-Net has shown excellent performance in medical image segmentation due to its encoder-decoder structure, and many U-Net variants have been designed for retinal vessel segmentation. Jin et al. [5] combined deformable convolution with U-Net to detect retinal blood vessels. Wu et al. [6] incorporated U-Net into a generative adversarial network and performed retinal vessel segmentation in an end-to-end manner. Although these methods have improved segmentation accuracy to a certain extent, vessel connectivity is difficult to guarantee because contextual information is used insufficiently, and segmenting thin vessels remains difficult. Clinically, thin blood vessels and vascular connectivity provide an indispensable reference for diagnosing vascular diseases. Therefore, it is imperative to explore new retinal vessel segmentation techniques.
To tackle the above-mentioned problem, this paper proposes an anatomy-sensitive retinal vessel segmentation framework that can jointly improve the performance of retinal vessel segmentation by exploiting the latent association among multiple modules. The backbone network adopts the improved U-Net network. To take full advantage of semantic information, we design a context relation module, which effectively combines the strong local modeling ability of convolution and the advantages of transformers in long-range modeling, and maps the features of various scales of the encoder to the decoder through skip pathways. In addition, we also design a sub-network for landmark detection, which learns a set of landmarks from retinal images using heatmap regression, to guide the network segmentation direction. The main contributions of this paper are as follows.
  • TransUNet is well suited to anatomy-aware retinal vessel segmentation due to its structure. We use it as the segmentation backbone to benefit from the advantages of convolutional layers in extracting local features and of multi-head self-attention in modeling global relations. Meanwhile, we redesign the skip connections in TransUNet so that deep semantics can be decoded more easily and accurately.
  • A self-supervised landmark-assisted segmentation framework is proposed to further improve the accuracy of retinal vessel segmentation. In particular, we propose a contrastive learning strategy to improve the plausibility and accuracy of the landmark representation of the anatomical topology. We utilize landmarks that sparsely represent retinal vessel morphology to guide the model towards learning content rather than style, the latter being unhelpful for segmentation. Furthermore, landmarks enrich the explicit description of retinal vascular anatomy, which helps the model learn from fewer samples.
  • We implement the proposed network on the DRIVE, CHASE-DB1, and STARE datasets, and extensive experimental results show that our method achieves state-of-the-art performance in most cases.

2. Related Work

Deep convolutional neural networks have become the most popular method for retinal vessel segmentation due to their excellent performance in medical image segmentation tasks. Among them, U-Net [7] and its variants are the most widely used as the backbone. A symmetric encoder-decoder structure and skip-connected architecture from encoding paths to decoding paths lead U-Net to achieve efficient information flow. Benefiting from an architecture that integrates local and global information from low-level and high-level feature maps, U-Net exhibits better performance in medical image analysis. However, although U-Net achieves multi-scale contextual information aggregation, it is still insufficient to cope with thin and irregular retinal vascular structures. Multiple studies have been devoted to addressing this issue.
Wang et al. [8] improved the standard U-Net and designed a two-channel encoder to extract information about retinal blood vessels; the improved encoder includes a context channel and a spatial channel to capture a larger receptive field and more spatial information. Li et al. [9] cascade multiple small U-Net networks following an iterative principle to learn the structural features of retinal blood vessels: each small U-Net takes as input the coarse segmentation probability map output by the previous one, and segmentation accuracy is improved by iterating from coarse to fine. Despite the excellent representational power of convolution, CNN-based methods often exhibit limitations in modeling explicit long-range relationships because of the inherent locality of convolution operations. The transformer module shows outstanding performance in capturing long-distance dependencies in natural language processing and is gradually being introduced into image processing. Cao et al. [10] designed the Swin-Unet network for medical image segmentation; the network adopts a symmetrical structure similar to U-Net, with both the encoder and decoder built from pure transformer modules. However, a pure transformer network requires a large amount of computation and is difficult to train. Xia et al. [11] proposed a combined CNN and transformer method to segment the optic cup and optic disc in the retina: local features are first obtained by convolution, the extracted features are passed through a multi-scale convolution module and a transformer module to obtain multi-scale and global feature information, respectively, and fusing these two parts improves the segmentation of the optic cup and optic disc. Chen et al. [12] integrated the transformer module into U-Net to achieve multi-organ segmentation: convolutions first extract low-level features, and global interactions are then modeled through the transformer module. The framework effectively combines the strong local modeling capability of convolutions with the advantages of transformers in long-range modeling, enabling finer segmentation of organ details.
Accurate detection of landmark points is a critical step in medical imaging, as it provides valuable information for subsequent image analysis. Coordinate regression is the most typical approach: landmark coordinates serve as the regression targets, and the network predicts a set of landmark locations directly from image space. Sun et al. [13] proposed a cascade of deep convolutional networks that improves the detection accuracy of face landmarks through coarse-to-fine regression. Zhang et al. [14] combined multi-task learning with a regression model for face landmark detection and used cascaded deep convolutional networks to predict face and landmark locations in a coarse-to-fine manner. However, the direct mapping from original images to landmark coordinates is a complex nonlinear problem that is not easily learned by a network. Compared with numerical landmark coordinates, heatmaps provide richer spatial supervision, which improves detection accuracy to a certain extent. Kowalski et al. [15] proposed DAN, a heatmap-based cascaded deep convolutional network in which each stage refines the detected landmark positions and passes them to the next stage for iterative correction. Shi et al. [16] designed a stacked hourglass network with offset learning to refine the predicted landmarks; it effectively combines heatmap and coordinate information to achieve accurate facial landmark detection.
Our proposed method focuses on improving the ability of the model to learn anatomical structures, thus achieving higher segmentation accuracy.

3. Methods and Materials

Our objective is to develop a deep learning model for segmenting blood vessel pixels in retinal images. To this end, we propose a framework, depicted in Figure 1, which comprises two main components: (i) an enhanced U-Net for precise segmentation of fundus blood vessels, and (ii) landmark detection used as an auxiliary task to further improve segmentation accuracy.

3.1. Datasets

We use three public datasets for experiments, namely DRIVE, CHASE-DB1, and STARE. To improve the accuracy of segmentation, we implemented a data augmentation technique that utilized random flipping, rotation, and scaling.
The DRIVE dataset includes 40 color fundus images, 7 of which show pathological abnormalities. Each image is 584 × 565 pixels. The last twenty images of the dataset are used to train the network, and the first twenty are used to test it. Each image in the test set has manual segmentations from two experts; we use the first expert's annotation as the retinal vessel label.
The CHASE-DB1 dataset contains 28 retinal images. They were taken from the eyes of 14 children. All images in the dataset are 996 × 960 pixels. Unlike the DRIVE dataset, there are no fixed training and test set partitions for CHASE-DB1. We randomly placed 20 retinal images in the training set and 8 images in the test set.
The STARE dataset has a total of 20 images. All images are 700 × 605 pixels. Since the STARE dataset does not have a pre-separated training set and test set, we employed leave-one-out cross-validation to verify the feasibility of our proposed method.
We improved upon the common approach of completely random data augmentation. First, we defined a sliding window with dimensions 0.6 times the width and height of the original image (i.e., the window area is 0.36 times that of the original image). Then, using this sliding window, we extracted 9 slices of the image with a stride of 1. Next, we selected the slice with the highest proportion of foreground from these 9 slices and performed other operations (such as flipping, contrast adjustment, brightness modification) before adding it to the training data. This approach helps to alleviate the issues of class imbalance or foreground–background imbalance to some extent.
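A minimal sketch of this foreground-maximizing crop selection, assuming the nine candidate windows come from a 3 × 3 grid of positions (the exact stride is not specified beyond the description above, so the grid is an assumption) and that images and masks are NumPy arrays:

```python
import numpy as np

def best_foreground_crop(image: np.ndarray, mask: np.ndarray, scale: float = 0.6, grid: int = 3):
    """Pick the crop (0.6x the image size) whose vessel mask has the largest
    foreground proportion among grid x grid candidate window positions.

    image: H x W x C fundus image; mask: H x W binary vessel ground truth.
    """
    h, w = mask.shape
    ch, cw = int(h * scale), int(w * scale)
    # Candidate top-left corners spread evenly over the valid range (assumed 3 x 3 grid).
    ys = np.linspace(0, h - ch, grid).astype(int)
    xs = np.linspace(0, w - cw, grid).astype(int)
    best, best_ratio = None, -1.0
    for y in ys:
        for x in xs:
            crop_mask = mask[y:y + ch, x:x + cw]
            ratio = crop_mask.mean()  # proportion of vessel (foreground) pixels
            if ratio > best_ratio:
                best_ratio = ratio
                best = (image[y:y + ch, x:x + cw], crop_mask)
    return best  # flipping / contrast / brightness jitter would follow
```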

3.2. TransUNet

Medical images have the unique advantage of having explicit contextual priors, due to the anatomical properties of tissues. Therefore, we propose to consider the long-range dependencies of pixels while also extracting local features. As illustrated in Figure 1, we introduce a transformer into the U-Net architecture. The convolutional layer of U-Net ensures that the model remains locally sensitive to the image, while the transformer module allows the model to capture global features of the image.
First, some symbols are defined. The convolutional encoder is $\mathcal{E} = \{ E_{\frac{H}{2^{n_d}} \times \frac{W}{2^{n_d}}} \}_{n_d=0}^{N_d}$, where $H$ and $W$ are the height and width of the input of the convolution operator, respectively, and $N_d$ is the number of down-sampling operations $f_d$. That is, convolution operators are grouped by the resolution of their input. Similarly, the convolutional decoder is $\mathcal{D} = \{ D_{\frac{H \cdot 2^{n_u}}{2^{N_d}} \times \frac{W \cdot 2^{n_u}}{2^{N_d}}} \}_{n_u=0}^{N_u}$. The transformer module is denoted by $\mathcal{T}$, and a feature map with $C$ channels is denoted by $M$.

3.3. Convolutional Encoder

The convolutional encoder of our method is the same as the standard U-Net encoder. Considering the loss of tiny-vessel information caused by down-sampling and the risk of over-fitting with overly deep models, $N_d$ is set to 2. That is, the convolution operators are divided into three groups, i.e., the feature maps have three resolutions. Denoting the original image as $X$, we have
$$M_{enc,1} = E_{H \times W}(X) \in \mathbb{R}^{H \times W \times C_1},\quad
M_{enc,2} = E_{\frac{H}{2} \times \frac{W}{2}}\big(f_d(M_{enc,1})\big) \in \mathbb{R}^{\frac{H}{2} \times \frac{W}{2} \times C_2},\quad
M_{enc,3} = E_{\frac{H}{4} \times \frac{W}{4}}\big(f_d(M_{enc,2})\big) \in \mathbb{R}^{\frac{H}{4} \times \frac{W}{4} \times C_3}.$$
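A minimal PyTorch sketch of this three-level convolutional encoder (the channel widths C1-C3 are illustrative assumptions, not values reported by the authors):

```python
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Two 3x3 convolutions, as in a standard U-Net encoder stage.
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class ConvEncoder(nn.Module):
    """E = {E_{HxW}, E_{H/2xW/2}, E_{H/4xW/4}} with N_d = 2 down-samplings f_d."""
    def __init__(self, in_ch=3, c1=64, c2=128, c3=256):
        super().__init__()
        self.e1, self.e2, self.e3 = conv_block(in_ch, c1), conv_block(c1, c2), conv_block(c2, c3)
        self.f_d = nn.MaxPool2d(2)  # down-sampling operator f_d

    def forward(self, x):
        m1 = self.e1(x)              # M_enc,1: H x W x C1
        m2 = self.e2(self.f_d(m1))   # M_enc,2: H/2 x W/2 x C2
        m3 = self.e3(self.f_d(m2))   # M_enc,3: H/4 x W/4 x C3
        return m1, m2, m3
```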

3.4. Transformer Module

To address the training difficulty and resource-intensive nature of transformers, we place the transformer module after the convolutional encoder. The transformer therefore receives input at a smaller resolution, which reduces the required computational resources. Moreover, the feature maps fed into the transformer already contain deep semantic information, making it easier to train. Incorporating the transformer module in this way ensures that the model captures both local and global information, since the transformer mines long-range dependencies from feature maps in which local features have already been extracted.
First, $M_{enc,3}$ is decomposed into $N_P^2$ patches, i.e., $M_{enc,3} \rightarrow \{ M_{enc,3}^{n_P} \in \mathbb{R}^{\frac{H}{N_P} \times \frac{W}{N_P} \times C_3} \}_{n_P=1}^{N_P^2}$. The input of the transformer is
$$Z_0 = \{ z_{pos}^{n_P} + z_{pat}^{n_P} \}_{n_P=1}^{N_P^2},$$
where $z_{pos}^{n_P}$ and $z_{pat}^{n_P}$ are the position embedding and feature embedding of $M_{enc,3}^{n_P}$, respectively, with $z_{pat}^{n_P} = f_{pf}(M_{enc,3}^{n_P})$ and $f_{pf}$ denoting patch-wise flattening.
The transformer module $\mathcal{T}$ is composed of $N_t$ transformer layers; each layer $T_{n_t}$ consists of a multi-head self-attention (MHSA) block, a multi-layer perceptron (MLP) block, and layer normalization (LN) blocks. The output of the $n_t$-th transformer layer is $Z_{n_t} = T_{n_t}(Z_{n_t-1})$; specifically,
$$Z'_{n_t} = \mathrm{MHSA}\big(\mathrm{LN}(Z_{n_t-1})\big) + Z_{n_t-1},\qquad
Z_{n_t} = \mathrm{MLP}\big(\mathrm{LN}(Z'_{n_t})\big) + Z'_{n_t}.$$
Finally, the output sequence $Z_{N_t}$ of $\mathcal{T}$ is reconstructed into $M_t$ by the patch merging layer $f_{pm}$, i.e., $M_t = f_{pm}(Z_{N_t}) \in \mathbb{R}^{\frac{H}{4} \times \frac{W}{4} \times C_3}$.
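A sketch of one pre-norm transformer layer matching the equations above, built from standard PyTorch modules (the embedding size, head count, and MLP width are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TransformerLayer(nn.Module):
    """Z'_{n_t} = MHSA(LN(Z_{n_t-1})) + Z_{n_t-1};  Z_{n_t} = MLP(LN(Z'_{n_t})) + Z'_{n_t}."""
    def __init__(self, dim=256, heads=8, mlp_ratio=4):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.mhsa = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, dim * mlp_ratio), nn.GELU(),
                                 nn.Linear(dim * mlp_ratio, dim))

    def forward(self, z):
        h = self.ln1(z)
        z = z + self.mhsa(h, h, h, need_weights=False)[0]  # MHSA block with residual connection
        z = z + self.mlp(self.ln2(z))                      # MLP block with residual connection
        return z
```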

3.5. Convolutional Decoder

We elaborately design a convolutional decoder $\mathcal{D}$ for progressive decoding. Similar to the convolutional encoder, the convolutional decoding layers are divided into three groups according to resolution, i.e., $N_u = 2$. Up-sampling $f_u$ is bilinear interpolation. $D_{\frac{H}{4} \times \frac{W}{4}}$ is fed by $M_{dec,0}$, the channel-wise concatenation of $M_t$ and $M_{gm}$, where $M_{gm}$ is the Gaussian map of the landmarks. In a traditional U-Net decoder, $D_{\frac{H}{2} \times \frac{W}{2}}$ would be fed by the channel-wise concatenation of $f_u(M_{dec,1})$ and $M_{enc,2}$. Considering the local detail lost when the transformer models global relations, we additionally fuse $f_d(M_{enc,1})$, which contains more texture information, into $D_{\frac{H}{2} \times \frac{W}{2}}$. In particular, we enhance the sensitivity of the convolutional encoder to anatomical topology through contrastive learning, so $M_{enc,1}$ is considered to represent dense local shape details. In this way, when global and local information are fused in $D_{\frac{H}{2} \times \frac{W}{2}}$, they are constrained by the shape texture information, which avoids decoding information that violates the anatomical topology. Formally,
$$M_{dec,1} = D_{\frac{H}{4} \times \frac{W}{4}}(M_t, M_{gm}) \in \mathbb{R}^{\frac{H}{4} \times \frac{W}{4} \times C_3},\quad
M_{dec,2} = D_{\frac{H}{2} \times \frac{W}{2}}\big(f_u(M_{dec,1}), M_{enc,2}, f_d(M_{enc,1})\big) \in \mathbb{R}^{\frac{H}{2} \times \frac{W}{2} \times C_2},\quad
Y_{pre} = D_{H \times W}\big(f_u(M_{dec,2}), M_{enc,1}\big) \in \mathbb{R}^{H \times W \times 1},$$
where $Y_{pre}$ is the label predicted by the model.
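A minimal sketch of the decoder data flow in the equations above, where f_u is bilinear up-sampling and feature maps are fused by channel-wise concatenation (channel widths and the number of channels of M_gm are assumptions carried over from the encoder sketch):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def conv_block(in_ch, out_ch):
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                         nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True))

class ConvDecoder(nn.Module):
    """D = {D_{H/4xW/4}, D_{H/2xW/2}, D_{HxW}} with N_u = 2 up-samplings."""
    def __init__(self, c1=64, c2=128, c3=256, gm_ch=1):
        super().__init__()
        self.d3 = conv_block(c3 + gm_ch, c3)                 # D_{H/4xW/4}: fuses M_t and M_gm
        self.d2 = conv_block(c3 + c2 + c1, c2)               # D_{H/2xW/2}: adds M_enc,2 and f_d(M_enc,1)
        self.d1 = nn.Sequential(conv_block(c2 + c1, c1), nn.Conv2d(c1, 1, 1))  # D_{HxW}

    def forward(self, m_t, m_gm, m1, m2):
        f_u = lambda t: F.interpolate(t, scale_factor=2, mode="bilinear", align_corners=False)
        f_d = lambda t: F.max_pool2d(t, 2)
        m_dec1 = self.d3(torch.cat([m_t, m_gm], dim=1))
        m_dec2 = self.d2(torch.cat([f_u(m_dec1), m2, f_d(m1)], dim=1))
        y_pre = torch.sigmoid(self.d1(torch.cat([f_u(m_dec2), m1], dim=1)))
        return y_pre
```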

3.6. Self-Supervised Landmark Detection

To address the difficulty of segmenting thin blood vessels from the background in retinal images due to their high complexity and low contrast, we propose a novel approach that incorporates landmark points to assist the network in segmentation. This approach represents a departure from previous methods that relied solely on implicit feature vectors learned from images by the network for pixel-by-pixel segmentation. The introduction of landmark detection represents a critical component of our segmentation network. Considering the small amount of retinal vessel data and the high cost of manual annotation, we propose unsupervised learning of a set of ordered landmarks from dense retinal vessel images under the framework of contrastive learning to guide the model for segmentation. The detected landmarks sparsely characterize the key features of dense anatomical topology and thus can represent the intrinsic structure of fundus vessels. For the model to extract accurate and robust landmark points, we propose a contrastive learning strategy and introduce a series of optimization objectives to train the model. Landmarks are generated based on the heatmap of the convolutional encoder, as shown in the landmark detector part of Figure 1.

Coordinate Extraction for Landmarks

We extract landmarks by activating the highest-weighted pixel in each channel of the feature map. Landmarks are extracted in the feature space $M_{enc,3}^{*}$ spanned by $\mathcal{E}$. First, we adopt spatial softmax normalization to convert every channel of $M_{enc,3}^{*}$ into a probability response map $M_{prob}^{*}$. Then, the site with the highest weight in $M_{prob}^{*}$ is activated by soft-argmax as a landmark. Formally, the feature map $M_{enc,3}^{*,[c]}$ of the $c$-th channel is probabilized as
$$M_{prob}^{*,[c]} = \left\{ \frac{\exp\big(M_{enc,3}^{*,[c,r]}\big)}{\sum_{r' \in (\frac{H}{4} \times \frac{W}{4})} \exp\big(M_{enc,3}^{*,[c,r']}\big)} \right\}_{r=1}^{\frac{H}{4} \times \frac{W}{4}}.$$
The set of landmark coordinates is
$$R^{*} = \big\{ \mathrm{soft\text{-}argmax}\big(M_{prob}^{*,[c]}\big) \big\}_{c=1}^{C_3},$$
where $R^{*}$ denotes the landmarks.
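A sketch of the spatial softmax and soft-argmax steps, implemented as a differentiable expectation over the probability map (the temperature-free form and the normalized output coordinates are assumptions; the paper does not specify these details):

```python
import torch

def soft_argmax_landmarks(feat: torch.Tensor) -> torch.Tensor:
    """feat: B x C3 x h x w feature map M*_enc,3; returns B x C3 x 2 landmark coordinates.

    Each channel is spatially soft-maxed into a probability map M*_prob, and the
    landmark is the probability-weighted average of pixel coordinates (soft-argmax).
    """
    b, c, h, w = feat.shape
    prob = torch.softmax(feat.flatten(2), dim=-1).view(b, c, h, w)   # spatial softmax per channel
    ys = torch.linspace(0, 1, h, device=feat.device)
    xs = torch.linspace(0, 1, w, device=feat.device)
    # Expected (normalized) coordinates under the probability map.
    y = (prob.sum(dim=3) * ys).sum(dim=2)   # B x C3
    x = (prob.sum(dim=2) * xs).sum(dim=2)   # B x C3
    return torch.stack([x, y], dim=-1)      # R*: one (x, y) landmark per channel
```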
We utilize a consistency loss $\mathcal{L}_{cst}$ to guarantee the quality of the landmarks. $\mathcal{L}_{cst}$ is defined as
$$\mathcal{L}_{cst} = dist_{cst}\big(R^{Y}, \mathcal{A}_{Y}^{-1}(R^{\mathcal{A}_Y(Y)})\big),$$
where $dist_{cst}$ is the L2 distance and $\mathcal{A}_Y$ is an affine transformation. The landmarks are stable and reliable when the landmarks extracted from $Y$ are consistent with those extracted from the affine-transformed $\mathcal{A}_Y(Y)$ after the inverse affine transformation, as described in [17].
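A sketch of this consistency check, assuming the landmarks of the two views are paired by channel index (consistent with the channel-wise extraction above) and that the 2 × 3 affine matrix acts on homogeneous coordinates in the same coordinate system as the landmarks:

```python
import torch

def consistency_loss(lm_y: torch.Tensor, lm_ty: torch.Tensor, affine: torch.Tensor) -> torch.Tensor:
    """L_cst = dist(R^Y, A^{-1}(R^{A(Y)})), with dist the L2 distance.

    lm_y:  B x K x 2 landmarks extracted from Y.
    lm_ty: B x K x 2 landmarks extracted from the affine-transformed image A(Y).
    affine: 2 x 3 affine matrix A (linear part plus translation column).
    """
    rot, t = affine[:, :2], affine[:, 2]
    # Invert the affine map: x = A_lin^{-1} (x' - t).
    back = (lm_ty - t) @ torch.inverse(rot).T
    return ((lm_y - back) ** 2).sum(dim=-1).mean()
```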

3.7. Landmark Auxiliary Guided Segmentation

The total loss for model training is
$$\mathcal{L}_{total} = \lambda_1 \mathcal{L}_{seg} + \lambda_2 \mathcal{L}_{adv} + \lambda_3 (\mathcal{L}_{ctr} + \mathcal{L}_{cst}) + \lambda_4 \mathcal{L}_{lmd},$$
where $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ are the balance coefficients of the corresponding losses, and $\mathcal{L}_{ctr}$ and $\mathcal{L}_{cst}$ are the contrastive and consistency losses for landmark learning. $\mathcal{L}_{seg}$ is the pixel-level loss for the segmentation task,
$$\mathcal{L}_{seg} = \mathrm{BCE}(Y_{pre}, Y) + \mathrm{DICE}(Y_{pre}, Y),$$
where $\mathrm{BCE}$ and $\mathrm{DICE}$ are the binary cross-entropy (BCE) loss and Dice loss, respectively. $\mathcal{L}_{adv}$ is the adversarial loss, serving as a global loss for segmentation,
$$\mathcal{L}_{adv} = \mathbb{E}_{Y}\big[\log D(Y)\big] + \mathbb{E}_{Y_{pre}}\big[\log\big(1 - D(Y_{pre})\big)\big],$$
where $D$ is the discriminator. $\mathcal{L}_{seg}$ and $\mathcal{L}_{adv}$ constrain the segmentation locally and globally, respectively. $\mathcal{L}_{lmd}$ is the landmark-based auxiliary loss based on optimal transport theory. We use the landmarks obtained from the ground truth $Y$ as pseudo-labels, denoted $R^{Y}$, and define
$$\mathcal{L}_{lmd} = \big\| R - R^{Y} \big\|_2^2,$$
where $R$ is the set of landmarks obtained from $X$. $\mathcal{L}_{lmd}$ guides the convolutional encoder to learn more effective information.
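A sketch of how these loss terms can be combined during training; the discriminator outputs and the contrastive/consistency losses are passed in as precomputed values because their internals are not specified here, and the λ values follow Section 3.8 (in practice the discriminator and the segmentation network would be optimized alternately on the adversarial term):

```python
import torch
import torch.nn.functional as F

def dice_loss(pred, target, eps=1e-6):
    inter = (pred * target).sum()
    return 1 - (2 * inter + eps) / (pred.sum() + target.sum() + eps)

def total_loss(y_pre, y, d_real, d_fake, l_ctr, l_cst, lm_pred, lm_pseudo,
               lambdas=(0.2, 0.3, 0.4, 0.1)):
    """L_total = l1*L_seg + l2*L_adv + l3*(L_ctr + L_cst) + l4*L_lmd.

    d_real / d_fake: discriminator outputs on the ground truth Y and the prediction Y_pre.
    lm_pred / lm_pseudo: landmarks R from X and pseudo-label landmarks R^Y from Y.
    """
    l_seg = F.binary_cross_entropy(y_pre, y) + dice_loss(y_pre, y)
    # Adversarial term written as in the text above.
    l_adv = torch.log(d_real + 1e-8).mean() + torch.log(1 - d_fake + 1e-8).mean()
    l_lmd = ((lm_pred - lm_pseudo) ** 2).sum(dim=-1).mean()
    l1, l2, l3, l4 = lambdas
    return l1 * l_seg + l2 * l_adv + l3 * (l_ctr + l_cst) + l4 * l_lmd
```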
Further, we map the landmark information into a Gaussian map $M_{gm}$ that is easier to embed in the network, and feed it to the convolutional decoder in order to boost the performance of the decoder. The Gaussian map is defined as
$$M_{gm} = \exp\left( -\frac{1}{2\sigma^2} \big\| R - R^{Y} \big\|^2 \right),$$
where the standard deviation $\sigma$ is set to 0.7 for all the experiments. Then, $M_{gm}$ is concatenated channel-wise with $M_t$ as the input of $\mathcal{D}$.
As $\mathcal{D}$ accepts input composed of $M_t$ and $M_{gm}$, it benefits from both the global and local high-level semantic information extracted earlier. In particular, $M_{gm}$ is an explicit, sparse, and further disentangled representation of the anatomical topology, and it provides the model with prior topological constraints that enrich the semantics of the data.
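One way to rasterize the landmarks into a Gaussian map at the H/4 × W/4 resolution before concatenating it with M_t; a per-pixel heatmap rendering centered at the landmarks is assumed here (the formula above is written in terms of landmark coordinates, so this spatial rendering is one interpretation), with σ = 0.7 as stated:

```python
import torch

def gaussian_map(landmarks: torch.Tensor, h: int, w: int, sigma: float = 0.7) -> torch.Tensor:
    """landmarks: B x K x 2 normalized (x, y) coordinates; returns a B x 1 x h x w map
    where each pixel holds exp(-d^2 / (2 sigma^2)) for its nearest landmark (d in pixels)."""
    b, k, _ = landmarks.shape
    ys = torch.arange(h, dtype=torch.float32, device=landmarks.device).view(1, 1, h, 1)
    xs = torch.arange(w, dtype=torch.float32, device=landmarks.device).view(1, 1, 1, w)
    lx = landmarks[..., 0].view(b, k, 1, 1) * (w - 1)
    ly = landmarks[..., 1].view(b, k, 1, 1) * (h - 1)
    d2 = (xs - lx) ** 2 + (ys - ly) ** 2                         # squared distance to each landmark
    m_gm = torch.exp(-d2 / (2 * sigma ** 2)).max(dim=1, keepdim=True).values
    return m_gm  # concatenated channel-wise with M_t as the decoder input
```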

3.8. Implementation Details

Considering that there are many tiny blood vessels in the retinal vascular structure, excessively deep convolutional layers may cause some features that are beneficial for segmentation to be ignored. Therefore, we take only the first three layers of the U-Net network and integrate the transformer module at the third layer. Apart from that, the encoders in the semantic segmentation network share the same weights as those under the contrastive learning framework. After conducting experiments, we set $\lambda_1$, $\lambda_2$, $\lambda_3$, and $\lambda_4$ to 0.2, 0.3, 0.4, and 0.1, respectively.
During training, instead of patches, we input the entire image into the model to generate the retinal vessel prediction map. We adopt Adam to optimize the deep model with an initial learning rate of 0.001 and a weight decay of 0.0005. Due to GPU memory constraints, we input only one retinal vessel image per iteration and resize all training images to 512 × 512 pixels. All models used in the experiments are implemented in Python with PyTorch and run on a computer equipped with an RTX 3090 GPU.
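A sketch of this optimization setup; the model, loss construction, and epoch count are placeholders and assumptions, while the Adam hyper-parameters, the single-image iterations, and the 512 × 512 resizing follow the text above:

```python
import torch
import torch.nn.functional as F

def train(model, dataset, epochs=100, device="cuda"):
    """Whole-image training: one 512 x 512 retinal image per iteration,
    Adam with lr = 0.001 and weight decay = 0.0005."""
    opt = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)
    model.to(device).train()
    for epoch in range(epochs):
        for image, label in dataset:                       # batch size 1 due to GPU memory
            image = F.interpolate(image.unsqueeze(0), size=(512, 512),
                                  mode="bilinear", align_corners=False).to(device)
            label = F.interpolate(label.unsqueeze(0).float(), size=(512, 512),
                                  mode="nearest").to(device)
            y_pre = model(image)
            loss = F.binary_cross_entropy(y_pre, label)    # placeholder for the full L_total
            opt.zero_grad()
            loss.backward()
            opt.step()
```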

4. Results and Discussions

4.1. Evaluation Metrics

The retinal vessel segmentation problem can be viewed as a binary classification: all pixels in a retinal image are classified as vascular or non-vascular. Accordingly, four quantities are derived from the classification results. Vascular pixels correctly classified as vascular are counted as true positives (TP). Non-vascular pixels correctly detected as non-vascular are counted as true negatives (TN). Non-vascular pixels falsely detected as vascular are counted as false positives (FP). Vascular pixels misclassified as non-vascular are counted as false negatives (FN).
To validate the feasibility of the designed network, we use four metrics to evaluate it: accuracy (Acc), sensitivity (Se), specificity (Sp), and F1 score. Among them, the F1 score, the harmonic mean of precision and sensitivity (recall), dominates the performance evaluation.
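For reference, the four metrics can be computed from the confusion counts as follows (a plain NumPy sketch):

```python
import numpy as np

def vessel_metrics(pred: np.ndarray, gt: np.ndarray):
    """pred, gt: binary vessel maps of equal shape (1 = vessel, 0 = background)."""
    tp = np.sum((pred == 1) & (gt == 1))
    tn = np.sum((pred == 0) & (gt == 0))
    fp = np.sum((pred == 1) & (gt == 0))
    fn = np.sum((pred == 0) & (gt == 1))
    acc = (tp + tn) / (tp + tn + fp + fn)
    se = tp / (tp + fn)                      # sensitivity (recall)
    sp = tn / (tn + fp)                      # specificity
    precision = tp / (tp + fp)
    f1 = 2 * precision * se / (precision + se)
    return acc, se, sp, f1
```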

4.2. Comparison with the State-of-the-Art Methods

4.2.1. Quantitative Analysis

We compare our method with other state-of-the-art methods on the DRIVE, CHASE-DB1, and STARE datasets. The evaluation results are shown in Table 1. Our method achieves the leading F1 scores on the DRIVE and CHASE-DB1 datasets and a competitive F1 score on STARE. On the DRIVE dataset, the Acc, Se, Sp, and F1 scores obtained by our proposed method are 0.9577, 0.8147, 0.9862, and 0.8329, respectively. The method of Jiang et al. [18] obtained the highest Acc and Sp scores, but only 0.7839 and 0.8246 for Se and F1, both lower than our results, and our Sp is only 0.0028 lower than theirs, a negligible gap. On the CHASE-DB1 dataset, we obtain Acc, Se, Sp, and F1 of 0.9754, 0.8110, 0.9881, and 0.8222, respectively. The best performance metrics obtained by other methods are 0.9670, 0.8329, 0.9813, and 0.8191, respectively. Although the Acc, Se, and Sp scores produced by our network are not all optimal, they remain at high levels compared with other methods. On the STARE dataset, our method achieves high Acc, Sp, and F1 results while attaining the highest Se score. Overall, these results show that our network has strong vessel detection ability and generalizes well across databases.

4.2.2. Qualitative Analysis

Figure 2 shows retinal vessel segmentation results from several representative methods and from our proposed method. Our method preserves almost all the structures of the retinal vessels and maintains the connectivity of the vessel tree. In addition, the model clearly segments from the background thin vessels that other methods fail to detect, especially at the retinal edge and vessel ends. To show the differences between the predictions of other models and ours more clearly, we visualize local segmentation results and color-label the different segmentation cases. Blue pixels represent false negatives, i.e., undetected vessel regions; red pixels represent false positives, indicating over-segmentation of blood vessels. The patches in Figure 2 show that the predicted segmentation maps of the other methods contain more blue pixels, which further demonstrates the advantage of our model in detecting thin blood vessels.
Some segmentation examples are given in Figure 3, which contains locally enlarged images of the original retinal images, the corresponding ground truth values, and segmentation prediction maps obtained by several other methods and our proposed method. As can be seen from Figure 3, our algorithm can detect thin blood vessels more clearly and ensure connectivity between blood vessels.
These experimental data demonstrate that our model can more accurately distinguish vascular and non-vascular pixels and preserve vascular structure better.

4.3. Ablation Experiments

In this paper, we introduce the TransUNet structure and self-supervised landmark detection to improve retinal vessel segmentation performance. To test the effectiveness of these modules, ablation experiments are performed on DRIVE, STARE and CHASE-DB1. We start with the original U-Net method to evaluate how these modules affect segmentation performance. The self-supervised landmark detection is denoted by SLD. The results are shown in Table 2. For simplicity, we only visualize a few of the most representative instance images.

4.3.1. Effect of TransUNet

To demonstrate the feasibility of the proposed TransUNet structure, we compare the baseline U-Net with the transformer-embedded U-Net under the same configuration and environment. On the DRIVE dataset, the TransUNet structure achieves 0.9543, 0.7874, 0.9860, and 0.8148 for Acc, Se, Sp, and F1, respectively, compared with 0.9536, 0.7653, 0.9811, and 0.8078 for the baseline model. Performance on the other two datasets is also improved. Additionally, the visualization in Figure 4 shows that the TransUNet structure helps the network learn feature information that preserves the connectivity of blood vessels.

4.3.2. Effect of Self-Supervised Landmark Detection

To justify the use of landmark points to guide network segmentation, in Figure 5, we show an example visualization including the original retinal image and the style-transformed image, ground truth, and the affine-transformed ground-truth image of the DRIVE dataset.
The affine transformation matrix used is
$$A = \begin{bmatrix} 0.90411 & 0.17613 & 0 \\ 0.05871 & 0.82583 & 0 \end{bmatrix}.$$
According to Table 2, it can be observed that the segmentation results with the addition of the self-supervised cues show improvements on all three datasets to varying degrees. Furthermore, in the visualization results shown in Figure 6, the segmentation guided by the self-supervised cues demonstrates superior performance in segmenting small blood vessels.
Therefore, our proposed landmark detection module can help us detect thin blood vessels more accurately.

4.4. Effect of Image Size

As is customary in most works, we initially resized all training images to dimensions of 512 × 512 pixels. However, inspired by the findings in work [34] regarding the impact of image size on deep learning, we conducted an additional evaluation. We resized the images to a dimension of 256 × 256 pixels and performed training accordingly. As shown in Table 3, the adjusted F1 scores and other metrics exhibited improvements. Moreover, as illustrated in Figure 7, the visualizations demonstrate that the segmented vessels became more intact.

5. Conclusions

In this paper, we construct a novel retinal vessel segmentation framework that addresses vessel breakage and the low segmentation accuracy of thin vessels. U-Net acts as the basic network. The designed TransUNet structure combines contextual information at different scales during encoding and decoding, which effectively preserves the connectivity of blood vessels. The detected landmarks sparsely represent the anatomical features of the retinal vessels, and segmentation guided by these landmarks helps the network detect thin vessels. Experimental results on three public datasets demonstrate that our network outperforms existing mainstream networks. In future work, we will explore further methods to integrate into the retinal vessel segmentation framework.

Author Contributions

Conceptualization, H.Z., Y.Q. and C.S.; methodology, H.Z., Y.Q. and J.L.; software, H.Z.; validation, H.Z., Y.Q. and J.L.; formal analysis, H.Z., Y.Q. and C.S.; investigation, H.Z.; resources, H.Z.; data curation, Y.Q.; writing—original draft preparation, J.L.; writing—review and editing, H.Z.; visualization, Y.Q.; supervision, C.S. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

No additional data are available.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Grélard, F.; Baldacci, F.; Vialard, A.; Domenger, J.P. New methods for the geometrical analysis of tubular organs. Med. Image Anal. 2017, 42, 89–101.
  2. Saroj, S.K.; Kumar, R.; Singh, N.P. Fréchet PDF based Matched Filter Approach for Retinal Blood Vessels Segmentation. Comput. Methods Programs Biomed. 2020, 194, 105490.
  3. Mapayi, T.; Owolawi, P.A. Automatic Retinal Vascular Network Detection using Multi-Thresholding Approach based on Otsu. In Proceedings of the 2019 International Multidisciplinary Information Technology and Engineering Conference (IMITEC), Vanderbijlpark, South Africa, 21–22 November 2019; pp. 1–5.
  4. Welfer, D.; Scharcanski, J.; Marinho, D.R. Fovea center detection based on the retina anatomy and mathematical morphology. Comput. Methods Programs Biomed. 2011, 104, 397–409.
  5. Jin, Q.; Meng, Z.; Pham, T.D.; Chen, Q.; Wei, L.; Su, R. DUNet: A deformable network for retinal vessel segmentation. Knowl. Based Syst. 2019, 178, 149–162.
  6. Wu, C.; Zou, Y.; Yang, Z. U-GAN: Generative Adversarial Networks with U-Net for Retinal Vessel Segmentation. In Proceedings of the 2019 14th International Conference on Computer Science & Education (ICCSE), Toronto, ON, Canada, 19–21 August 2019; pp. 642–646.
  7. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional Networks for Biomedical Image Segmentation; Springer: Cham, Switzerland, 2015.
  8. Wang, B.; Wang, S.; Qiu, S.; Wei, W.; Wang, H.; He, H. CSU-Net: A Context Spatial U-Net for Accurate Blood Vessel Segmentation in Fundus Images. IEEE J. Biomed. Health Inform. 2021, 25, 1128–1138.
  9. Li, L.; Verma, M.; Nakashima, Y.; Nagahara, H.; Kawasaki, R. IterNet: Retinal Image Segmentation Utilizing Structural Redundancy in Vessel Networks. In Proceedings of the 2020 IEEE Winter Conference on Applications of Computer Vision (WACV), Snowmass Village, CO, USA, 1–5 March 2020; pp. 3645–3654.
  10. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-Unet: Unet-like Pure Transformer for Medical Image Segmentation. In Computer Vision—ECCV 2022 Workshops; Springer: Cham, Switzerland, 2023.
  11. Xia, X.; Huang, Z.; Huang, Z.; Shu, L.; Li, L. A CNN-Transformer Hybrid Network for Joint Optic Cup and Optic Disc Segmentation in Fundus Images. In Proceedings of the 2022 International Conference on Computer Engineering and Artificial Intelligence (ICCEAI), Shijiazhuang, China, 22–24 July 2022; pp. 482–486.
  12. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. TransUNet: Transformers Make Strong Encoders for Medical Image Segmentation. arXiv 2021, arXiv:2102.04306.
  13. Sun, Y.; Wang, X.; Tang, X. Deep Convolutional Network Cascade for Facial Point Detection. In Proceedings of the 2013 IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 3476–3483.
  14. Zhang, K.; Zhang, Z.; Li, Z.; Qiao, Y. Joint Face Detection and Alignment Using Multitask Cascaded Convolutional Networks. IEEE Signal Process. Lett. 2016, 23, 1499–1503.
  15. Kowalski, M.; Naruniec, J.; Trzcinski, T. Deep Alignment Network: A Convolutional Neural Network for Robust Face Alignment. In Proceedings of the 2017 IEEE Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Honolulu, HI, USA, 21–26 July 2017; pp. 2034–2043.
  16. Shi, H.; Wang, Z. Improved Stacked Hourglass Network with Offset Learning for Robust Facial Landmark Detection. In Proceedings of the 2019 9th International Conference on Information Science and Technology (ICIST), Kopaonik, Serbia, 10–13 March 2019; pp. 58–64.
  17. Siarohin, A.; Lathuiliere, S.; Tulyakov, S.; Ricci, E.; Sebe, N. Animating arbitrary objects via deep motion transfer. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, CA, USA, 15–20 June 2019; pp. 2377–2386.
  18. Jiang, Y.; Tan, N.; Peng, T.; Zhang, H. Retinal Vessels Segmentation Based on Dilated Multi-Scale Convolutional Neural Network. IEEE Access 2019, 7, 76342–76352.
  19. Orlando, J.I.; Prokofyeva, E.; Blaschko, M.B. A Discriminatively Trained Fully Connected Conditional Random Field Model for Blood Vessel Segmentation in Fundus Images. IEEE Trans. Biomed. Eng. 2017, 64, 16–27.
  20. Zhang, J.; Chen, Y.; Bekkers, E.; Wang, M.; Dashtbozorg, B.; ter Haar Romeny, B.M. Retinal vessel delineation using a brain-inspired wavelet transform and random forest. Pattern Recognit. 2017, 69, 107–123.
  21. Srinidhi, C.L.; Aparna, P.; Rajan, J. A visual attention guided unsupervised feature learning for robust vessel delineation in retinal images. Biomed. Signal Process. Control 2018, 44, 110–126.
  22. Yan, Z.; Yang, X.; Cheng, K.-T. Joint Segment-Level and Pixel-Wise Losses for Deep Learning Based Retinal Vessel Segmentation. IEEE Trans. Biomed. Eng. 2018, 65, 1912–1923.
  23. Xu, R.; Jiang, G.; Ye, X.; Chen, Y. Retinal vessel segmentation via multiscaled deep-guidance. In Pacific Rim Conference on Multimedia; Springer: Berlin, Germany, 2018; pp. 158–168.
  24. Zhuang, J. LadderNet: Multi-Path Networks Based on U-Net for Medical Image Segmentation. arXiv 2018, arXiv:1810.07810.
  25. Alom, M.Z.; Yakopcic, C.; Hasan, M.; Taha, T.M.; Asari, V.K. Recurrent residual U-Net for medical image segmentation. J. Med. Imaging 2019, 6, 14006.
  26. Guo, S.; Wang, K.; Kang, H.; Zhang, Y.; Gao, Y.; Li, T. BTS-DSN: Deeply Supervised Neural Network with Short Connections for Retinal Vessel Segmentation. Int. J. Med. Inform. 2018, 126, 105–113.
  27. Wang, B.; Qiu, S.; He, H. Dual encoding U-Net for retinal vessel segmentation. Med. Image Comput. Comput. Assist. Interv. 2019, 22, 84–92.
  28. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. UNet++: Redesigning Skip Connections to Exploit Multiscale Features in Image Segmentation. IEEE Trans. Med. Imaging 2020, 39, 1856–1867.
  29. Xu, R.; Ye, X.; Jiang, G.; Liu, T.; Tanaka, S. Retinal Vessel Segmentation via a Semantics and Multi-Scale Aggregation Network. In Proceedings of the ICASSP 2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Barcelona, Spain, 4–8 May 2020.
  30. Wang, D.; Haytham, A.; Pottenburgh, J.; Saeedi, O.; Tao, Y. Hard attention net for automatic retinal vessel segmentation. IEEE J. Biomed. Health Inform. 2020, 24, 3384–3396.
  31. Mou, L.; Zhao, Y.; Fu, H.; Liu, Y.; Cheng, J.; Zheng, Y.; Su, P.; Yang, J.; Chen, L.; Frangi, A.F.; et al. CS2-Net: Deep learning segmentation of curvilinear structures in medical imaging. Med. Image Anal. 2021, 67, 101874.
  32. Zhang, Y.; He, M.; Chen, Z.; Hu, K.; Li, X.; Gao, X. Bridge-Net: Context-involved U-Net with patch-based loss weight mapping for retinal blood vessel segmentation. Exp. Syst. Appl. 2022, 195, 116526.
  33. Liu, Y.; Shen, J.; Yang, L.; Bian, G.; Yu, H. ResDO-UNet: A deep residual network for accurate retinal vessel segmentation from fundus images. Biomed. Signal Process. Control 2023, 79, 104087.
  34. Rukundo, O. Effects of image size on deep learning. Electronics 2023, 12, 985.
Figure 1. The pipeline of the proposed method.
Figure 2. Examples of retinal vessel segmentation for three datasets (Welfer 2011 [4]; Wang 2019 [27]).
Figure 3. Locally magnified view of the segmentation results: (a) raw fundus image, (b) ground truth, (c) U-Net, (d) Jin 2019 [5], (e) Zhou 2020 [28], (f) our method.
Figure 4. Illustration of vessel connectivity: (a) the retinal fundus patches, (b) ground truth, (c) segmentation output from U-Net, (d) segmentation output from TransUNet. First row and second row: DRIVE dataset, third row: CHASE-DB1 dataset, fourth row: STARE dataset.
Figure 5. Example of transformation: (a) original retinal image, (b) style-transformed retinal image, (c) ground truth, (d) ground truth image after affine transformation.
Figure 6. Illustration of thin vessel segmentation results: (a) ground truth, (b) segmentation results of the network without the self-supervised landmark detection module, (c) segmentation results of our method.
Figure 7. Sample segmentation results for small blood vessels in images of different sizes: (a) original retinal image, (b) ground truth, (c) segmentation results for an input image of size 256 × 256 pixels, (d) segmentation results for an input image of size 512 × 512 pixels.
Table 1. Performance comparison with state-of-the-art methods on the DRIVE, CHASE-DB1 and STARE datasets.
Values are Acc / Se / Sp / F1 for each dataset; "-" denotes a result not reported.

Method | Year | DRIVE (Acc/Se/Sp/F1) | CHASE-DB1 (Acc/Se/Sp/F1) | STARE (Acc/Se/Sp/F1)
U-Net [7] | 2015 | 0.9536/0.7653/0.9811/0.8078 | 0.9604/0.7870/0.9777/0.7828 | 0.9588/0.7639/0.9796/0.7817
Orlando et al. [19] | 2017 | 0.9454/0.7897/0.9684/0.7857 | 0.9467/0.7565/0.9655/0.7332 | 0.9519/0.7680/0.9738/0.7644
Zhang et al. [20] | 2017 | 0.9466/0.7861/0.9712/0.7953 | 0.9502/0.7644/0.9716/0.7581 | 0.9547/0.7882/0.9729/0.7815
Srinidhi et al. [21] | 2018 | 0.9589/0.8644/0.9667/0.7607 | 0.9474/0.8297/0.9663/0.7189 | 0.9502/0.8325/0.9746/0.7698
Yan et al. [22] | 2018 | 0.9542/0.7653/0.9818/- | 0.9610/0.7633/0.9809/- | 0.9612/0.7581/0.9846/-
Xu et al. [23] | 2018 | 0.9557/0.8026/0.9780/0.8189 | 0.9613/0.7899/0.9785/0.7856 | 0.9499/0.8196/0.9661/0.7982
Zhuang et al. [24] | 2018 | 0.9561/0.7856/0.9810/0.8202 | 0.9536/0.7978/0.9818/0.8031 | -
Alom et al. [25] | 2019 | 0.9556/0.7792/0.9813/0.8171 | 0.9634/0.7756/0.9820/0.7928 | 0.9712/0.8292/0.9862/0.8475
Jin et al. [5] | 2019 | 0.9566/0.7963/0.9800/0.8237 | 0.9610/0.8155/0.9752/0.7883 | 0.9641/0.7595/0.9878/0.8143
Jiang et al. [18] | 2019 | 0.9709/0.7839/0.9890/0.8246 | 0.9721/0.7839/0.9894/0.8062 | 0.9781/0.8249/0.9904/0.8482
Guo et al. [26] | 2019 | 0.9561/0.7891/0.9804/0.8249 | 0.9627/0.7888/0.9801/0.7983 | -
Wang et al. [27] | 2019 | 0.9567/0.7940/0.9816/0.8270 | 0.9661/0.8074/0.9821/0.8037 | -
Zhou et al. [28] | 2020 | 0.9535/0.7473/0.9835/0.8035 | 0.9506/0.6361/0.9894/0.7390 | 0.9605/0.7776/0.9832/0.8132
Xu et al. [29] | 2020 | 0.9557/0.7953/0.9807/0.8252 | 0.9650/0.8455/0.9769/0.8138 | 0.9590/0.8378/0.9741/0.8308
Wang et al. [30] | 2020 | 0.9581/0.7991/0.9813/0.8293 | 0.9670/0.8329/0.9813/0.8191 | 0.9673/0.8186/0.9844/-
Li et al. [9] | 2020 | 0.9573/0.7735/0.9838/0.8205 | 0.9760/0.7969/0.9881/0.8072 | 0.9701/0.7715/0.9886/0.8146
Mou et al. [31] | 2021 | 0.9553/0.8154/0.9757/0.8228 | 0.9651/0.8329/0.9784/0.8141 | 0.9670/0.8396/0.9813/0.8420
Zhang et al. [32] | 2022 | 0.9565/0.785/0.9618/0.82 | - | 0.9668/0.8002/0.9864/0.8289
Liu et al. [33] | 2023 | 0.9561/0.7985/0.9791/0.8229 | 0.9672/0.8020/0.9794/0.8236 | 0.9635/0.8039/0.9836/0.8315
Proposed | 2023 | 0.9577/0.8147/0.9862/0.8329 | 0.9754/0.8110/0.9881/0.8222 | 0.9635/0.8518/0.9829/0.8450
Table 2. Ablation studies on the DRIVE, CHASE-DB1 and STARE datasets.
Values are Acc / Se / Sp / F1 for each dataset.

Method | DRIVE (Acc/Se/Sp/F1) | CHASE-DB1 (Acc/Se/Sp/F1) | STARE (Acc/Se/Sp/F1)
U-Net | 0.9536/0.7653/0.9811/0.8078 | 0.9604/0.7870/0.9777/0.7828 | 0.9588/0.7639/0.9796/0.7817
TransUNet | 0.9543/0.7874/0.9860/0.8148 | 0.9681/0.7994/0.9878/0.8079 | 0.9610/0.7670/0.9879/0.8057
TransUNet + SLD | 0.9577/0.8147/0.9862/0.8329 | 0.9754/0.8110/0.9881/0.8222 | 0.9635/0.8518/0.9829/0.8450
Table 3. Segmentation results for images of different sizes on the DRIVE, CHASE-DB1 and STARE datasets.
Values are Acc / Se / Sp / F1 for each dataset.

Size | DRIVE (Acc/Se/Sp/F1) | CHASE-DB1 (Acc/Se/Sp/F1) | STARE (Acc/Se/Sp/F1)
512 × 512 | 0.9577/0.8147/0.9862/0.8329 | 0.9754/0.8110/0.9881/0.8222 | 0.9635/0.8518/0.9829/0.8450
256 × 256 | 0.9688/0.8188/0.9869/0.8455 | 0.9685/0.8155/0.9889/0.8243 | 0.9641/0.8188/0.9888/0.8466
