Sensors
  • Article
  • Open Access

17 June 2022

MTPA_Unet: Multi-Scale Transformer-Position Attention Retinal Vessel Segmentation Network Joint Transformer and CNN

College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
Authors to whom correspondence should be addressed.
This article belongs to the Section Sensing and Imaging

Abstract

Retinal vessel segmentation is extremely important for risk prediction and treatment of many major diseases. Therefore, accurate segmentation of blood vessel features from retinal images can help assist physicians in diagnosis and treatment. Convolutional neural networks are good at extracting local feature information, but the receptive field of the convolutional block is limited. Transformer, on the other hand, performs well in modeling long-distance dependencies. Therefore, in this paper, a new network model, MTPA_Unet, is designed from the perspective of extracting connections between local detailed features and supplementing them with long-distance dependency information, and is applied to the retinal vessel segmentation task. MTPA_Unet uses multi-resolution image input to enable the network to extract information at different levels. The proposed TPA module not only captures long-distance dependencies but also focuses on the location information of vessel pixels to facilitate capillary segmentation. The Transformer is combined with the convolutional neural network in a serial approach, and the original MSA module is replaced by the TPA module to achieve finer segmentation. Finally, the network model is evaluated and analyzed on three recognized retinal image datasets, DRIVE, CHASE DB1, and STARE. The evaluation metrics were 0.9718, 0.9762, and 0.9773 for accuracy; 0.8410, 0.8437, and 0.8938 for sensitivity; and 0.8318, 0.8164, and 0.8557 for Dice coefficient. Compared with existing retinal image segmentation methods, the proposed method achieved better vessel segmentation performance and results on all of the publicly available fundus datasets tested.

1. Introduction

Automatic segmentation of the retinal vessels plays an important role in the clinical evaluation and diagnosis of many ocular-related diseases. Since the fundus is the only part of the human body where arterioles and capillaries can be directly and centrally observed with the naked eye, morphological information of these retinal vessels, such as thickness, curvature, and density, can reflect the occurrence of disease to some extent [,]. Studies have shown that the thickness and curvature of retinal vessels are associated to some extent with hypertension and diabetes mellitus. For example, primary hypertension causes spasms and narrowing of the retinal vessels, thickening of the vessel walls, and, in severe cases, exudates, hemorrhages, and cotton wool spots []. As can be seen in Figure 1, the fundus of the patient’s eye shows symptoms such as exudates and hemorrhagic spots to varying degrees compared to normal fundus images. The degree of fundus lesions is closely related to the duration of hypertension and its severity. Hypertensive retinopathy shows general arterial stenosis to varying degrees in different disease stages, as shown in Figure 1d, which shows stenosis of the entire venous tree. Diabetic retinopathy is manifested as retinal hemorrhage, exudation, and thinning or even blocking of small blood vessels, leading to retinal anemia and hypoxia, thus promoting the appearance of regenerated blood vessels. The existence of new capillaries is an important sign of the further deterioration of diabetic retinopathy. At the same time, diabetic retinopathy and macular degeneration are also the main causes of vision loss []. Therefore, early detection and diagnosis of these lesions is an important means of preventing the onset and progression of the disease.
Figure 1. Retinal fundus images. (a) normal fundus image; (b) background diabetic retinopathy, pigment epithelial atrophy; (c) choroidal lesion; (d) narrowing, entire venous tree.
However, in current clinical practice, manual examination is usually relied upon to obtain information on these retinal fundus lesions. This task is not only time-consuming and laborious but also requires a high level of medical skill from the physician. Therefore, the automatic and accurate segmentation of retinal vessels from retinal fundus images to assist physicians in examination and diagnosis is very important and meaningful work. Many researchers have applied machine learning methods to retinal vessel segmentation tasks, such as using high-pass filtering for vessel enhancement [] and Gabor wavelet filters to segment retinal vessels []. Some researchers have also applied the EM maximum likelihood estimation algorithm [] and the GMM expectation-maximization algorithm [] to the classification of retinal vessel and background pixels. All of these methods have contributed to retinal vessel segmentation, but further improvements are needed in segmentation accuracy and efficiency.
With the rapid development of the field of computer vision, deep learning techniques have played an important role in image processing. Compared with traditional machine learning methods, deep convolutional neural networks [] have a high capability for extracting effective features from data []. Based on the classical UNet [], FCN [], and ResNet [], researchers have proposed many improved convolutional neural network methods. UNet++ [] uses multiple layers of skip connections to capture features at different levels on top of the encoder–decoder structure. Wang et al. [] proposed dual encoding UNet (DEUNet), which significantly enhanced the ability of the network to segment retinal vessels in an end-to-end and pixel-to-pixel manner. Res-UNet [] added a weighted attention mechanism to the UNet model to better discriminate between retinal vessel and background pixel features.
Although convolutional neural networks (CNN) have a strong feature extraction capability, they still suffer from the limited receptive field of the convolutional kernel. Therefore, a CNN is limited to processing local information and cannot focus on global contextual information. In addition, the difficulty of the retinal vessel segmentation task is how to perform accurate pixel-level classification rather than image-level classification. To solve the above problems, some researchers have introduced the Transformer [] framework to computer vision tasks. Vision Transformer (ViT) [] pioneered the use of a pure Transformer architecture to handle image recognition tasks. Based on the ViT architecture, DeiT [] introduced several training strategies that enable ViT to be trained on ImageNet datasets as well. Pyramid Vision Transformer (PVT) [], which inherits the advantages of CNN and Transformer, uses a convolution-free backbone to handle computer vision tasks. In addition, more research works are dedicated to combining the Transformer with CNN to achieve higher accuracy, such as the work of Chen J. [], Chen B. [], Valanarasu [], and others. Currently, the Transformer performs well on medical image processing tasks but usually requires pre-training, as well as a large amount of data, to make the model perform better. Although the Transformer is good at acquiring long-distance dependencies in images, it is not good at capturing detailed information about the blood vessels in the fundus of the eye. A single-minded pursuit of the Transformer alone may therefore not be suitable for retinal fundus datasets with small amounts of data. Therefore, this paper examines the convolutional neural network and Transformer mechanisms and their respective strengths: convolutional neural networks are good at capturing detailed local information, while the Transformer can complement it with global and contextual information. In the actual feature extraction and recovery process, the connections between local detail information are more beneficial for extracting features, while long-distance dependency information plays more of a supplementary role.
In this paper, we propose an MTPA_Unet (Multi-scale Transformer-Position Attention_Unet) network model for retinal vessel segmentation. It consists of a serial combination of Transformer and CNN. Specifically, we first propose a TPA module to replace the traditional Transformer’s multi-headed attention module. Considering that the Transformer structure is not well adapted to retinal datasets with a small number of samples, a lightweight positional attention module is added behind T_MSA, designed to capture the positional information of retinal vessel pixels more precisely. Secondly, multi-scale information input makes the network sensitive to different scales to achieve better segmentation. After feature extraction by the Transformer, we feed the extracted information into the encoder of the U-shaped network structure for further fine-grained segmentation of the feature map. A multilayer pooling module is added at the end of downsampling to expand the receptive field. At the same time, the information from each stage of downsampling is fused and provided to the higher levels to compensate for the shallow information. A residual connection is used between the encoder and decoder to reduce noise. Finally, the features are recovered and reconstructed by decoders to enable the network structure to output the segmentation results of retinal vessels. We used three public retinal fundus datasets to evaluate MTPA_Unet, namely DRIVE, CHASE DB1, and STARE. Experimental results show that the network achieves better segmentation performance. The main contributions of this paper are as follows:
  • A TPA module is proposed to replace the MSA structure in the traditional Transformer, which not only considers the relationship between long-distance pixels but also focuses on the acquisition of blood vessel pixel position information. The network model is adapted to the fine segmentation task of retinal blood vessels with a small number of samples.
  • The MTPA_Unet network model is proposed, and the Transformer and convolutional neural network are combined to design and apply it to the retinal blood vessel segmentation task. MTPA_Unet can alleviate the limitations exhibited by CNN in modeling long-term dependencies and achieve higher retinal vessel segmentation accuracy.
  • Ablation and comparison experiments are performed on three datasets, DRIVE, CHASE DB1, and STARE, and the results are analyzed. The results show that the network model proposed in this paper achieves better vessel segmentation performance.
The rest of this paper is organized as follows: Section 2 describes the work related to convolutional neural networks and Transformer. Section 3 describes the MTPA_Unet network model for retinal vessel segmentation. The dataset, implementation details, and evaluation metrics of the experiments are described in Section 4. The ablation experiments and comparison experiments are designed, and the results are analyzed in Section 5. The full text is summarized in Section 6.

3. Multi-Scale Transformer-Position Attention_Unet Method

While CNN has great advantages in extracting local information, it is insufficient for extracting long-distance dependencies. Therefore, in order to balance long-distance and short-distance dependencies, this paper uses a combination of traditional CNN and Transformer architectures to achieve high-precision segmentation of retinal fundus vessels. The general structure of our proposed MTPA_Unet network model is discussed as follows.
The input to the network is derived from slices of the original retinal vessel images. Due to the high level of detail at the ends of fundus vessels, feature map inputs of different scales are used to enhance the feature extraction capability of the network. These feature maps are input layer by layer into each stage of the Transformer structure, with each layer having image input sizes of 64 × 64, 32 × 32, 16 × 16, and 8 × 8 pixels. Each stage consists of Patch embedding, position encoding, and TPA modules. Furthermore, the results extracted by the Transformer are passed to the encoder of the corresponding CNN network, i.e., the output of each stage serves as the input of the corresponding encoder block. The initial extraction of image features by the Transformer exploits its advantage in capturing long-distance dependencies and compensates for the shortcomings of the encoder. Since short-distance dependencies occupy a more important proportion in retinal vessel segmentation, we perform further fine-grained segmentation of the feature map. The encoder block consists of a feature extraction module and a downsampling module. The output of each layer of the encoder block is fused and passed, together with the encoder block of the last layer, into the underlying multilayer pooling module in order to make use of the information in the shallow layers and to assist in better segmentation. Residual connections are used between the corresponding encoder and decoder blocks in each layer to reduce noise interference. Finally, the processed feature maps are fed to the decoder for feature reconstruction and recovery by upsampling operations. The parts are described in detail as follows. The overall structure of the network is shown in Figure 2.
Figure 2. Overall structure of the MTPA_Unet network.

3.1. Encoder Block

The encoder block consists of a feature extraction module (FE) and a downsampling module (DS). The input retinal vessel image is first passed through the feature extraction module. The FE module is designed to extract retinal vessel features while adjusting the number of channels of the input feature map, doubling layer by layer from 32 channels in the input layer to 256 channels in the fourth layer. The process is as follows: for the input feature map, a 1 × 1 convolutional layer is first used for dimensionality reduction. The image information is then extracted from the reduced-dimensional feature map by a 3 × 3 convolution and a 3 × 3 transposed convolution, respectively, and the extracted information is fused. A 1 × 1 bottleneck layer is subsequently used, followed by batch normalization and, finally, ReLU activation. The resulting feature map is superimposed with the module input via a residual connection before being output. The detailed structure is shown in Figure 3.
Figure 3. Feature extraction module.
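To make the data flow of the FE module concrete, the following is a minimal PyTorch sketch of one possible implementation. It assumes additive fusion of the two 3 × 3 branches and a 1 × 1 shortcut projection when input and output channel counts differ; class and parameter names are ours and do not come from the paper.

import torch
import torch.nn as nn

class FeatureExtraction(nn.Module):
    """Sketch of the FE block: 1x1 reduction, parallel 3x3 conv / 3x3 transposed
    conv, fusion, 1x1 bottleneck + BN + ReLU, and a residual addition."""

    def __init__(self, in_ch: int, out_ch: int, reduction: int = 2):
        super().__init__()
        mid = max(out_ch // reduction, 1)
        self.reduce = nn.Conv2d(in_ch, mid, kernel_size=1)
        # two parallel 3x3 branches operating at the same resolution
        self.conv3 = nn.Conv2d(mid, mid, kernel_size=3, padding=1)
        self.tconv3 = nn.ConvTranspose2d(mid, mid, kernel_size=3, padding=1)
        self.bottleneck = nn.Sequential(
            nn.Conv2d(mid, out_ch, kernel_size=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(inplace=True),
        )
        # 1x1 shortcut so the residual addition matches channel counts (assumption)
        self.shortcut = (nn.Conv2d(in_ch, out_ch, kernel_size=1)
                         if in_ch != out_ch else nn.Identity())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        r = self.reduce(x)
        fused = self.conv3(r) + self.tconv3(r)   # additive fusion (assumption)
        return self.bottleneck(fused) + self.shortcut(x)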
Since the pooling layer has the advantages of feature dimensionality reduction and feature invariance, we introduce a downsampling module (DS) after the FE module, which allows the model to extract a wider range of features and serves for further feature extraction and downsampling of the image. The DS module consists of an adaptive pooling layer, three consecutive batch normalization layers, a ReLU activation function, and a Conv layer. The detailed structure is shown in Figure 4.
Figure 4. Downsampling module.
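The sketch below gives one simplified PyTorch reading of the DS module: the adaptive pooling is assumed to halve the spatial size, and the normalization stack is collapsed into a single BatchNorm for brevity, which differs from the module described above; names and kernel sizes are ours.

import torch
import torch.nn as nn
import torch.nn.functional as F

class DownSample(nn.Module):
    """Simplified sketch of the DS block: adaptive pooling halves the spatial
    size, followed by BN, ReLU, and a convolution."""

    def __init__(self, channels: int):
        super().__init__()
        self.post = nn.Sequential(
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h, w = x.shape[-2:]
        pooled = F.adaptive_avg_pool2d(x, (h // 2, w // 2))  # downsample by 2 per side
        return self.post(pooled)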

3.2. Transformer-Position Attention Module

The Transformer-Position Attention (TPA) module consists of a modified multi-headed attention, T_MSA, and a position attention module. The Patch Embedding layer and the positional encoding layer, as necessary structures of the Transformer, are described in detail in the T_MSA structure description section. The T_MSA and location attention modules are described in detail in turn as follows.

3.2.1. T_MSA Structure Description

Patch Embedding layer: The PE layer is used to serialize the input image. Specifically, the input image dimension is H × W × C, where H, W, and C denote the height, width, and number of channels, respectively. Firstly, the input image is divided into N blocks of size P^2 × C, and then it is reshaped into a sequence of dimension N × (P^2 × C).
Since each stage of the TPA module works on a different size of input feature map, the PE layer is able to downsample the feature map and gradually expand the channel dimension to achieve a hierarchical feature representation. We use PE before each layer of the TPA module except for the first stage, with the aim of using the PE module to scale the spatial and channel dimensions of the feature map. The spatial dimension is reduced by a factor of 4 and the channel dimension is increased by a factor of 2. This process is implemented using a 3 × 3 convolution with a stride of 2 and a padding of 1. The output of each PE layer can be formalized as Equations (1) and (2), where x and x' denote the feature maps before and after processing, respectively:
$x' = \mathrm{BN}(\mathrm{Proj}(x))$ (1)
$\mathrm{PE}(x) = \mathrm{Sigmoid}(\mathrm{Conv2d}(x')) \cdot x'$ (2)
Positional encoding layer: to make the positional encoding more flexible, we refer to the setting of positional encoding in []. Unlike the traditional positional encoding in ViT [], we use a 3 × 3 depthwise convolution with a padding of 1 to obtain the weights in the pixel direction. The weights are then normalized and scaled by a sigmoid function. The positional encoding process can be expressed as Equation (3):
$\hat{x} = \mathrm{Sigmoid}(\mathrm{DWConv2d}(x)) \cdot x$ (3)
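As an illustration, the following is a minimal PyTorch sketch of Equations (1)–(3) under our reading: the projection is the 3 × 3 stride-2 convolution mentioned above, the gating convolution in Equation (2) is assumed to be 1 × 1, and the positional encoding uses a 3 × 3 depthwise convolution. Class and parameter names are ours.

import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Sketch of the PE layer (Eqs. (1)-(2)): strided 3x3 projection with BN,
    followed by a sigmoid-gated convolution that re-weights the embedded map."""

    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        # 3x3 conv, stride 2, padding 1: halves each spatial side, doubles channels
        self.proj = nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=2, padding=1)
        self.bn = nn.BatchNorm2d(out_ch)
        self.gate = nn.Conv2d(out_ch, out_ch, kernel_size=1)  # 1x1 gate (assumption)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        x = self.bn(self.proj(x))                 # Eq. (1)
        return torch.sigmoid(self.gate(x)) * x    # Eq. (2)


class PositionalEncoding(nn.Module):
    """Sketch of the pixel-wise positional encoding (Eq. (3)): a 3x3 depthwise
    convolution produces per-pixel weights that are squashed by a sigmoid."""

    def __init__(self, channels: int):
        super().__init__()
        self.dwconv = nn.Conv2d(channels, channels, kernel_size=3,
                                padding=1, groups=channels)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.sigmoid(self.dwconv(x)) * x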
T_MSA structure: The main advantage of the Transformer is that it enables the model to focus on semantic information from the global context and to capture contextual information in both absolute and relative positions. Structurally, the Transformer consists mainly of L layers of Multi-headed Self-Attention (MSA) and Multilayer Perceptron (MLP) blocks. The T_MSA used in this paper is similar to the traditional MSA. For the input feature map, a set of projections is first used to obtain Q, with Q ∈ R^{n×d_m}. A given Q, K, and V can be shared among all attention layers. To reduce the computational effort as well as memory pressure, the input x ∈ R^{n×d_m} is reshaped into a three-dimensional x̂ ∈ R^{d_m×h×w}, and then the spatial resolution is reduced by convolution operations and normalized using layer normalization. From the newly obtained x̂ ∈ R^{d_m×(h/s)×(w/s)}, where s is the spatial reduction ratio, two sets of projections are used to obtain K and V, with K, V ∈ R^{d_m×(h/s)×(w/s)}. For the obtained Q, K, and V, a 1 × 1 convolution operation is applied to the product of Q and the transpose of K to simulate the interaction between different heads. A normalization operation is performed using softmax to generate the contextual attention map. To obtain the set of values weighted by the attention weights, the contextual attention map is multiplied by V. Finally, after layer normalization of the output, T_MSA can be expressed as Equation (4):
$T_{MSA} = \mathrm{LN}\left(\mathrm{Softmax}\left(\mathrm{Conv}\left(\dfrac{Q K^{T}}{\sqrt{d_k}}\right)\right) V\right)$ (4)
where d_k is the dimensionality of Q, K, and V. Finally, we linearly project the optimized feature mapping and, as in Equations (5) and (6), add an FFN after T_MSA to achieve feature transformation and nonlinearity, obtaining the final output of T_MSA:
$x'_{L-1} = T_{MSA}(x_{L-1}) + x_{L-1}$ (5)
$Y = x'_{L-1} + \mathrm{FFN}(\mathrm{LN}(x'_{L-1}))$ (6)
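A compact PyTorch sketch of this attention block, written from the description above, is given below. The head count, FFN expansion ratio, spatial reduction ratio, and the placement of the output projection are our assumptions; the 1 × 1 convolution mixes the attention logits across heads as in Equation (4).

import torch
import torch.nn as nn

class TMSA(nn.Module):
    """Sketch of T_MSA (Eqs. (4)-(6)): Q at full resolution, K/V from a spatially
    reduced copy of the input, a 1x1 conv mixing heads on the attention logits,
    softmax, weighted sum with V, layer norm, residual, and an FFN."""

    def __init__(self, dim: int, heads: int = 4, sr_ratio: int = 2):
        super().__init__()
        assert dim % heads == 0
        self.heads, self.head_dim = heads, dim // heads
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, 2 * dim)
        # strided conv + LN shrink the K/V resolution to save memory
        self.sr = nn.Conv2d(dim, dim, kernel_size=sr_ratio, stride=sr_ratio)
        self.sr_norm = nn.LayerNorm(dim)
        self.head_mix = nn.Conv2d(heads, heads, kernel_size=1)  # interaction between heads
        self.proj = nn.Linear(dim, dim)
        self.out_norm = nn.LayerNorm(dim)
        self.ffn = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, 4 * dim),
                                 nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor, h: int, w: int) -> torch.Tensor:
        b, n, d = x.shape                                    # n = h * w tokens
        q = self.q(x).view(b, n, self.heads, self.head_dim).transpose(1, 2)
        xr = self.sr(x.transpose(1, 2).reshape(b, d, h, w))  # reduce spatial size
        xr = self.sr_norm(xr.flatten(2).transpose(1, 2))
        k, v = self.kv(xr).chunk(2, dim=-1)
        k = k.view(b, -1, self.heads, self.head_dim).transpose(1, 2)
        v = v.view(b, -1, self.heads, self.head_dim).transpose(1, 2)
        attn = (q @ k.transpose(-2, -1)) / self.head_dim ** 0.5
        attn = torch.softmax(self.head_mix(attn), dim=-1)    # Conv(QK^T / sqrt(d_k))
        y = self.proj((attn @ v).transpose(1, 2).reshape(b, n, d))
        y = self.out_norm(y) + x                             # Eq. (5)
        return y + self.ffn(y)                               # Eq. (6)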

3.2.2. Description of Location Attention Structure

Due to the presence of many capillaries in retinal fundus images, more details need to be captured during information extraction. Therefore, we do not simply use T_MSA to process the images but add the location attention module afterward. By modeling strong contextual information, a global semantic description is obtained by establishing connections between long-distance features of fundus vessel pixels, which in turn enables more refined retinal vessel segmentation. Specifically, for a given feature map F, three feature maps M, N, V ∈ R^{C×W×H} are obtained after 1 × 1 convolution layers, respectively. For the obtained feature maps M and N, there is a vector Q_x at any pixel position x in M. In order not to increase extra computational effort, when calculating the correlation of a pixel in the whole image with position x, the feature vectors that are in the same row and column as position x are first extracted from the feature map N and saved in the set D_x ∈ R^{(H+W−1)×C}. The correlation between the pixel position x and the feature vectors associated with it is calculated as shown in Equation (7), and the softmax function is further applied to the multiplication result to generate the attention map S_z ∈ R^{(H+W−1)×H×W}:
$S_{z} = \mathrm{softmax}\left(Q_x \cdot D_{i,x}^{T}\right)$ (7)
The attention map and the set ψ_x ∈ R^{(H+W−1)×C} of feature vectors in the same row and column as x in the feature map V are then multiplied to obtain the new feature map Y'. Finally, Y' is added to the input feature map F to generate the final output feature map Y, as in Equation (8):
$Y = \sum_{i=0}^{H+W-1} S_{z} \cdot \psi_{i,x} + F$ (8)
By using this feature correlation calculation twice, it is possible to obtain global contextual information about each pixel location. This more comprehensive information extraction enhances the Transformer and introduces little computational effort. The structure of the TPA module is shown in Figure 5.
Figure 5. Structure of TPA module.
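For illustration, here is a simplified PyTorch sketch of such row-and-column (criss-cross style) position attention. It applies the row softmax and column softmax separately rather than one joint softmax over the H+W−1 positions, and adds a learnable scaling factor before the residual addition; these simplifications, the channel reduction ratio, and all names are our assumptions.

import torch
import torch.nn as nn

class PositionAttention(nn.Module):
    """Sketch of row/column position attention (Eqs. (7)-(8)): each pixel attends
    to the pixels sharing its row or column; applying the module twice propagates
    information to every position."""

    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        inter = max(channels // reduction, 1)
        self.to_m = nn.Conv2d(channels, inter, kernel_size=1)     # queries (M)
        self.to_n = nn.Conv2d(channels, inter, kernel_size=1)     # keys (N)
        self.to_v = nn.Conv2d(channels, channels, kernel_size=1)  # values (V)
        self.gamma = nn.Parameter(torch.zeros(1))                 # learnable scale (assumption)

    def forward(self, f: torch.Tensor) -> torch.Tensor:
        b, c, h, w = f.shape
        m, n, v = self.to_m(f), self.to_n(f), self.to_v(f)

        # row attention: every pixel attends over the W positions in its row
        q_row = m.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        k_row = n.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        v_row = v.permute(0, 2, 3, 1).reshape(b * h, w, -1)
        a_row = torch.softmax(q_row @ k_row.transpose(1, 2), dim=-1)
        out_row = (a_row @ v_row).reshape(b, h, w, c).permute(0, 3, 1, 2)

        # column attention: the same along the H positions of each column
        q_col = m.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        k_col = n.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        v_col = v.permute(0, 3, 2, 1).reshape(b * w, h, -1)
        a_col = torch.softmax(q_col @ k_col.transpose(1, 2), dim=-1)
        out_col = (a_col @ v_col).reshape(b, w, h, c).permute(0, 3, 2, 1)

        # Eq. (8): aggregated features added back onto the input feature map F
        return self.gamma * (out_row + out_col) + f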

3.3. Loss Function

In order to correct the segmentation error that exists between the segmentation results and the given true value, in this paper we use the Dice loss function to enhance the retinal vessel segmentation results. The Dice coefficient is a set similarity measure that takes values in the range [0, 1]. It is used in this paper to calculate the difference between the predicted retinal vessel segmentation result (denoted as P) and the true value (denoted as G), and the Dice coefficient formula is defined as Equation (9):
$\mathrm{Dice\ Coefficient} = \dfrac{2 \times |P \cap G|}{|P| + |G|}$ (9)
where |P ∩ G| denotes the intersection of the predicted retinal vessel segmentation result and the true value, and |P| and |G| denote their pixel counts, respectively. The Dice loss function is then derived from the Dice coefficient as DiceLoss = 1 − Dice Coefficient, which is defined as in Equation (10). A constant w is introduced in the concrete implementation to prevent the denominator from being zero. Because the real goal in the semantic segmentation task is to maximize the Dice coefficient, improving the segmentation accuracy amounts to minimizing the DiceLoss. In addition, DiceLoss is a region-related loss; that is, the loss at the current pixel is also related to the values of other points. It can also be seen from the definition of DiceLoss that a positive-sample region of fixed size yields the same loss regardless of image size, so its supervisory effect on the network does not change with the size of the image. Therefore, during training, DiceLoss is more inclined to mine the foreground region, which can be beneficial for the class imbalance problem:
$\mathrm{DiceLoss} = 1 - \dfrac{2 \times |P \cap G| + w}{|P| + |G| + w}$ (10)
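A minimal PyTorch sketch of Equation (10) is shown below; the default smoothing constant w = 1.0 and the assumption that the prediction holds per-pixel vessel probabilities in [0, 1] are ours.

import torch

def dice_loss(pred: torch.Tensor, target: torch.Tensor, w: float = 1.0) -> torch.Tensor:
    """DiceLoss = 1 - (2*|P∩G| + w) / (|P| + |G| + w), Eq. (10).
    `pred` holds vessel probabilities; `target` is the binary groundtruth mask."""
    pred, target = pred.flatten(), target.flatten()
    intersection = (pred * target).sum()
    dice = (2.0 * intersection + w) / (pred.sum() + target.sum() + w)
    return 1.0 - dice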

4. Dataset and Evaluation Criteria

4.1. Dataset

The retinal images used in this paper are from three publicly available datasets, respectively, the DRIVE, CHASEDB1, and STARE dataset. The DRIVE dataset [] consists of 40 color images of retinal fundus vessels, of which seven images suffer from different degrees of lesions. It also contains groundtruth images and corresponding mask images that were manually segmented by two experts. The size of each image is 565 × 584, and the first twenty fundus images are set as the test set. The last twenty images are set as the training set. The experimental comparison labels were chosen from the manual segmentation results of the first expert.
The STARE dataset [] consists of 20 color images of retinal fundus vessels, 10 of which suffer from different degrees of lesions. It also contains the groundtruth images manually segmented by two experts and consists of the corresponding mask images. The size of each image was 700 × 605 pixels. The experimental comparison labels were selected from the manual segmentation results of the first expert.
The CHASE DB1 dataset [] consists of 28 color images of the retinal fundus vessels, with images acquired from the left and right eyes of 14 affected children. It also contains the groundtruth images manually segmented by two experts and consists of the corresponding mask images. The image size was 999 × 960. Twenty images were used as the training set, and the remaining eight images were used as the test set. The experimental comparison labels are chosen from the manual segmentation results of the first expert.
Figure 6 shows three example dataset images, from top to bottom, the CHASE DB1, DRIVE, and STARE dataset, respectively. From left to right are the original retinal fundus vessel medical image, the masked image, and the true value of the expert’s manual segmentation, respectively.
Figure 6. Example images of three datasets (a) original retinal fundus vessel medical image, (b) masked image, and (c) expert manual segmentation of the groundtruth.

4.2. Image Preprocessing

In this paper, we also use the necessary preprocessing to enhance the vessel contours in the original retinal images. We adopt the preprocessing methods proposed by Jiang et al. [], namely data normalization, contrast-limited adaptive histogram equalization (CLAHE), and gamma correction. It was experimentally verified that the blood vessels in the grayscale images were clearest after fusing the G, R, and B channels in the ratio of 29.9%, 58.7%, and 11.4%. Normalization is used to improve the convergence speed of the model, and CLAHE is used to enhance the contrast between the blood vessels and the background in the original images. Finally, gamma correction is used to improve the quality of the retinal fundus vessel images. The images processed by the four strategies are shown in Figure 7b–e. Clearly, after the above preprocessing operations, the blood vessels in the retinal images are clearer and their contrast with the background is more obvious.
Figure 7. Pre-processing results of (a) original retinal fundus vessel medical image, (b) RGB three-channel scaled fusion image, (c) data normalized image, (d) CLAHE processed image, and (e) gamma corrected image.
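The sketch below illustrates this preprocessing chain with OpenCV and NumPy. The channel weights follow the text (G, R, B weighted 29.9%, 58.7%, 11.4%), while the CLAHE clip limit, tile size, and gamma value are placeholders of our own choosing, not the parameters used in the paper.

import cv2
import numpy as np

def preprocess(rgb: np.ndarray, gamma: float = 1.2) -> np.ndarray:
    """Sketch of the preprocessing chain: weighted channel fusion, normalization,
    CLAHE, and gamma correction, applied to an RGB fundus image."""
    r = rgb[..., 0].astype(np.float32)
    g = rgb[..., 1].astype(np.float32)
    b = rgb[..., 2].astype(np.float32)
    # channel weighting as stated in the text (G, R, B -> 0.299, 0.587, 0.114)
    gray = 0.299 * g + 0.587 * r + 0.114 * b
    # zero-mean / unit-variance normalization, then rescale to 0-255
    gray = (gray - gray.mean()) / (gray.std() + 1e-8)
    gray = cv2.normalize(gray, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    # contrast-limited adaptive histogram equalization (parameters are assumptions)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    # gamma correction via a lookup table
    table = np.array([((i / 255.0) ** (1.0 / gamma)) * 255
                      for i in range(256)]).astype(np.uint8)
    return cv2.LUT(gray, table)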

4.3. Experimental Evaluation Metrics

To quantitatively evaluate the accuracy of the method in this paper for the retinal vessel segmentation task, evaluation metrics such as the Dice coefficient, Accuracy, Sensitivity, and Specificity were analyzed using a confusion matrix. The corresponding equations for each evaluation metric are given in Equations (11)–(15). In image segmentation tasks, the Dice coefficient is usually used to express the balance between sensitivity and precision, and a value closer to 1.0 indicates better segmentation. Accuracy indicates the ratio of the sum of correctly segmented vessel pixels and background pixels to the total number of pixels in the whole image. Sensitivity indicates the ratio of correctly segmented vessel pixels to the total real vessel pixels; the closer its value is to 1.0, the more vessel pixels are correctly segmented. Specificity indicates the proportion of correctly segmented background pixels to the total real background pixels; the closer its value is to 1.0, the fewer pixels are incorrectly segmented:
$\mathrm{Dice} = \dfrac{2 \times TP}{2 \times TP + FN + FP}$ (11)
$\mathrm{Accuracy} = \dfrac{TP + TN}{TP + FN + FP + TN}$ (12)
$\mathrm{Sensitivity} = \dfrac{TP}{TP + FN}$ (13)
$\mathrm{Specificity} = \dfrac{TN}{TN + FP}$ (14)
$\mathrm{Precision} = \dfrac{TP}{TP + FP}$ (15)
where true positive (TP) is the number of vessel pixels that are correctly segmented, true negative (TN) is the number of background pixels that are correctly segmented, false positive (FP) is the number of background pixels that are incorrectly segmented as vessel pixels, and false negative (FN) is the number of vessel pixels that are incorrectly segmented as background pixels.
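As a concrete reference, the following NumPy sketch computes Equations (11)–(15) from a pair of binary masks; the function and dictionary key names are ours.

import numpy as np

def segmentation_metrics(pred: np.ndarray, gt: np.ndarray) -> dict:
    """Confusion-matrix metrics of Eqs. (11)-(15) for binary vessel masks
    (1 = vessel pixel, 0 = background pixel)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    tn = np.logical_and(~pred, ~gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    return {
        "dice": 2 * tp / (2 * tp + fn + fp),
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "precision": tp / (tp + fp),
    }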

5. Experimental Results and Analysis

5.1. Experimental Environment and Parameter Settings

The method in this paper is implemented using the PyTorch deep learning framework. Model training was carried out on a server with a Quadro RTX 6000 GPU (24 GB of memory) running 64-bit Ubuntu. An initial learning rate of 0.001 was used for training. We used the model with the best validation performance for testing, and the Dice loss function was used as the loss function.
For the DRIVE and CHASE DB1 datasets, the number of iterations of the model is set to 100, the training batch size is 32, and the threshold is set to 0.5. Since there are only 20 images in the STARE dataset, the experiments are conducted using the leave-one-out method to make the training effect as good as possible; that is, one image is held out for testing at a time, and the remaining 19 images are used for training. The training batch size was set to 64, the number of iterations of the model was set to 100, and the threshold value was set to 0.48.

5.2. Experimental Comparison of Ablation Structures

In order to verify the effectiveness of the shallow information utilization, the TPA module, and the Transformer in the MTPA_Unet model for the retinal vessel segmentation task, we performed retinal vessel segmentation experiments on the DRIVE, CHASE DB1, and STARE datasets under the same experimental settings, using a U-shaped network as the baseline model. The contribution of these modules was quantified through ablation experiments. First, the baseline network is a modification of U-Net incorporating residual connections and a multiscale pooling module, denoted as BaseLine. On this basis, the output of each encoder block is fused and passed to the multiscale pooling module to utilize shallow coarse-grained feature information (BaseLine+SCI). Next, the Transformer is added to BaseLine+SCI to compensate for contextual information, and a multi-scale network input is used. Finally, the MSA is replaced by the TPA structure, which gives the model MTPA_Unet of this paper. MTPA_Unet (w/o pre) means operating directly on the original images without any preprocessing.
The results of the ablation experiments performed on the DRIVE and CHASE DB1 datasets are presented in Table 1 and Table 2, respectively. The bolded data in the tables indicate the maximum values achieved by the different network models on the corresponding evaluation metrics. As far as the performance of BaseLine is concerned, the Dice coefficient and sensitivity reached 0.8136 and 0.8266, 0.8324, and 0.8060 on the DRIVE and CHASE DB1 datasets, respectively. Model BaseLine+SCI reached 0.8289 and 0.8278 on the DRIVE dataset for the Dice coefficient and sensitivity, respectively; the sensitivity increased by 1.18%. On the CHASE DB1 dataset, the Dice coefficient and sensitivity reached 0.8143 and 0.8296, respectively, with some decrease in sensitivity but a small increase in the Dice coefficient, while the rest of the metrics were basically the same. This shows that the inclusion of shallow information is beneficial for separating vessel pixels from background pixels and can capture more vessel details. In order to combine the long-distance dependency extraction capability of the Transformer with the local information extraction capability of CNN, we added the Transformer structure on top of the BaseLine+SCI model and used multi-scale input to exploit the multi-resolution characteristics of retinal vessel images. It can be seen that the Dice coefficient and sensitivity reach 0.8300 and 0.8249 on the DRIVE dataset, and 0.8139 and 0.8434 on the CHASE DB1 dataset, respectively. This shows that the Transformer is helpful for the vessel segmentation task. However, retinal vessel segmentation involves more vessel branches and terminal parts than other medical image segmentation tasks, and the acquisition of vessel location information is insufficient when using only the Transformer structure. Therefore, after adding the location attention, the Dice coefficient and sensitivity of the model MTPA_Unet on the DRIVE and CHASE DB1 datasets reach 0.8318 and 0.8164, 0.8410 and 0.8437, respectively, which are improved by 0.18% and 0.21%, 1.65% and 0.04%, respectively. This further demonstrates that our proposed module is effective in extracting the contextual information of retinal images. Multiple retinal vessel section images at different scales also enable the model to learn different characteristics as well as fine vessel features.
Table 1. Ablation experiments on the DRIVE dataset.
Table 2. Ablation experiments on the CHASE DB1 dataset.
For the STARE dataset, the same ablation experiments were designed. For clarity, the test results of the MTPA_Unet model on the 20 images are listed in Table 3. The results of the 20 tests on the five metrics are averaged as the test result of MTPA_Unet on the STARE dataset.
Table 3. Test results on STARE dataset using the leave-one-out method.
The models BaseLine, BaseLine+SCI, and BaseLine+SCI+MT were likewise trained and tested on the STARE dataset using the leave-one-out method. For the sake of simplicity, only the average values obtained on the five evaluation metrics are shown here. The experimental results are shown in Table 4, with the highest value for each metric in bold. Compared with the performance of BaseLine, Dice and sensitivity improved by 0.46% and 0.71%, respectively, after adding the shallow information. Combining the multiscale Transformer with it increases Dice and sensitivity by a further 0.09% and 0.3%, respectively. The method MTPA_Unet in this paper improves Dice and sensitivity by 1.23% and 1.26%, respectively, on this basis. The combined performance shows that the method in this paper can improve each index and is very effective for more accurate segmentation of vessel pixels.
Table 4. Ablation experiments on the STARE dataset.
In addition to using these evaluation metrics to measure the effectiveness of the models, and to show the segmentation of the retinal fundus vessels more clearly, the segmentation results of each model on the same datasets are shown in Figure 8. The visualization of the segmentation results shows details that are not reflected in the numerical data. Columns (a)–(f) in Figure 8 show the original retinal vessel image, the true value manually segmented by professionals, and the segmentation results of BaseLine, BaseLine+SCI, BaseLine+SCI+MT, and MTPA_Unet, respectively. From top to bottom, the segmentation results on the CHASE DB1, DRIVE, and STARE datasets are shown in order. To highlight the model segmentation effect, the capillary segmentation regions are emphasized in the visual comparison: some regions in the original retinal image, the groundtruth image, and each model's segmentation result map are shown enlarged and marked with red boxes.
Figure 8. Visualization of ablation experiments on three datasets. (a) original image, (b) groundtruth image, (c) BaseLine, (d) BaseLine+SCI, (e) BaseLine+SCI+MT, (f) MTPA_Unet.
The performance of each model on the three datasets can be seen in Figure 8. The difficulty of retinal fundus vessel segmentation lies in the accurate segmentation of the surrounding fine vessel branches and some locations where vessels interlace. In contrast, segmentation of the thicker veins or slightly thinner arteries in the center of the retina is easier for a common model. The addition of the Transformer structure brings more information to the network and enables more detailed segmentation of the blood vessels, which is not possible with the baseline model. The model in this paper obtains better segmentation results compared with the above ablation structures. This fully demonstrates that considering vessel pixel location information can enhance the information extraction ability of the Transformer structure and give the network better segmentation ability. It can be clearly observed from the comparison of the regions marked with red boxes in the figure that the segmentation of the vessel borders and capillaries is more accurate and clear. The above facts show that the network structure proposed in this paper is feasible and effective in a real segmentation task. The improved network model is able to obtain better segmentation results on all three datasets.

5.3. Comparison with Existing Models

To further illustrate the validity of the model, the method in this paper is compared with some existing state-of-the-art methods on the three datasets. Classical and advanced methods from the last five years, such as U-Net++ [], R2U-Net [], CA-Net [], and SCS-Net [], were selected. The evaluation was carried out according to four evaluation metrics, namely Dice, Accuracy, Sensitivity, and Specificity. Table 5, Table 6 and Table 7 show the evaluation results of different models on the DRIVE, STARE, and CHASE DB1 datasets, respectively, for the retinal vessel segmentation task.
Table 5. Comparison with other methods on the DRIVE dataset.
Table 6. Comparison with other methods on the STARE dataset.
Table 7. Comparison with other methods on the CHASE DB1 dataset.
As can be seen from Table 5, on the DRIVE dataset, the method MTPA_Unet in this paper performs best on the three metrics of accuracy, sensitivity, and Dice. Compared with the suboptimal method, the improvements are 0.08%, 1.21%, and 0.16%, respectively. As can be seen in Table 7, on the CHASE DB1 dataset, the method in this paper performs better in terms of accuracy and Dice, with improvements of 0.02% and 0.25%, respectively. From Table 6, it can be seen that, on the STARE dataset, the method in this paper performs best on two metrics, sensitivity and Dice. Compared with the suboptimal method, it improves sensitivity by 2.52% and Dice by 0.82%. The improvement in the evaluation results verifies the effectiveness of the method in this paper. The superior sensitivity results demonstrate the better accuracy of the method in this paper for the correct classification of vessel pixels.
Similarly, the segmentation capability of the model is illustrated with visualized images of the retinal vessel segmentation results. The method in this paper is compared with the currently better-performing UNet++ [], CA-Net [], and AG-UNet [] network models on the retinal fundus vessel segmentation task. Figure 9 and Figure 10 show the visualization results of the different models on the DRIVE and CHASE DB1 datasets, respectively. Columns (a)–(f) show the original retinal vessel images, the true values manually segmented by a professional, and the segmentation results of UNet++, CA-Net, AG-UNet, and MTPA_Unet, respectively. On the DRIVE and CHASE DB1 datasets, we can see that CA-Net and AG-UNet can basically segment all arteries and veins, but some blood vessels are still not segmented and some background pixels are incorrectly segmented as vessel pixels; the noise is more obvious in the CA-Net results. UNet++ performs well in the evaluation metrics and the visualized segmentation results, but its segmentation of some vascular details is slightly inferior. In contrast, MTPA_Unet performs better in the segmentation of small blood vessels because it fully utilizes the inter-pixel position relationship and multi-scale feature maps to reduce the pixel misclassification problem. Since MTPA_Unet takes into account the information of deep and shallow layers, the noise effect is reduced in the segmentation results.
Figure 9. Comparison of visualization results with other methods on DRIVE dataset (a) original image, (b) groundtruth image, (c) UNet++ [], (d) CA-Net [], (e) AG-UNet [], (f) Ours.
Figure 10. Comparison of visualization results with other methods on CHASE DB1 dataset (a) original image, (b) groundtruth image, (c) UNet++ [], (d) CA-Net [], (e) AG-UNet [], (f) Ours.
Figure 11 shows the comparison of the visualization results of different models on the STARE dataset for the retinal vessel segmentation task. Columns (a)–(e) show the original retinal vessel images, the true values manually segmented by professionals, and the segmentation results of UNet++, CA-Net, and MTPA_Unet, respectively. The method in this paper obtains clearer vessel segmentation results compared with UNet++ and CA-Net. The segmentation of capillaries is more accurate, and the segmentation is smoother at the junctions of some vessels. This is because the method takes into account both the long-range relationships between pixels and the local relationships, and focuses on the positions connected with the surrounding pixels when obtaining vessel pixel information. The comparative analysis shows that the method in this paper has better performance and advantages for the retinal vessel segmentation task. This conclusion can be clearly drawn from Figure 9 and Figure 11.
Figure 11. Comparison of visualization results with other methods on STARE dataset (a) original image, (b) groundtruth image, (c) UNet++ [], (d) CA-Net [], (e) Ours.

5.4. Analysis of the Number of Model Parameters and Evaluation of ROC Curves

We evaluate the cost at which the network model obtains its better segmentation performance from the perspective of the number of model parameters. Ordinary CNN networks usually do not introduce too much computation, while the Transformer leads to a higher number of parameters due to the complex multi-headed attention computation. To demonstrate that our final model MTPA_Unet does not introduce too many parameters, and that the obtained high-precision experimental results do not rely solely on a complex model, the number of parameters of the model with the Transformer structure is compared with that of MTPA_Unet, which adds the position attention module. As can be seen in Table 8, the Transformer network model with multiscale inputs brings a higher number of parameters compared to the CNN network. However, the modification of the Transformer in MTPA_Unet not only avoids introducing too many parameters but also enables the model to achieve higher accuracy in the retinal vessel segmentation task. This shows that the method in this paper is effective without being overly complicated.
Table 8. Comparison of the number of parameters of each ablation structure network model.
To further judge the model performance, Receiver Operating Characteristic (ROC) curves and Precision-Recall (PR) curves were calculated for each ablation structure network model and are visualized in Figure 12. The ROC curve expresses the relationship between background pixels incorrectly segmented as vessel pixels and vessel pixels correctly segmented. When the imbalance between these two classes is large, the PR curve can better reflect the real situation of pixel classification. As far as the experimental results are concerned, the areas under the ROC curve and PR curve of MTPA_Unet are the largest on all three datasets. This indicates that the method in this paper achieves the best results on the retinal vessel segmentation task and is able to jointly utilize long-distance dependencies and local information. It is also able to extract the positional relationships between retinal vessel pixels and take into account deep and shallow feature information, resulting in the best model performance.
Figure 12. PR and ROC curves for each ablation structure.

6. Conclusions

The MTPA_Unet retinal vessel segmentation network model proposed in this paper jointly uses the Transformer and a convolutional neural network to help improve the performance of the network model. Since the connection between two distant pixels in an image is important for more accurate retinal vessel segmentation, the Transformer is utilized to extract long-distance dependencies while taking advantage of the convolutional neural network for local information extraction. The proposed TPA module further enhances the acquisition of retinal vessel location information, so that richer feature information can be fully used in the refinement process. The multi-resolution image input and the utilization of shallow feature information further alleviate the problems of blurred boundaries in the segmentation results and inaccurate capillary segmentation. We trained and tested the proposed MTPA_Unet network model on the DRIVE, CHASE DB1, and STARE datasets. The evaluation shows that the model achieves good results in terms of Accuracy and Dice. Comparison experiments were also designed to compare and analyze the evaluation results against other popular methods and to visually demonstrate the segmentation details of each network model on the retinal vessel task. The comparison of the segmentation results and the analytical discussion show that the MTPA_Unet network model proposed in this paper is more advantageous than other methods. Future research will aim to further improve the accuracy of the network model for the retinal vessel segmentation task without sacrificing time and storage.

Author Contributions

Data curation, T.C., X.L., Y.Z. and J.D.; Writing—original draft, J.L.; Writing—review and editing, Y.J. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported in part by the National Natural Science Foundation of China (61962054), in part by the National Natural Science Foundation of China (61163036), in part by the 2016 Gansu Provincial Science and Technology Plan Funded by the Natural Science Foundation of China (1606RJZA047), in part by the Cultivation plan of major Scientific Research Projects of Northwest Normal University (NWNU-LKZD2021-06).

Data Availability Statement

We use three publicly available retinal image datasets to evaluate the segmentation network proposed in this paper, namely, the DRIVE dataset, the CHASE DB1 dataset, and the STARE dataset. They can be downloaded from http://www.isi.uu.nl/Research/Databases/DRIVE/ (accessed on 30 December 2021), https://blogs.kingston.ac.uk/retinal/chasedb1/ (accessed on 30 December 2021), and https://cecas.clemson.edu/~ahoover/stare/ (accessed on 31 December 2021).

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

The following abbreviations are used in this manuscript:
MTPA_Unet: Multi-scale Transformer-Position Attention_Unet
CNN: Convolutional Neural Network
TPA: Transformer-Position Attention
FE: Feature Extraction Module
DS: Downsampling Module

References

  1. Fan, Z.; Wei, J.; Zhu, G.; Mo, J.; Li, W. ENAS U-Net: Evolutionary Neural Architecture Search for Retinal Vessel Segmentation. arXiv 2020, arXiv:2001.06678.
  2. Oshitari, T. Diabetic retinopathy: Neurovascular disease requiring neuroprotective and regenerative therapies. Neural Regen. Res. 2022, 17, 795.
  3. Xing, C.; Klein, B.E.; Klein, R.; Jun, G.; Lee, K.E.; Iyengar, S.K. Genome-wide linkage study of retinal vessel diameters in the Beaver Dam Eye Study. Hypertension 2006, 47, 797–802.
  4. Cunha-Vaz, J. The blood-retinal barrier in the management of retinal disease: EURETINA award lecture. Ophthalmologica 2017, 237, 1–10.
  5. Roychowdhury, S.; Koozekanani, D.D.; Parhi, K.K. Blood vessel segmentation of fundus images by major vessel extraction and subimage classification. IEEE J. Biomed. Health Inform. 2014, 19, 1118–1128.
  6. Shah, S.A.A.; Shahzad, A.; Khan, M.A.; Lu, C.K.; Tang, T.B. Unsupervised Method for Retinal Vessel Segmentation based on Gabor Wavelet and Multiscale Line Detector. IEEE Access 2019, 7, 167221–167228.
  7. Jainish, G.R.; Jiji, G.W.; Infant, P.A. A novel automatic retinal vessel extraction using maximum entropy based EM algorithm. Multimed. Tools Appl. 2020, 79, 22337–22353.
  8. LeCun, Y.; Bottou, L.; Bengio, Y.; Haffner, P. Gradient-based learning applied to document recognition. Proc. IEEE 1998, 86, 2278–2324.
  9. Gu, R.; Wang, G.; Song, T.; Huang, R.; Aertsen, M.; Deprest, J.; Ourselin, S.; Vercauteren, T.; Zhang, S. CA-Net: Comprehensive attention convolutional neural networks for explainable medical image segmentation. IEEE Trans. Med. Imaging 2020, 40, 699–711.
  10. Ronneberger, O.; Fischer, P.; Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2015; pp. 234–241.
  11. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 3431–3440.
  12. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep residual learning for image recognition. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016.
  13. Zhou, Z.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J. Unet++: A nested u-net architecture for medical image segmentation. In Deep Learning in Medical Image Analysis and Multimodal Learning for Clinical Decision Support; Springer: Cham, Switzerland, 2018; pp. 3–11.
  14. Wang, B.; Qiu, S.; He, H. Dual encoding u-net for retinal vessel segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2019; pp. 84–92.
  15. Xiao, X.; Lian, S.; Luo, Z.; Li, S. Weighted res-unet for high-quality retina vessel segmentation. In Proceedings of the 9th International Conference on Information Technology in Medicine and Education (ITME), Hangzhou, China, 19–21 October 2018; pp. 327–331.
  16. Hu, R.; Singh, A. Transformer is all you need: Multimodal multitask learning with a unified transformer. arXiv 2021, arXiv:2102.10772.
  17. Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. An image is worth 16 × 16 words: Transformers for image recognition at scale. arXiv 2020, arXiv:2010.11929.
  18. Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; Jégou, H. Training data-efficient image transformers & distillation through attention. In Proceedings of the International Conference on Machine Learning, Online, 18–24 July 2021; pp. 10347–10357.
  19. Wang, W.; Xie, E.; Li, X.; Fan, D.P.; Song, K.; Liang, D.; Lu, T.; Luo, P.; Shao, L. Pyramid vision transformer: A versatile backbone for dense prediction without convolutions. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Montreal, BC, Canada, 11–17 October 2021; pp. 568–578.
  20. Chen, J.; Lu, Y.; Yu, Q.; Luo, X.; Adeli, E.; Wang, Y.; Lu, L.; Yuille, A.L.; Zhou, Y. Transunet: Transformers make strong encoders for medical image segmentation. arXiv 2021, arXiv:2102.04306.
  21. Chen, B.; Liu, Y.; Zhang, Z.; Lu, G.; Zhang, D. Transattunet: Multi-level attention-guided u-net with transformer for medical image segmentation. arXiv 2021, arXiv:2107.05274.
  22. Valanarasu, J.M.J.; Oza, P.; Hacihaliloglu, I.; Patel, V.M. Medical transformer: Gated axial-attention for medical image segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2021; pp. 36–46.
  23. Woo, S.; Park, J.; Lee, J.Y.; Kweon, I.S. Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision, Munich, Germany, 8–14 September 2018; pp. 3–19.
  24. Wang, X.; Girshick, R.; Gupta, A.; He, K. Non-local neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7794–7803.
  25. Huang, Z.; Wang, X.; Huang, L.; Huang, C.; Wei, Y.; Liu, W. Ccnet: Criss-cross attention for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 603–612.
  26. Zhu, Z.; Xu, M.; Bai, S.; Huang, T.; Bai, X. Asymmetric non-local neural networks for semantic segmentation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 593–602.
  27. Fu, J.; Liu, J.; Tian, H.; Li, Y.; Bao, Y.; Fang, Z.; Lu, H. Dual attention network for scene segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 3146–3154.
  28. Lian, S.; Li, L.; Lian, G.; Xiao, X.; Luo, Z.; Li, S. A global and local enhanced residual u-net for accurate retinal vessel segmentation. IEEE ACM Trans. Comput. Biol. Bioinform. 2019, 18, 852–862.
  29. Li, Y.; Li, H.; Fan, Y. ACEnet: Anatomical context-encoding network for neuroanatomy segmentation. Med. Image Anal. 2021, 70, 101991.
  30. Zhang, Y.; He, M.; Chen, Z.; Hu, K.; Li, X.; Gao, X. Bridge-Net: Context-involved U-net with patch-based loss weight mapping for retinal blood vessel segmentation. Expert Syst. Appl. 2022, 195, 116526.
  31. Tan, Y.; Yang, K.F.; Zhao, S.X.; Li, Y.J. Retinal Vessel Segmentation with Skeletal Prior and Contrastive Loss. IEEE Trans. Med. Imaging 2022.
  32. Arsalan, M.; Haider, A.; Choi, J.; Park, K.R. Diabetic and Hypertensive Retinopathy Screening in Fundus Images Using Artificially Intelligent Shallow Architectures. J. Pers. Med. 2021, 12, 7.
  33. Arsalan, M.; Haider, A.; Lee, Y.W.; Park, K.R. Detecting retinal vasculature as a key biomarker for deep Learning-based intelligent screening and analysis of diabetic and hypertensive retinopathy. Expert Syst. Appl. 2022, 200, 117009.
  34. Yin, P.; Cai, H.; Wu, Q. DF-Net: Deep fusion network for multi-source vessel segmentation. Inf. Fusion 2022, 78, 199–208.
  35. d’Ascoli, S.; Touvron, H.; Leavitt, M.; Morcos, A.; Biroli, G.; Sagun, L. Convit: Improving vision Transformers with soft convolutional inductive biases. In Proceedings of the ICLR 2021, Online, 3–7 May 2021.
  36. Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S.; Guo, B. Swin transformer: Hierarchical vision transformer using shifted windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision 2021, Montreal, BC, Canada, 11–17 October 2021; pp. 10012–10022.
  37. Huang, S.; Li, J.; Xiao, Y.; Shen, N.; Xu, T. RTNet: Relation Transformer Network for Diabetic Retinopathy Multi-lesion Segmentation. IEEE Trans. Med. Imaging 2022, 41, 1596–1607.
  38. Heo, B.; Yun, S.; Han, D.; Chun, S.; Choe, J.; Oh, S.J. Rethinking spatial dimensions of vision transformers. In Proceedings of the IEEE/CVF International Conference on Computer Vision 2021, Montreal, BC, Canada, 11–17 October 2021; pp. 11936–11945.
  39. Cao, H.; Wang, Y.; Chen, J.; Jiang, D.; Zhang, X.; Tian, Q.; Wang, M. Swin-unet: Unet-like pure transformer for medical image segmentation. arXiv 2021, arXiv:2105.05537.
  40. Gao, Y.; Zhou, M.; Liu, D.; Metaxas, D. A Multi-scale Transformer for Medical Image Segmentation: Architectures, Model Efficiency, and Benchmarks. arXiv 2022, arXiv:2203.00131.
  41. Zhang, Q.; Yang, Y.B. Rest: An efficient transformer for visual recognition. Adv. Neural Inf. Process. Syst. 2021, 34, 15475–15485.
  42. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509.
  43. Owen, C.G.; Rudnicka, A.R.; Mullen, R.; Barman, S.A.; Monekosso, D.; Whincup, P.H.; Ng, J.; Paterson, C. Measuring retinal vessel tortuosity in 10-year-old children: Validation of the computer-assisted image analysis of the retina (CAIAR) program. Investig. Ophthalmol. Vis. Sci. 2009, 50, 2004–2010.
  44. Hoover, A.D.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210.
  45. Jiang, Y.; Zhang, H.; Tan, N.; Chen, L. Automatic retinal blood vessel segmentation based on fully convolutional neural networks. Symmetry 2019, 11, 1112.
  46. Alom, M.Z.; Hasan, M.; Yakopcic, C.; Taha, T.M.; Asari, V.K. Recurrent residual convolutional neural network based on u-net (r2u-net) for medical image segmentation. arXiv 2018, arXiv:1802.06955.
  47. Wu, H.; Wang, W.; Zhong, J.; Lei, B.; Wen, Z.; Qin, J. SCS-Net: A Scale and Context Sensitive Network for Retinal Vessel Segmentation. Med. Image Anal. 2021, 70, 102025.
  48. Azzopardi, G.; Strisciuglio, N.; Vento, M.; Petkov, N. Trainable COSFIRE filters for vessel delineation with application to retinal images. Med. Image Anal. 2015, 19, 46–57.
  49. Miao, Y.; Cheng, Y. Automatic extraction of retinal blood vessel based on matched filtering and local entropy thresholding. In Proceedings of the 2015 8th International Conference on Biomedical Engineering and Informatics (BMEI), Shenyang, China, 14–16 October 2015; pp. 62–67.
  50. Chen, G.; Chen, M.; Li, J.; Zhang, E. Retina image vessel segmentation using a hybrid CGLI level set method. BioMed Res. Int. 2017, 2017, 1263056.
  51. Guo, C.; Szemenyei, M.; Yi, Y.; Zhou, W.; Bian, H. Residual Spatial Attention Network for Retinal Vessel Segmentation. In International Conference on Neural Information Processing; Springer: Cham, Switzerland, 2020; pp. 509–519.
  52. Lv, Y.; Ma, H.; Li, J.; Liu, S. Attention guided u-net with atrous convolution for accurate retinal vessels segmentation. IEEE Access 2020, 8, 32826–32839.
  53. Tomar, N.K.; Jha, D.; Riegler, M.A.; Johansen, H.D.; Johansen, D.; Rittscher, J.; Halvorsen, P.; Ali, S. FANet: A Feedback Attention Network for Improved Biomedical Image Segmentation. arXiv 2021, arXiv:2103.17235.
  54. Tong, H.; Fang, Z.; Wei, Z.; Cai, Q.; Gao, Y. SAT-Net: A side attention network for retinal image segmentation. Appl. Intell. 2021, 51, 5146–5156.
  55. Wang, W.; Zhong, J.; Wu, H.; Wen, Z.; Qin, J. Rvseg-net: An efficient feature pyramid cascade network for retinal vessel segmentation. In International Conference on Medical Image Computing and Computer-Assisted Intervention; Springer: Cham, Switzerland, 2020; pp. 796–805.
  56. Jin, Q.; Meng, Z.; Pham, T.D.; Chen, Q.; Wei, L.; Su, R. DUNet: A deformable network for retinal vessel segmentation. Knowl.-Based Syst. 2019, 178, 149–162.
  57. Huang, Z.; Fang, Y.; Huang, H.; Xu, X.; Wang, J.; Lai, X. Automatic Retinal Vessel Segmentation Based on an Improved U-Net Approach. Sci. Program. 2021, 2021, 5520407.
  58. Li, L.; Verma, M.; Nakashima, Y.; Nagahara, H.; Kawasaki, R. Iternet: Retinal image segmentation utilizing structural redundancy in vessel networks. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision 2020, Snowmass Village, CO, USA, 1–5 March 2020; pp. 3656–3665.
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
