Article

A Multi-Scale Residual Attention Network for Retinal Vessel Segmentation

by Yun Jiang, Huixia Yao, Chao Wu and Wenhuan Liu
College of Computer Science and Engineering, Northwest Normal University, Lanzhou 730070, China
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Symmetry 2021, 13(1), 24; https://doi.org/10.3390/sym13010024
Received: 8 November 2020 / Revised: 3 December 2020 / Accepted: 17 December 2020 / Published: 24 December 2020
(This article belongs to the Section Computer Science and Symmetry/Asymmetry)
Accurate segmentation of retinal blood vessels is a key step in the diagnosis of fundus diseases, among which cataracts, glaucoma, and diabetic retinopathy (DR) are the main causes of blindness. Most segmentation methods based on deep convolutional neural networks extract features effectively; however, convolution and pooling operations also filter out some useful information, and the final segmented vessels suffer from problems such as low classification accuracy. In this paper, we propose a multi-scale residual attention network called MRA-UNet. Multi-scale inputs enable the network to learn features at different scales, which increases its robustness. In the encoding phase, the residual attention module reduces the negative influence of the background and suppresses noise. The bottom reconstruction module aggregates feature information under different receptive fields, so that the model can capture vessels of different thicknesses. Finally, the spatial activation module processes the up-sampled image to further increase the contrast between vessels and background, which promotes the recovery of thin vessels at the edges. Our method was verified on the DRIVE, CHASE, and STARE datasets. The segmentation accuracy reached 96.98%, 97.58%, and 97.63%; the specificity reached 98.28%, 98.54%, and 98.73%; and the F-measure reached 82.93%, 81.27%, and 84.22%, respectively. We compared the experimental results with state-of-the-art methods such as U-Net, R2U-Net, and AG-UNet in terms of accuracy, sensitivity, specificity, F-measure, and area under the ROC curve (AUC). In particular, MRA-UNet outperformed U-Net by 1.51%, 3.44%, and 0.49% on the DRIVE, CHASE, and STARE datasets, respectively.
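The components named in the abstract can be summarized with a small sketch. The PyTorch code below is not the authors' implementation; it is a minimal, illustrative reconstruction under stated assumptions: the module names (ResidualAttentionBlock, MRAUNetSketch), channel widths, the squeeze-and-excitation-style attention gate, and the dilated-convolution bottom block are all placeholders chosen to mirror the described design (multi-scale input injection, residual attention in the encoder, receptive-field aggregation at the bottom, and a spatial activation head on the up-sampled features).

# Hypothetical sketch of an MRA-UNet-style network (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualAttentionBlock(nn.Module):
    """Conv block with a residual connection and a channel-attention gate (illustrative)."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        )
        self.skip = nn.Conv2d(in_ch, out_ch, 1)          # 1x1 projection for the residual path
        self.att = nn.Sequential(                         # squeeze-and-excitation-style gate
            nn.AdaptiveAvgPool2d(1), nn.Conv2d(out_ch, out_ch // 4, 1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch // 4, out_ch, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.conv(x) + self.skip(x)                   # residual connection
        return y * self.att(y)                            # re-weight channels, suppress background

class MRAUNetSketch(nn.Module):
    """Two-level encoder-decoder with multi-scale inputs and a spatial activation head."""
    def __init__(self, in_ch=3, base=32):
        super().__init__()
        self.enc1 = ResidualAttentionBlock(in_ch, base)
        self.enc2 = ResidualAttentionBlock(base + in_ch, base * 2)   # also receives a downscaled input
        self.bottom = nn.Sequential(                       # aggregate different receptive fields
            nn.Conv2d(base * 2, base * 2, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(base * 2, base * 2, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
        )
        self.dec1 = ResidualAttentionBlock(base * 2 + base, base)
        self.spatial_act = nn.Conv2d(base, 1, 1)           # per-pixel vessel score

    def forward(self, x):
        x_half = F.interpolate(x, scale_factor=0.5, mode="bilinear", align_corners=False)
        e1 = self.enc1(x)                                              # full-resolution features
        e2 = self.enc2(torch.cat([F.max_pool2d(e1, 2), x_half], 1))    # multi-scale input injection
        b = self.bottom(e2)
        up = F.interpolate(b, scale_factor=2, mode="bilinear", align_corners=False)
        d1 = self.dec1(torch.cat([up, e1], 1))                         # skip connection to encoder
        return torch.sigmoid(self.spatial_act(d1))                     # vessel probability map

if __name__ == "__main__":
    out = MRAUNetSketch()(torch.randn(1, 3, 64, 64))
    print(out.shape)  # torch.Size([1, 1, 64, 64])

Running the sketch on a 1 × 3 × 64 × 64 input yields a 1 × 1 × 64 × 64 vessel-probability map; the published MRA-UNet is deeper and its exact modules differ from these placeholders.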
Keywords: deep convolutional neural network; multi-scale; retinal vessel segmentation; attention mechanism; skip connection
MDPI and ACS Style

Jiang, Y.; Yao, H.; Wu, C.; Liu, W. A Multi-Scale Residual Attention Network for Retinal Vessel Segmentation. Symmetry 2021, 13, 24. https://doi.org/10.3390/sym13010024

AMA Style

Jiang Y, Yao H, Wu C, Liu W. A Multi-Scale Residual Attention Network for Retinal Vessel Segmentation. Symmetry. 2021; 13(1):24. https://doi.org/10.3390/sym13010024

Chicago/Turabian Style

Jiang, Yun, Huixia Yao, Chao Wu, and Wenhuan Liu. 2021. "A Multi-Scale Residual Attention Network for Retinal Vessel Segmentation" Symmetry 13, no. 1: 24. https://doi.org/10.3390/sym13010024

Note that from the first issue of 2016, MDPI journals use article numbers instead of page numbers.
