Search Results (73)

Search Parameters:
Keywords = fundus vessel segmentation

27 pages, 12221 KiB  
Article
Retinal Vessel Segmentation Based on a Lightweight U-Net and Reverse Attention
by Fernando Daniel Hernandez-Gutierrez, Eli Gabriel Avina-Bravo, Mario Alberto Ibarra-Manzano, Jose Ruiz-Pinales, Emmanuel Ovalle-Magallanes and Juan Gabriel Avina-Cervantes
Mathematics 2025, 13(13), 2203; https://doi.org/10.3390/math13132203 - 5 Jul 2025
Viewed by 904
Abstract
U-shaped architectures have achieved exceptional performance in medical image segmentation. They extract features through two symmetrical paths: an encoder and a decoder. We propose a lightweight U-Net incorporating reverse attention and a preprocessing framework for accurate retinal vessel segmentation. This concept could benefit portable or embedded recognition systems with limited resources for real-time operation. Compared to the baseline model (7.7 M parameters), the proposed U-Net model has only 1.9 M parameters and was tested on the DRIVE (Digital Retinal Images for Vesselness Extraction), CHASE (Child Heart and Health Study in England), and HRF (High-Resolution Fundus) datasets for vesselness analysis. The proposed model achieved Dice coefficients and IoU scores of 0.7871 and 0.6318 on the DRIVE dataset, 0.8036 and 0.6910 on the CHASE-DB1 Retinal Vessel Reference dataset, and 0.6902 and 0.5270 on the HRF dataset, respectively. Notably, the integration of the reverse attention mechanism contributed to a more accurate delineation of thin and peripheral vessels, which are often undetected by conventional models. The model comprises 1.94 million parameters and requires 12.21 GFLOPs. Furthermore, during inference, the model achieved an average frame rate of 208 FPS and a latency of 4.81 ms. These findings support the applicability of the proposed model in real-world clinical and mobile healthcare environments where efficiency and accuracy are essential.
(This article belongs to the Special Issue Advanced Research in Image Processing and Optimization Methods)
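The Dice coefficients and IoU scores quoted above are standard overlap metrics between a predicted mask and a ground-truth mask; a minimal NumPy sketch (generic, not the authors' evaluation code) is:

```python
import numpy as np

def dice_iou(pred, gt):
    """Dice coefficient and IoU between two binary segmentation masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    dice = 2.0 * inter / (pred.sum() + gt.sum())
    iou = inter / union
    return dice, iou

# Toy 2x2 masks: one overlapping pixel, two pixels set in each mask.
pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [1, 0]])
d, i = dice_iou(pred, gt)  # d = 2*1/(2+2) = 0.5, i = 1/3
```

Note that Dice is always at least as large as IoU for the same masks, which is consistent with the paired scores reported above.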

24 pages, 7389 KiB  
Article
A Novel Approach to Retinal Blood Vessel Segmentation Using Bi-LSTM-Based Networks
by Pere Marti-Puig, Kevin Mamaqi Kapllani and Bartomeu Ayala-Márquez
Mathematics 2025, 13(13), 2043; https://doi.org/10.3390/math13132043 - 20 Jun 2025
Viewed by 424
Abstract
The morphology of blood vessels in retinal fundus images is a key biomarker for diagnosing conditions such as glaucoma, hypertension, and diabetic retinopathy. This study introduces a deep learning-based method for automatic blood vessel segmentation, trained from scratch on 44 clinician-annotated images. The proposed architecture integrates Bidirectional Long Short-Term Memory (Bi-LSTM) layers with dropout to mitigate overfitting. A distinguishing feature of this approach is the column-wise processing, which improves feature extraction and segmentation accuracy. Additionally, a custom data augmentation technique tailored for retinal images is implemented to improve training performance. The results are presented in their raw form, without post-processing, to objectively assess the method’s effectiveness and limitations. Further refinements, including pre- and post-processing and the use of image rotations to combine multiple segmentation outputs, could significantly boost performance. Overall, this work offers a novel and effective approach to the still unresolved task of retinal vessel segmentation, contributing to more reliable automated analysis in ophthalmic diagnostics.
(This article belongs to the Special Issue Intelligent Computing with Applications in Computer Vision)
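The column-wise processing described here can be illustrated by treating each image column as one timestep of the sequence fed to a (Bi-)LSTM; the shapes below are illustrative assumptions, not the authors' exact pipeline.

```python
import numpy as np

def columns_as_sequence(img):
    """Turn an H x W image into a (timesteps, features) = (W, H) array,
    so a recurrent layer can scan the image column by column."""
    return np.ascontiguousarray(img.T)

img = np.arange(12).reshape(3, 4)   # H=3, W=4
seq = columns_as_sequence(img)      # 4 timesteps, each a 3-pixel column
```

A bidirectional LSTM would then process this sequence left-to-right and right-to-left, letting each column's prediction use context from both sides.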

17 pages, 2685 KiB  
Article
DAF-UNet: Deformable U-Net with Atrous-Convolution Feature Pyramid for Retinal Vessel Segmentation
by Yongchao Duan, Rui Yang, Ming Zhao, Mingrui Qi and Sheng-Lung Peng
Mathematics 2025, 13(9), 1454; https://doi.org/10.3390/math13091454 - 29 Apr 2025
Cited by 1 | Viewed by 547
Abstract
Segmentation of retinal vessels from fundus images is critical for diagnosing diseases such as diabetes and hypertension. However, the inherent challenges posed by the complex geometries of vessels and the highly imbalanced distribution of thick versus thin vessel pixels demand innovative solutions for robust feature extraction. In this paper, we introduce DAF-UNet, a novel architecture that integrates advanced modules to address these challenges. Specifically, our method leverages a pre-trained deformable convolution (DC) module within the encoder to dynamically adjust the sampling positions of the convolution kernel, thereby adapting the receptive field to capture irregular vessel morphologies more effectively than traditional convolutional approaches. At the network’s bottleneck, an enhanced atrous spatial pyramid pooling (ASPP) module is employed to extract and fuse rich, multi-scale contextual information, significantly improving the model’s capacity to delineate vessels of varying calibers. Furthermore, we propose a hybrid loss function that combines pixel-level and segment-level losses to robustly address the segmentation inconsistencies caused by the disparity in vessel thickness. Experimental evaluations on the DRIVE and CHASE_DB1 datasets demonstrated that DAF-UNet achieved a global accuracy of 0.9572/0.9632 and a Dice score of 0.8298/0.8227, respectively, outperforming state-of-the-art methods. These results underscore the efficacy of our approach in precisely capturing fine vascular details and complex boundaries, marking a significant advancement in retinal vessel segmentation.
(This article belongs to the Special Issue Mathematics Methods in Image Processing and Computer Vision)
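A hybrid loss of the kind described, combining a pixel-level term with a region-level term, can be sketched as a weighted sum of binary cross-entropy and a soft Dice term. The weighting `alpha` and the soft-Dice stand-in for the paper's segment-level term are assumptions, not the authors' exact formulation.

```python
import numpy as np

def hybrid_loss(prob, gt, alpha=0.5, eps=1e-7):
    """Weighted BCE (pixel-level) + soft-Dice (region-level) loss."""
    prob = np.clip(prob, eps, 1 - eps)
    bce = -np.mean(gt * np.log(prob) + (1 - gt) * np.log(1 - prob))
    inter = np.sum(prob * gt)
    soft_dice = (2 * inter + eps) / (np.sum(prob) + np.sum(gt) + eps)
    return alpha * bce + (1 - alpha) * (1 - soft_dice)

gt = np.array([1.0, 1.0, 0.0, 0.0])
good = hybrid_loss(np.array([0.9, 0.9, 0.1, 0.1]), gt)
bad = hybrid_loss(np.array([0.1, 0.1, 0.9, 0.9]), gt)
# good < bad: predictions closer to the ground truth incur a lower loss
```

The region-level term is what counteracts the thick/thin class imbalance: thin-vessel pixels are few, so a pure pixel-level loss under-weights them, while the Dice term scores overlap as a whole.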

15 pages, 11124 KiB  
Article
Intraoperative Augmented Reality for Vitreoretinal Surgery Using Edge Computing
by Run Zhou Ye and Raymond Iezzi
J. Pers. Med. 2025, 15(1), 20; https://doi.org/10.3390/jpm15010020 - 6 Jan 2025
Viewed by 1096
Abstract
Purpose: Augmented reality (AR) may allow vitreoretinal surgeons to leverage microscope-integrated digital imaging systems to analyze and highlight key retinal anatomic features in real time, possibly improving safety and precision during surgery. By employing convolutional neural networks (CNNs) for retinal vessel segmentation, a retinal coordinate system can be created that allows pre-operative images of capillary non-perfusion or retinal breaks to be digitally aligned and overlaid upon the surgical field in real time. Such technology may be useful in assuring thorough laser treatment of capillary non-perfusion or in using pre-operative optical coherence tomography (OCT) to guide macular surgery when microscope-integrated OCT (MIOCT) is not available. Methods: This study is a retrospective analysis involving the development and testing of a novel image-registration algorithm for vitreoretinal surgery. Fifteen anonymized cases of pars plana vitrectomy with epiretinal membrane peeling, along with corresponding preoperative fundus photographs and OCT images, were retrospectively collected from the Mayo Clinic database. We developed a TPU (Tensor Processing Unit)-accelerated CNN for semantic segmentation of retinal vessels from fundus photographs and subsequent real-time image registration in surgical video streams. An iterative patch-wise cross-correlation (IPCC) algorithm was developed for image registration, with a focus on optimizing processing speeds and maintaining high spatial accuracy. The primary outcomes measured were processing speed in frames per second (FPS) and the spatial accuracy of image registration, quantified by the Dice coefficient between registered and manually aligned images. Results: When deployed on an Edge TPU, the CNN model combined with our image-registration algorithm processed video streams at a rate of 14 FPS, which is superior to processing rates achieved on other standard hardware configurations. The IPCC algorithm efficiently aligned pre-operative and intraoperative images, showing high accuracy in comparison to manual registration. Conclusions: This study demonstrates the feasibility of using TPU-accelerated CNNs for enhanced AR in vitreoretinal surgery.
(This article belongs to the Section Methodology, Drug and Device Discovery)
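Patch-wise cross-correlation methods like the IPCC step described above rest on normalized cross-correlation (NCC) between patches; a generic sketch of the similarity measure (not the authors' implementation) is:

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two same-size patches:
    1.0 means identical up to brightness/contrast, -1.0 means inverted."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b)))

patch = np.array([[1.0, 2.0], [3.0, 4.0]])
same = ncc(patch, patch)       # close to 1.0
inverted = ncc(patch, -patch)  # close to -1.0
```

A registration pass would slide a pre-operative patch over candidate offsets in the intraoperative frame and keep the offset with the highest NCC, iterating patch by patch to refine the alignment.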

17 pages, 15128 KiB  
Article
Retinal Vessel Segmentation Based on Self-Attention Feature Selection
by Ligang Jiang, Wen Li, Zhiming Xiong, Guohui Yuan, Chongjun Huang, Wenhao Xu, Lu Zhou, Chao Qu, Zhuoran Wang and Yuhua Tong
Electronics 2024, 13(17), 3514; https://doi.org/10.3390/electronics13173514 - 4 Sep 2024
Cited by 3 | Viewed by 1776
Abstract
Many major diseases can cause changes in the morphology of blood vessels, and the segmentation of retinal blood vessels is of great significance for preventing these diseases. Obtaining complete, continuous, and high-resolution segmentation results is very challenging due to the diverse structures of retinal tissues, the complex spatial structures of blood vessels, and the presence of many small vessels. In recent years, deep learning networks like UNet have been widely used in medical image processing. However, the continuous down-sampling operations in UNet can result in the loss of a significant amount of information. Although skip connections between the encoder and decoder can help address this issue, the encoder features still contain a large amount of irrelevant information that cannot be efficiently utilized by the decoder. To suppress this irrelevant information, this paper proposes a feature selection module between the decoder and encoder that utilizes the self-attention mechanism of transformers to accurately and efficiently select the relevant encoder features for the decoder. Additionally, a lightweight Residual Global Context module is proposed to obtain dense global contextual information and establish dependencies between pixels, which can effectively preserve vascular details and segment small vessels accurately and continuously. Experimental results on three publicly available color fundus image datasets (DRIVE, CHASE, and STARE) demonstrate that the proposed algorithm outperforms existing methods in terms of both performance metrics and visual quality.
(This article belongs to the Section Bioelectronics)
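The transformer-style feature selection between encoder and decoder can be sketched as scaled dot-product attention in which decoder features act as queries over encoder features. The flattened token shapes and the omission of learned query/key/value projections are simplifications, not the paper's exact module.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def select_encoder_features(dec, enc):
    """dec: (Nd, C) decoder tokens; enc: (Ne, C) encoder tokens.
    Each decoder token gathers a softmax-weighted mix of encoder tokens,
    so relevant encoder features dominate and irrelevant ones are damped."""
    c = dec.shape[-1]
    attn = softmax(dec @ enc.T / np.sqrt(c))  # (Nd, Ne), rows sum to 1
    return attn @ enc                          # (Nd, C) selected features

dec = np.random.default_rng(0).normal(size=(4, 8))
enc = np.random.default_rng(1).normal(size=(16, 8))
out = select_encoder_features(dec, enc)        # shape (4, 8)
```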

9 pages, 4309 KiB  
Communication
Attention Mechanism-Based Glaucoma Classification Model Using Retinal Fundus Images
by You-Sang Cho, Ho-Jung Song, Ju-Hyuck Han and Yong-Suk Kim
Sensors 2024, 24(14), 4684; https://doi.org/10.3390/s24144684 - 19 Jul 2024
Cited by 5 | Viewed by 2128
Abstract
This paper presents a classification model for eye diseases utilizing attention mechanisms to learn features from fundus images and structures. The study focuses on diagnosing glaucoma by extracting retinal vessels and the optic disc from fundus images using a ResU-Net-based segmentation model and Hough Circle Transform, respectively. The extracted structures and preprocessed images were fed into a CNN-based multi-input model for training. Comparative evaluations demonstrated that our model outperformed other research models in classifying glaucoma, even with a smaller dataset. Ablation studies confirmed that using attention mechanisms to learn fundus structures significantly enhanced performance. The study also highlighted the challenges in normal case classification due to potential feature degradation during structure extraction. Future research will focus on incorporating additional fundus structures such as the macula, refining extraction algorithms, and expanding the types of classified eye diseases.

11 pages, 2001 KiB  
Case Report
High-Resolution Imaging in Macular Telangiectasia Type 2: Case Series and Literature Review
by Andrada Elena Mirescu, Florian Balta, Ramona Barac, Dan George Deleanu, Ioana Teodora Tofolean, George Balta, Razvan Cojanu and Sanda Jurja
Diagnostics 2024, 14(13), 1351; https://doi.org/10.3390/diagnostics14131351 - 25 Jun 2024
Cited by 1 | Viewed by 2413
Abstract
Background: Macular telangiectasia (MacTel), also known as idiopathic juxtafoveolar telangiectasis (IJFT), involves telangiectatic changes in the macular capillary network. The most common variant, MacTel type 2, has distinct clinical features and management strategies. Methods: This study offers a comprehensive review of MacTel and focuses on a series of three patients diagnosed with MacTel type 2 in our clinic. A meticulous ophthalmological evaluation, augmented by high-resolution imaging techniques like optical coherence tomography (OCT), OCT angiography (OCT-A), fundus autofluorescence (FAF), fluorescein angiography (FA), and adaptive optics (AO) imaging, was conducted. Results: The findings revealed normal anterior segment features and a grayish discoloration in the temporal perifoveal area on fundus examination. OCT exhibited hyporeflective cavities in the inner and outer neurosensory retina, along with other changes, while OCT-A identified retinal telangiectatic vessels in the deep capillary plexus. FAF demonstrated increased foveal autofluorescence, while FA initially detected telangiectatic capillaries followed by diffuse perilesional leakage in the later phase. Adaptive optics images showed the cone mosaic pattern. Notably, one patient developed a macular hole as a complication, which was successfully managed surgically. Conclusions: This study underscores the challenges in diagnosing and managing MacTel, emphasizing the importance of a multidisciplinary approach and regular follow-ups for optimal outcomes.
(This article belongs to the Special Issue Diagnostics for Ocular Diseases: Its Importance in Patient Care)

15 pages, 8511 KiB  
Article
Vessel Segmentation in Fundus Images with Multi-Scale Feature Extraction and Disentangled Representation
by Yuanhong Zhong, Ting Chen, Daidi Zhong and Xiaoming Liu
Appl. Sci. 2024, 14(12), 5039; https://doi.org/10.3390/app14125039 - 10 Jun 2024
Cited by 3 | Viewed by 1470
Abstract
Vessel segmentation in fundus images is crucial for diagnosing eye diseases. The rapid development of deep learning has greatly improved segmentation accuracy. However, the scale of the retinal blood-vessel structure varies greatly, and there is a lot of noise unrelated to blood-vessel segmentation in fundus images, which increases the complexity and difficulty of the segmentation algorithm. Comprehensive consideration of factors like scale variation and noise suppression is imperative to enhance segmentation accuracy and stability. Therefore, we propose a retinal vessel segmentation method based on multi-scale feature extraction and disentangled representation. Specifically, we design a multi-scale feature extraction module at the skip connections, utilizing dilated convolutions to capture multi-scale features and further emphasizing crucial information through channel attention modules. Additionally, to separate useful spatial information from redundant information and enhance segmentation performance, we introduce an image reconstruction branch to assist in the segmentation task. The specific approach involves using a disentangled representation method to decouple the image into content and style, utilizing the content part for segmentation tasks. We conducted experiments on the DRIVE, STARE, and CHASE_DB1 datasets, and the results showed that our method outperformed others, achieving the highest accuracy across all three datasets (DRIVE: 0.9690, CHASE_DB1: 0.9757, and STARE: 0.9765).

24 pages, 5112 KiB  
Article
Enhanced Vascular Bifurcations Mapping: Refining Fundus Image Registration
by Jesús Eduardo Ochoa-Astorga, Linni Wang, Weiwei Du and Yahui Peng
Electronics 2024, 13(9), 1736; https://doi.org/10.3390/electronics13091736 - 1 May 2024
Cited by 1 | Viewed by 1731
Abstract
Fundus image registration plays a crucial role in the clinical evaluation of ocular diseases, such as diabetic retinopathy and macular degeneration, necessitating meticulous monitoring. The alignment of multiple fundus images enables the longitudinal analysis of patient progression, widening the visual scope, or augmenting resolution for detailed examinations. Currently, prevalent methodologies rely on feature-based approaches for fundus registration. However, certain methods exhibit high feature point density, posing challenges in matching due to point similarity. This study introduces a novel fundus image registration technique that integrates U-Net for the extraction of feature points, prioritizing point distribution over abundance, and employs the Fundus Image Vessel Segmentation (FIVES) dataset, a new and large dataset for blood vessel segmentation, for training and evaluation. Subsequently, the method employs the medial axis transform and pattern detection to obtain feature points characterized by the Fast Retina Keypoint (FREAK) descriptor, facilitating matching for transformation matrix computation. The vessel segmentation achieves an Intersection over Union (IoU) of 0.7559, while evaluation on the Fundus Image Registration Dataset (FIRE) demonstrates performance comparable to existing methods, yielding a registration error of 0.596 for the area under the curve and refining similar earlier methods.

17 pages, 3449 KiB  
Article
Exploiting Cross-Scale Attention Transformer and Progressive Edge Refinement for Retinal Vessel Segmentation
by Yunyi Yuan, Yingkui Zhang, Lei Zhu, Li Cai and Yinling Qian
Mathematics 2024, 12(2), 264; https://doi.org/10.3390/math12020264 - 13 Jan 2024
Cited by 5 | Viewed by 1818
Abstract
Accurate retinal vessel segmentation is a crucial step in the clinical diagnosis and treatment of fundus diseases. Although many efforts have been presented to address the task, the segmentation performance in challenging regions (e.g., collateral vessels) is still not satisfactory, due to their thin morphology or the low contrast between foreground and background. In this work, we observe that an intrinsic appearance exists in the retinal image: among the dendritic vessels there are abundant similar structures, e.g., the main and collateral vessels are all curvilinear, but they have noticeable scale differences. Based on this observation, we propose a novel cross-scale attention transformer (CAT) to improve segmentation in challenging regions. Specifically, CAT consumes features with different scales to produce their shared attention matrix, and then fully integrates the beneficial information between them. Such an attention architecture can explore the multi-scale idea more efficiently, thus realizing mutual learning of similar structures. In addition, a progressive edge refinement module (ERM) is designed to refine the edges of foreground and background in the segmentation results. Through the idea of edge decoupling, ERM suppresses the background feature near the blood vessels while enhancing the foreground feature, so as to segment vessels accurately. We conduct extensive experiments and discussions on the DRIVE and CHASE_DB1 datasets to verify the proposed framework. Experimental results show that our method has great advantages in the Se metric, which are 0.88–7.26% and 0.81–7.11% higher than the state-of-the-art methods on DRIVE and CHASE_DB1, respectively. In addition, the proposed method also outperforms other methods with 0.17–2.06% in terms of the Dice metric on DRIVE.
(This article belongs to the Topic AI and Data-Driven Advancements in Industry 4.0)

11 pages, 4128 KiB  
Article
GAN-Based Approach for Diabetic Retinopathy Retinal Vasculature Segmentation
by Anila Sebastian, Omar Elharrouss, Somaya Al-Maadeed and Noor Almaadeed
Bioengineering 2024, 11(1), 4; https://doi.org/10.3390/bioengineering11010004 - 21 Dec 2023
Cited by 8 | Viewed by 2648
Abstract
Most diabetes patients develop a condition known as diabetic retinopathy after having diabetes for a prolonged period. This ailment can damage the blood vessels behind the retina and may even progress to vision loss. Hence, doctors advise diabetes patients to screen their retinas regularly. Examining the fundus is time-consuming, and few ophthalmologists are available to check the ever-increasing number of diabetes patients. To address this issue, several computer-aided automated systems are being developed with the help of many techniques like deep learning. Extracting the retinal vasculature is a significant step that aids in developing such systems. This paper presents a GAN-based model to perform retinal vasculature segmentation. The model achieves good results on the ARIA, DRIVE, and HRF datasets.
(This article belongs to the Special Issue Computer Vision and Machine Learning in Medical Applications)

21 pages, 6929 KiB  
Article
Arteriovenous Length Ratio: A Novel Method for Evaluating Retinal Vasculature Morphology and Its Diagnostic Potential in Eye-Related Diseases
by Sufian A. Badawi, Maen Takruri, Mohammad Al-Hattab, Ghaleb Aldoboni, Djamel Guessoum, Isam ElBadawi, Mohamed Aichouni, Imran Ali Chaudhry, Nasrullah Mahar and Ajay Kamath Nileshwar
J. Imaging 2023, 9(11), 253; https://doi.org/10.3390/jimaging9110253 - 20 Nov 2023
Cited by 1 | Viewed by 2681
Abstract
Retinal imaging is a non-invasive technique used to scan the back of the eye, enabling the extraction of potential biomarkers like the artery and vein ratio (AVR). This ratio is known for its association with various diseases, such as hypertensive retinopathy (HR) or diabetic retinopathy, and is crucial in assessing retinal health. HR refers to the morphological changes in retinal vessels caused by persistent high blood pressure. Timely identification of these alterations is crucial for preventing blindness and reducing the risk of stroke-related fatalities. The main objective of this paper is to propose a new method for assessing one of the morphological changes in the fundus through morphometric analysis of retinal images. The proposed method in this paper introduces a novel approach called the arteriovenous length ratio (AVLR), which has not been utilized in previous studies. Unlike commonly used measures such as the arteriovenous width ratio or tortuosity, AVLR focuses on assessing the relative length of arteries and veins in the retinal vasculature. The initial step involves segmenting the retinal blood vessels and distinguishing between arteries and veins; AVLR is calculated based on artery and vein caliber measurements for both eyes. Nine equations are used, and the length of both arteries and veins is measured in the region of interest (ROI) covering the optic disc for each eye. Using the AV-Classification dataset, the efficiency of the iterative AVLR assessment is evaluated. The results show that the proposed approach performs better than the existing methods. By introducing AVLR as a diagnostic feature, this paper contributes to advancing retinal imaging analysis. It provides a valuable tool for the timely diagnosis of HR and other eye-related conditions and represents a novel diagnostic-feature-based method that can be integrated to serve as a clinical decision support system.
(This article belongs to the Special Issue Advances in Retinal Image Processing)
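At its core, an arteriovenous length ratio compares total artery length to total vein length within the ROI; a minimal sketch of that ratio follows (the paper's nine per-eye equations are not reproduced, and the segment lengths below are hypothetical).

```python
def arteriovenous_length_ratio(artery_lengths, vein_lengths):
    """Ratio of total artery length to total vein length in the ROI.
    Inputs are per-segment lengths (e.g., in pixels) measured on a
    segmented, artery/vein-classified vascular tree."""
    return sum(artery_lengths) / sum(vein_lengths)

# Hypothetical segment lengths for one eye:
avlr = arteriovenous_length_ratio([120.0, 80.0, 40.0], [150.0, 90.0])
# 240 / 240 = 1.0
```

In practice the lengths would come from the skeletonized vessel tree restricted to the ROI around the optic disc, after artery/vein classification.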

18 pages, 6142 KiB  
Article
A High-Resolution Network with Strip Attention for Retinal Vessel Segmentation
by Zhipin Ye, Yingqian Liu, Teng Jing, Zhaoming He and Ling Zhou
Sensors 2023, 23(21), 8899; https://doi.org/10.3390/s23218899 - 1 Nov 2023
Cited by 3 | Viewed by 2002
Abstract
Accurate segmentation of retinal vessels is an essential prerequisite for the subsequent analysis of fundus images. Recently, a number of methods based on deep learning have been proposed and have demonstrated promising segmentation performance, especially U-Net and its variants. However, tiny vessels and low-contrast vessels are hard to detect due to the loss of spatial details caused by consecutive down-sampling operations and the inadequate fusion of multi-level features caused by vanilla skip connections. To address these issues and enhance the segmentation precision of retinal vessels, we propose a novel high-resolution network with strip attention. Instead of the U-Net-shaped architecture, the proposed network follows an HRNet-shaped architecture as the basic network, learning high-resolution representations throughout the training process. In addition, a strip attention module including a horizontal attention mechanism and a vertical attention mechanism is designed to obtain long-range dependencies in the horizontal and vertical directions by calculating the similarity between each pixel and all pixels in the same row and the same column, respectively. For effective multi-layer feature fusion, we incorporate the strip attention module into the basic network to dynamically guide adjacent hierarchical features. Experimental results on the DRIVE and STARE datasets show that the proposed method can extract more tiny vessels and low-contrast vessels compared with existing mainstream methods, achieving accuracies of 96.16% and 97.08% and sensitivities of 82.68% and 89.36%, respectively. The proposed method has the potential to aid in the analysis of fundus images.
(This article belongs to the Section Biomedical Sensors)
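The strip attention idea (similarity of each pixel to all pixels in its row, then in its column) can be sketched on a single-channel map; the real module operates on multi-channel features with learned projections, which this simplification omits.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def strip_attention(feat):
    """Horizontal then vertical strip attention on an H x W map."""
    def attend_rows(m):
        out = np.empty_like(m, dtype=float)
        for i, row in enumerate(m):
            sim = np.outer(row, row)          # pixel-pair similarity in the row
            out[i] = softmax(sim, axis=1) @ row
        return out
    h = attend_rows(feat)                      # horizontal attention
    return attend_rows(h.T).T                  # vertical attention via transpose

x = np.ones((3, 4))
y = strip_attention(x)   # a constant map is a fixed point of the mixing
```

Compared with full self-attention over all H*W pixels, attending only along rows and columns cuts the pairwise-similarity cost while still propagating information across the whole image in two passes.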

19 pages, 2531 KiB  
Article
BEAC-Net: Boundary-Enhanced Adaptive Context Network for Optic Disk and Optic Cup Segmentation
by Lincen Jiang, Xiaoyu Tang, Shuai You, Shangdong Liu and Yimu Ji
Appl. Sci. 2023, 13(18), 10244; https://doi.org/10.3390/app131810244 - 12 Sep 2023
Cited by 4 | Viewed by 1331
Abstract
Accurately segmenting the optic disk (OD) and optic cup (OC) on retinal fundus images is important for treating glaucoma. With the development of deep learning, some CNN-based methods have been implemented to segment the OD and OC, but it is difficult to accurately segment OD and OC boundaries affected by blood vessels and the lesion area. To this end, we propose a novel boundary-enhanced adaptive context network (BEAC-Net) for OD and OC segmentation. Firstly, a newly designed efficient boundary pixel attention (EBPA) module enhances pixel-by-pixel feature capture to collect the boundary contextual information of the OD and OC in the horizontal and vertical directions. Furthermore, background noise makes boundary pixels difficult to segment, so an adaptive context module (ACM) is designed that simultaneously learns local-range and long-range information to capture richer context. Finally, BEAC-Net adaptively integrates the feature maps from different levels using the attentional feature fusion (AFF) module. In addition, we provide a high-quality retinal fundus image dataset named the 66 Vision-Tech dataset, which advances the field of glaucoma diagnosis. Our proposed BEAC-Net was used to perform extensive experiments on the RIM-ONE-v3, DRISHTI-GS, and 66 Vision-Tech datasets. In particular, BEAC-Net achieved a Dice coefficient of 0.8267 and an IoU of 0.8138 for OD segmentation and a Dice coefficient of 0.8057 and an IoU of 0.7858 for OC segmentation on the 66 Vision-Tech dataset, achieving state-of-the-art segmentation results.
(This article belongs to the Special Issue Big Data Analysis and Management Based on Deep Learning)

22 pages, 8541 KiB  
Article
A Straightforward Bifurcation Pattern-Based Fundus Image Registration Method
by Jesús Eduardo Ochoa-Astorga, Linni Wang, Weiwei Du and Yahui Peng
Sensors 2023, 23(18), 7809; https://doi.org/10.3390/s23187809 - 11 Sep 2023
Cited by 2 | Viewed by 2093
Abstract
Fundus image registration is crucial in eye disease examination, as it enables the alignment of overlapping fundus images, facilitating a comprehensive assessment of conditions like diabetic retinopathy, where a single image’s limited field of view might be insufficient. By combining multiple images, the field of view for retinal analysis is extended, and resolution is enhanced through super-resolution imaging. Moreover, this method facilitates patient follow-up through longitudinal studies. This paper proposes a straightforward method for fundus image registration based on bifurcations, which serve as prominent landmarks. The approach aims to establish a baseline for fundus image registration using these landmarks as feature points, addressing the current challenge of validation in this field. The proposed approach uses a robust vascular tree segmentation method to detect feature points within a specified range: coarse vessel segmentation to analyze patterns in the skeleton of the segmentation foreground, followed by feature description based on a histogram of oriented gradients and determination of the image relation through a transformation matrix. Image blending produces a seamless registered image. Evaluation on the FIRE dataset, using registration error as the key parameter for accuracy, demonstrates the method’s effectiveness. The results show the superior performance of the proposed method compared to other techniques using vessel-based feature extraction or partially based on SURF, achieving an area under the curve of 0.526 for the entire FIRE dataset.
(This article belongs to the Section Sensing and Imaging)
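The transformation-matrix computation from matched bifurcation points can be illustrated with a least-squares affine fit; this generic sketch stands in for the paper's actual matrix estimation, and the point pairs are synthetic.

```python
import numpy as np

def estimate_affine(src, dst):
    """Least-squares 3x2 affine matrix M such that [x y 1] @ M ~= dst,
    given matched landmark coordinates src and dst of shape (N, 2)."""
    ones = np.ones((src.shape[0], 1))
    A = np.hstack([src, ones])              # homogeneous source points
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return M

# Synthetic matched bifurcations related by a pure translation (+2, +3).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
dst = src + np.array([2.0, 3.0])
M = estimate_affine(src, dst)
mapped = np.hstack([src, np.ones((4, 1))]) @ M   # recovers dst
```

With the matrix estimated, one image is warped into the other's frame and the overlap is blended to produce the registered mosaic.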
