Search Results (157)

Search Parameters:
Keywords = Retinal vessel segmentation

27 pages, 23751 KB  
Article
A Mathematical Framework for Retinal Vessel Segmentation: Fractional Hessian-Based Curvature Analysis
by Priyanka Harjule, Mukesh Delu, Rajesh Kumar and Pilani Nkomozepi
Fractal Fract. 2026, 10(4), 246; https://doi.org/10.3390/fractalfract10040246 - 8 Apr 2026
Viewed by 198
Abstract
This study proposes an improved retinal blood vessel segmentation method to enhance the diagnosis of microvascular retinal complications. The proposed method extracts local shape features from retinal images utilizing a fractional Hessian matrix, which models blood vessels as surface structures characterized by ridges and valleys resulting from variations in curvature. The methodology integrates adaptive principal curvature estimation with a new framework leveraging the fractional Hessian matrix with nonsingular and nonlocal kernels. The effectiveness of the suggested method is assessed using publicly accessible datasets, including DRIVE, HRF, STARE, and some real images obtained from a local hospital. The proposed segmentation achieves 96.77% accuracy and 98.82% specificity on the DRIVE database, 96.91% accuracy and 98.69% specificity on STARE, and 95.90% accuracy and 98.36% specificity on the HRF database. Optimal parameters for the fractional order and Gaussian standard deviation were empirically determined by maximizing segmentation accuracy. Our findings show that the proposed approach achieves competitive performance compared to the listed methods, including several deep learning approaches, while maintaining significant computational efficiency. The output of the suggested method can be further utilized with deep learning techniques, which will be applied in the clinical context of diabetic retinopathy and glaucoma to identify abnormalities likely related to disease progression and different stages.
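The abstract does not spell out the fractional Hessian formulation, but the underlying idea — classifying each pixel by the eigenvalues of a Gaussian-derivative Hessian, so that tubular ridges stand out — can be sketched with a classic integer-order, Frangi-style vesselness filter. A minimal sketch; the parameter values (`beta`, `c`) are illustrative and this is not the paper's fractional method:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def hessian_vesselness(img, sigma=2.0, beta=0.5, c=0.05):
    """Integer-order Hessian ridge filter (Frangi-style sketch)."""
    # Second-order Gaussian derivatives give the Hessian entries.
    Hxx = gaussian_filter(img, sigma, order=(0, 2))
    Hyy = gaussian_filter(img, sigma, order=(2, 0))
    Hxy = gaussian_filter(img, sigma, order=(1, 1))
    # Eigenvalues of the 2x2 symmetric Hessian at every pixel.
    tmp = np.sqrt((Hxx - Hyy) ** 2 + 4.0 * Hxy ** 2)
    l1 = 0.5 * (Hxx + Hyy + tmp)
    l2 = 0.5 * (Hxx + Hyy - tmp)
    # Sort so |l1| <= |l2|; l2 is the curvature across the vessel.
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    rb2 = (l1 / (l2 + 1e-10)) ** 2            # blob-vs-ridge ratio
    s2 = l1 ** 2 + l2 ** 2                    # second-order structure
    v = np.exp(-rb2 / (2 * beta ** 2)) * (1.0 - np.exp(-s2 / (2 * c ** 2)))
    return np.where(l2 < 0, v, 0.0)           # keep bright ridges only
```

In the paper's framework the Gaussian derivatives would be replaced by fractional-order kernels, with the fractional order and standard deviation tuned as described above.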

11 pages, 1038 KB  
Data Descriptor
Refined IDRiD: An Enhanced Dataset for Diabetic Retinopathy Segmentation with Expert-Validated Annotations and Comprehensive Anatomical Context
by Sakon Chankhachon, Supaporn Kansomkeat, Patama Bhurayanontachai and Sathit Intajag
Data 2026, 11(2), 30; https://doi.org/10.3390/data11020030 - 1 Feb 2026
Viewed by 946
Abstract
The Indian Diabetic Retinopathy Image Dataset (IDRiD) has been widely adopted for DR lesion segmentation research. However, it contains annotation gaps for proliferative DR lesions and labeling errors that limit its utility for comprehensive automated screening systems. We present Refined IDRiD, an enhanced version that addresses these limitations through (1) expert ophthalmologist validation and correction of labeling errors in original annotations for four non-proliferative lesions (microaneurysms, hemorrhages, hard exudates, cotton-wool spots), (2) the addition of three critical proliferative DR lesion annotations (neovascularization, vitreous hemorrhage, intraretinal microvascular abnormalities), and (3) the integration of comprehensive anatomical context (optic disc, fovea, blood vessels, retinal region). A team of three ophthalmologists (one senior specialist with >10 years’ experience, two expert fundus image annotators) conducted systematic annotation refinement, achieving an inter-rater agreement F1-score of 0.9012. The enhanced dataset comprises 81 high-resolution fundus images with pixel-level annotations for seven DR lesion types and four anatomical structures. All images were cropped to the retinal region of interest and resized to 1024 × 1024 pixels, with annotations stored as unified grayscale masks containing 12 classes enabling efficient multi-task learning. Refined IDRiD enables training of comprehensive DR screening systems capable of detecting both non-proliferative and proliferative stages while reducing false positives through anatomical context awareness.
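A unified grayscale mask with one class id per pixel, as described above, expands naturally into per-class binary masks for multi-task training. A minimal sketch, assuming class ids run 0–11 (the actual id assignment is not given in the abstract):

```python
import numpy as np

def split_unified_mask(mask, n_classes=12):
    """Expand a unified grayscale mask (pixel value = class id) into a
    stack of per-class binary masks, shape (n_classes, H, W)."""
    return np.stack([(mask == c).astype(np.uint8) for c in range(n_classes)])
```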

4 pages, 691 KB  
Interesting Images
Pigmentary Retinopathy in Alagille Syndrome: Fundus Findings in a Two-Year-Old Boy
by Bogumiła Wójcik-Niklewska, Zofia Oliwa, Karina Dzięcioł and Adrian Smędowski
Diagnostics 2026, 16(2), 241; https://doi.org/10.3390/diagnostics16020241 - 12 Jan 2026
Viewed by 370
Abstract
Alagille syndrome (ALGS) is a rare autosomal dominant multisystem disorder characterized by bile duct paucity, congenital heart defects, characteristic facial features, skeletal anomalies, and distinctive ocular findings. Although anterior segment anomalies such as posterior embryotoxon are well recognized, posterior segment involvement has recently gained attention. We present fundus findings in a 2-year-old boy with genetically confirmed Alagille syndrome. Under general anesthesia, fundus examination revealed pink optic discs with blurred margins and drusen-like deposits, absence of the foveal reflex, and mottled hypopigmented and hyperpigmented areas that were consistent with retinal pigment epithelium (RPE) degeneration. Peripheral pigment clumping and RPE atrophy were also observed, while retinal vessels appeared normal. These features are characteristic of pigmentary retinopathy associated with ALGS and highlight the expanding spectrum of posterior segment changes in this condition. Recognition of such findings is essential, as they may contribute to visual impairment and support the systemic diagnosis.
(This article belongs to the Section Medical Imaging and Theranostics)

10 pages, 1718 KB  
Proceeding Paper
Explainability of Diabetic Retinopathy Detection and Classification with Deep Learning Hybrid Architecture: AlterNet-K and ResNet-101
by Lavkush Gupta, Richa Gupta, Parul Agarwal and Suraiya Praveen
Chem. Proc. 2025, 18(1), 141; https://doi.org/10.3390/ecsoc-29-26888 - 13 Nov 2025
Viewed by 545
Abstract
Diabetic retinopathy (DR) is an eye disease and a threatening cause of irreversible blindness that remains challenging to detect and diagnose in time. Many invasive ophthalmic procedures exist for diagnosing the eye, but all require highly skilled medical practitioners with operational knowledge of sensitive organs such as the retina and its tiny vessels. Owing to the shortage of retinal specialists, the sensitivity of ocular tissue, and the complexity of retinal therapy, invasive procedures are time-consuming, costly, and slow to progress. Fundus images capture visual information from the rear part of the retina. As lesions progress across the surface of retinal tissue, electrical signals fail to reach the visual cortex, causing the blurred vision or vision loss experienced by patients. Older methods of diagnosing DR lesions and symptoms from retinal fundus images are slow, delaying treatment and reducing the chance of success. Early diagnosis from fundus or retinal images can therefore save effort and time for both doctors and patients. Artificial intelligence (AI) techniques can learn the tissue structures of ocular anatomy and analyze disease from retinal fundus images. The process consists of first applying image preprocessing, followed by segmentation and filtering, and then classifying the disease with an AI-based model. The proposed model is trained on a dataset of DR images, and an Explainable AI (XAI) algorithm is then used to decide whether the model's diagnosis is correctly classified. The rapid growth and improving results of machine learning and deep learning algorithms motivate their adoption to enhance early diagnosis and treatment.

32 pages, 57072 KB  
Article
Deep Learning Network with Illuminant Augmentation for Diabetic Retinopathy Segmentation Using Comprehensive Anatomical Context Integration
by Sakon Chankhachon, Supaporn Kansomkeat, Patama Bhurayanontachai and Sathit Intajag
Diagnostics 2025, 15(21), 2762; https://doi.org/10.3390/diagnostics15212762 - 31 Oct 2025
Cited by 1 | Viewed by 1406
Abstract
Background/Objectives: Diabetic retinopathy (DR) segmentation faces critical challenges from domain shift and false positives caused by heterogeneous retinal backgrounds. Recent transformer-based studies have shown that existing approaches do not comprehensively integrate the anatomical context, particularly training datasets combining blood vessels with DR lesions. Methods: These limitations were addressed by deploying a DeepLabV3+ framework enhanced with more comprehensive anatomical contexts, rather than more complex architectures. The approach produced the first training dataset that systematically integrates DR lesions with complete retinal anatomical structures (optic disc, fovea, blood vessels, retinal boundaries) as contextual background classes. An innovative illumination-based data augmentation simulated diverse camera characteristics using color constancy principles. Two-stage training (cross-entropy and Tversky loss) managed class imbalance effectively. Results: An extensive evaluation of the IDRiD, DDR, and TJDR datasets demonstrated significant improvements. The model achieved competitive performances (AUC-PR: 0.7715, IoU: 0.6651, F1: 0.7930) compared with state-of-the-art methods, including transformer approaches, while showing promising generalization on some unseen datasets, though performance varied across different domains. False-positive returns were reduced through anatomical context awareness. Conclusions: The framework demonstrates that comprehensive anatomical context integration is more critical than architectural complexity for DR segmentation. By combining systematic anatomical annotation with effective data augmentation, conventional network performances can be improved while maintaining computational efficiency and clinical interpretability, establishing a new paradigm for medical image segmentation.
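The Tversky loss used in the second training stage is a standard generalization of the Dice loss that weights false positives and false negatives asymmetrically. A numpy sketch of the standard formulation; the `alpha`/`beta` defaults below are illustrative, not the paper's settings:

```python
import numpy as np

def tversky_loss(pred, target, alpha=0.3, beta=0.7, eps=1e-7):
    """1 - Tversky index. alpha weighs false positives, beta false
    negatives; alpha = beta = 0.5 recovers the Dice loss."""
    pred = pred.ravel().astype(float)      # predicted probabilities
    target = target.ravel().astype(float)  # binary ground truth
    tp = np.sum(pred * target)
    fp = np.sum(pred * (1.0 - target))
    fn = np.sum((1.0 - pred) * target)
    return 1.0 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)
```

Setting `beta > alpha` penalizes missed lesion pixels more heavily, which is the usual motivation for Tversky loss under heavy class imbalance.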

17 pages, 3666 KB  
Article
Efficient Retinal Vessel Segmentation with 78K Parameters
by Zhigao Zeng, Jiakai Liu, Xianming Huang, Kaixi Luo, Xinpan Yuan and Yanhui Zhu
J. Imaging 2025, 11(9), 306; https://doi.org/10.3390/jimaging11090306 - 8 Sep 2025
Cited by 4 | Viewed by 1594
Abstract
Retinal vessel segmentation is critical for early diagnosis of diabetic retinopathy, yet existing deep models often compromise accuracy for complexity. We propose DSAE-Net, a lightweight dual-stage network that addresses this challenge by (1) introducing a Parameterized Cascaded W-shaped Architecture enabling progressive feature refinement with only 1% of the parameters of a standard U-Net; (2) designing a novel Skeleton Distance Loss (SDL) that overcomes boundary loss limitations by leveraging vessel skeletons to handle severe class imbalance; (3) developing a Cross-modal Fusion Attention (CMFA) module combining group convolutions and dynamic weighting to effectively expand receptive fields; and (4) proposing Coordinate Attention Gates (CAGs) to optimize skip connections via directional feature reweighting. Evaluated extensively on DRIVE, CHASE_DB1, HRF, and STARE datasets, DSAE-Net significantly reduces computational complexity while outperforming state-of-the-art lightweight models in segmentation accuracy. Its efficiency and robustness make DSAE-Net particularly suitable for real-time diagnostics in resource-constrained clinical settings.
(This article belongs to the Section Image and Video Processing)

27 pages, 5802 KB  
Article
Semi-Supervised Retinal Vessel Segmentation Based on Pseudo Label Filtering
by Zheng Lu, Jiaguang Li, Zhenyu Liu, Qian Cao, Tao Tian, Xianchao Wang and Zanjie Huang
Symmetry 2025, 17(9), 1462; https://doi.org/10.3390/sym17091462 - 5 Sep 2025
Cited by 2 | Viewed by 1515
Abstract
Retinal vessel segmentation is crucial for analyzing medical images, where symmetry in vascular structures plays a fundamental role in diagnostic accuracy. In recent years, the rapid advancements in deep learning have provided robust tools for predicting detailed images. However, within many scenarios of medical image analysis, the task of data annotation remains costly and challenging to acquire. By leveraging symmetry-aware semi-supervised learning frameworks, our approach requires only a small portion of annotated data to achieve remarkable segmentation outcomes, significantly diminishing the costs associated with data labeling. At present, most semi-supervised approaches rely on pseudo-label update strategies. Nonetheless, while these methods generate high-quality pseudo-label images, they inevitably contain minor prediction errors in a few pixels, which can accumulate during iterative training, ultimately impacting learner performance. To address these challenges, we propose an enhanced semi-supervised vessel semantic segmentation approach that employs a symmetry-preserving pixel-level filtering strategy. This method retains highly reliable pixels in pseudo labels while eliminating those with low reliability, ensuring spatial symmetry coherence without altering the intrinsic spatial information of the images. The filtering strategy integrates various techniques, including probability-based filtering, edge detection, image filtering, mathematical morphology methods, and adaptive thresholding strategies. Each technique plays a unique role in refining the pseudo labels. Extensive experimental results demonstrate the superiority of our proposed method, showing that each filtering strategy contributes to enhancing learner performance through symmetry-constrained optimization.
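Of the filtering techniques listed, the probability-based one is the simplest to illustrate: keep only the pixels the model is confident about and mark the rest so they are excluded from the loss during retraining. A minimal sketch; the thresholds and the ignore value are assumptions, not the paper's settings:

```python
import numpy as np

def filter_pseudo_labels(probs, hi=0.9, lo=0.1, ignore=255):
    """Pixel-level probability filtering of pseudo labels.
    probs: per-pixel foreground (vessel) probability map.
    Confident pixels keep a hard 0/1 pseudo label; ambiguous pixels
    receive the `ignore` value so the loss can skip them."""
    labels = np.full(probs.shape, ignore, dtype=np.uint8)
    labels[probs >= hi] = 1   # reliable vessel pixels
    labels[probs <= lo] = 0   # reliable background pixels
    return labels
```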
(This article belongs to the Section Computer)

23 pages, 1804 KB  
Article
Automatic Algorithm-Aided Segmentation of Retinal Nerve Fibers Using Fundus Photographs
by Diego Luján Villarreal
J. Imaging 2025, 11(9), 294; https://doi.org/10.3390/jimaging11090294 - 28 Aug 2025
Cited by 1 | Viewed by 1469
Abstract
This work presents an image processing algorithm for the segmentation of the personalized mapping of retinal nerve fiber layer (RNFL) bundle trajectories in the human retina. To segment RNFL bundles, preprocessing steps were used for noise reduction and illumination correction. Blood vessels were removed. The image was fed to a maximum–minimum modulation algorithm to isolate retinal nerve fiber (RNF) segments. A modified Garway-Heath map categorizes RNF orientation, assuming designated sets of orientation angles for aligning RNFs direction. Bezier curves fit RNFs from the center of the optic disk (OD) to their corresponding end. Fundus images from five different databases (n = 300) were tested, with 277 healthy normal subjects and 33 classified as diabetic without any sign of diabetic retinopathy. The algorithm successfully traced fiber trajectories per fundus across all regions identified by the Garway-Heath map. The resulting trace images were compared to the Jansonius map, reaching an average efficiency of 97.44% and working well with those of low resolution. The average mean difference in orientation angles of the included images was 11.01 ± 1.25 and the average RMSE was 13.82 ± 1.55. A 24-2 visual field (VF) grid pattern was overlaid onto the fundus to relate the VF test points to the intersection of RNFL bundles and their entry angles into the OD. The mean standard deviation (95% limit) obtained 13.5° (median 14.01°), ranging from less than 1° to 28.4° for 50 out of 52 VF locations. The influence of optic parameters was explored using multiple linear regression. Average angle trajectories in the papillomacular region were significantly influenced (p < 0.00001) by the latitudinal optic disk position and disk–fovea angle. Given the basic biometric ground truth data (only fovea and OD centers) that is publicly accessible, the algorithm can be customized to individual eyes and distinguish fibers with accuracy by considering unique anatomical features.
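The Bezier-fitting step can be illustrated with the basic quadratic Bezier primitive. A generic sketch, not the paper's fitting pipeline: `p0` would be the optic-disk center, `p2` the fiber end, and the control point `p1` (here a free argument) would come from the Garway-Heath orientation model:

```python
import numpy as np

def bezier_curve(p0, p1, p2, n=50):
    """Sample n points on the quadratic Bezier curve
    B(t) = (1-t)^2 p0 + 2(1-t)t p1 + t^2 p2, t in [0, 1]."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    return (1 - t) ** 2 * p0 + 2 * (1 - t) * t * p1 + t ** 2 * p2
```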
(This article belongs to the Special Issue Progress and Challenges in Biomedical Image Analysis—2nd Edition)

17 pages, 2559 KB  
Systematic Review
Optical Coherence Tomography Angiography (OCTA) Characteristics of Acute Retinal Arterial Occlusion: A Systematic Review
by Saud Aljohani
Healthcare 2025, 13(16), 2056; https://doi.org/10.3390/healthcare13162056 - 20 Aug 2025
Cited by 1 | Viewed by 3312
Abstract
Purpose: To systematically review the evidence regarding the characteristics of Optical Coherence Tomography Angiography (OCTA) in acute retinal arterial occlusion (RAO), with a particular focus on vascular alterations across the superficial and deep capillary plexuses, choroid, and peripapillary regions. Methods: A comprehensive literature search was performed across PubMed, Web of Science, Scopus, EMBASE, Google Scholar, and the Cochrane Database up to April 2025. The search terms included “Optical coherence tomography angiography,” “OCTA,” “Retinal arterial occlusion,” “Central retinal artery occlusion,” and “Branch retinal artery occlusion.” Studies were included if they evaluated the role of OCTA in diagnosing or assessing acute RAO. Case reports, conference abstracts, and non-English articles were excluded. Two reviewers independently conducted the study selection and data extraction. The methodological quality of the included studies was assessed using the Risk of Bias in Non-randomized Studies of Interventions (ROBINS-I) tool. Results: The initial search yielded 457 articles, from which 10 studies were ultimately included in the final analysis after a rigorous screening process excluding duplicates, non-English publications, and ineligible articles based on title, abstract, or full-text review. The included studies consistently demonstrated that OCTA is a valuable, noninvasive modality for evaluating microvascular changes in RAO. Key OCTA findings in acute RAO include significant perfusion deficits and reduced vessel density in both the superficial capillary plexus (SCP) and deep capillary plexus (DCP). Several studies noted more pronounced involvement of the SCP compared to the DCP. OCTA parameters, such as vessel density in the macular region, have been found to correlate with visual acuity, suggesting a prognostic value. While findings regarding the foveal avascular zone (FAZ) were mixed, the peripapillary area frequently showed reduced vessel density.
Conclusion: Acute RAO is an ocular emergency that causes microvascular ischemic changes detectable by OCTA. This review establishes OCTA as a significant noninvasive tool for diagnosing, monitoring, and prognosticating RAO. It effectively visualizes perfusion deficits that correlate with clinical outcomes. However, limitations such as susceptibility to motion artifacts, segmentation errors, and the lack of standardized normative data must be considered. Future standardization of OCTA protocols and analysis is essential to enhance its clinical application in managing RAO.

24 pages, 3961 KB  
Article
Hierarchical Multi-Scale Mamba with Tubular Structure-Aware Convolution for Retinal Vessel Segmentation
by Tao Wang, Dongyuan Tian, Haonan Zhao, Jiamin Liu, Weijie Wang, Chunpei Li and Guixia Liu
Entropy 2025, 27(8), 862; https://doi.org/10.3390/e27080862 - 14 Aug 2025
Viewed by 2017
Abstract
Retinal vessel segmentation plays a crucial role in diagnosing various retinal and cardiovascular diseases and serves as a foundation for computer-aided diagnostic systems. Blood vessels in color retinal fundus images, captured using fundus cameras, are often affected by illumination variations and noise, making it difficult to preserve vascular integrity and posing a significant challenge for vessel segmentation. In this paper, we propose HM-Mamba, a novel hierarchical multi-scale Mamba-based architecture that incorporates tubular structure-aware convolution to extract both local and global vascular features for retinal vessel segmentation. First, we introduce a tubular structure-aware convolution to reinforce vessel continuity and integrity. Building on this, we design a multi-scale fusion module that aggregates features across varying receptive fields, enhancing the model’s robustness in representing both primary trunks and fine branches. Second, we integrate multi-branch Fourier transform with the dynamic state modeling capability of Mamba to capture both long-range dependencies and multi-frequency information. This design enables robust feature representation and adaptive fusion, thereby enhancing the network’s ability to model complex spatial patterns. Furthermore, we propose a hierarchical multi-scale interactive Mamba block that integrates multi-level encoder features through gated Mamba-based global context modeling and residual connections, enabling effective multi-scale semantic fusion and reducing detail loss during downsampling. Extensive evaluations on five widely used benchmark datasets—DRIVE, CHASE_DB1, STARE, IOSTAR, and LES-AV—demonstrate the superior performance of HM-Mamba, yielding Dice coefficients of 0.8327, 0.8197, 0.8239, 0.8307, and 0.8426, respectively.
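The Dice coefficients reported here (and by several of the other entries in this listing) are the standard overlap metric between a predicted and a ground-truth mask. The standard computation, not code from the paper:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice similarity between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool).ravel()
    target = target.astype(bool).ravel()
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```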

24 pages, 1471 KB  
Article
WDM-UNet: A Wavelet-Deformable Gated Fusion Network for Multi-Scale Retinal Vessel Segmentation
by Xinlong Li and Hang Zhou
Sensors 2025, 25(15), 4840; https://doi.org/10.3390/s25154840 - 6 Aug 2025
Cited by 2 | Viewed by 1569
Abstract
Retinal vessel segmentation in fundus images is critical for diagnosing microvascular and ophthalmologic diseases. However, the task remains challenging due to significant vessel width variation and low vessel-to-background contrast. To address these limitations, we propose WDM-UNet, a novel spatial-wavelet dual-domain fusion architecture that integrates spatial and wavelet-domain representations to simultaneously enhance the local detail and global context. The encoder combines a Deformable Convolution Encoder (DCE), which adaptively models complex vascular structures through dynamic receptive fields, and a Wavelet Convolution Encoder (WCE), which captures the semantic and structural contexts through low-frequency components and hierarchical wavelet convolution. These features are further refined by a Gated Fusion Transformer (GFT), which employs gated attention to enhance multi-scale feature integration. In the decoder, depthwise separable convolutions are used to reduce the computational overhead without compromising the representational capacity. To preserve fine structural details and facilitate contextual information flow across layers, the model incorporates skip connections with a hierarchical fusion strategy, enabling the effective integration of shallow and deep features. We evaluated WDM-UNet in three public datasets: DRIVE, STARE, and CHASE_DB1. The quantitative evaluations demonstrate that WDM-UNet consistently outperforms state-of-the-art methods, achieving 96.92% accuracy, 83.61% sensitivity, and an 82.87% F1-score in the DRIVE dataset, with superior performance across all the benchmark datasets in both segmentation accuracy and robustness, particularly in complex vascular scenarios.
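The low-frequency components a wavelet encoder consumes can be illustrated with a single level of 2-D Haar decomposition, whose LL band is a half-resolution coarse view of the image. A minimal numpy sketch of that generic primitive, not the paper's wavelet convolution:

```python
import numpy as np

def haar_lowpass(img):
    """One level of 2-D Haar decomposition; returns the LL band.
    Each output pixel sums a 2x2 block scaled by 1/2, matching the
    orthonormal Haar convention."""
    h, w = img.shape
    img = img[: h - h % 2, : w - w % 2].astype(float)  # even dimensions
    return (img[0::2, 0::2] + img[0::2, 1::2]
            + img[1::2, 0::2] + img[1::2, 1::2]) / 2.0
```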
(This article belongs to the Section Sensing and Imaging)

25 pages, 4450 KB  
Article
Analyzing Retinal Vessel Morphology in MS Using Interpretable AI on Deep Learning-Segmented IR-SLO Images
by Asieh Soltanipour, Roya Arian, Ali Aghababaei, Fereshteh Ashtari, Yukun Zhou, Pearse A. Keane and Raheleh Kafieh
Bioengineering 2025, 12(8), 847; https://doi.org/10.3390/bioengineering12080847 - 6 Aug 2025
Viewed by 1642
Abstract
Multiple sclerosis (MS), a chronic disease of the central nervous system, is known to cause structural and vascular changes in the retina. Although optical coherence tomography (OCT) and fundus photography can detect retinal thinning and circulatory abnormalities, these findings are not specific to MS. This study explores the potential of Infrared Scanning-Laser-Ophthalmoscopy (IR-SLO) imaging to uncover vascular morphological features that may serve as MS-specific biomarkers. Using an age-matched, subject-wise stratified k-fold cross-validation approach, a deep learning model originally designed for color fundus images was adapted to segment optic disc, optic cup, and retinal vessels in IR-SLO images, achieving Dice coefficients of 91%, 94.5%, and 97%, respectively. This process included tailored pre- and post-processing steps to optimize segmentation accuracy. Subsequently, clinically relevant features were extracted. Statistical analyses followed by SHapley Additive exPlanations (SHAP) identified vessel fractal dimension, vessel density in zones B and C (circular regions extending 0.5–1 and 0.5–2 optic disc diameters from the optic disc margin, respectively), along with vessel intensity and width, as key differentiators between MS patients and healthy controls. These findings suggest that IR-SLO can non-invasively detect retinal vascular biomarkers that may serve as additional or alternative diagnostic markers for MS diagnosis, complementing current invasive procedures.
(This article belongs to the Special Issue AI in OCT (Optical Coherence Tomography) Image Analysis)

27 pages, 12221 KB  
Article
Retinal Vessel Segmentation Based on a Lightweight U-Net and Reverse Attention
by Fernando Daniel Hernandez-Gutierrez, Eli Gabriel Avina-Bravo, Mario Alberto Ibarra-Manzano, Jose Ruiz-Pinales, Emmanuel Ovalle-Magallanes and Juan Gabriel Avina-Cervantes
Mathematics 2025, 13(13), 2203; https://doi.org/10.3390/math13132203 - 5 Jul 2025
Cited by 5 | Viewed by 5717
Abstract
U-shaped architectures have achieved exceptional performance in medical image segmentation. Their aim is to extract features by two symmetrical paths: an encoder and a decoder. We propose a lightweight U-Net incorporating reverse attention and a preprocessing framework for accurate retinal vessel segmentation. This concept could be of benefit to portable or embedded recognition systems with limited resources for real-time operation. Compared to the baseline model (7.7 M parameters), the proposed U-Net model has only 1.9 M parameters and was tested on the DRIVE (Digital Retinal Images for Vesselness Extraction), CHASE (Child Heart and Health Study in England), and HRF (High-Resolution Fundus) datasets for vesselness analysis. The proposed model achieved Dice coefficients and IoU scores of 0.7871 and 0.6318 on the DRIVE dataset, 0.8036 and 0.6910 on the CHASE-DB1 Retinal Vessel Reference dataset, as well as 0.6902 and 0.5270 on the HRF dataset, respectively. Notably, the integration of the reverse attention mechanism contributed to a more accurate delineation of thin and peripheral vessels, which are often undetected by conventional models. The model comprised 1.94 million parameters and 12.21 GFLOPs. Furthermore, during inference, the model achieved a frame rate average of 208 FPS and a latency of 4.81 ms. These findings support the applicability of the proposed model in real-world clinical and mobile healthcare environments where efficiency and accuracy are essential.
(This article belongs to the Special Issue Advanced Research in Image Processing and Optimization Methods)
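The reverse attention idea described above can be sketched in a few lines: encoder features are re-weighted by the complement of a coarse prediction, steering later layers toward regions the current prediction misses, such as thin vessel boundaries. A minimal NumPy illustration; the function names and shapes are ours, not the paper's:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def reverse_attention(features, coarse_pred):
    """Re-weight a feature map by the complement of a coarse vessel
    prediction: attention is high exactly where the current prediction
    is low, pushing the decoder to refine missed regions such as thin,
    peripheral vessels."""
    attn = 1.0 - sigmoid(coarse_pred)  # same (H, W) shape, values in (0, 1)
    return features * attn

# Pixels confidently predicted as vessel are suppressed; uncertain or
# missed pixels pass through almost unchanged.
features = np.ones((2, 2))
coarse = np.array([[10.0, -10.0],
                   [0.0,   0.0]])
refined = reverse_attention(features, coarse)
```

In the full network this weighting is applied per decoder stage, but the complement-of-prediction mechanism is the same.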
24 pages, 7389 KB  
Article
A Novel Approach to Retinal Blood Vessel Segmentation Using Bi-LSTM-Based Networks
by Pere Marti-Puig, Kevin Mamaqi Kapllani and Bartomeu Ayala-Márquez
Mathematics 2025, 13(13), 2043; https://doi.org/10.3390/math13132043 - 20 Jun 2025
Viewed by 2003
Abstract
The morphology of blood vessels in retinal fundus images is a key biomarker for diagnosing conditions such as glaucoma, hypertension, and diabetic retinopathy. This study introduces a deep learning-based method for automatic blood vessel segmentation, trained from scratch on 44 clinician-annotated images. The proposed architecture integrates Bidirectional Long Short-Term Memory (Bi-LSTM) layers with dropout to mitigate overfitting. A distinguishing feature of this approach is the column-wise processing, which improves feature extraction and segmentation accuracy. Additionally, a custom data augmentation technique tailored for retinal images is implemented to improve training performance. The results are presented in their raw form—without post-processing—to objectively assess the method’s effectiveness and limitations. Further refinements, including pre- and post-processing and the use of image rotations to combine multiple segmentation outputs, could significantly boost performance. Overall, this work offers a novel and effective approach to the still unresolved task of retinal vessel segmentation, contributing to more reliable automated analysis in ophthalmic diagnostics. Full article
(This article belongs to the Special Issue Intelligent Computing with Applications in Computer Vision)
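The column-wise processing described above means each image column is treated as one sequence for the Bi-LSTM: time steps run down the rows, and the bidirectional recurrence integrates context from both ends of the column. A minimal NumPy sketch of this data preparation (shapes and names are illustrative, not taken from the paper):

```python
import numpy as np

def columns_to_sequences(img):
    """Turn an (H, W) retinal image channel into Bi-LSTM input:
    one sequence per column, H time steps each, and a single
    intensity feature per step -> array of shape (W, H, 1)."""
    H, W = img.shape
    return img.T.reshape(W, H, 1)

img = np.arange(12, dtype=float).reshape(3, 4)  # H=3 rows, W=4 columns
seqs = columns_to_sequences(img)
```

Each of the W sequences can then be fed to a bidirectional recurrent layer, producing a per-pixel vessel/background label for that column.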

24 pages, 3772 KB  
Article
Retinal Vessel Segmentation Using Math-Inspired Metaheuristic Algorithms
by Mehmet Bahadır Çetinkaya and Sevim Adige
Appl. Sci. 2025, 15(10), 5693; https://doi.org/10.3390/app15105693 - 20 May 2025
Viewed by 1091
Abstract
Artificial intelligence-based biomedical image processing has become an important area of research in recent decades. One of the most important problems encountered in this context is the close contrast between the pixels to be segmented and the remaining pixels in the image. Owing to their randomized, gradient-free global search abilities, metaheuristic algorithms generally provide better performance in the segmentation of biomedical images. Math-inspired metaheuristic algorithms are among the most robust groups of such algorithms while generally retaining simple structures. In this work, the recently proposed Circle Search Algorithm (CSA), Tangent Search Algorithm (TSA), Arithmetic Optimization Algorithm (AOA), Generalized Normal Distribution Optimization (GNDO), Global Optimization Method based on Clustering and Parabolic Approximation (GOBC-PA), and Sine Cosine Algorithm (SCA) were implemented for clustering and then applied to the retinal vessel segmentation task on retinal images from the DRIVE and STARE databases. First, the segmentation results of each algorithm were obtained and compared with each other. Then, to compare the statistical performance of the algorithms, analyses were carried out in terms of sensitivity (Se), specificity (Sp), accuracy (Acc), standard deviation, and the Wilcoxon rank-sum test. Finally, detailed convergence analyses were carried out in terms of convergence speed, mean squared error (MSE), CPU time, and number of function evaluations (NFEs). Full article
(This article belongs to the Section Biomedical Engineering)
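The clustering formulation shared by these algorithms can be sketched generically: a metaheuristic searches for cluster centers over pixel intensities that minimize a within-cluster MSE objective, and the darker cluster is then taken as vessel pixels. The sketch below uses plain random search as a stand-in for CSA/TSA/AOA/GNDO/GOBC-PA/SCA; the objective and all names are ours, not the paper's:

```python
import numpy as np

def cluster_mse(pixels, centers):
    """Within-cluster MSE: assign each pixel to its nearest center
    and return the mean squared distance (the clustering objective
    the metaheuristic minimizes)."""
    d = np.abs(pixels[:, None] - centers[None, :])
    return float(np.mean(np.min(d, axis=1) ** 2))

def search_centers(pixels, k=2, iters=200, seed=0):
    """Random search over center positions -- a simple stand-in for
    the math-inspired metaheuristics compared in the paper."""
    rng = np.random.default_rng(seed)
    best = np.linspace(pixels.min(), pixels.max(), k)  # initial guess
    best_f = cluster_mse(pixels, best)
    for _ in range(iters):
        cand = rng.uniform(pixels.min(), pixels.max(), size=k)
        f = cluster_mse(pixels, cand)
        if f < best_f:
            best, best_f = cand, f
    return np.sort(best), best_f

# Synthetic intensities: dark vessel pixels (~0.1), bright background (~0.9).
# On real fundus images the initial guess would not already be optimal.
pixels = np.array([0.1] * 50 + [0.9] * 50)
centers, err = search_centers(pixels)
# Vessel mask: pixels closer to the darker center.
mask = np.abs(pixels - centers[0]) < np.abs(pixels - centers[1])
```

Swapping `search_centers` for one of the listed algorithms changes only how candidate centers are proposed; the objective and the final thresholding step stay the same.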
