Article

Semi-Supervised Retinal Vessel Segmentation Based on Pseudo Label Filtering

1 The School of Computer and Artificial Intelligence, Chaohu University, Hefei 238024, China
2 College of Information Science and Engineering, Northeastern University, Heping District, Shenyang 110819, China
* Author to whom correspondence should be addressed.
Symmetry 2025, 17(9), 1462; https://doi.org/10.3390/sym17091462
Submission received: 24 July 2025 / Revised: 28 August 2025 / Accepted: 3 September 2025 / Published: 5 September 2025
(This article belongs to the Section Computer)

Abstract

Retinal vessel segmentation is crucial for analyzing medical images, where symmetry in vascular structures plays a fundamental role in diagnostic accuracy. In recent years, rapid advances in deep learning have provided powerful tools for dense image prediction. In many medical image analysis scenarios, however, annotated data remain costly and difficult to acquire. By leveraging a symmetry-aware semi-supervised learning framework, our approach requires only a small portion of annotated data to achieve strong segmentation results, significantly reducing the cost of data labeling. Most current semi-supervised approaches rely on pseudo-label update strategies. Although these methods generate high-quality pseudo-label images, the labels inevitably contain prediction errors in a small number of pixels, and these errors can accumulate during iterative training and ultimately degrade learner performance. To address these challenges, we propose an enhanced semi-supervised vessel semantic segmentation approach that employs a symmetry-preserving pixel-level filtering strategy. This method retains highly reliable pixels in pseudo-labels while eliminating those with low reliability, ensuring spatial symmetry coherence without altering the intrinsic spatial information of the images. The filtering strategy integrates several techniques, including probability-based filtering, edge detection, image filtering, mathematical morphology, and adaptive thresholding, each of which plays a distinct role in refining the pseudo-labels. Extensive experimental results demonstrate the superiority of the proposed method and show that each filtering strategy contributes to improved learner performance through symmetry-constrained optimization.

1. Introduction

Retinal vessel segmentation [1,2] aims to distinguish vascular pixels from background pixels in color fundus images, and its results can support the classification and auxiliary diagnosis of ophthalmic diseases. Unlike general medical images, fundus images exhibit low vascular contrast and diverse structural forms, ranging from simple to complex. Traditional retinal vessel segmentation methods [3] mainly include thresholding, tracking, and filtering approaches, which operate directly on pixel intensities or morphological features of vessels and rely heavily on domain knowledge. For example, the method of Toptaş and Hanbay [4] depends on features such as color and shape and therefore struggles to accurately characterize the contextual information of vessels. For this reason, many studies have turned to deep convolutional neural networks, for example combining multiple U-Net networks [5], to improve retinal vessel segmentation performance. In addition, a combination of a coarse segmentation network and a fine segmentation network [6] has been used to predict a retinal probability map from an input patch and refine the prediction. However, such cascades of multiple deep networks increase computational cost and reduce the overall efficiency of retinal vessel segmentation.
Some individual deep networks focus on obtaining richer feature maps. For instance, Guo et al. [7] introduced an attention map based on the spatial attention mechanism into the U-Net network and calculated the weights of vascular and non-vascular regions to enhance the accuracy of vascular segmentation. To generate complete prediction results and accurately obtain vessels of different sizes and shapes, Wu et al. [8] proposed a scale-aware feature aggregation module to dynamically adjust the receptive field for extracting multi-scale features and designed an adaptive feature fusion module to guide the effective fusion of features between adjacent layers and capture more semantic information. Zhong et al. [9] proposed multi-layer multi-scale dilated convolution to capture sufficient global information under different receptive fields through a cascading mode. These methods have greatly promoted the application of deep convolutional neural networks in retinal vessel segmentation tasks. However, due to the loss of some spatial information during the downsampling process in the classic U-shaped network, the segmentation effect of retinal capillaries, regions around lesions, and fine vessels in low-contrast areas is poor. To address these issues, Li et al. [10] fused edge and spatial features in a dual-encoder to provide enhanced edge information for the decoder and improve the model’s accuracy. Subsequently, Li et al. [11] proposed a combination of interaction fusion blocks, cross-layer fusion blocks, and scale feature fusion blocks to effectively fuse features of the same scale, adjacent scales, and full scales, achieving better segmentation performance. To distinguish vessels from the background, Tan et al. [12] introduced contrastive loss to improve the segmentation of vessels in lesion areas, but the segmentation accuracy of fine vessels was not high due to the lack of spatial information.
The aforementioned methods have achieved remarkable success in retinal vessel segmentation, primarily attributable to the powerful feature learning capabilities of fully supervised deep convolutional neural networks (DCNNs) and the availability of large-scale annotated datasets. However, medical image annotation involves substantial human and material resources. To address this challenge, various weakly supervised and semi-supervised learning approaches have emerged [13,14,15,16,17]: weakly supervised methods train models using only image-level labels or bounding boxes without requiring pixel-level annotations, whereas semi-supervised methods achieve segmentation with minimal annotated samples. Both paradigms significantly reduce labeling costs, offering viable solutions for medical image analysis.
Motivated by the semi-supervised learning framework, this study aims to minimize pixel-level annotation effort while maintaining segmentation performance comparable to or exceeding fully supervised approaches. We propose a semi-supervised retinal vessel segmentation method based on a pseudo-label refinement strategy. The method innovatively integrates multi-criteria filtering mechanisms to selectively preserve high-confidence pixels in pseudo-labels while eliminating low-reliability regions, without compromising spatial information integrity. Experimental results on public databases demonstrate the effectiveness and superiority of the proposed algorithm. An overview of the proposed framework is shown in Figure 1.
To sum up, the main contributions of this paper are:
(1) This paper proposes a symmetry-aware pixel-level refinement-based semi-supervised learning framework for vessel segmentation, which leverages both limited labeled and abundant unlabeled images to train deep neural networks. By incorporating vascular symmetry constraints into the pseudo-label generation process, our approach effectively reduces the labor and resource consumption associated with manual data annotation through its capacity to exploit unlabeled data while preserving anatomical symmetry patterns in retinal vascular structures.
(2) We present a symmetry-preserving pixel-level pseudo-label refinement strategy that systematically eliminates low-confidence pixels in pseudo-label maps, thereby minimizing the interference from erroneous predictions inherent in pseudo-labeling. The proposed refinement mechanism employs a synergistic combination of multiple screening criteria, including probability-based thresholding, edge detection, image filtering, morphological symmetry enhancement operations, and adaptive thresholding strategies. Each component contributes distinct functionality: probabilistic screening ensures confidence calibration, edge detection preserves structural continuity, morphological operations enhance vascular connectivity while maintaining bilateral symmetry, and adaptive thresholding accommodates intensity variations across different imaging conditions.

2. Related Work

Retinal vessel segmentation has evolved through two distinct phases. Traditional approaches [18,19,20,21,22,23,24] inherently rely on manual expertise for feature engineering, resulting in high computational complexity, suboptimal accuracy, and poor generalization. With advances in deep learning architectures, contemporary methods have achieved superior performance in retinal vessel segmentation [25,26,27,28,29,30,31,32], which can be broadly categorized into four paradigms: multi-stage deep networks, single-network architectures, lightweight networks and semi-supervised methods.

2.1. Multi-Stage Deep Networks

In the realm of multi-stage deep networks, Wu et al. [33] proposed a multi-scale nested network architecture comprising two complementary sub-models (up-pool and pool-up pathways) to refine retinal vessel segmentation. Li et al. [5] introduced an iterative refinement framework (Iter-Net) through cascading mini U-Nets, progressively correcting coarse segmentation maps to generate label-approximated predictions with superior performance over standard U-Net architectures. To address the challenge of simultaneously segmenting thin capillaries and thick vessels due to their distinct morphological characteristics, Yang et al. [34] developed a dual-stream hybrid model specifically designed for separate thin/thick vessel segmentation. Wang et al. [35] further advanced this paradigm by extending U-Net with three parallel branches: a coarse vessel map generator, a capillary-specific map extractor, and a major vessel map predictor, which were hierarchically fused to achieve fine-grained retinal vessel segmentation. While multi-stage encoding-decoding architectures achieve improved accuracy, stacked U-Net structures inevitably introduce high parameter redundancy and computational overhead.

2.2. Single-Network Architectures

The U-Net architecture [36,37] serves as a fundamental backbone network in single-network methodologies. The basic U-Net architecture comprises a downsampling encoder, an upsampling decoder, and skip connections that integrate local and global contextual information to address spatial imprecision and localization limitations. However, conventional skip connections face critical challenges: significant semantic gaps between encoder-decoder feature maps increase learning complexity. To mitigate this issue, Zhou et al. [38] proposed UNet++, which introduces dense convolutional blocks between skip pathways. This hierarchical design accumulates all preceding feature maps through dense convolutional blocks at each skip connection node. Low-level feature maps preserve spatial details and boundary information, while high-level semantic features encode positional context. However, these subtle signals often degrade during upsampling/downsampling operations. Huang et al. [39] further advanced this paradigm with UNet3+, which implements full-scale skip connections to directly integrate multi-scale semantic features. This holistic multi-scale fusion strategy simultaneously captures fine-grained details and coarse-grained semantics through comprehensive feature aggregation across all network depths. Lu et al. [17] proposed a semi-supervised segmentation framework built upon the U-Net architecture. The method initially generates pseudo-labels through conventional approaches, subsequently employing a limited set of ground-truth annotations to refine these pseudo-labels through collaborative optimization. Finally, the refined pseudo-labels are integrated with the manually annotated labels to retrain the model, thereby enhancing segmentation performance through this iterative knowledge distillation process.

2.3. Lightweight Networks

Zhang et al. [40] proposed an attention mechanism named AG-Net to better preserve vascular structural information. To retain shallow feature details, Lü et al. [41] introduced a probabilistic distribution attention mechanism that simultaneously generates corresponding feature representations. For enhanced classification of curvilinear structures from background regions, Mou et al. [42] integrated both channel-wise and spatial self-attention mechanisms to achieve accurate retinal vessel segmentation. To mitigate spatial information loss caused by convolutional operations and downsampling, Zhang et al. [43] developed a Sobel operator-based boundary enhancement method. Wu et al. [44] further advanced this paradigm by designing Vessel-Net, a network combining inception and residual approaches through specialized Inception-Residual convolution blocks. However, the significant variations in vessel morphology and width within retinal images, coupled with distinct receptive fields for foreground vessels, prompted Zhou et al. [45] to propose feature aggregation from multiple dilated convolutions. This multi-scale contextual fusion strategy effectively addresses these challenges by leveraging information across different spatial resolutions. Lightweight architectures have emerged as a research trend due to their efficiency and deployment advantages. Laibacher et al. [46] proposed M2U-Net by integrating MobileNetV2 and bilinear sampling structures into the U-Net encoder, replacing conventional convolutional neural network (CNN) components. However, this architecture still suffers from semantic information degradation caused by pooling operations and limited receptive field coverage. Liu et al. [47] recently introduced Wave-Net, a novel lightweight segmentation framework incorporating detail-enhanced denoising blocks for precise vessel extraction. Li et al. [48] further simplified this approach by designing a minimalistic module that improves segmentation performance through a single filtering layer. Despite these advancements, existing methods still demonstrate suboptimal performance in segmenting thin vessels and lesion-adjacent regions within retinal vasculature.

2.4. Semi-Supervised Methods

In order to take advantage of large amounts of unannotated data, many studies have shifted their focus towards semi-supervised learning strategies. Hung et al. [49] utilized an adversarial learning mechanism, employing the network’s logit output (probability map) as a confidence map in semi-supervised learning. Additionally, some methods have designed unsupervised loss with certain heuristic regularization (such as minimum entropy), integrating it into the overall semi-supervised loss function [50]. Sajjadi et al. [51] introduced a consistency loss based on the consistency of model outputs after applying random perturbations to the same image, achieving transformation invariance. In some previous studies, researchers assigned pseudo-labels to large datasets with numerous unannotated instances [52]. UA-MT [53] extended the Mean Teacher (MT) framework by introducing Monte Carlo dropout to estimate the uncertainty of predictions for unannotated data, which helps filter out unreliable predictions when calculating consistency loss, thus generating higher-quality pseudo-labels. MC-Net+ [54] also adopted a multi-decoder framework with a shared encoder, supervising and constraining predictions from different decoders to minimize model uncertainty. Moreover, SCANet [15] addressed the issue of data scarcity, where images used for training the segmentation branch are also fed into both adversarial and consistency branches. These branches assist in adversarial learning and image synthesis tasks respectively.
This study was inspired by the semi-supervised segmentation methodologies proposed by Lu et al. and Shen et al. [15,17]. However, our approach fundamentally differs through the implementation of a self-training-based semi-supervised paradigm. As illustrated in Figure 1, the technical workflow comprises five core modules: (1) image preprocessing and data augmentation, (2) supervised training with labeled data, (3) pseudo-label generation for unlabeled samples, (4) pseudo-label refinement through multi-criteria filtering, and (5) model retraining with the augmented dataset. This iterative process systematically integrates refined pseudo-labels into the training pipeline until reaching predetermined termination criteria. Finally, the optimized network architecture and learned parameters are evaluated on the independent test cohort to validate segmentation performance.

3. Method

3.1. Framework of the Proposed Method

Our proposed semi-supervised vessel segmentation method operates with limited labeled samples in the training dataset. Inspired by semi-supervised learning principles, the framework leverages multiple pseudo-label refinement strategies to iteratively update the training dataset and optimize the segmentation network. Figure 1 illustrates the overall framework architecture, which systematically integrates pseudo-label generation, refinement, and model retraining across successive iterations.
We first apply image preprocessing and data augmentation to the images in the existing dataset. The preprocessing step is designed to enhance feature compatibility for subsequent neural network training. The dataset is then partitioned into labeled and unlabeled training subsets. After constructing the semantic segmentation neural network architecture, the model is trained on the labeled subset to obtain updated network weights. Subsequently, these weights are utilized to generate pseudo-labels for the unlabeled training data through forward prediction. The core innovation lies in our pixel-level pseudo-label refinement strategy, which systematically filters unreliable predictions from the pseudo-label maps through multi-criteria screening. The high-quality pseudo-labels obtained through this refinement process are merged with the original labeled dataset to form an augmented training set for network retraining. This iterative cycle (pseudo-label generation → refinement → dataset augmentation → network retraining) is repeated until the predetermined termination criteria are met. Finally, the fully optimized network architecture and converged parameters are evaluated on the independent test cohort to validate segmentation performance.
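The overall cycle can be summarized in a few lines of Python. This is a high-level sketch only: fit, predict, and filter_pixels are caller-supplied placeholders standing in for the training, inference, and multi-criteria screening stages detailed in Sections 3.3, 3.4 and 3.5.

```python
def self_training_loop(model, labeled, unlabeled, fit, predict, filter_pixels,
                       iterations=5):
    """High-level sketch of the Figure 1 cycle. fit/predict/filter_pixels are
    caller-supplied callables; their names are hypothetical, not the paper's API."""
    train_set = list(labeled)
    for it in range(1, iterations + 1):
        fit(model, train_set)                                        # (re)train on current labeled set
        pseudo = [(x, predict(model, x)) for x in unlabeled]         # pseudo-label generation
        filtered = [(x, filter_pixels(y, it)) for (x, y) in pseudo]  # multi-criteria refinement
        train_set = list(labeled) + filtered                         # augmented training set
    return model
```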

3.2. Image Preprocessing

This section focuses on two essential preprocessing stages: image enhancement and data augmentation. The primary objective of image preprocessing is to improve image quality by enhancing clarity and contrast, thereby facilitating vascular feature extraction. Data augmentation techniques are implemented to expand the dataset size, as larger dataset sizes enable neural networks to achieve more precise data fitting and improve model robustness.
The preprocessing and data augmentation process mainly includes the following steps (a code sketch follows the list):
(1) Each retinal vessel image is randomly cropped into 400 patches of 96 × 96 pixels, as shown in Figure 2.
(2) Grayscale conversion of retinal vessel images is performed using the following equation:
$$ grey(m,n) = 0.299\, red(m,n) + 0.587\, green(m,n) + 0.114\, blue(m,n) \tag{1} $$
where grey(m,n) denotes the pixel value at the m-th row and n-th column of the grayscale image after conversion, and red(m,n), green(m,n), blue(m,n) respectively represent the pixel values of the corresponding red, green, and blue channels in the original color image.
(3) The gray-level histogram of each retinal image is equalized to make the histogram conform to a normal distribution between 0 and 255. The gray-level values of pixels in each image are determined by the following formula:
$$ I_N = \frac{I_M - \overline{I_M}}{STD_{I_M}} \tag{2} $$
$$ I_E = \frac{I_N - \min(I_N)}{\max(I_N) - \min(I_N)} \tag{3} $$
where $I_M$ denotes the original grayscale image, $\overline{I_M}$ represents the mean of all pixel values in $I_M$, $STD_{I_M}$ denotes the standard deviation of pixel values in $I_M$, $I_N$ follows a standard normal distribution with zero mean and unit variance, and $I_E$ represents the result of $I_M$ after histogram equalization and standardization.
(4) The Contrast-Limited Adaptive Histogram Equalization (CLAHE) algorithm [55] is applied to sub-blocks in the image for image enhancement. Each sub-block has a size of 10 × 10 pixels.
(5) Gamma transformation is applied to all images to enhance image brightness, as defined in Equation (4).
$$ a = b^{\gamma} \tag{4} $$
where b denotes the grayscale value of a single pixel in the original image, a represents the grayscale value of a single pixel in the image after Gamma transformation, and γ is set to 1.2.
(6) The grayscale values of all pixels in all images are divided by 255, resulting in transformed values between 0 and 1, thereby completing the normalization.
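A minimal sketch of steps (1)–(6) with OpenCV and NumPy is given below, assuming an RGB channel order. The CLAHE clip limit and tile grid are assumptions (cv2.createCLAHE takes a tile count per axis rather than the 10 × 10 pixel block size stated above), while the channel weights, γ = 1.2, patch count, and patch size follow the text.

```python
import cv2
import numpy as np

def preprocess(rgb):
    """Steps (2)-(6) for one fundus image (RGB channel order assumed)."""
    # (2) Weighted grayscale conversion, Eq. (1).
    grey = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    # (3) Standardize to zero mean / unit variance, then min-max rescale, Eqs. (2)-(3).
    norm = (grey - grey.mean()) / (grey.std() + 1e-8)
    eq = (norm - norm.min()) / (norm.max() - norm.min() + 1e-8)
    eq = (eq * 255).astype(np.uint8)
    # (4) CLAHE; clip limit and tile grid are assumptions.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(10, 10))
    eq = clahe.apply(eq)
    # (5) Gamma transformation a = b^gamma, Eq. (4), with gamma = 1.2,
    # (6) applied to values already normalized to [0, 1].
    return np.power(eq / 255.0, 1.2).astype(np.float32)

def random_patches(img, n_patches=400, size=96, seed=0):
    """(1) Randomly crop n_patches patches of size x size pixels from one image."""
    rng = np.random.default_rng(seed)
    h, w = img.shape[:2]
    ys = rng.integers(0, h - size + 1, n_patches)
    xs = rng.integers(0, w - size + 1, n_patches)
    return np.stack([img[y:y + size, x:x + size] for y, x in zip(ys, xs)])
```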

3.3. U-Net Segmentation Model

Current mainstream segmentation networks include FCN [56], U-Net [36], U-Net++ [38], DeepLab [57], etc. We conducted fully supervised training with these networks, keeping all data operations and environmental settings identical. Table 1 presents the final test accuracy. As shown in the table, U-Net achieves the highest accuracy among the compared models, and thus U-Net is selected as the baseline network for the proposed method.
We modified the employed U-Net architecture with a symmetry-aware design philosophy. The detailed structure is shown in Figure 3. The symmetric encoder-decoder framework contains 6 convolutional layers with 3 × 3 convolutional kernels and two 2 × 2 max-pooling layers in the encoder path. The decoder comprises four 3 × 3 convolutional layers and two 2 × 2 upsampling layers, maintaining spatial symmetry coherence through mirrored channel expansion/contraction patterns. After each 3 × 3 convolutional layer, a Rectified Linear Unit (ReLU) and a dropout layer are added to prevent overfitting. During downsampling, the number of feature channels is doubled while the spatial dimensions (height and width) are halved to extract deep features, preserving bilateral symmetry in feature hierarchy. In the upsampling process, the number of channels remains unchanged, and the spatial dimensions are doubled to reconstruct anatomical symmetry patterns. Additionally, a 1 × 1 convolutional layer is appended at the end of the network to map class outputs to corresponding feature channels. The symmetry-guided skip connections concatenate shallow feature maps directly with their corresponding deep feature maps, integrating shallow and deep information through cross-scale symmetry preservation. This is crucial for recovering fine-grained details while maintaining topological symmetry in vascular structures, further improving segmentation accuracy. The symmetric architecture ensures consistent information flow in both encoding and decoding phases, enhancing the model’s ability to capture and reproduce anatomical symmetry patterns inherent in retinal vessel networks.
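A compact Keras sketch of this layout is shown below: three double-convolution blocks on the encoder path (six 3 × 3 convolutions in total, each followed by ReLU and dropout), two 2 × 2 max-pooling and two 2 × 2 upsampling stages, symmetric skip connections, and a final 1 × 1 classification convolution. The base channel width, and the standard halving of channels at each decoder fusion (which differs from the paper's "channels unchanged" description), are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

def conv_block(x, filters, drop=0.2):
    """Two 3x3 convolutions, each followed by ReLU and dropout (rate per Section 4.2)."""
    for _ in range(2):
        x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
        x = layers.Dropout(drop)(x)
    return x

def build_unet(input_shape=(96, 96, 1), base=32, n_classes=2):
    """Sketch of the modified U-Net of Section 3.3 (base width is an assumption)."""
    inp = layers.Input(input_shape)
    e1 = conv_block(inp, base)                                   # encoder level 1
    e2 = conv_block(layers.MaxPool2D(2)(e1), base * 2)           # channels double, size halves
    b = conv_block(layers.MaxPool2D(2)(e2), base * 4)            # bottleneck
    d2 = layers.UpSampling2D(2)(b)                               # spatial dims double
    d2 = conv_block(layers.Concatenate()([d2, e2]), base * 2)    # skip connection from e2
    d1 = layers.UpSampling2D(2)(d2)
    d1 = conv_block(layers.Concatenate()([d1, e1]), base)        # skip connection from e1
    out = layers.Conv2D(n_classes, 1, activation="softmax")(d1)  # 1x1 class mapping
    return Model(inp, out)
```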

3.4. Pseudo-Label Generation and Filtering

Feeding unlabeled data into the trained network with its learned weights yields a two-channel probability matrix, corresponding to the vessel and background channels respectively.
Here we denote x as the input pixel value and y as the output class label. The training dataset consists of labeled and unlabeled images. The labeled dataset is $D_L = \{(x_n, y_n)\}_{n=1}^{N}$ and the unlabeled dataset is $D_U = \{x_m\}_{m=1}^{M}$, where $y_n \in \{0, 1\}$ is the label for the labeled dataset ($y_n = 1$ denotes vessel structures, $y_n = 0$ denotes background), $n \in [1, N]$, $N$ is the number of pixels in the labeled dataset, $x_n, x_m \in [0, 1]$ are the pixel values of the input images, $y_m \in \{0, 1\}$ is the (pseudo-)label for the unlabeled dataset, $m \in [1, M]$, and $M$ is the number of pixels in the unlabeled dataset.
Our designed pseudo-label pixel-level filtering strategy comprises four components.

3.4.1. Filtering Strategy Based on Output Pixel Probabilities $f_\theta(x_m)$

If $f_\theta(x_m) > T$, pixel $x_m$ is considered more likely to belong to the vessel class; if $f_\theta(x_m) < T$, it is considered more likely to belong to the background class, where $T$ denotes the threshold. Count the number of pixels satisfying $f_\theta(x_m) > T$ as $N_1$ and the number satisfying $f_\theta(x_m) < T$ as $N_2$. Let $I$ represent the total number of iterations and $I_d$ the current iteration count. Sort all probabilities $f_\theta(x_m)$, select the top $N_1 \times I_d / I$ pixels with the highest probabilities and assign $y_m = 1$ to these pixels (deemed reliable vessel pixels), and select the bottom $N_2 \times I_d / I$ pixels with the lowest probabilities and assign $y_m = 0$ to these pixels (deemed reliable background pixels). For the remaining pixels, set $y_m = -1$; $y_m = -1$ indicates that these pixels have low reliability and should be excluded from subsequent training.
Due to the larger proportion of background compared to vessels during training, the learner's response to the background gradually intensifies as the number of iterations increases. Consequently, pixels that should belong to vessels are increasingly mispredicted as background, leading to a continuous decrease in $f_\theta(x_m)$ for these vessel pixels until they fall below the threshold $T$. To address this, we designed a dynamic threshold $T$ through experiments [58], which decreases as $f_\theta(x_m)$ of these pixels diminishes. The threshold $T$ is determined by the following formula:
$$ T = \begin{cases} 0.5, & \text{if } I_d = 1, 2 \\ \left( 0.28 + (4/3)^{\,I_d - 3} \cdot 0.2 \right) + z \cdot \left( (I_d - 2)^2 - 1 \right) \cdot 0.00055, & \text{if } I_d > 2,\ 0 < z < 20 \end{cases} \tag{5} $$
Here, $z$ denotes the number of labeled images in the original unprocessed dataset. (We apply Formula (5) to both the DRIVE and STARE datasets; the only difference is that the value of $z$ is determined by the actual number of training samples in each dataset.) The threshold strategy was determined empirically. It effectively suppresses the gradual decrease of the predicted values $f_\theta(x_m)$ of certain vessel pixels as the number of iterations grows, thereby improving segmentation accuracy.
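A NumPy sketch of this strategy is given below. The dynamic_threshold helper is a transcription of Equation (5) as reconstructed above, and all function names are hypothetical.

```python
import numpy as np

def dynamic_threshold(i_d, z):
    """Dynamic threshold T of Eq. (5) (reconstruction; coefficients as in the text)."""
    if i_d <= 2:
        return 0.5
    return (0.28 + (4 / 3) ** (i_d - 3) * 0.2) + z * ((i_d - 2) ** 2 - 1) * 0.00055

def probability_filter(probs, i_d, total_iters, z):
    """Pixel-level filtering by output probability (Section 3.4.1), a sketch.
    probs: flattened vessel-class probabilities f_theta(x_m) of unlabeled pixels.
    Returns pseudo-labels y_m in {1, 0, -1}; -1 marks low-reliability pixels."""
    T = dynamic_threshold(i_d, z)
    y = np.full(probs.shape, -1, dtype=np.int8)
    frac = i_d / total_iters                 # selection fraction I_d / I
    vessel = np.flatnonzero(probs > T)       # N1 candidate vessel pixels
    bg = np.flatnonzero(probs < T)           # N2 candidate background pixels
    top = vessel[np.argsort(probs[vessel])[::-1][:int(len(vessel) * frac)]]
    y[top] = 1                               # most confident vessel pixels
    bottom = bg[np.argsort(probs[bg])[:int(len(bg) * frac)]]
    y[bottom] = 0                            # most confident background pixels
    return y
```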

3.4.2. Filtering Strategy Based on Edge Detection

In the filtering strategy based on output pixel probabilities $f_\theta(x_m)$, pixels at the boundary between vessels and background often have ambiguous classifications: their probabilities tend to cluster around the threshold $T$, causing them to be filtered out during the selection process. This leads to the loss of edge information in vessel segmentation, which ultimately compromises the segmentation accuracy.
Therefore, we employ the Sobel operator to perform edge detection on the network's output probability maps during the filtering process. Let the edge detection result be $P_m$: if $P_m > \lambda$, the pixel is considered an edge pixel, where we set $\lambda = 0.5$. Count the number of pixels satisfying $P_m > \lambda$ as $N_3$. Sort all pixel values $P_m$ and select the top $N_3 \times I_d / I$ pixels with the highest $P_m$. For these pixels, if $f_\theta(x_m) > T$, set $y_m = 1$; if $f_\theta(x_m) < T$, set $y_m = 0$. For the remaining pixels, retain the $y_m$ values obtained from the probability-based filtering strategy.
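A sketch of this strategy follows, assuming the Sobel gradient magnitude as the edge response $P_m$ (the paper does not state how the two directional responses are combined):

```python
import cv2
import numpy as np

def edge_filter(prob_map, y, T, lam=0.5, i_d=1, total_iters=5):
    """Edge-based rescue of boundary pixels (Section 3.4.2), a sketch.
    Computes a Sobel response on the probability map and re-labels the
    top N3 * I_d / I strongest edge pixels by thresholding f_theta at T."""
    gx = cv2.Sobel(prob_map, cv2.CV_64F, 1, 0, ksize=3)
    gy = cv2.Sobel(prob_map, cv2.CV_64F, 0, 1, ksize=3)
    P = np.sqrt(gx ** 2 + gy ** 2).ravel()      # gradient magnitude (assumed combination)
    edge_idx = np.flatnonzero(P > lam)          # N3 edge pixels
    k = int(len(edge_idx) * i_d / total_iters)
    strongest = edge_idx[np.argsort(P[edge_idx])[::-1][:k]]
    y = y.copy().ravel()
    y[strongest] = (prob_map.ravel()[strongest] > T).astype(y.dtype)
    return y.reshape(prob_map.shape)
```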

3.4.3. Filtering Strategy Based on Median Filtering

Due to noise in the images, some isolated background pixels are erroneously predicted as vessels in the segmentation results. Therefore, we apply median filtering to the pixel-wise probabilities $f_\theta(x_m)$. Let the filtered pixel value be $f_\theta(x_m)_z$. If $|f_\theta(x_m)_z - f_\theta(x_m)| > \alpha$ (the value of $\alpha$ in the median filtering was set to 0.10, following the work of Gour et al. [59]), the pixel is considered a potential noise point and is excluded from training in subsequent stages, i.e., $y_m = -1$ for such pixels. The remaining $y_m$ values are retained as previously defined.
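A sketch of this rejection step, assuming OpenCV's median filter with a 3 × 3 window (the window size is not stated in the paper; α = 0.10 follows the text):

```python
import cv2
import numpy as np

def median_filter_reject(prob_map, y, alpha=0.10, ksize=3):
    """Median-filtering rejection (Section 3.4.3), a sketch.
    Pixels whose probability deviates from the median-filtered map by more
    than alpha are treated as noise and excluded (y_m = -1)."""
    filtered = cv2.medianBlur(prob_map.astype(np.float32), ksize)
    noisy = np.abs(filtered - prob_map) > alpha
    y = y.copy()
    y[noisy] = -1
    return y
```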
We define a variable $S_m$, where $S_m = 1$ indicates that the pseudo-label of the m-th pixel is of high quality and should be retained for subsequent training, while $S_m = 0$ denotes that the pseudo-label is unreliable and should be discarded. Specifically:
$$ S_m = \begin{cases} 0, & y_m = -1 \\ 1, & y_m = 0, 1 \end{cases} \tag{6} $$
Figure 4 displays the retained (white regions, i.e., $S_m = 1$) and discarded (black regions, i.e., $S_m = 0$) pixels obtained through the three filtering strategies across five iterations. The fourth row represents the combined retained or discarded pixels from the first three strategies. The final row shows the pseudo-labels obtained by integrating all three filtering processes, where white regions denote retained vessel pseudo-labels, black regions denote retained background pseudo-labels, and gray regions represent pseudo-labels eliminated during iterations.

3.4.4. Filtering Strategy Based on Erosion

For unlabeled training data, the sharp boundary between the circular vessel region in the original image and the surrounding vessel-free periphery causes pixels on this boundary to be frequently mispredicted as vessels. If these mispredictions are repeatedly reinforced during iterative training, errors accumulate and severely degrade learning performance and model accuracy. To address this issue, we apply symmetry-aware morphological erosion to the provided masks, shrinking the central circular region and setting central-region pixels to 1 and peripheral pixels to 0. (We use OpenCV's built-in erosion, erosion = cv2.erode(image, kernel, iterations = 1), where image is an input mask from our dataset and kernel is the structuring element used during erosion; we set the kernel size to 3 × 3 and the number of iterations to 1. Figure 5 shows the comparison images generated by the erosion process; from left to right: the processed prediction result, the mask before processing, and the mask after processing.) This symmetry-preserving mask is then multiplied with the original image and its labels, zeroing out pixels in the peripheral and boundary regions. The aforementioned filtering strategies then exclude these boundary and peripheral regions from training (i.e., $S_m = 0$).
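A sketch of how the eroded mask is applied, following the cv2.erode call quoted above (the binary 0/1 mask convention is an assumption):

```python
import cv2
import numpy as np

def erode_and_apply(image, label, mask):
    """Erosion-based boundary suppression (Section 3.4.4), a sketch.
    mask: binary {0, 1} uint8 field-of-view mask (convention assumed).
    Shrinks the circular region so rim pixels are zeroed in image and label
    and later excluded from training (S_m = 0)."""
    kernel = np.ones((3, 3), np.uint8)              # 3x3 kernel, as stated in the text
    eroded = cv2.erode(mask, kernel, iterations=1)  # central region 1, periphery 0
    return image * eroded, label * eroded, eroded
```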
In summary, we implemented seven distinct retinal image vessel segmentation methods based on different combinations of the pseudo-label filtering strategies described in this section, namely:
(1) Fully supervised vessel segmentation method (SS);
(2) Semi-supervised vessel segmentation method based on conventional self-training (SSS);
(3) Fixed-threshold semi-supervised vessel segmentation method incorporating the pseudo-label filtering strategy based on output pixel probabilities (SSS1);
(4) Fixed-threshold semi-supervised vessel segmentation method combining the pseudo-label filtering strategy based on output pixel probabilities and the erosion-based filtering strategy (SSS2);
(5) Fixed-threshold semi-supervised vessel segmentation method integrating the pseudo-label filtering strategy based on output pixel probabilities, the erosion-based filtering strategy, and the edge detection-based filtering strategy (SSS3);
(6) Fixed-threshold semi-supervised vessel segmentation method combining the pseudo-label filtering strategy based on output pixel probabilities, the erosion-based filtering strategy, the edge detection-based filtering strategy, and the median filtering-based filtering strategy (SSS4);
(7) Dynamic-threshold semi-supervised vessel segmentation method integrating the pseudo-label filtering strategy based on output pixel probabilities, the erosion-based filtering strategy, the edge detection-based filtering strategy, and the median filtering-based filtering strategy (SSS5), which is the improved semi-supervised vessel segmentation method proposed in this paper.
Additionally, due to the overwhelming majority of background pixels over vessel pixels in images, to balance the training between background and vessels and reduce the dominance of background pixels, we directly exclude images with insufficient vessel pixels in pseudo-labels (images with fewer than 100 predicted vessel pixels) from participating in pixel filtering and subsequent training. As shown in Figure 6, the first row displays pseudo-labels, the second row shows corresponding ground-truth labels, and white squares indicate excluded pseudo-label images. It can be observed that images with insufficient vessels are indeed excluded, while images with sufficient vessels are retained as pseudo-labels.
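This exclusion rule reduces to a one-line check; a sketch with a hypothetical helper name:

```python
import numpy as np

def keep_pseudo_label(pseudo_label, min_vessel_pixels=100):
    """Keep a pseudo-label image only if it has enough predicted vessel pixels."""
    return int(np.sum(pseudo_label == 1)) >= min_vessel_pixels
```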

3.5. Update and Retraining of the Labeled Training Set

We merge the aforementioned pseudo-labels and their corresponding original images with the original labeled training set to form a new labeled training set.
Before training the network, we input the original image and its corresponding label into the network. Since our vessel segmentation task is a binary classification (vessel and background), the labels input into the network should be two one-hot encoded matrices: one representing vessels and the other representing the background. The loss function is:
$$ L(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_{i,1} \log f_\theta(x_{i,1}) + y_{i,2} \log f_\theta(x_{i,2}) \right] \tag{7} $$
Here, $y_{i,1}$ denotes the i-th element of the one-hot encoded vector for the vessel label, and $y_{i,2}$ denotes the i-th element of the one-hot encoded vector for the background label. Without filtering, exactly one of them is 1 and the other 0, and the loss function reduces to the traditional cross-entropy loss, as shown in Equation (8).
$$ L(\theta) = -\frac{1}{N} \sum_{i=1}^{N} y_i \log f_\theta(x_i) \tag{8} $$
After incorporating the filtering strategy, the designed loss function evolves into the form shown in Equation (9):
$$ L(\theta) = -\frac{1}{N} \sum_{i=1}^{N} \left[ y_{i,1} \log f_\theta(x_{i,1}) + y_{i,2} \log f_\theta(x_{i,2}) \right] \cdot S_i \tag{9} $$
At this point, if a pixel is retained (i.e., $S_m = 1$), the loss function remains equivalent to Equation (8). If a pixel is discarded, its loss value becomes 0. According to the principle of backpropagation, during neural network training such pixels will not contribute to weight updates. This achieves dynamic filtering of unreliable pixels during iterative training.
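A TensorFlow sketch of the masked loss in Equation (9) follows (the paper trains with TensorFlow; the function name and the small epsilon for numerical stability are assumptions):

```python
import tensorflow as tf

def masked_cross_entropy(y_onehot, probs, s_mask, eps=1e-8):
    """Filtered loss of Eq. (9), a sketch.
    y_onehot: (N, 2) one-hot (vessel, background) labels;
    probs:    (N, 2) softmax outputs f_theta;
    s_mask:   (N,) 0/1 mask S; discarded pixels (S = 0) contribute zero loss
              and therefore zero gradient."""
    ce = -tf.reduce_sum(y_onehot * tf.math.log(probs + eps), axis=-1)
    return tf.reduce_mean(ce * s_mask)  # averaged over all N pixels, as in Eq. (9)
```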
Here, the parameter θ is derived from Equation (10), and the Adaptive Moment Estimation (Adam) optimizer is employed for parameter updates.
$$ \theta^* = \arg\min_{\theta} L(\theta) \tag{10} $$
For a more intuitive understanding of the proposed workflow, we summarize the method in pseudocode; Algorithm 1 details the implementation steps.
Algorithm 1 Improved Semi-Supervised Vessel Semantic Segmentation Method Based on a Pixel-Level Filtering Strategy
Require:
  $D_L$: labeled dataset $D_L = \{(x_n, y_n)\}_{n=1}^{N}$, where $N$ is the number of pixels in the labeled dataset;
  $D_U$: unlabeled dataset $D_U = \{x_m\}_{m=1}^{M}$, where $M$ is the number of pixels in the unlabeled dataset;
  $x_n$: value of the n-th pixel in the labeled images, $x_n \in [0, 1]$;
  $y_n$: label of the n-th pixel in the labeled images, $y_n \in \{0, 1\}$;
  $x_m$: value of the m-th pixel in the unlabeled images, $x_m \in [0, 1]$.
Ensure:
  $\theta_{opt}$: the final network weight parameters obtained after training.
Step 1 (Initialization): Preprocess the training datasets $D_L$ and $D_U$; initialize the U-Net network structure; randomly initialize the network weights $\theta$; set the network parameters: epochs, batch size, learning rate, dropout ratio, and number of iterations.
Step 2: Train the U-Net network on the labeled dataset $D_L$ and update the network weights $\theta$.
Step 3: Use the updated weights $\theta$ to predict the unlabeled dataset $D_U$ and obtain the pseudo-labeled dataset $D_U^* = \{(x_m, y_m^*)\}_{m=1}^{M}$.
Step 4: Apply the filtering strategies to $D_U^*$ to remove unreliable predictions, yielding $D_U^* = \{(x_s, y_s)\}_{s=1}^{S} \cup \{(x_b, y_b)\}_{b=S+1}^{M}$, where $S + B = M$. Here $(x_s, y_s)$ are the retained pixels and their pseudo-labels, while $(x_b, y_b)$ are the $B$ discarded pixels and their pseudo-labels.
Step 5: Merge the filtered pseudo-labeled dataset $D_U^*$ with the original labeled dataset $D_L$, i.e., $D_L = D_L \cup D_U^*$. Retrain the U-Net network on the new labeled dataset and update the weights $\theta$; during training, use loss function (9) to exclude the discarded pixels $\{(x_b, y_b)\}_{b=S+1}^{M}$ from the training process.
Step 6: While the termination condition is not met, go back to Step 3.
Step 7: Output the final network model parameters $\theta_{opt}$.

4. Experimental Results and Analysis

4.1. Dataset and Evaluation Metrics

We validate and analyze the proposed method on two public retinal vessel datasets: DRIVE and STARE. The DRIVE dataset [60] contains 40 color retinal images, with 20 images allocated for training and 20 for testing. The STARE dataset [61] consists of 20 original retinal images, equally split between healthy and pathological cases. In experiments, the first 5 images from both healthy and pathological subsets were selected as the training set, while the remaining images formed the testing set. The study used publicly available, de-identified datasets; no additional IRB approval was required.
We employ the following metrics to evaluate model performance: Accuracy (Acc), Sensitivity (Sen), Specificity (Spe), Dice, and the Area Under the Curve (AUC).
Accuracy (Acc): The proportion of correctly classified pixels among all pixels, which reflects the neural network’s ability to correctly identify vessels and background. This metric is defined in Equation (11).
$$ \mathrm{Acc} = \frac{TP + TN}{TP + TN + FP + FN} \tag{11} $$
where TP (true positive) represents the positive cases judged to be positive, FN (false negative) represents the positive cases judged to be negative, TN (true negative) represents the negative cases judged to be negative, and FP (false positive) represents the negative cases judged to be positive.
Sensitivity (Sen): The proportion of actual vessel pixels that are correctly classified, which reflects the neural network's reliability in identifying vessels. This metric is defined in Equation (12).
$$ \mathrm{Sen} = \frac{TP}{TP + FN} \tag{12} $$
Specificity (Spe): The proportion of actual background pixels that are correctly classified, which reflects the neural network's reliability in identifying background. This metric is defined in Equation (13).
$$ \mathrm{Spe} = \frac{TN}{TN + FP} \tag{13} $$
AUC Metric: Since neural networks output probabilities rather than discrete classification results, different thresholds affect the values of variables in the confusion matrix. Therefore, we generate a set of confusion matrices using varying thresholds and plot the Receiver Operating Characteristic (ROC) curve by taking False Positive Rate (FPR) as the x-axis and True Positive Rate (TPR) as the y-axis. The Area Under the Curve (AUC) is computed to measure classifier quality. A higher AUC (closer to 1) indicates better performance, while AUC = 0.5 corresponds to random guessing. An effective iterative training framework should aim to maximize AUC toward 1.
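As a sketch, these metrics can be computed from a flattened binary ground truth and the network's vessel-class probabilities; the use of scikit-learn's roc_auc_score for the AUC is an assumption (any ROC implementation works):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def evaluate(y_true, probs, thresh=0.5):
    """Acc, Sen, Spe (Eqs. (11)-(13)) and AUC for flattened binary predictions.
    y_true: 0/1 ground truth; probs: vessel-class probabilities in [0, 1]."""
    y_pred = (probs >= thresh).astype(int)
    tp = int(np.sum((y_pred == 1) & (y_true == 1)))
    tn = int(np.sum((y_pred == 0) & (y_true == 0)))
    fp = int(np.sum((y_pred == 1) & (y_true == 0)))
    fn = int(np.sum((y_pred == 0) & (y_true == 1)))
    acc = (tp + tn) / (tp + tn + fp + fn)
    sen = tp / (tp + fn)                # true positive rate
    spe = tn / (tn + fp)                # true negative rate
    auc = roc_auc_score(y_true, probs)  # threshold-free area under the ROC curve
    return acc, sen, spe, auc
```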

4.2. Parameter Settings

All experiments were conducted on a single NVIDIA GeForce RTX 2080 Ti GPU (NVIDIA, Santa Clara, CA, USA). The code was implemented in Python 3.12 and trained using the TensorFlow framework.
In this experiment, we randomly initialized the U-Net model without any pretraining. The Adaptive Moment Estimation (Adam) optimizer was employed for model optimization. Unlike stochastic gradient descent (SGD), which requires a fixed learning rate, Adam adaptively adjusts the learning rate during training. Each iteration involved training the full dataset for 60 epochs, with a batch size of 32 images. To prevent overfitting, a dropout rate of 20% was applied between consecutive convolutional layers using 3 × 3 kernels. Additionally, 10% of the input images were allocated as the validation set. All input images underwent preprocessing and data augmentation. For the DRIVE dataset, each training image was randomly cropped into 400 patches of size 96 × 96 pixels [62]. The 20 test images from DRIVE were reserved for final evaluation. A total of 5 iterative cycles were performed, with 20% of unlabeled pixels and their corresponding pseudo-labels incorporated into the training process in each cycle.
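Under these settings, the supervised stage of each cycle reduces to a standard Keras training call. The sketch below assumes the hypothetical build_unet helper from the Section 3.3 sketch and placeholder arrays; in the full method, the plain cross-entropy is replaced by the masked loss of Equation (9) once pseudo-label filtering is active.

```python
import numpy as np
import tensorflow as tf

model = build_unet()  # hypothetical helper from the Section 3.3 sketch
model.compile(optimizer=tf.keras.optimizers.Adam(),  # adaptive learning rate, unlike fixed-rate SGD
              loss="categorical_crossentropy")       # swapped for the masked loss (9) when filtering

x_train = np.zeros((400, 96, 96, 1), np.float32)  # placeholder: preprocessed patches
y_train = np.zeros((400, 96, 96, 2), np.float32)  # placeholder: one-hot (vessel, background) labels
model.fit(x_train, y_train,
          epochs=60,             # 60 epochs per iterative cycle
          batch_size=32,         # batch size of 32
          validation_split=0.1)  # 10% of inputs held out for validation
```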

4.3. Quantitative Experimental Results and Analysis

4.3.1. Combined Filtering Strategies for Vessel Segmentation Methods Comparison

The primary objective of this experiment is to evaluate vessel segmentation methods based on combinations of pseudo-label filtering strategies on the DRIVE dataset. These methods include the seven approaches introduced in Section 3.4 (SS, SSS, SSS1, SSS2, SSS3, SSS4, SSS5). The DRIVE dataset used in this study contains 20 training and 20 test images. In all semi-supervised methods, the training set is partitioned into a labeled subset (n images) and an unlabeled subset (20-n images). In SS method experiments, only n labeled images were used as the training set for fully supervised methods. The four evaluation metrics (Acc, Sen, Spe, AUC) are presented in Table 2, where two experimental results are reported: (1) n = 1, labeled image ratio = 5%; (2) n = 2, labeled image ratio = 10%.
From Table 2, the following conclusions can be drawn:
(1) Semi-supervised methods outperform fully supervised approaches;
(2) When only output probability-based filtering is applied, overall performance fails to improve due to the lack of vessel boundary information, misclassification of circular region boundaries, and accumulated noise during iterative training;
(3) The integration of erosion-based, edge detection-based, and median filtering-based strategies leads to significant performance gains, validating the effectiveness of these filtering mechanisms;
(4) The incorporation of dynamic thresholding further enhances performance, demonstrating the importance of adaptive thresholding and confirming the validity of the proposed semi-supervised framework with multi-strategy filtering.
To further evaluate the segmentation performance of each filtering algorithm, we conducted additional tests on the DRIVE dataset. These tests cover the four filtering strategies introduced in this paper and their various combinations. The DRIVE dataset consists of 20 training images and 20 test images. In all semi-supervised methods, the training set is divided into a labeled training set (n images) and an unlabeled training set (20 − n images). In these experiments, only 2 labeled images (10% of the training set) are used to train the fully supervised method. The comparative results are presented in Table 3. It can be observed that there is a significant gap in the Acc, Spe, and Dice metrics between individual filtering strategies and the combined Strategy 1 + 2 + 3 + 4, and that overall segmentation performance continues to improve as more strategies are combined. Strategies 1, 2, 3, and 4 correspond, respectively, to the four strategies described above: probability, edge, median, and erosion.

4.3.2. Comparison of Segmentation Performance Across Varying Numbers of Labeled Samples

In this experiment, we compared the improved semi-supervised segmentation method, the semi-supervised segmentation method, and the typical fully supervised segmentation method. For fair comparison, we used the same U-Net architecture, loss function, data preprocessing, and postprocessing procedures across all supervised, semi-supervised, and improved semi-supervised segmentation methods. In the DRIVE training set, we randomly selected n retinal images as labeled samples and the remaining 20-n images as unlabeled samples. In fully supervised learning, only the n labeled retinal images were used to train the U-Net network. In semi-supervised and improved semi-supervised learning, both n labeled and 20-n unlabeled retinal images were utilized for training.
We report accuracy (Acc), sensitivity (Sen), specificity (Spe), and AUC metrics for n = 1 to 18. For each n, ten distinct experiments were conducted, where n retinal images were randomly selected as labeled samples in each experiment, and the remaining 20-n images served as unlabeled samples. Table 4 summarizes the average performance of the SS, SSS, and SSS5 methods across these ten experiments. From Table 4, we observe that the SSS5 method outperforms the SSS method, and the SSS method outperforms the SS method, validating the effectiveness of the improved semi-supervised learning framework for retinal vessel segmentation.
For clarity, the experimental results in Table 4 are visualized as boxplots in Figure 7. The Y-axis represents the evaluation metrics, while the X-axis denotes the number of labeled images (n = 1 to 18). For each n, ten experiments were conducted. The boxplots comprehensively illustrate the distribution of the 10 experimental results. From top to bottom in these plots, the statistical characteristics of the results are clearly observable, including the upper extreme, upper quartile, median, mean, lower quartile, and lower extreme.
The boxplots lead to the following conclusion:
(1) Since the SSS5 method consistently outperforms the SSS method across all n values, and the SSS method consistently outperforms the SS method, this validates the superior performance and generalization capability of the improved semi-supervised learning framework.
(2) When n is small, our improved semi-supervised learning framework (SSS5) significantly enhances vessel segmentation performance in retinal images. Thus, even with minimal labeled samples, the method achieves high performance. In our semi-supervised framework, when n exceeds 8, segmentation performance plateaus without substantial improvement as n increases. Therefore, labeling only 8 retinal images yields satisfactory segmentation results, reducing labeling costs by 60%.

4.3.3. Performance Evolution of the Proposed Semi-Supervised Algorithm with Increasing Iterations

In this experiment, our objective is to analyze how the performance of the SSS5 method evolves with increasing training iterations. We recorded the experimental results of SSS5 under the condition of only one labeled image (n = 1), as visualized in the boxplot of Figure 8. Ten random experiments were conducted, and the boxplot comprehensively illustrates the distribution of these ten results in Figure 8.
In the boxplot, the vertical axis represents the evaluation metrics, while the horizontal axis denotes the training iterations, where "S" denotes supervised learning and the numbers 1–5 indicate specific iteration stages. From the experimental results, it is evident that the mean values of accuracy (Acc) and sensitivity (Sen), the two most critical metrics, gradually increase with the number of iterations. This leads to the conclusion that our improved semi-supervised learning framework enhances segmentation performance as iterations progress, demonstrating its convergence and stability.

4.3.4. Qualitative Experimental Analysis

For clarity and to further evaluate the proposed SSS5 improved semi-supervised algorithm, we conducted qualitative comparisons between the supervised method and several improved semi-supervised algorithms, as shown in Figure 9. The qualitative comparison results demonstrate the role of each filtering strategy. In this experiment, we set the labeled training set to 1 image and the unlabeled training set to 19 images. The qualitative comparison is divided into five steps. After each step, we provide the qualitative test results and zoomed-in views. On the left is an original retinal image from the test set, along with its ground-truth annotation and segmentation mask.
Step 1: Test results after supervised training using labeled data.
Step 2: Test results trained with the improved semi-supervised method SSS1. As shown in the figure, compared to the supervised method, SSS1 predicts a more complete vascular tree, connecting the fragmented segments predicted by the fully supervised method.
Step 3: Test results trained with the improved semi-supervised method SSS2. As shown in the figure, this method effectively eliminates the circular region boundary mispredictions present in SSS1.
Step 4: Test results trained with the improved semi-supervised method SSS3. Due to the absence of edge information in SSS2, thin vessels were erroneously predicted as thick vessels. After incorporating an edge-aware filtering strategy in SSS3, the network learned rich vascular boundary details, resolving the overestimation of thin vessel thickness.
Step 5: Test results trained with the improved semi-supervised method SSS4. During SSS3 iterations, progressive noise accumulation occurred, leading to increased artifacts in testing. Adding median filtering in SSS4 effectively mitigated this issue.
The qualitative analysis demonstrates that the four filtering strategies we introduced are meaningful and each contributes uniquely to improving prediction results. After applying each filtering strategy, distinct improvements in segmentation accuracy are observed. The best prediction performance is achieved when all four filtering strategies are combined.

4.3.5. Comparison with Existing Methods

To further validate the overall performance of the proposed pseudo-label filtering semi-supervised method, comparative experiments were conducted on multiple public datasets against state-of-the-art vessel segmentation models.
In the experiments, we used the results with 50% labeled training samples as the baseline for comparative analysis.
Figure 10 shows partial retinal vessel segmentation results on the DRIVE dataset. The first column displays the input images. The second column presents the segmentation results of our method, the third column shows results based on U-Net++, and the fourth column illustrates results from the DeepLab method. From Figure 10, it is evident that our semi-supervised vessel segmentation method achieves superior vessel segmentation, particularly in distinguishing thin vessels, compared to the classical fully supervised U-Net++ method. This improvement arises from the adoption of a pseudo-label filtering training strategy, which avoids generating low-quality pseudo-labels during model training. Additionally, the slicing operation expands the training samples, enabling the network to accurately identify fine vessels.
Figure 11 presents partial retinal vessel segmentation results on the STARE dataset. The first column displays the input images. The second column presents the segmentation results of our method, the third column shows results based on U-Net++, and the fourth column illustrates results from the DeepLab method. As shown in Figure 11, our method and the fully supervised U-Net++ method produce comparable segmentation outcomes, both effectively segmenting vessels correctly.
For some of the more severely affected retinal images in the STARE dataset, we have also included segmentation results in Figure 12. In these figures, the first row shows three grayscale retinal input images, the second row presents the corresponding ground truths, and the third row displays the segmentation results achieved by our proposed method. It can be observed that for those severely affected retinal images, where the vessel pixels are distorted due to interference from lesion areas, our method can still segment the vessels. However, the overall segmentation performance on these diseased images is not as good as on healthy images.
Table 5 and Table 6 quantitatively compare the proposed semi-supervised method with existing vessel segmentation approaches on the DRIVE and STARE datasets, respectively. The “Category” column in the tables specifies the category of each method: “S” for fully supervised, “U” for unsupervised, and “SS” for semi-supervised methods. From Table 5, our semi-supervised method achieves the highest performance among semi-supervised and unsupervised methods in terms of Accuracy (Acc), Specificity (Spe), and AUC metrics. It also surpasses most fully supervised methods in Acc but underperforms CE-Net and Park et al.’s method in Sen. CE-Net introduces complex modules with dilated convolutions to expand the receptive field, enhancing real vessel detection and achieving a Sen of 83.09%. Park et al. construct an attention mechanism based on GANs, improving thin vessel recognition rates. As shown in Table 6, our method again attains the top performance in semi-supervised and unsupervised categories across Accuracy (Acc), Specificity (Spe) and approaches the performance of fully supervised methods. Overall, our method, trained with only partially labeled datasets, employs a pseudo-label filtering strategy for retinal vessel segmentation and outperforms most fully supervised approaches.

4.4. Discussion

Based on the above experimental results, the following conclusions can be drawn:
(1) From the experiments on different filtering strategies: the conventional self-training semi-supervised method (SSS) surpasses the fully supervised method (SS), while using only pixel-level probability filtering yields suboptimal results. Combining all pseudo-label filtering strategies achieves the best vascular segmentation performance (SSS5), outperforming the traditional self-training semi-supervised method. These experiments validate the effectiveness of the proposed filtering strategies.
(2) From the experiments with varying numbers of training samples: the proposed pseudo-label filtering semi-supervised vessel segmentation method (SSS5) demonstrates excellent generalization. When trained with limited pixel-level labels, it significantly improves segmentation accuracy. With more than 8 labeled samples, SSS5 achieves performance comparable to fully supervised methods, reducing the labor and resource costs of pixel-level annotation by 60%.
(3) From the qualitative experiments: each pseudo-label filtering strategy contributes meaningfully. Different strategies improve pseudo-label quality to varying degrees, while the combination of multiple strategies (SSS5) achieves the best vascular segmentation results.
(4) From the comparisons with existing methods: the proposed pseudo-label filtering semi-supervised retinal vessel segmentation method (SSS5) achieves strong performance with limited labeled samples. It surpasses most fully supervised methods in Accuracy (Acc) and other metrics on the DRIVE and STARE datasets. These results demonstrate the robustness and generalization needed for SSS5 to assist physicians in vascular segmentation research.
(5) Our method fully utilizes unlabeled samples. While slight performance gaps remain compared to some advanced fully supervised methods, SSS5 significantly reduces the labor and resource costs incurred by the large-scale labeled datasets that traditional supervised approaches require.

4.5. Limitations

The proposed method has achieved good results, but it still has some limitations: To obtain efficient pseudo-labels, we added multiple label information to a single image, which indirectly increased the code execution time. In the future, we will focus on further reducing the label information, enabling the model to achieve satisfactory performance even with only a single-label image or image-level weak labels. Additionally, the segmentation performance varies across different datasets. We aim to improve the model’s generalization ability so that it can achieve good results not only on retinal images but also on industrial and natural scene images.

5. Conclusions

The proposed algorithm addresses the challenges of labor-intensive annotation and insufficient labeled retinal vessel samples in deep learning by adopting a semi-supervised learning approach. To mitigate the negative impact of erroneous predictions in pseudo-labels, we propose a pixel-level pseudo-label filtering strategy that removes unreliable pixels from pseudo-labels. Additionally, our method integrates multiple pseudo-label filtering strategies. Experimental results demonstrate that our approach achieves strong segmentation performance using only a subset of labeled samples, surpassing most fully supervised methods in multiple performance metrics while significantly reducing annotation workload.
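As a concrete illustration of “removing unreliable pixels”, one common realization, which we assume here without claiming it is the authors’ implementation, is to mark filtered-out pixels with an ignore label so that they contribute no gradient during retraining:

```python
import torch
import torch.nn as nn

# Filtered pseudo-labels mark unreliable pixels with -1 (as in the
# filtering sketch above); ignore_index excludes exactly those pixels
# from the cross-entropy loss, so only reliable pixels drive learning.
criterion = nn.CrossEntropyLoss(ignore_index=-1)

logits = torch.randn(2, 2, 64, 64, requires_grad=True)  # (N, classes, H, W)
pseudo = torch.randint(-1, 2, (2, 64, 64))              # labels in {-1, 0, 1}
loss = criterion(logits, pseudo)                        # -1 pixels are skipped
loss.backward()                                         # gradients flow only through kept pixels
```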

Author Contributions

Conceptualization, Z.L. (Zheng Lu) and J.L.; methodology, Z.L. (Zheng Lu) and Q.C.; software, Z.L. (Zheng Lu) and T.T.; validation, Z.H. and X.W.; formal analysis, Q.C.; investigation, Z.L. (Zheng Lu); resources, Q.C.; data curation, Z.L. (Zhenyu Liu); writing—original draft preparation, Z.L. (Zheng Lu); writing—review and editing, Z.L. (Zheng Lu); visualization, Z.L. (Zhenyu Liu); supervision, Q.C.; project administration, Q.C.; funding acquisition, Q.C. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Natural Science Research Project of Anhui Higher Education Institutions (2022AH051713, 2023AH052101, 2023AH052097, 2024AH051337) and the Excellent Research and Innovation Teams in Universities of Anhui Province (2024AH010022).

Data Availability Statement

The DRIVE is available at https://opendatalab.org.cn/OpenDataLab/DRIVE (accessed on 15 March 2025). The STARE is available at https://opendatalab.org.cn/OpenDataLab/STARE (accessed on 15 March 2025).

Conflicts of Interest

The authors declare no conflicts of interest.

References

1. Cai, P.; Li, B.; Sun, G.; Yang, B.; Wang, X.; Lv, C.; Yan, J. Deaf-net: Detail-enhanced attention feature fusion network for retinal vessel segmentation. J. Digit. Imaging 2025, 38, 496–519.
2. Su, H.; Gao, L.; Wang, Z.; Yu, Y.; Hong, J.; Gao, Y. A Hierarchical Full-Resolution Fusion Network and Topology-Aware Connectivity Booster for Retinal Vessel Segmentation. IEEE Trans. Instrum. Meas. 2024, 73, 1–16.
3. Khandouzi, A.; Ariafar, A.; Mashayekhpour, Z.; Pazira, M.; Baleghi, Y. Retinal vessel segmentation, a review of classic and deep methods. Ann. Biomed. Eng. 2022, 50, 1292–1314.
4. Toptaş, B.; Hanbay, D. Retinal blood vessel segmentation using pixel-based feature vector. Biomed. Signal Process. Control 2021, 70, 103053.
5. Li, L.Z.; Verma, M.; Nakashima, Y.; Nagahara, H.; Kawasaki, R. IterNet: Retinal image segmentation utilizing structural redundancy in vessel networks. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision, Snowmass Village, CO, USA, 1–5 March 2020; pp. 3645–3654.
6. Wang, K.; Zhang, X.H.; Huang, S.; Wang, Q.L.; Chen, F.Y. CTF-net: Retinal vessel segmentation via deep coarse-to-fine supervision network. In Proceedings of the 2020 IEEE 17th International Symposium on Biomedical Imaging, Iowa City, IA, USA, 4–7 April 2020; pp. 1237–1241.
7. Guo, C.L.; Szemenyei, M.; Yi, Y.G.; Wang, W.L.; Chen, B.E.; Fan, C.Q. SA-Unet: Spatial attention U-Net for retinal vessel segmentation. In Proceedings of the 25th International Conference on Pattern Recognition, Milan, Italy, 10–11 January 2020; pp. 1236–1242.
8. Wu, H.; Wang, W.; Zhong, J.; Lei, B.; Qin, J. Scs-net: A scale and context sensitive network for retinal vessel segmentation. Med. Image Anal. 2021, 70, 102025.
9. Zhong, X.; Zhang, H.; Li, G.; Ji, D. Do you need sharpened details? Asking MMDC-Net: Multi-layer multi-scale dilated convolution network for retinal vessel segmentation. Comput. Biol. Med. 2022, 150, 106198.
10. Li, Y.; Zhang, Y.; Cui, W.G.; Lei, B.Y.; Kuang, X.H.; Zhang, T. Dual encoder-based dynamic-channel graph convolutional network with edge enhancement for retinal vessel segmentation. IEEE Trans. Med. Imaging 2022, 41, 1975–1989.
11. Li, J.Y.; Gao, G.; Yang, L.; Bian, G.B.; Liu, Y.H. DPF-Net: A dual-path progressive fusion network for retinal vessel segmentation. IEEE Trans. Instrum. Meas. 2023, 72, 1–17.
12. Tan, Y.B.; Yang, K.F.; Zhao, S.X.; Li, Y.J. Retinal vessel segmentation with skeletal prior and contrastive loss. IEEE Trans. Med. Imaging 2022, 41, 2238–2251.
13. Yang, K.; Chang, S.; Yuan, J.; Fu, S.; Qin, G.; Liu, S.; Liu, K.; Zhao, Q.; Xue, L. Robust vessel segmentation in laser speckle contrast images based on semi-weakly supervised learning. Phys. Med. Biol. 2023, 68, 145008.
14. Li, C.; Ma, W.; Sun, L.; Ding, X.; Huang, Y.; Wang, G.; Yu, Y. Hierarchical deep network with uncertainty-aware semi-supervised learning for vessel segmentation. Neural Comput. Appl. 2022, 34, 3151–3164.
15. Shen, N.; Xu, T.; Bian, Z.; Huang, S.; Mu, F.; Huang, B.; Xiao, Y.; Li, J. SCANet: A unified semi-supervised learning framework for vessel segmentation. IEEE Trans. Med. Imaging 2023, 42, 2476–2489.
16. Ran, L.; Li, Y.; Liang, G.; Zhang, Y. Pseudo labeling methods for semi-supervised semantic segmentation: A review and future perspectives. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 3054–3080.
17. Lu, Z.; Chen, D. Weakly supervised and semi-supervised semantic segmentation for optic disc of fundus image. Symmetry 2020, 12, 145.
18. Chaudhuri, S.; Chatterjee, S.; Katz, N.; Nelson, M.; Goldbaum, M. Detection of blood vessels in retinal images using two-dimensional matched filters. IEEE Trans. Med. Imaging 1989, 8, 263–269.
19. Soares, J.V.B.; Leandro, J.J.G.; Cesar, R.M.; Jelinek, H.F.; Cree, M.J. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans. Med. Imaging 2006, 25, 1214–1222.
20. Zhang, B.; Zhang, L.; Zhang, L.; Karray, F. Retinal vessel extraction by matched filter with first-order derivative of gaussian. Comput. Biol. Med. 2010, 40, 438–445.
21. Nguyen, U.T.V.; Bhuiyan, A.; Park, L.A.F.; Ramamohanarao, K. An effective retinal blood vessel segmentation method using multi-scale line detection. Pattern Recognit. 2013, 46, 703–715.
22. Azzopardi, G.; Strisciuglio, N.; Vento, M.; Petkov, N. Trainable cosfire filters for vessel delineation with application to retinal images. Med. Image Anal. 2015, 19, 46–57.
23. Ali, A.; Wan, M.; Hussain, A. Blood vessel segmentation from color retinal images using kmeans clustering and 2d gabor wavelet. In Proceedings of the International Conference on Applied Physics, System Science and Computers, Dubrovnik, Croatia, 26–28 September 2018; pp. 221–227.
24. Ghosh, T.K.; Saha, S.; Rahaman, G.M.A.; Sayed, M.A.; Kanagasingam, Y. Retinal blood vessel segmentation: A semi-supervised approach. In Iberian Conference on Pattern Recognition and Image Analysis, Proceedings of the 9th Iberian Conference, IbPRIA 2019, Madrid, Spain, 1–4 July 2019; Springer: Cham, Switzerland, 2019; pp. 98–107.
25. Xu, R.; Liu, T.T.; Ye, X.C.; Lin, L.; Chen, Y.W. Boosting connectivity in retinal vessel segmentation via a recursive semantics-guided network. In Proceedings of the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 786–795.
26. Liskowski, P.; Krawiec, K. Segmenting retinal blood vessels with deep neural networks. IEEE Trans. Med. Imaging 2016, 35, 2369–2380.
27. Park, K.B.; Choi, S.H.; Lee, J.Y. M-gan: Retinal blood vessel segmentation by balancing losses through stacked deep fully convolutional networks. IEEE Access 2020, 8, 146308–146322.
28. Gu, Z.; Cheng, J.; Fu, H.; Zhou, K.; Hao, H.; Zhao, Y.; Zhang, T.; Gao, S.; Liu, J. Ce-net: Context encoder network for 2d medical image segmentation. IEEE Trans. Med. Imaging 2019, 38, 2281–2292.
29. Jin, Q.; Meng, Z.; Pham, T.D.; Chen, Q.; Wei, L.; Su, R. Dunet: A deformable network for retinal vessel segmentation. Knowl.-Based Syst. 2019, 178, 149–162.
30. Zhang, Y.; He, M.; Chen, Z.; Hu, K.; Li, X.; Gao, X. Bridge-Net: Context-involved U-net with patch-based loss weight map for retinal blood vessel segmentation. Expert Syst. Appl. 2022, 195, 116526.
31. Huo, Q. Particle swarm optimization for great enhancement in semi-supervised retinal vessel segmentation with generative adversarial networks. arXiv 2019, arXiv:1906.07084.
32. Ruan, J.; Xiang, S.; Xie, M.; Liu, T.; Fu, Y. Malunet: A multi-attention and light-weight unet for skin lesion segmentation. In Proceedings of the 2022 IEEE International Conference on Bioinformatics and Biomedicine, Las Vegas, NV, USA, 6–9 December 2022; pp. 1150–1156.
33. Wu, Y.C.; Xia, Y.; Song, Y.; Zhang, Y.N.; Cai, W.D. Multiscale network followed network model for retinal vessel segmentation. In Proceedings of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention, Granada, Spain, 4–6 July 2018; pp. 119–126.
34. Yang, L.; Wang, H.X.; Zeng, Q.S.; Liu, Y.H.; Bian, G.B. A hybrid deep segmentation network for fundus vessels via deep-learning framework. Neurocomputing 2021, 448, 168–178.
35. Wang, D.Y.; Haytham, A.; Pottenburgh, J.; Saeedi, O.; Tao, Y. Hard attention net for automatic retinal vessel segmentation. IEEE J. Biomed. Health Inform. 2020, 24, 3384–3396.
36. Ronneberger, O.; Fischer, P.; Brox, T. U-Net: Convolutional networks for biomedical image segmentation. In Proceedings of the 18th International Conference on Medical Image Computing and Computer Assisted Intervention, Munich, Germany, 5–9 October 2015; pp. 234–241.
37. Zhou, T.; Dong, Y.L.; Huo, B.Q.; Liu, S.; Ma, Z.J. U-Net and its applications in medical image segmentation: A review. J. Image Graph. 2021, 26, 2058–2077.
38. Zhou, Z.W.; Siddiquee, M.M.R.; Tajbakhsh, N.; Liang, J.M. UNet++: A nested U-Net architecture for medical image segmentation. In Proceedings of the 21st International Conference on Medical Image Computing and Computer Assisted Intervention, Granada, Spain, 4–6 July 2018; pp. 3–11.
39. Huang, H.; Lin, L.; Tong, R.; Hu, H.; Zhang, Q.; Iwamoto, Y.; Han, X.; Chen, Y.W.; Wu, J. UNet 3+: A full-scale connected UNet for medical image segmentation. In Proceedings of the 2020 IEEE International Conference on Acoustics, Speech and Signal Processing, Barcelona, Spain, 4–8 May 2020; pp. 1055–1059.
40. Zhang, S.; Fu, H.; Yan, Y.; Zhang, Y.; Wu, Q.; Yang, M.; Tan, M.; Xu, Y. Attention guided network for retinal image segmentation. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 797–805.
41. Lü, J.; Ma, C.; Cheng, C. Improved U-Net network for retinal vascular segmentation. J. Front. Comput. Sci. Technol. 2023, 17, 657–666.
42. Mou, L.; Zhao, Y.; Fu, H.; Liu, Y.; Cheng, J.; Zheng, Y.; Su, P.; Yang, J.; Chen, L.; Frangi, A.F.; et al. CS2-Net: Deep learning segmentation of curvilinear structures in medical imaging. Med. Image Anal. 2021, 67, 101874.
43. Zhang, M.; Yu, F.; Zhao, J.; Zhang, L.; Li, Q.Z. BEFD: Boundary enhancement and feature denoising for vessel segmentation. In Proceedings of the 23rd International Conference on Medical Image Computing and Computer Assisted Intervention, Lima, Peru, 4–8 October 2020; pp. 775–785.
44. Wu, Y.; Xia, Y.; Song, Y.; Zhang, D.; Cai, W. Vessel-Net: Retinal vessel segmentation under multi-path supervision. In Proceedings of the 22nd International Conference on Medical Image Computing and Computer Assisted Intervention, Shenzhen, China, 13–17 October 2019; pp. 264–272.
45. Zhou, J.C.; Hao, M.L.; Zhang, D.H.; Zou, P.Y.; Zhang, W.S. Fusion PSPnet image segmentation based method for multi-focus image fusion. IEEE Photonics J. 2019, 11, 1–12.
46. Laibacher, T.; Weyde, T.; Jalali, S. M2U-Net: Effective and efficient retinal vessel segmentation for real-world applications. In Proceedings of the 2019 IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–20 June 2019; pp. 115–124.
47. Liu, Y.H.; Shen, J.; Yang, L.; Yu, H.N.; Bian, G.B. Wave-Net: A lightweight deep network for retinal vessel segmentation from fundus images. Comput. Biol. Med. 2023, 152, 106341.
48. Li, M.X.; Zhou, S.L.; Chen, C.; Zhang, Y.Y.; Liu, D.; Xiong, Z.W. Retinal vessel segmentation with pixel-wise adaptive filters. In Proceedings of the 19th IEEE International Symposium on Biomedical Imaging, Kolkata, India, 28–31 March 2022; pp. 1–5.
49. Hung, W.; Tsai, Y.; Liou, Y.; Lin, Y.; Yang, M. Adversarial learning for semi-supervised semantic segmentation. arXiv 2018, arXiv:1802.07934.
50. Tarvainen, A.; Valpola, H. Mean teachers are better role models: Weight-averaged consistency targets improve semi-supervised deep learning results. Adv. Neural Inf. Process Syst. 2017, 30, 1195–1204.
51. Sajjadi, M.; Javanmardi, M.; Tasdizen, T. Mutual exclusivity loss for semi-supervised deep learning. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 1908–1912.
52. Shi, W.; Gong, Y.; Ding, C.; Tao, Z.M.X.; Zheng, N. Transductive semi-supervised deep learning using min-max features. In Proceedings of the European Conference on Computer Vision (ECCV), Munich, Germany, 8–14 September 2018; pp. 299–315.
53. Yu, L.; Wang, S.; Li, X.; Fu, C.W.; Heng, P.A. Uncertainty-aware self-ensembling model for semi-supervised 3D left atrium segmentation. In Proceedings of the Medical Image Computing and Computer Assisted Intervention–MICCAI, Shenzhen, China, 13–17 October 2019; pp. 605–613.
54. Wu, Y.; Ge, Z.; Zhang, D.; Xu, M.; Zhang, L.; Xia, Y.; Cai, J. Mutual consistency learning for semi-supervised medical image segmentation. Med. Image Anal. 2022, 81, 102530.
55. Sheeba, J.; Parasuraman, S.; Amudha, K. Contrast enhancement and brightness preserving of digital mammograms using fuzzy clipped contrast-limited adaptive histogram equalization algorithm. Appl. Soft Comput. 2016, 42, 167–177.
56. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2015, 39, 640–651.
57. Chen, L.C.; Papandreou, G.; Kokkinos, I.; Murphy, K.; Yuille, A.L. Deeplab: Semantic image segmentation with deep convolutional nets, atrous convolution, and fully connected crfs. IEEE Trans. Pattern Anal. Mach. Intell. 2018, 40, 834–848.
58. Li, H.; Sun, P. Image-Based Fire Detection Using Dynamic Threshold Grayscale Segmentation and Residual Network Transfer Learning. Mathematics 2023, 11, 21.
59. Gour, N.; Khanna, P. Blood Vessel Segmentation Using Hybrid Median Filtering and Morphological Transformation. In Proceedings of the 13th International Conference on Signal-Image Technology & Internet-Based Systems, Jaipur, India, 4–17 December 2017; pp. 151–157.
60. Staal, J.; Abràmoff, M.D.; Niemeijer, M.; Viergever, M.A.; Van Ginneken, B. Ridge-based vessel segmentation in color images of the retina. IEEE Trans. Med. Imaging 2004, 23, 501–509.
61. Hoover, A.; Kouznetsova, V.; Goldbaum, M. Locating blood vessels in retinal images by piecewise threshold probing of a matched filter response. IEEE Trans. Med. Imaging 2000, 19, 203–210.
62. Chen, D.; Ao, Y.; Liu, S. Semi-Supervised Learning Method of U-Net Deep Learning Network for Blood Vessel Segmentation in Retinal Images. Symmetry 2020, 12, 1067.
63. Hou, J.; Ding, X.; Deng, J. Semi-supervised semantic segmentation of vessel images using leaking perturbations. In Proceedings of the IEEE Winter Conference on Applications of Computer Vision 2022, Waikoloa, HI, USA, 4–8 January 2022; pp. 2625–2634.
Figure 1. Overview of the proposed general framework. The detailed implementation procedure is as follows: (1) image preprocessing and data augmentation; (2) after preprocessing, the dataset is partitioned into labeled and unlabeled training subsets; (3) the deep neural network is trained on the labeled subset; (4) the trained model generates pseudo-labels for the unlabeled subset; (5) the designed refinement strategy filters the pseudo-labels to retain the most valuable samples; (6) the refined pseudo-labels and their corresponding preprocessed images are merged with the original labeled dataset; (7) the augmented dataset is used to retrain the network.
Figure 2. Image cropping operation.
Figure 3. U-Net network structure.
Figure 4. Filtering strategy.
Figure 5. The prediction results after processing and mask comparison.
Figure 6. Direct elimination of images containing too few vascular pixels.
Figure 7. Box plots of evaluation metrics for different numbers of training samples.
Figure 8. Performance analysis of different iterations.
Figure 9. Qualitative comparison of several improved semi-supervised algorithms.
Figure 10. Segmentation results on the DRIVE dataset.
Figure 11. Segmentation results on the STARE dataset.
Figure 12. Segmentation results of severe lesion images from the STARE dataset.
Table 1. Accuracy of several main segmentation networks in DRIVE dataset.

Network    FCN    DeepLab  UNet++  U-Net
Accuracy   0.901  0.951    0.952   0.960
Table 2. Comparison of different Filtering methods for blood vessel segmentation.

n  Algorithm  Acc     Sen     Spe     AUC
1  S          0.9443  0.6287  0.9915  0.8105
1  SS         0.9456  0.6303  0.9927  0.8114
1  SS1        0.9451  0.8022  0.9590  0.8806
1  SS2        0.9459  0.7910  0.9620  0.8765
1  SS3        0.9475  0.6628  0.9887  0.8257
1  SS4        0.9477  0.6513  0.9915  0.8213
1  SS5        0.9484  0.6770  0.9875  0.8322
2  S          0.9490  0.6652  0.9902  0.8277
2  SS         0.9529  0.6858  0.9910  0.8386
2  SS1        0.9517  0.8192  0.9635  0.8914
2  SS2        0.9514  0.8154  0.9650  0.8902
2  SS3        0.9517  0.6773  0.9911  0.8342
2  SS4        0.9518  0.6746  0.9911  0.8332
2  SS5        0.9538  0.7032  0.9889  0.8463
Table 3. Comparison of Segmentation Results Using the Four Filtering Strategies.

Method                  Acc     Sen     Spe     AUC     Dice
Strategy 1              0.9517  0.8192  0.9635  0.8914  0.8821
Strategy 2              0.9514  0.8194  0.9621  0.8901  0.8819
Strategy 3              0.9515  0.8193  0.9636  0.8917  0.8813
Strategy 4              0.9517  0.8188  0.9626  0.8922  0.8791
Strategy 1 + 2          0.9514  0.8154  0.9650  0.8902  0.8824
Strategy 1 + 3          0.9518  0.8166  0.9643  0.8862  0.8795
Strategy 1 + 4          0.9512  0.8164  0.9651  0.8913  0.8817
Strategy 2 + 3          0.9511  0.8078  0.9753  0.8802  0.8820
Strategy 2 + 4          0.9515  0.8051  0.9650  0.8702  0.8822
Strategy 3 + 4          0.9514  0.8154  0.9665  0.8820  0.8816
Strategy 1 + 2 + 3      0.9517  0.6773  0.9911  0.8342  0.8823
Strategy 1 + 2 + 4      0.9520  0.6876  0.9811  0.8454  0.8834
Strategy 1 + 3 + 4      0.9523  0.6933  0.9832  0.8299  0.8856
Strategy 2 + 3 + 4      0.9525  0.6937  0.9910  0.8442  0.8863
Strategy 1 + 2 + 3 + 4  0.9538  0.7032  0.9889  0.8463  0.8915
Table 4. Comparison of segmentation performance with different number of training samples.

n   Algorithm  Acc     Sen     Spe     AUC
1   S          0.9462  0.6661  0.9867  0.8265
1   SS         0.9478  0.6496  0.9895  0.8208
1   SS5        0.9498  0.6949  0.9862  0.8397
3   S          0.9523  0.6906  0.9892  0.8397
3   SS         0.9567  0.7242  0.9886  0.8560
3   SS5        0.9559  0.7302  0.9859  0.8579
5   S          0.9547  0.6975  0.9926  0.8445
5   SS         0.9566  0.7243  0.8982  0.8558
5   SS5        0.9593  0.7504  0.9861  0.8683
7   S          0.9561  0.7045  0.9912  0.8479
7   SS         0.9583  0.7204  0.9904  0.8553
7   SS5        0.9606  0.7634  0.9857  0.8745
9   S          0.9579  0.7171  0.9909  0.8537
9   SS         0.9593  0.7315  0.9903  0.8607
9   SS5        0.9614  0.7583  0.9875  0.8729
11  S          0.9583  0.7081  0.9918  0.8505
11  SS         0.9593  0.7291  0.9909  0.8602
11  SS5        0.9616  0.7563  0.9873  0.8724
13  S          0.9585  0.7148  0.9922  0.8533
13  SS         0.9596  0.7277  0.9911  0.8594
13  SS5        0.9613  0.7491  0.9887  0.8693
15  S          0.9583  0.7141  0.9925  0.8532
15  SS         0.9613  0.7462  0.9895  0.8678
15  SS5        0.9625  0.7632  0.9878  0.8755
17  S          0.9593  0.7231  0.9917  0.8574
17  SS         0.9606  0.7434  0.9895  0.8666
17  SS5        0.9633  0.7667  0.9882  0.8785
18  S          0.9594  0.7242  0.9916  0.8576
18  SS         0.9615  0.7495  0.9893  0.8694
18  SS5        0.9626  0.7651  0.9878  0.8764
Table 5. Comparison of segmentation results of different methods based on DRIVE dataset.

Method                 Category  Acc     Spe     Sen     AUC
Park et al. [27]       S         0.9706  0.9836  0.8346  0.9868
U-Net [36]             S         0.9653  0.9783  0.7756  0.9794
M2UNet [46]            S         0.9630  -       -       -
CE-Net [28]            S         0.9545  -       0.8309  0.9779
DUNet [29]             S         0.9566  0.9800  0.7963  0.9802
Bridge-Net [30]        S         0.9565  0.9818  0.7853  0.9834
Zhang et al. [20]      U         0.9382  0.9724  0.7121  -
Nguyen et al. [21]     U         0.9407  -       -       -
Azzopardi et al. [22]  U         0.9442  0.9704  0.7655  0.9614
Ali et al. [23]        U         0.9425  0.9757  0.7206  -
Huo et al. [31]        SS        -       -       -       0.9550
Ghosh et al. [24]      SS        0.961   0.981   0.737   0.859
Hou et al. [63]        SS        0.9574  0.8676  0.9750  -
Li et al. [14]         SS        0.945   0.934   0.955   -
Ours                   SS        0.9614  0.9874  0.7565  0.8721
Table 6. Comparison of segmentation results of different methods based on STARE dataset.

Method                 Category  Acc     Spe     Sen     AUC
Park et al. [27]       S         0.9878  0.9938  0.8324  0.9873
MALU-Net [32]          S         0.9224  0.9605  0.5891  0.9143
U-Net [36]             S         0.9716  0.9892  0.7600  0.9781
Zhang et al. [20]      U         0.9131  0.9470  0.5716  -
Nguyen et al. [21]     U         0.9324  -       -       -
Azzopardi et al. [22]  U         0.9497  0.9701  0.7716  0.9563
Ghosh et al. [24]      SS        0.960   0.972   0.7508  0.889
Hou et al. [63]        SS        0.9565  0.9186  0.9102  -
Li et al. [14]         SS        0.943   0.954   0.913   -
Ours                   SS        0.9654  0.9814  0.7615  0.8921
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
