Article

Spherical Superpixel Segmentation with Context Identity and Contour Intensity

1 Institute of Intelligent Control and Image Engineering, Xidian University, Xi’an 710071, China
2 Xi’an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi’an 710119, China
* Author to whom correspondence should be addressed.
Symmetry 2024, 16(7), 925; https://doi.org/10.3390/sym16070925
Submission received: 10 June 2024 / Revised: 11 July 2024 / Accepted: 18 July 2024 / Published: 19 July 2024
(This article belongs to the Special Issue Symmetry Applied in Computer Vision, Automation, and Robotics)

Abstract: Superpixel segmentation is a popular preprocessing tool in the field of image processing. Nevertheless, conventional planar superpixel generation algorithms are ill-suited to segmenting symmetrical spherical images because of the distinctive geometric differences between the two domains. In this paper, we present a novel superpixel algorithm termed context identity and contour intensity (CICI) that is specifically tailored to spherical scene segmentation. By defining a neighborhood range and a regional context identity, we propose a symmetrical spherical seed-sampling method that optimizes both the quantity and distribution of seeds, achieving evenly distributed seeds across the panoramic surface. Additionally, we integrate the contour prior into the superpixel correlation measurement, which significantly enhances boundary adherence across different scales. By implementing these two-fold optimizations within a non-iterative clustering framework, we obtain the synergistic CICI, which generates higher-quality superpixels. Extensive experiments on public datasets confirm that our work outperforms the baselines and achieves results comparable with state-of-the-art superpixel algorithms on several quantitative metrics.

1. Introduction

The concept of a superpixel was introduced by Ren et al. [1] in 2003, with the aim of grouping similar pixels in a localized context of an image to extract coherent region-level features. Since then, superpixel segmentation has gradually become a popular preprocessing tool for various advanced computer vision tasks, boosting their running efficiency. Emerging superpixel algorithms have found applications in diverse fields, including semantic segmentation [2,3,4], object detection and tracking [5,6,7,8], robot vision [9], depth estimation [10], etc. Other noteworthy superpixel segmentation algorithms include those based on metaheuristic techniques [11], agents [12], and cellular automata [13].
With the advancement of imaging technology, 360° panoramic cameras have unlocked novel visual representations of real-world scenarios. These cameras can capture virtual reality (VR) images, providing users with a comprehensive observation of surrounding information through a 360° horizontal and 180° vertical view. Nevertheless, most image processing algorithms have traditionally been designed for planar images, posing significant challenges in directly applying conventional approaches to spherical image analysis, especially for superpixel segmentation. In recent years, the increasing significance of spherical image processing in VR has prompted researchers to investigate superpixel applications. Zhao et al. introduced the spherical SLIC (SphSLIC) algorithm [14]. They initialize superpixel seeds using Hammersley sampling [15] to narrow the search range for K-means clustering and employ cosine similarity as the distance metric between pixels and seeds; both optimizations enhance the suitability of SphSLIC superpixels for spherical images. Similarly, spherical mean shift (SphMS) and spherical ETPS (SphETPS) [16] build on SphSLIC, with emphasis on distance metrics, neighborhood extent, and boundary issues. Another representative work is spherical shortest path-based superpixels (SphSPS) [17] by Giraud et al., which traces the shortest path between a pixel and a seed on the spherical image and utilizes color information and contour weights along the path to enhance segmentation accuracy and regularity. To meet real-time requirements, Silveira et al. optimized two planar superpixel algorithms, simple non-iterative clustering (SNIC) [18] and superpixel hierarchy (SH) [19], and proposed two spherical counterparts, spherical SNIC (SSNIC) and spherical SH (SSH) [20], which inherit their balance of accuracy and efficiency.
Typically, spherical images are subjected to equirectangular projection (ERP) [21] or cube map projection (CMP) [22], followed by the relevant calculations, after which the results are projected back onto the spherical image. Most current studies primarily focus on ERP images. However, it is worth noting that spherical images inherently represent a closed geometric sphere. When these images undergo ERP, open regions and significant distortions are artificially introduced, particularly at the top and bottom. Directly applying planar superpixel algorithms to ERP images cannot generate superpixels with continuous contour boundaries in the originally closed regions when the result is projected back onto the spherical image, especially near its poles. As depicted in Figure 1a,c, simple linear iterative clustering (SLIC) [23], a planar algorithm, and the spherical superpixel algorithm SphSLIC [14] are applied to the ERP image for comparison. In Figure 1b,d, the label maps of SLIC and SphSLIC projected back onto the spherical image reveal the disparities: Figure 1b displays irregular superpixels in the black rectangular box and open, discontinuous contours in the black elliptical box, whereas Figure 1d demonstrates superior performance.
Some researchers have applied superpixel algorithms to spherical image processing. Cabral et al. [24] introduced a superpixel algorithm for processing spherical images in 3D reconstruction. Hao et al. [25] developed a change detection algorithm that leverages superpixels to mitigate resolution degradation issues. However, these approaches usually overlook the inherent differences between spherical and planar images, resulting in superpixels that retain the aforementioned limitations and consequently impact performance. In addition, artificially processed spherical projection maps typically exhibit non-uniform content distribution, with a concentration of content in the equatorial region [18]. Existing superpixel algorithms tend to neglect this aspect and fail to adaptively adjust superpixel sizes based on content sparsity, leading to inadequate segmentation in smaller regions. Moreover, when different objects exhibit similar colors, clear contour boundaries may not always be present, posing challenges to segmentation accuracy.
Based on these considerations, a structurally optimized superpixel segmentation algorithm is proposed, termed context identity and contour intensity (CICI). In this work, we adopt the fundamental framework of simple non-iterative clustering (SNIC). The key idea is to optimize this conventional framework, with particular emphasis on the initialization phase and the similarity measurement stage. In the initialization phase, we account for the distinctive geometric attributes of spherical images and the context identity, redefining the merging and emerging operations for seeds. Using the proposed inter-pixel similarity measurement, we then employ a contour intensity constraint to efficiently discriminate between two pixels with different labels. The integration of these strategies proves effective not only for spherical images but also yields accurate clustering results for planar images, demonstrating that our proposal is a comprehensive optimization of the framework rather than a single algorithmic refinement.
In the context of prior research, our contributions can be summarized as follows:
  • An efficient seed-sampling method is proposed by defining a neighborhood range and a regional context identity, which optimizes both the quantity and distribution of seeds and leads to evenly distributed seeds across the panoramic surface.
  • A subtle inter-pixel correlation measurement is put forward to enhance boundary adherence across different scales, integrating the contour intensity into the pixel-superpixel correlation measurement.
  • A context identity and contour intensity strategy is introduced to enhance the overall performance within a non-iterative clustering framework. Extensive experiments on two datasets were conducted, confirming its feasibility and comparable results.
The remainder of this work is organized as follows: Section 2 briefly reviews the SNIC superpixel algorithm; Section 3 explains the implementation of the CICI method systematically; Section 4 presents the experiments and analyses; finally, Section 5 gives the conclusions.

2. Preliminaries on SNIC

In this section, we briefly review the SNIC algorithm, since the proposed CICI can be regarded as a variant of it. Given an image $I = \{p_i\}_{i=1}^{N}$ consisting of $N$ pixels and a pre-defined superpixel number $K$, SNIC first divides $I$ into $K$ uniform grids, each covering approximately $N/K$ pixels, and places the initial seeds at the center of each grid with an interval $s = \sqrt{N/K}$. Let $C_i = [l_i, a_i, b_i]$ and $P_i = [x_i, y_i]$ denote the CIELAB color and Euclidean position vectors of each pixel $p_i \in I$, respectively. The color and spatial information of the $k$-th seed $s_k$ are denoted by $C_k^s$ and $P_k^s$ (the index $k$ here refers to the seed $s_k$ rather than the pixel $p_k$). SNIC measures the correlation between a pixel and a seed by computing the feature distance $D(p_i, s_k)$ between $p_i$ and $s_k$, and then assigns labels according to the global minimal distance as follows:
$$D(p_i, s_k) = \sqrt{\frac{\|C_i - C_k^s\|_2^2}{s} + \frac{\|P_i - P_k^s\|_2^2}{m}}$$
where $s = \sqrt{N/K}$ and the parameter $m$ is pre-set to control the compactness of superpixels. A greater $m$ enhances the significance of the spatial distance, resulting in more compact superpixels; conversely, when color differences play a more dominant role, superpixels become more sensitive to color changes and less compact. This process is shared between the SNIC and SLIC algorithms. A key distinction, however, is SNIC's use of a priority queue instead of SLIC's multiple iterations, which effectively reduces the time complexity. The non-iterative label assignment of the SNIC algorithm can be summarized as follows:
Step 1: Initialize a min-priority queue $Q$ to store the vector nodes corresponding to all pixels and seeds, and a label map $L = \{L(p_i)\}_{i=1}^{N}$ to store the label of each pixel within image $I$. Each vector node $e$ records the color and spatial information of a pixel, as well as the pixel-seed distance calculated by Equation (1), which acts as the key value for sorting in $Q$. For each seed $s_k$, a vector node $e_k^s = \{P_k^s, C_k^s, k, d\}$ is defined and pushed onto $Q$. Since each seed is initially an image pixel, it acquires the initial label $k$ and sets $d = 0$.
Step 2: Pop the element $e_i$ with the minimum $d$ from $Q$ if it exists. If the pixel $p_i$ corresponding to $e_i = \{P_i, C_i, k, D(p_i, s_k)\}$ is not yet labeled, assign $L(p_i) = k$ and dynamically update the seed $s_k$ with the $C_i$ and $P_i$ of $p_i$.
Step 3: Traverse the 8-neighboring pixels $p_j$ of $p_i$ successively. If $p_j$ has not been labeled, create a node $e_j = \{P_j, C_j, k, D(p_j, s_k)\}$ for $p_j$ and push it onto $Q$.
Step 4: Repeat Steps 2 and 3 until $Q$ is empty, and then output the label map $L$.
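The four steps above can be sketched in a short Python implementation. This is a minimal illustration, not the reference SNIC code: the grid seeding, the running centroid updates, and the distance of Equation (1) follow the description above, while boundary handling and data structures are simplified.

```python
import heapq
import numpy as np

def snic(image, K, m=10.0):
    """Minimal SNIC sketch: non-iterative clustering via a min-priority queue.

    image: (H, W, 3) float array (a stand-in for CIELAB values),
    K: desired superpixel count, m: compactness weight.
    Returns an (H, W) integer label map.
    """
    H, W = image.shape[:2]
    N = H * W
    s = np.sqrt(N / K)  # seed interval
    # Place initial seeds on a uniform grid (Step 1).
    gy = max(int(round(H / s)), 1)
    gx = max(int(round(W / s)), 1)
    seeds = [(int((i + 0.5) * H / gy), int((j + 0.5) * W / gx))
             for i in range(gy) for j in range(gx)]

    labels = -np.ones((H, W), dtype=int)
    # Per-cluster running sums for dynamic seed (centroid) updates.
    stats = [[0, np.zeros(3), np.zeros(2)] for _ in seeds]
    heap = [(0.0, y, x, k) for k, (y, x) in enumerate(seeds)]
    heapq.heapify(heap)

    nbrs = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
            (0, 1), (1, -1), (1, 0), (1, 1)]
    while heap:
        d, y, x, k = heapq.heappop(heap)          # Step 2
        if labels[y, x] != -1:
            continue
        labels[y, x] = k
        st = stats[k]                              # update cluster centroid
        st[0] += 1
        st[1] = st[1] + image[y, x]
        st[2] = st[2] + np.array([y, x])
        ck, pk = st[1] / st[0], st[2] / st[0]
        for dy, dx in nbrs:                        # Step 3
            ny, nx = y + dy, x + dx
            if 0 <= ny < H and 0 <= nx < W and labels[ny, nx] == -1:
                dc = float(np.sum((image[ny, nx] - ck) ** 2))
                dp = (ny - pk[0]) ** 2 + (nx - pk[1]) ** 2
                dist = np.sqrt(dc / s + dp / m)    # Equation (1)
                heapq.heappush(heap, (dist, ny, nx, k))
    return labels                                  # Step 4
```

Because each pixel is labeled exactly once when it is first popped, the whole image is clustered in a single pass over the queue.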

3. Method

The detailed introduction of the proposed CICI framework is presented in this section, and the overall processing flow is depicted in Figure 2.
Specifically, for an input ERP image (Figure 2a), CICI starts by extracting the corresponding contour map (Figure 2b) using the structured forest edge detection algorithm [26]; this map is later used to compute the regional context identity. A set of seeds is then generated through Fibonacci sampling [27], followed by optimization of their positions and existence, as shown in Figure 2d,e. Finally, the refined seeds and the contour-perceptive distance metric are employed for superpixel segmentation within a non-iterative clustering framework.

3.1. Sampling Strategies for Spherical Image

According to previous research, it is difficult to initialize all seeds directly on the spherical image. Conventional approaches usually adopt a three-step strategy: converting the spherical image into an ERP image, sampling, and mapping back to the spherical image. With this strategy, each latitude line of the spherical image receives an equal number of seeds. However, the radius of the latitude circle decreases when approaching the poles, resulting in a higher seed density near these regions. As shown in Figure 3a, if we initialize seeds on the ERP image in a SNIC-like manner, a uniform distribution of seeds on the spherical image is unattainable.
Currently, there are various sampling methods for spherical images, including geodesic sampling, Hammersley sampling, and Halton sampling [15]. Geodesic sampling yields an approximately uniform seed distribution on spherical images; however, it can only generate subdivision-level-dependent numbers of sampling points rather than a specified number of seeds. On the other hand, although Hammersley and Halton sampling support a fixed number of seeds, the resulting seeds are randomly rather than regularly distributed, as illustrated in Figure 3b,d.
To overcome the limitations of the aforementioned sampling methods, we introduce Fibonacci sampling, which generates approximately uniform seeds on a spherical image in adjustable numbers. The procedure begins by distributing $K$ seeds on the ERP image, with the coordinates $(x_i, y_i)$ of these seeds calculated as follows:
$$(x_i, y_i) = (i\varphi,\ i/K), \qquad i = 0, 1, 2, \ldots, K-1$$
where $\varphi = (1 + \sqrt{5})/2$. Next, these seeds are mapped back to the spherical image by the following:
$$\begin{cases} X = \sin(\phi)\cos(\theta) \\ Y = \sin(\phi)\sin(\theta) \\ Z = \cos(\phi) \end{cases}$$
where $\theta = 2\pi x$ and $\phi = \arccos(1 - 2y)$; $(x, y)$ and $(X, Y, Z)$ are the coordinates of a seed on the ERP and spherical image, respectively. Figure 3c demonstrates the distribution result.
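The Fibonacci sampling of Equations (2) and (3) can be sketched as follows. One assumption is made explicit: the horizontal coordinate $i\varphi$ is reduced modulo 1 so that it stays on the unit ERP square, as is standard for Fibonacci lattices.

```python
import numpy as np

def fibonacci_sphere_seeds(K):
    """Sketch of Fibonacci seed sampling (Equations (2)-(3)).

    Returns (x, y) seed coordinates on the unit ERP square and the
    corresponding (X, Y, Z) points on the unit sphere.
    """
    phi_g = (1 + np.sqrt(5)) / 2          # golden ratio
    i = np.arange(K)
    x = (i * phi_g) % 1.0                 # fractional part keeps x in [0, 1)
    y = i / K
    theta = 2 * np.pi * x                 # longitude
    phi = np.arccos(1 - 2 * y)            # polar angle
    X = np.sin(phi) * np.cos(theta)
    Y = np.sin(phi) * np.sin(theta)
    Z = np.cos(phi)
    return np.stack([x, y], axis=1), np.stack([X, Y, Z], axis=1)
```

The $\arccos(1 - 2y)$ mapping makes the seeds uniform in spherical area rather than in latitude, which is what yields the even distribution shown in Figure 3c.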

3.2. Optimized Initialization by Context Identity

This part presents a novel strategy for seed distribution based on Fibonacci sampling, as shown in Figure 4. The local region context identity is computed from the prior contour image, providing insight into the complexity of the region's content. It is worth mentioning that context identity is, in essence, reflected by content density; the two notions are interchangeable here. To dynamically adjust the number of seeds in a local area according to its content complexity, a five-step procedure is employed, briefed as follows:
Step 1: Initialize seeds using Fibonacci sampling to achieve a uniform distribution on the spherical image (assuming the number of seeds is K ).
Step 2: Determine the range of the regions $\{R_i\}_{i=1}^{K}$ in which the seeds $\{s_k\}_{k=1}^{K}$ are located:
$$R_i = \left\{ [x, y] \,\middle|\, x_i - \frac{S\lambda}{\sin\phi} \le x \le x_i + \frac{S\lambda}{\sin\phi},\ \ y_i - S\lambda \le y \le y_i + S\lambda \right\}$$
where $S = w / \sqrt{K\pi}$ is the initial size of each superpixel, $\phi = y\pi/h$ is the polar angle of the $y$-th row of the ERP image corresponding to the spherical image, $w$ and $h$ are the image width and height, respectively, and $\lambda$ is a factor that regulates the overlap degree of adjacent regions; $\lambda = 1.5$ is used in this paper.
Step 3: Calculate the context identity of the region $R_i$ where seed $s_i$ is located. We utilize the structured forest edge detection algorithm to extract the contour of the original image, and the contour map is used as input in the form of a grayscale map. The context identity of region $R_i$ is calculated by the following:
$$D_i = \frac{P_{C_i}}{P_{N_i}}$$
where $P_{C_i}$ is the number of pixels in region $R_i$ whose grayscale value is less than $V_{th}$ (default value 150), and $P_{N_i}$ is the total number of pixels in $R_i$. The context density of the whole image is $D_{all}$, and the average context density of all regions $\{R_i\}_{i=1}^{K}$ is $\bar{D}$, calculated by the following:
$$D_{all} = \frac{P_{C_{all}}}{N}, \qquad \bar{D} = \frac{1}{K} \sum_{i=1}^{K} D_i$$
where $P_{C_{all}}$ is the number of pixels in the whole image with grayscale values less than $V_{th}$.
Step 4: The content complexity of all regions can be divided into different levels according to the density information, which then guides the adjustment of the seed number in each region. Define $D_{max} = \max\{D_{all}, \bar{D}\}$ and $D_{min} = \min\{D_{all}, \bar{D}\}$. If $D_i \ge D_{max}$, the content complexity of the region is high; if $D_{min} < D_i < D_{max}$, the content complexity is moderate; if $D_i \le D_{min}$, the content complexity is low.
Step 5: Adjust the number of seeds in regions according to the determined complexity level in Step 4.
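The five steps above can be sketched as follows. This is an illustrative reading of the procedure, not the authors' code: the region size $S = w/\sqrt{K\pi}$ is a reconstruction of the flattened source formula, and the placement of the extra seeds in dense regions (offset by one pixel) is a hypothetical choice, since the exact placement is not specified in the text.

```python
import numpy as np

def redistribute_seeds(contour, seeds, K, lam=1.5, v_th=150):
    """Sketch of the five-step seed redistribution (Section 3.2).

    contour: (h, w) grayscale contour map (dark pixels = contours);
    seeds: list of (x, y) pixel coordinates on the ERP image (Step 1).
    Returns the adjusted seed list (Step 5).
    """
    h, w = contour.shape
    S = w / np.sqrt(K * np.pi)   # reconstructed initial superpixel size

    def density(x, y):
        # Step 2: region R_i (Equation (4)); the horizontal extent is
        # stretched by 1/sin(phi) to compensate for latitude shrinkage.
        phi = max((y + 0.5) * np.pi / h, 1e-3)
        hx = min(int(S * lam / np.sin(phi)), w // 2)
        hy = int(S * lam)
        patch = contour[max(0, y - hy):min(h, y + hy + 1),
                        max(0, x - hx):min(w, x + hx + 1)]
        # Step 3: context identity D_i = P_Ci / P_Ni (Equation (5)).
        return float(np.mean(patch < v_th))

    D = [density(x, y) for (x, y) in seeds]
    D_all = float(np.mean(contour < v_th))    # Equation (6)
    D_bar = float(np.mean(D))
    D_max, D_min = max(D_all, D_bar), min(D_all, D_bar)

    out = []
    for (x, y), d in zip(seeds, D):
        if d >= D_max:
            # Step 4/5: high complexity -> add two extra seeds nearby
            out += [(x, y), (min(x + 1, w - 1), y), (x, min(y + 1, h - 1))]
        elif d > D_min:
            out.append((x, y))                # moderate: keep unchanged
        # low complexity (d <= D_min): delete the seed
    return out
```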

3.3. Optimized Correlation Measurement

This part presents an optimized correlation measurement. The correlation measurement between pixels plays an important role in the quality of the superpixel segmentation algorithm. In this paper, the distance measurement is improved by using the contour information in the contour map, so that the algorithm can accurately segment regions with less apparent boundaries.
The distance $D(p_i, s_i)$ between pixel $p_i$ and seed $s_i$ is defined as follows:
$$D(p_i, s_i) = (D_{lab} + \eta D_{xyz})(1 + \theta D_c)$$
where $D_{lab}$ measures the color difference between pixels, $D_{xyz}$ measures their spatial separation, $\eta$ is a parameter that controls the distance weight, $D_c$ is a new metric component introduced via the contour map to indicate whether a contour lies between the pixels, and $\theta$ is a parameter that controls the weight of the contour term. When $\theta = 0$, $D(p_i, s_i)$ reduces to a distance metric similar to that of SNIC. Each of the three components is described below:
  • Distance measurement $D_{lab}$. The color component of pixel $p_i$ is $[l_{p_i}, a_{p_i}, b_{p_i}]$, and the color component of seed $s_i$ is $[l_{s_i}, a_{s_i}, b_{s_i}]$; then $D_{lab}$ is defined as follows:
    $D_{lab} = \sqrt{(l_{s_i} - l_{p_i})^2 + (a_{s_i} - a_{p_i})^2 + (b_{s_i} - b_{p_i})^2}$
  • Spatial distance metric $D_{xyz}$. The position of pixel $p_i$ on the spherical image is $[X_{p_i}, Y_{p_i}, Z_{p_i}]$; similarly, the coordinate of seed $s_i$ on the spherical image is $[X_{s_i}, Y_{s_i}, Z_{s_i}]$. Therefore, $D_{xyz}$ is defined as follows:
    $D_{xyz} = \sqrt{(X_{s_i} - X_{p_i})^2 + (Y_{s_i} - Y_{p_i})^2 + (Z_{s_i} - Z_{p_i})^2}$
  • Contour term component $D_c$. The contour term is computed on the contour map of the original image. On the ERP image, traverse the shortest path $P(p_i, s_i)$ between pixel $p_i$ and seed $s_i$, as shown in Figure 5. If the gray value $\Phi(p_j)$ of any pixel $p_j \in P(p_i, s_i)$ on the shortest path is no greater than $V_{th}$ ($V_{th} = 150$), there is a contour line between $p_i$ and $s_i$. Therefore, $D_c$ is defined as follows:
    $D_c = \begin{cases} 1, & \text{if } \exists\, p_j \in P(p_i, s_i)\ \text{s.t.}\ \Phi(p_j) \le V_{th} \\ 0, & \text{if } \forall\, p_j \in P(p_i, s_i),\ \Phi(p_j) > V_{th} \end{cases}$
The definition of the distance measure and the calculation of the contour term have now been described in detail. The visual effect of adding the contour term is shown in Figure 6: segmentation quality improves markedly in regions with small color differences.
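A minimal sketch of the contour-aware distance of Equations (7)-(10) follows. One simplification is assumed: a straight line segment on the ERP image stands in for the spherical shortest path $P(p_i, s_i)$, which suffices to illustrate the contour check.

```python
import numpy as np

def contour_distance(img_lab, contour, sphere_xyz, p, s,
                     eta=1.0, theta=1.0, v_th=150):
    """Sketch of the contour-aware distance of Equation (7).

    img_lab: (H, W, 3) CIELAB image; contour: (H, W) grayscale contour map;
    sphere_xyz: (H, W, 3) unit-sphere coordinates of each ERP pixel;
    p, s: (row, col) of the pixel and the seed.
    """
    d_lab = float(np.linalg.norm(img_lab[p] - img_lab[s]))       # Eq. (8)
    d_xyz = float(np.linalg.norm(sphere_xyz[p] - sphere_xyz[s]))  # Eq. (9)

    # Contour term D_c (Equation (10)): 1 if any pixel on the
    # (here straight-line) path is at least as dark as V_th.
    n = max(abs(p[0] - s[0]), abs(p[1] - s[1])) + 1
    rows = np.linspace(p[0], s[0], n).round().astype(int)
    cols = np.linspace(p[1], s[1], n).round().astype(int)
    d_c = 1.0 if np.any(contour[rows, cols] <= v_th) else 0.0

    return (d_lab + eta * d_xyz) * (1 + theta * d_c)
```

Because the contour term multiplies rather than adds, crossing a contour inflates the whole distance, which is what discourages a superpixel from growing across a weak boundary.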

3.4. Boundary Neighborhood

The presented algorithm is designed on the non-iterative framework of SNIC, which is described in Section 2. Once an element $e_i$ is popped from the priority queue, a label is first assigned to its corresponding pixel $p_i$, and the neighboring pixels of $p_i$ are then traversed within its 8-neighborhood. If a neighboring pixel $p_j$ has not been assigned a label, further operations are executed. It is worth noting in Figure 7a that pixels $p_6$, $p_7$, and $p_8$ do not belong to the 8-neighborhood of pixel $p_0$. In a planar image, the relative positions between pixels align with those in real-world scenarios. However, since spherical images undergo projection transformation to produce ERP images, artificial boundaries are formed along their left and right sides. Consequently, while these boundary pixels appear distant from each other on ERP images, they are actually adjacent in both the real-world and spherical representations.
Therefore, when applying non-iterative framework to ERP images, more care should be applied to the neighborhood range of the boundary pixels on both sides of the ERP image. As shown in Figure 7b, when traversing the pixels in the 8-neighborhood of pixel p 0 , this work considers p 6 , p 7 , and p 8 as the 8-neighborhood pixels of p 0 . Similarly, p 0 , p 1 , and p 5 are the 8-neighborhood pixels of p 7 .
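The wrap-around neighborhood described above can be expressed in a few lines. This sketch wraps columns modulo the image width, so that left and right border pixels become mutually adjacent as on the sphere, while rows are simply clamped at the poles.

```python
def erp_neighbors(y, x, h, w):
    """8-neighborhood on an ERP image with horizontal wrap-around (Section 3.4).

    Columns wrap modulo w, matching the closed sphere; rows are clamped.
    Returns a list of (row, col) neighbor coordinates.
    """
    out = []
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            ny = y + dy
            if 0 <= ny < h:                 # no wrap across the poles
                out.append((ny, (x + dx) % w))
    return out
```

For example, a pixel in column 0 gains neighbors in the last column, exactly the $p_6$, $p_7$, $p_8$ case of Figure 7b.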
The optimization strategies of each part of the algorithm have now been introduced in detail, and each plays a different role in the CICI algorithm. A pseudocode summary of the framework is presented in Algorithm 1, and the step-by-step flow diagram of CICI in Figure 8 further facilitates implementation.
Algorithm 1: CICI spherical superpixel segmentation framework
Input: the ERP image $I$, the contour map $G$, the expected superpixel number $K$
Output: the assigned label map $L = \{L(p_i)\}_{i=1}^{N}$
/*Initialization*/
Initialize cluster seeds $\{s_i\}_{i=1}^{K}$ by Fibonacci sampling.
Initialize a min-heap priority queue $Q$.
Divide the region $\{R_i\}_{i=1}^{K}$ of each seed.
/*Seeds redistribution*/
for each region R i do
 Calculate the context identity D i of the current region R i .
end for
Calculate the context identity D a l l and the regional average context identity D ¯ of I
Adjust the number of initial seeds according to the context identity.
Determine D max and D min .
for each region R i do
if D i D max then
  Add two new seeds to area R i .
else if D min < D i < D max then
  Keep the seeds in region R i unchanged.
else if D i D min then
  Delete the seeds in area R i .
end if
end for
for k [ 1 , 2 , , K ]  do
 Create element e k through seeds and push in priority queue Q .
end for
/*label map update*/
while  Q is not empty do
 Pop the element e i from queue Q .
if  p i is not labeled before then
  Assign the label to p i .
  Update the corresponding cluster.
   for each pixel $p_j$ in the new 8-neighborhood of $p_i$ do
   if p j is not labeled before then
    Update the distance D ( p j , s k ) and create the corresponding node e j .
    Push e j onto Q .
   end if
  end for
end if
end while
Return the label map L .

4. Experiments

In this section, the proposed CICI is evaluated to substantiate its superiority. First, two datasets are introduced along with the benchmarks: the Spherical Panorama Segmentation Dataset (SPSDataset75) [16] and the Berkeley Segmentation Data Set 500 (BSDS500) [28]. The qualitative and quantitative performance is then systematically tested and demonstrated, from specific aspects to the overall evaluation, after which the computational efficiency is analyzed. All algorithms were run on an Intel Core i7 at 4.2 GHz with 16 GB RAM, without any parallelization or GPU processing.

4.1. SPSDataset75

The SPSDataset75 comprises 75 ERP images with corresponding manually annotated ground truth, and it commonly serves as a comprehensive benchmark for quantitatively evaluating spherical image segmentation. A portion of the visual representations is illustrated in Figure 9; each ERP image has a size of 1024 × 512.

4.1.1. Qualitative Result Analysis

The proposed algorithm CICI is qualitatively compared and analyzed with several up-to-date spherical superpixel algorithms, including SphSLIC [14], SphSPS [17], SSNIC [20], SLIC [23], and SNIC [18]. To ensure the optimal performance of the comparative algorithm, the experimental parameters are set to their default values as specified in the literature. The iterative algorithms were configured with a maximum iteration count of 10, and the initial number of superpixels was set to 500.
To demonstrate the regularity of the superpixels generated at the polar regions of the spherical image, local viewpoints are presented. As can be seen in the detail views of Figure 10, the contours of the wall junctions lack clarity and there is minimal color contrast between adjacent regions, making it difficult even for observers to distinguish the contour boundary in this region. In contrast, the proposed CICI can accurately capture the true contour boundary of the wall junctions, resulting in a more regular segmentation within the polar region of the spherical image.
Despite the regular distribution in polar regions, the segmentation results of SphSLIC, SphSPS, and SSNIC exhibit poor performance in accurately delineating the contour boundaries of wall junctions. If the conventional SNIC is directly applied to spherical images, there are irregular superpixels in polar regions. As shown in Figure 10, the contours at the wall junctions are inaccurately segmented where the color of the box and the wall is nearly identical. In this scenario, CICI outperforms other algorithms by accurately segmenting toilet boundaries even when boundary colors are not distinct. It achieves adaptability and regularity in superpixel shapes on spherical images by adjusting their sizes based on content density distribution. Conversely, SphSLIC, SphSPS, and SSNIC generate uniformly sized superpixels but fail to capture finer details in content-dense areas, resulting from a lack of adaptive adjustment.

4.1.2. Quantitative Evaluation by Metrics

Although the qualitative analysis suggests that the segmentation effect of CICI is better, an objective evaluation is needed. Five common evaluation metrics are therefore used in the quantitative analysis [29]: boundary recall (BR), precision (PR), under-segmentation error (UE), achievable segmentation accuracy (ASA), and the F-measure, together with the execution time of each algorithm. Quantitative evaluations are listed in Appendix A. To further illustrate the influence of CICI's strategies, the initialization-optimized SNIC (IO-SNIC) based on seed distribution and the distance-optimized SNIC (DO-SNIC) based on the improved measurement are designed as baselines. Similarly, the SphSLIC literature proposes two distance measures, the Euclidean-distance-based Avg-SphSLIC and the cosine-similarity-based Cos-SphSLIC, and the SphSPS literature adds a path color term to the distance metric; the method with the path color term is denoted as C-SphSPS.
Figure 11 quantitatively compares the ten methods. CICI performs best on all four metrics; both the boundary fit and the segmentation accuracy are in the leading position. For the same number of superpixels, Figure 11a shows that CICI has the best boundary fit, followed by IO-SNIC, indicating that the seed optimization strategy plays an important role in the boundary fit of CICI. The reason is that this strategy makes CICI generate smaller superpixels in content-dense regions and thus segment more contour boundaries. In Figure 11b, the curves of CICI and IO-SNIC gradually coincide as the number of superpixels increases, which indicates that the seed optimization strategy reduces the number of superpixels that CICI generates in regions with simple content. Figure 11c,d show that the contour intensity strategy contributes substantially, allowing the algorithm to accurately fit the true image boundaries.
The F-measure is the harmonic mean of BR and PR. Figure 11e shows the F-measure curves of all algorithms for different numbers of superpixels; for the same superpixel count, a higher F-measure indicates a more effective algorithm. The F-measure increases gradually as the number of superpixels grows from 250 to 500 and decreases gradually beyond 500. The time consumption of CICI comes mainly from the content perception strategy, whose trigonometric functions increase the computational cost. Although CICI is not the fastest, it is more advantageous when the other indicators are considered, offering a more balanced trade-off between efficiency and accuracy.
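As a quick reference, the harmonic-mean combination of BR and PR used for Figure 11e can be written as follows; the zero-denominator guard is a defensive addition for the degenerate case.

```python
def f_measure(br, pr):
    """F-measure: the harmonic mean of boundary recall and precision."""
    return 2 * br * pr / (br + pr) if (br + pr) > 0 else 0.0
```

The harmonic mean penalizes imbalance: an algorithm with BR = 1.0 but PR = 0.5 scores only 2/3, lower than one with both at 0.7.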
In summary, the proposed CICI achieves a significant performance improvement over existing algorithms. The four indicators improve by 17.78%, 13.4%, 8.85%, and 51.77%, respectively (compared with the best existing algorithm under each indicator), and the F-measure improves by 15.19%. The above analysis confirms the superior performance of the proposed algorithm.

4.2. BSDS500

To demonstrate that CICI is not only competitive on spherical panoramic images but that the segmentation framework composed of the two optimization strategies also achieves excellent results on planar images, we deploy the optimization strategies on SNIC and conduct quantitative and qualitative experiments on the BSDS500 dataset. BSDS500 contains 500 images of size 481 × 321 or 321 × 481, provided by the computer vision group at the University of California, Berkeley, and consisting of 200 training, 100 validation, and 200 test images. It is a common benchmark for image segmentation and detection; examples are shown in Figure 12.

4.2.1. Qualitative Result Analysis

Figure 13 illustrates the visual performance of five methods evaluated on BSDS500 with K = 100. SNIC is considered an ancestor of superpixel algorithms, while IBIS [30], SCALE [31], and BACA [32] are well-known superpixel algorithms from the last two years. CICI inherits the compactness of SNIC while paying more attention to weak boundaries in images. Compared with CICI, the boundary perception of IBIS and BACA is slightly inadequate. SCALE offers slightly stronger overall visual regularity, but its segmentation accuracy is poorer than CICI's, mainly where foreground and background colors are similar.
From the perspective of the overall segmentation effect, CICI and BACA can adaptively match the size and number of superpixels to the characteristics of the image itself, which for CICI is thanks to the context identity strategy. On this basis, the boundary fit of CICI is more competitive.

4.2.2. Quantitative Evaluation by Metrics

To objectively compare the performance of different superpixel algorithms, BR, UE, ASA, and CO are selected for quantitative evaluation, with the number of superpixels ranging from 50 to 500. The four comparison algorithms have performed excellently over the past two years and have already been benchmarked against commonly used superpixel algorithms such as SLIC [23], SEEDS [33], ERS [34], LSC [35], and FLIC [36], so those comparisons are not repeated here. Because the five algorithms differ only slightly from each other, we report exact values in tables rather than plotting curves. BR (boundary recall) measures how closely the superpixel contours fit the object boundaries; a larger BR means the superpixel boundaries adhere more closely to the true image boundaries. Table 1 shows that CICI is in the first tier on this dataset and remains at the leading level whenever the number of superpixels does not exceed 400. In Table 2, the UE values of all five algorithms continue to decrease as the number of superpixels increases; by definition, a lower UE corresponds to higher segmentation accuracy. The UE value of CICI reaches 0.0338 at K = 500, the best among the five algorithms. Combined with the analysis of Table 3, CICI outperforms all other algorithms in UE and ASA, indicating that the generated superpixels almost exactly overlap the ground-truth objects, which enables good results in practical segmentation applications. Some algorithms pursue a high BR at the cost of shape quality (poor visual regularity), i.e., a low CO; in practical segmentation tasks, however, the balance between accuracy and regularity is equally important.
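For reference, the following sketch shows how BR and UE are commonly computed from a superpixel label map and ground truth. The tolerance radius `r` and the particular UE formulation are common choices from the superpixel-benchmark literature, not necessarily the exact definitions used here:

```python
import numpy as np

def boundary_recall(gt_boundary: np.ndarray, sp_boundary: np.ndarray, r: int = 2) -> float:
    """Fraction of ground-truth boundary pixels with a superpixel boundary
    pixel within a (2r+1)x(2r+1) window. Inputs are boolean maps of the
    same shape."""
    H, W = gt_boundary.shape
    # Dilate the superpixel boundary by r via a sliding maximum.
    pad = np.pad(sp_boundary, r)
    hit = np.zeros_like(sp_boundary, dtype=bool)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            hit |= pad[dy:dy + H, dx:dx + W]
    tp = np.logical_and(gt_boundary, hit).sum()
    return tp / max(gt_boundary.sum(), 1)

def under_segmentation_error(gt_labels: np.ndarray, sp_labels: np.ndarray) -> float:
    """UE as superpixel 'leakage' across ground-truth segment borders
    (one common formulation among several in the literature)."""
    n = gt_labels.size
    leak = 0
    for s in np.unique(sp_labels):
        mask = sp_labels == s
        overlaps = np.bincount(gt_labels[mask].ravel())
        leak += mask.sum() - overlaps.max()  # pixels outside the dominant segment
    return leak / n
```

A superpixel that straddles two ground-truth segments contributes its minority-side pixels to UE, which is why UE falls as superpixels shrink and align better with object borders.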
Table 4 shows the compactness of the five algorithms on the dataset. A larger value indicates a more uniform and regular segmentation shape, and CICI's score is consistent with its visually uniform segmentation results.
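Compactness is typically measured as the area-weighted isoperimetric quotient of the superpixels; the sketch below follows that common formulation (which may differ in detail from the benchmark implementation used here):

```python
import numpy as np

def compactness(labels: np.ndarray) -> float:
    """Area-weighted isoperimetric quotient Q = 4*pi*A / P^2 per superpixel,
    with the perimeter counted as label-change edges (4-connectivity)."""
    H, W = labels.shape
    n = labels.size
    ids, areas = np.unique(labels, return_counts=True)
    perim = {i: 0 for i in ids}
    for y in range(H):
        for x in range(W):
            lab = labels[y, x]
            for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                if not (0 <= ny < H and 0 <= nx < W) or labels[ny, nx] != lab:
                    perim[lab] += 1
    co = 0.0
    for i, a in zip(ids, areas):
        q = 4.0 * np.pi * a / (perim[i] ** 2)
        co += (a / n) * min(q, 1.0)  # clamp: digital shapes can exceed 1
    return co
```

A perfect disc scores 1 under this measure; elongated or ragged superpixels score lower, so higher CO corresponds to the more regular shapes seen in the visual comparison.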
Table 5 shows the average number of superpixels actually generated by the five algorithms over the 500 images of BSDS500. BACA clearly produces the fewest superpixels, which sacrifices a certain degree of segmentation accuracy. SCALE generates exactly the user-specified number and is fully controllable. CICI also tracks the requested number well, achieving a balance between complexity and accuracy. It is worth noting that the best-performing algorithms are shown in red, followed by blue and green.
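The achieved counts in Table 5 can be obtained simply by counting the distinct labels in each output map and averaging over the dataset; a minimal sketch (the toy label maps below are illustrative, not from the dataset):

```python
import numpy as np

def achieved_count(labels: np.ndarray) -> int:
    """Number of distinct superpixel labels actually present in a label map."""
    return int(np.unique(labels).size)

# Toy example: two small label maps with 3 and 2 superpixels respectively.
label_maps = [np.array([[0, 0, 1], [2, 2, 1]]), np.array([[0, 1], [1, 1]])]
avg = sum(achieved_count(m) for m in label_maps) / len(label_maps)
print(avg)  # → 2.5
```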

5. Conclusions

In this study, we propose a context identity and contour intensity (CICI) framework for superpixel segmentation in spherical images. Firstly, we optimize the number and placement of seeds using local context identity. Additionally, we incorporate a contour intensity prior into the correlation measurement to effectively enhance segmentation accuracy. Furthermore, we compare CICI with several state-of-the-art superpixel algorithms on SPSDataset75. Qualitative analysis demonstrates that the proposed CICI achieves content-adaptive adjustment of superpixel size and delivers superior segmentation accuracy. To further demonstrate the effectiveness of our optimization strategies, we integrate them with SNIC, evaluate the resulting method on the BSDS500 dataset, and obtain satisfactory results.
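To illustrate the second optimization in isolation, the following is a minimal sketch of a SNIC-style correlation (distance) measure augmented with a contour-intensity prior. The additive weighting form and the parameter `lambda_c` are illustrative assumptions for exposition, not the paper's exact formulation:

```python
import numpy as np

def cici_distance(pixel_lab, pixel_xy, seed_lab, seed_xy,
                  contour, s, m=10.0, lambda_c=5.0):
    """SNIC-like distance with an additive contour-intensity penalty (sketch).

    pixel_lab / seed_lab : CIELAB color triples
    pixel_xy / seed_xy   : spatial coordinates
    contour              : contour-intensity value in [0, 1] at the pixel
    s                    : expected superpixel spacing; m : compactness weight
    """
    dc = np.sum((np.asarray(pixel_lab) - np.asarray(seed_lab)) ** 2)
    ds = np.sum((np.asarray(pixel_xy) - np.asarray(seed_xy)) ** 2)
    # A pixel lying on a strong contour is costlier to absorb, which
    # discourages superpixels from leaking across object boundaries.
    return np.sqrt(dc / (m ** 2) + ds / (s ** 2)) + lambda_c * contour
```

In a non-iterative clustering loop, this distance would rank candidate pixels in the priority queue, so growth stalls at strong contours first and weak boundaries are preserved.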
Future research will primarily focus on applying CICI to advanced computational vision tasks. In virtual reality (VR) and augmented reality (AR), CICI can bring many potential advantages, including improved rendering efficiency, reduced data transfer and processing costs, and enhanced user interaction experiences, which are described in detail as follows:
(1) CICI segmentation results can be used to optimize the rendering process of virtual reality and augmented reality scenes. By reducing the number of primitives that need to be rendered, storage and transmission costs are effectively reduced while visual quality is maintained.
(2) In AR applications, CICI can assist in extracting and identifying significant feature points or areas in the environment, enhancing user interaction experiences. For instance, identifying and tracking specific objects or landmarks can facilitate natural interaction between virtual information and the real world.

Author Contributions

Conceptualization and methodology, N.L. and B.G.; software and validation, H.L.; formal analysis and investigation, F.H.; resources, C.L.; data curation, W.L.; writing—original draft preparation, N.L.; writing—review and editing, C.L. and B.G.; visualization, H.L. and F.H.; supervision, B.G. and W.L.; project administration and funding acquisition, B.G. and C.L. All authors have read and agreed to the published version of the manuscript.

Funding

This research is supported financially by the National Natural Science Foundation of China (Grant No. 62171341) and Photon Plan in the Xi’an Institute of Optics and Precision Mechanics of the Chinese Academy of Sciences (Grant No. S24-025-III).

Data Availability Statement

The data presented in this study are available on request from the authors.

Acknowledgments

Thanks to the editor and anonymous reviewers for their suggestions and comments that helped improve the quality of our work.

Conflicts of Interest

The authors declare no conflicts of interest.

Appendix A

Table A1. Quantitative comparison of ten algorithms in terms of six indices on SPSDataset75.

BR (↑)

| Algorithm | 250 | 500 | 750 | 1000 | 1250 | 1500 |
| --- | --- | --- | --- | --- | --- | --- |
| SLIC | 0.5637 | 0.6893 | 0.7607 | 0.8049 | 0.8510 | 0.8651 |
| SNIC | 0.5678 | 0.6887 | 0.7558 | 0.8003 | 0.8483 | 0.8792 |
| SSNIC | 0.4512 | 0.6065 | 0.7072 | 0.7793 | 0.8324 | 0.8728 |
| IO-SNIC | 0.6338 | 0.8207 | 0.9160 | 0.9585 | 0.9789 | 0.9878 |
| DO-SNIC | 0.5297 | 0.6626 | 0.7443 | 0.8036 | 0.8468 | 0.8810 |
| SphSPS | 0.5952 | 0.6983 | 0.7629 | 0.8102 | 0.8450 | 0.8728 |
| C-SphSPS | 0.5863 | 0.6887 | 0.7526 | 0.8016 | 0.8362 | 0.8649 |
| Cos-SphSLIC | 0.4108 | 0.5404 | 0.6277 | 0.6971 | 0.7509 | 0.7960 |
| Avg-SphSLIC | 0.4513 | 0.5788 | 0.6651 | 0.7341 | 0.7878 | 0.8292 |
| CICI | 0.6922 | 0.8467 | 0.9255 | 0.9621 | 0.9798 | 0.9877 |

PR (↑)

| Algorithm | 250 | 500 | 750 | 1000 | 1250 | 1500 |
| --- | --- | --- | --- | --- | --- | --- |
| SLIC | 0.6211 | 0.5755 | 0.5505 | 0.5264 | 0.5008 | 0.4933 |
| SNIC | 0.6313 | 0.5771 | 0.5504 | 0.5242 | 0.4993 | 0.4766 |
| SSNIC | 0.6008 | 0.5776 | 0.5556 | 0.5361 | 0.5167 | 0.4984 |
| IO-SNIC | 0.7012 | 0.6825 | 0.6460 | 0.6064 | 0.5706 | 0.5408 |
| DO-SNIC | 0.6563 | 0.6058 | 0.5704 | 0.5440 | 0.5199 | 0.4993 |
| SphSPS | 0.6365 | 0.5914 | 0.5622 | 0.5373 | 0.5176 | 0.4996 |
| C-SphSPS | 0.6365 | 0.5917 | 0.5626 | 0.5383 | 0.5174 | 0.5009 |
| Cos-SphSLIC | 0.5645 | 0.5445 | 0.5283 | 0.5147 | 0.5012 | 0.4881 |
| Avg-SphSLIC | 0.5747 | 0.5490 | 0.5327 | 0.5161 | 0.5006 | 0.4866 |
| CICI | 0.7363 | 0.6945 | 0.6499 | 0.6087 | 0.5723 | 0.5418 |

UE (↓)

| Algorithm | 250 | 500 | 750 | 1000 | 1250 | 1500 |
| --- | --- | --- | --- | --- | --- | --- |
| SLIC | 0.3951 | 0.3199 | 0.2808 | 0.2586 | 0.2356 | 0.2289 |
| SNIC | 0.3762 | 0.3044 | 0.2738 | 0.2503 | 0.2296 | 0.2155 |
| SSNIC | 0.4109 | 0.3244 | 0.2794 | 0.2495 | 0.2294 | 0.2129 |
| IO-SNIC | 0.3464 | 0.2599 | 0.2182 | 0.1937 | 0.1769 | 0.1639 |
| DO-SNIC | 0.2987 | 0.2156 | 0.1745 | 0.1494 | 0.1351 | 0.1222 |
| SphSPS | 0.3614 | 0.2919 | 0.2579 | 0.2346 | 0.2178 | 0.2057 |
| C-SphSPS | 0.3472 | 0.2843 | 0.2532 | 0.2329 | 0.2173 | 0.2065 |
| Cos-SphSLIC | 0.4422 | 0.3662 | 0.3247 | 0.2942 | 0.2726 | 0.2551 |
| Avg-SphSLIC | 0.4649 | 0.3854 | 0.3412 | 0.3102 | 0.2865 | 0.2689 |
| CICI | 0.2206 | 0.1458 | 0.1174 | 0.1022 | 0.0924 | 0.0863 |

ASA (↑)

| Algorithm | 250 | 500 | 750 | 1000 | 1250 | 1500 |
| --- | --- | --- | --- | --- | --- | --- |
| SLIC | 0.7727 | 0.8233 | 0.8478 | 0.8612 | 0.8749 | 0.8790 |
| SNIC | 0.7832 | 0.8312 | 0.8512 | 0.8653 | 0.8779 | 0.8862 |
| SSNIC | 0.7669 | 0.8230 | 0.8507 | 0.8680 | 0.8796 | 0.8889 |
| IO-SNIC | 0.8131 | 0.8637 | 0.8870 | 0.9002 | 0.9091 | 0.9161 |
| DO-SNIC | 0.8332 | 0.8843 | 0.9081 | 0.9221 | 0.9301 | 0.9372 |
| SphSPS | 0.7952 | 0.8409 | 0.8618 | 0.8756 | 0.8854 | 0.8925 |
| C-SphSPS | 0.8033 | 0.8449 | 0.8642 | 0.8766 | 0.8858 | 0.8919 |
| Cos-SphSLIC | 0.7459 | 0.7979 | 0.8245 | 0.8426 | 0.8554 | 0.8657 |
| Avg-SphSLIC | 0.7335 | 0.7872 | 0.8151 | 0.8339 | 0.8480 | 0.8580 |
| CICI | 0.8828 | 0.9247 | 0.9399 | 0.9480 | 0.9531 | 0.9563 |

F-measure (↑)

| Algorithm | 250 | 500 | 750 | 1000 | 1250 | 1500 |
| --- | --- | --- | --- | --- | --- | --- |
| SLIC | 0.5910 | 0.6272 | 0.6387 | 0.6365 | 0.6305 | 0.6283 |
| SNIC | 0.5978 | 0.6280 | 0.6369 | 0.6334 | 0.6285 | 0.6181 |
| SSNIC | 0.5153 | 0.5916 | 0.6223 | 0.6352 | 0.6375 | 0.6344 |
| IO-SNIC | 0.6657 | 0.7452 | 0.7577 | 0.7428 | 0.7209 | 0.6989 |
| DO-SNIC | 0.5862 | 0.6329 | 0.6458 | 0.6488 | 0.6442 | 0.6373 |
| SphSPS | 0.6151 | 0.6404 | 0.6474 | 0.6461 | 0.6420 | 0.6354 |
| C-SphSPS | 0.6104 | 0.6365 | 0.6438 | 0.6441 | 0.6392 | 0.6343 |
| Cos-SphSLIC | 0.4755 | 0.5424 | 0.5737 | 0.5921 | 0.6011 | 0.6051 |
| Avg-SphSLIC | 0.5056 | 0.5639 | 0.5915 | 0.6061 | 0.6121 | 0.6132 |
| CICI | 0.7135 | 0.7631 | 0.7636 | 0.7456 | 0.7225 | 0.6997 |

Execution Time (lg (ms))

| Algorithm | 250 | 500 | 750 | 1000 | 1250 | 1500 |
| --- | --- | --- | --- | --- | --- | --- |
| SLIC | 3.29 | 3.32 | 3.29 | 3.29 | 3.29 | 3.29 |
| SNIC | 3.26 | 3.27 | 3.28 | 3.28 | 3.28 | 3.29 |
| SSNIC | 3.32 | 3.32 | 3.33 | 3.34 | 3.33 | 3.34 |
| IO-SNIC | 3.31 | 3.32 | 3.32 | 3.32 | 3.35 | 3.32 |
| DO-SNIC | 3.43 | 3.39 | 3.40 | 3.39 | 3.39 | 3.40 |
| SphSPS | 3.74 | 3.74 | 3.69 | 3.75 | 3.66 | 3.71 |
| C-SphSPS | 3.73 | 3.68 | 3.68 | 3.67 | 3.68 | 3.68 |
| Cos-SphSLIC | 3.18 | 3.24 | 3.23 | 3.23 | 3.24 | 3.22 |
| Avg-SphSLIC | 3.26 | 3.26 | 3.31 | 3.32 | 3.33 | 3.30 |
| CICI | 3.46 | 3.45 | 3.45 | 3.43 | 3.45 | 3.43 |

References

1. Ren, X.; Malik, J. Learning a classification model for segmentation. In Proceedings of the Ninth IEEE International Conference on Computer Vision, Nice, France, 13–16 October 2003; pp. 10–17.
2. Raine, S.; Marchant, R.; Kusy, B.; Maire, F.; Fischer, T. Point label aware superpixels for multi-species segmentation of underwater imagery. IEEE Robot. Autom. Lett. 2022, 7, 8291–8298.
3. Sheng, Y.; Ma, H.; Wang, X.; Hu, T.; Li, X.; Wang, Y. Weakly-supervised semantic segmentation with superpixel guided local and global consistency. Pattern Recognit. 2022, 124, 108504.
4. Eliasof, M.; Zikri, N.B.; Treister, E. Unsupervised Image Semantic Segmentation through Superpixels and Graph Neural Networks. arXiv 2022, arXiv:2210.11810.
5. Zhou, Z.; Guo, Y.; Huang, J.; Dai, M.; Deng, M.; Yu, Q. Superpixel attention guided network for accurate and real-time salient object detection. Multimed. Tools Appl. 2022, 81, 38921–38944.
6. Lin, J.; Yan, Z.; Wang, S.; Chen, M.; Lin, H.; Qian, Z. Aerial image object detection based on superpixel-related patch. In Image and Graphics; Springer: Cham, Switzerland, 2021; Volume 12888, pp. 256–268.
7. Xu, G.-C.; Lee, P.-J.; Bui, T.-A.; Chang, B.-H.; Lee, K.-M. Superpixel algorithm for objects tracking in satellite video. In Proceedings of the IEEE International Conference on Consumer Electronics-Taiwan (ICCE-TW), Penghu, Taiwan, 15–17 September 2021; pp. 1–2.
8. Zhang, H.; Wang, H.; He, P. Correlation filter tracking based on superpixel and multifeature fusion. Optoelectron. Lett. 2021, 17, 47–52.
9. Nawaz, M.; Yan, H. Saliency detection via multiple-morphological and superpixel based fast fuzzy C-mean clustering network. Expert Syst. Appl. 2020, 16, 113654.
10. Nam, D.Y.; Han, J.K. Improved Depth Estimation Algorithm via Superpixel Segmentation and Graph-cut. In Proceedings of the IEEE International Conference on Consumer Electronics (ICCE), Las Vegas, NV, USA, 10–12 January 2021; pp. 1–7.
11. Miao, Y.; Yang, B. Multilevel Reweighted Sparse Hyperspectral Unmixing Using Superpixel Segmentation and Particle Swarm Optimization. IEEE Geosci. Remote Sens. Lett. 2022, 19, 6013605.
12. Boulfelfel, S.; Nouboud, F. Multi-agent medical image segmentation: A survey. Comput. Methods Programs Biomed. 2023, 232, 107444.
13. Sandler, M.; Zhmoginov, A.; Luo, L.; Mordvintsev, A.; Randazzo, E.; Arcas, B.A.Y. Image segmentation via cellular automata. arXiv 2020, arXiv:2008.04965.
14. Zhao, Q.; Wan, L.; Zhang, J. Spherical superpixel segmentation. IEEE Trans. Multimed. 2017, 20, 1406–1417.
15. Wong, T.T.; Luk, W.S.; Heng, P.A. Sampling with Hammersley and Halton points. J. Graph. Tools 2012, 2, 9–24.
16. Wan, L.; Xu, X.; Zhao, Q.; Feng, W. Spherical Superpixels: Benchmark and Evaluation. In Computer Vision—ACCV 2018; Springer: Cham, Switzerland, 2018; Volume 11366, pp. 703–717.
17. Giraud, R.; Pinheiro, R.B.; Berthoumieu, Y. Generalized shortest path-based superpixels for accurate segmentation of spherical images. In Proceedings of the 2020 25th International Conference on Pattern Recognition (ICPR), Milan, Italy, 10–15 January 2021; pp. 2650–2656.
18. Achanta, R.; Susstrunk, S. Superpixels and polygons using simple non-iterative clustering. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 4895–4904.
19. Wei, X.; Yang, Q.; Gong, Y.; Ahuja, N.; Yang, M.H. Superpixel hierarchy. IEEE Trans. Image Process. 2018, 27, 4838–4849.
20. Silveira, D.; Oliveira, A.; Walter, M.; Jung, C.R. Fast and accurate superpixel algorithms for 360 images. Signal Process. 2021, 189, 108277.
21. Yuan, M.; Richardt, C. 360° optical flow using tangent images. arXiv 2021, arXiv:2112.14331.
22. Huang, M.; Liu, Z.; Li, G.; Zhou, X.; Meur, O.L. FANet: Features Adaptation Network for 360° Omnidirectional Salient Object Detection. IEEE Signal Process. Lett. 2020, 27, 1819–1823.
23. Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 2274–2282.
24. Cabral, R.; Furukawa, Y. Piecewise Planar and Compact Floorplan Reconstruction from Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 628–635.
25. Hao, M.; Zhou, M.; Jin, J.; Shi, W. An Advanced Superpixel-Based Markov Random Field Model for Unsupervised Change Detection. IEEE Geosci. Remote Sens. Lett. 2019, 17, 1401–1405.
26. Dollar, P.; Zitnick, C.L. Structured forests for fast edge detection. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 1–8 December 2013; pp. 1841–1848.
27. Frisch, D.; Hanebeck, U.D. Deterministic gaussian sampling with generalized fibonacci grids. In Proceedings of the IEEE 24th International Conference on Information Fusion (FUSION), Sun City, South Africa, 1–4 November 2021; pp. 1–8.
28. Arbeláez, P.; Maire, M.; Fowlkes, C.; Malik, J. Contour Detection and Hierarchical Image Segmentation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 898–916.
29. Wang, M.; Liu, X.; Gao, Y.; Ma, X.; Soomro, N. Superpixel segmentation: A benchmark. Signal Process. Image Commun. 2017, 56, 28–39.
30. Bobbia, S.; Macwan, R.; Benezeth, Y. Iterative Boundaries implicit Identification for superpixels Segmentation: A real-time approach. IEEE Access 2021, 9, 77250–77263.
31. Li, C.; He, W.; Liao, N.; Gong, J.; Hou, S.; Guo, B. Superpixels with contour adherence via label expansion for image decomposition. Neural Comput. Appl. 2022, 34, 16223–16237.
32. Liao, N.; Guo, B.; Li, C.; Liu, H.; Zhang, C. BACA: Superpixel segmentation with boundary awareness and content adaptation. Remote Sens. 2022, 14, 4572.
33. Van den Bergh, M.; Boix, X.; Roig, G.; Van Gool, L. SEEDS: Superpixels extracted via energy-driven sampling. Int. J. Comput. Vis. 2015, 111, 298–314.
34. Liu, M.; Tuzel, O.; Ramalingam, S.; Chellappa, R. Entropy rate superpixel segmentation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Colorado Springs, CO, USA, 20–25 June 2011; pp. 2097–2104.
35. Chen, J.; Li, Z.; Huang, B. Linear spectral clustering superpixel. IEEE Trans. Image Process. 2017, 26, 3317–3330.
36. Zhao, J.; Hou, Q.; Ren, B.; Cheng, M.; Rosin, P. FLIC: Fast linear iterative clustering with active search. In Proceedings of the AAAI Conference on Artificial Intelligence (AAAI), New Orleans, LA, USA, 2–7 February 2018; pp. 7574–7581.
Figure 1. Segmentation result of superpixel algorithm for planar images and spherical images.
Figure 2. Schematic diagram of the proposed CICI framework. (a) Input ERP image; (b) Contour intensity map of (a); (c) Context identity map; (d) Initial seeds distribution; (e) Optimized seeds distribution; (f) ERP image segmentation results; (g) Spherical image of the initial seeds; (h) Spherical image of optimized seeds; (i) Spherical image segmentation label map.
Figure 3. Diagram of four sampling methods. The expected number of seeds is 500 in each method.
Figure 4. Schematic diagram of the seed optimization strategy. (a) Contour diagram; (b) Context identity map; (c) Before seed optimization; (d) After seed optimization; (e–g) Schematic diagrams of three regions of different degree.
Figure 5. Diagram of the traversal path. (a) Traversal paths on ERP images; (b) Traversal paths on spherical images.
Figure 6. Comparison of correlation measurement before and after optimization.
Figure 7. Schematic of the boundary neighborhood. (a) Neighborhood extent of the ERP image; (b) Neighborhood extent of the spherical image.
Figure 8. The CICI step flow diagram.
Figure 9. Part of the SPSDataset75 images.
Figure 10. Segmentation results of each algorithm for images No. 6 and No. 9, including local detail views and segmentation label maps.
Figure 11. Quantitative evaluation of different algorithms on six evaluation metrics. (a) Boundary recall; (b) Precision recall; (c) Under-segmentation error; (d) Achievable segmentation accuracy; (e) F-measure; (f) Execution time. The expected number of superpixels ranges from 250 to 1500.
Figure 12. Sample BSDS500 images with their corresponding contour maps and region segmentation results, displayed from top to bottom.
Figure 13. Segmentation results of each algorithm on BSDS500. Even columns show the corresponding local detail maps.
Table 1. Comparison of five algorithms in terms of boundary recall (↑) on BSDS500.

| Algorithm | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CICI | 0.7843 | 0.8592 | 0.8878 | 0.9008 | 0.9138 | 0.9229 | 0.9343 | 0.9366 | 0.9380 | 0.9442 |
| SNIC | 0.7069 | 0.8112 | 0.8561 | 0.8779 | 0.9038 | 0.9134 | 0.9221 | 0.9335 | 0.9415 | 0.9505 |
| IBIS | 0.6545 | 0.7622 | 0.7954 | 0.8408 | 0.8633 | 0.8755 | 0.8919 | 0.9139 | 0.9176 | 0.9286 |
| BACA | 0.9340 | 0.8153 | 0.8498 | 0.8659 | 0.8758 | 0.8818 | 0.8964 | 0.9071 | 0.9102 | 0.9155 |
| SCALE | 0.7183 | 0.8074 | 0.8489 | 0.8791 | 0.9000 | 0.9155 | 0.9278 | 0.9380 | 0.9471 | 0.9532 |
Table 2. Comparison of five algorithms in terms of under-segmentation error (↓) on BSDS500.

| Algorithm | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CICI | 0.0720 | 0.0508 | 0.0420 | 0.0414 | 0.0391 | 0.0372 | 0.0355 | 0.0349 | 0.0344 | 0.0338 |
| SNIC | 0.1133 | 0.0707 | 0.0575 | 0.0517 | 0.0455 | 0.0432 | 0.0420 | 0.0401 | 0.0373 | 0.0360 |
| IBIS | 0.1377 | 0.0954 | 0.0819 | 0.0701 | 0.0640 | 0.0597 | 0.0553 | 0.0513 | 0.0499 | 0.0473 |
| BACA | 0.0853 | 0.0578 | 0.0475 | 0.0458 | 0.0430 | 0.0403 | 0.0394 | 0.0384 | 0.0382 | 0.0369 |
| SCALE | 0.1294 | 0.0897 | 0.0755 | 0.0667 | 0.0600 | 0.0579 | 0.0547 | 0.0537 | 0.0512 | 0.0500 |
Table 3. Comparison of five algorithms in terms of achievable segmentation accuracy (↑) on BSDS500.

| Algorithm | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CICI | 0.9040 | 0.9317 | 0.9411 | 0.9442 | 0.9477 | 0.9499 | 0.9527 | 0.9539 | 0.9545 | 0.9558 |
| SNIC | 0.8677 | 0.9140 | 0.9293 | 0.9351 | 0.9425 | 0.9448 | 0.9476 | 0.9496 | 0.9525 | 0.9543 |
| IBIS | 0.8578 | 0.9004 | 0.9118 | 0.9234 | 0.9297 | 0.9337 | 0.9378 | 0.9424 | 0.9439 | 0.9462 |
| BACA | 0.8788 | 0.9113 | 0.9268 | 0.9313 | 0.9360 | 0.9380 | 0.9428 | 0.9456 | 0.9460 | 0.9477 |
| SCALE | 0.8730 | 0.9092 | 0.9227 | 0.9307 | 0.9350 | 0.9405 | 0.9434 | 0.9453 | 0.9476 | 0.9489 |
Table 4. Comparison of five algorithms in terms of compactness (↑) on BSDS500.

| Algorithm | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CICI | 0.4002 | 0.4907 | 0.5569 | 0.5796 | 0.5849 | 0.5920 | 0.6496 | 0.7095 | 0.7196 | 0.7295 |
| SNIC | 0.3486 | 0.4321 | 0.4819 | 0.5046 | 0.5434 | 0.5536 | 0.5700 | 0.5920 | 0.6039 | 0.6233 |
| IBIS | 0.3315 | 0.3996 | 0.4365 | 0.4735 | 0.5008 | 0.5211 | 0.5332 | 0.5572 | 0.5691 | 0.5818 |
| BACA | 0.3917 | 0.4805 | 0.5429 | 0.5673 | 0.5753 | 0.5833 | 0.6357 | 0.6887 | 0.6983 | 0.7090 |
| SCALE | 0.3845 | 0.4313 | 0.4624 | 0.4859 | 0.5000 | 0.5228 | 0.5346 | 0.5486 | 0.5615 | 0.5717 |
Table 5. Comparison of five algorithms in terms of generated superpixel number on BSDS500.

| Algorithm | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| CICI | 42 | 88 | 142 | 161 | 186 | 212 | 254 | 300 | 309 | 340 |
| SNIC | 40 | 96 | 150 | 187 | 260 | 294 | 330 | 400 | 442 | 504 |
| IBIS | 40 | 93 | 125 | 182 | 223 | 256 | 291 | 372 | 392 | 435 |
| BACA | 38 | 87 | 130 | 158 | 188 | 211 | 262 | 302 | 314 | 345 |
| SCALE | 50 | 100 | 150 | 200 | 250 | 300 | 350 | 400 | 450 | 500 |
