An Energy-Based SAR Image Segmentation Method with Weighted Feature

College of Geomatics and Geoinformation, Guilin University of Technology, Guilin 541004, China
* Author to whom correspondence should be addressed.
Remote Sens. 2019, 11(10), 1169; https://doi.org/10.3390/rs11101169
Submission received: 27 March 2019 / Revised: 7 May 2019 / Accepted: 10 May 2019 / Published: 16 May 2019
(This article belongs to the Section Remote Sensing Image Processing)

Abstract

To extract more structural features that help segment a synthetic aperture radar (SAR) image accurately, and to explore their roles in the segmentation procedure, this paper presents an energy-based SAR image segmentation method with weighted features. Multiple structural features are incorporated into a block- and energy-based segmentation model in a weighted way. The multiple features of a pixel, namely the spectral feature taken from the original SAR image and the texture and boundary features extracted by a curvelet transform, form a feature vector, and the feature vectors of all pixels form the feature set of the image. To automatically determine the roles of the features in the segmentation procedure, each feature is assigned a weight variable, and all the weight variables form a weight set. The image domain is then partitioned into a set of blocks by regular tessellation. An energy function and a non-constrained Gibbs probability distribution combine the feature and weight sets into a block-based, feature-weighted energy segmentation model on the partitioned image domain. Further, a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm with three move types designed around the segmentation model is used to simulate from it. Finally, the proposed method was tested on SAR images, and the quantitative and qualitative results demonstrate its effectiveness.

1. Introduction

Image segmentation is a hot topic in the data processing of synthetic aperture radar (SAR) images [1,2]. Traditional segmentation methods delineate homogeneous regions according to the spectral features of pixels [3]. With the rapid development of SAR technology, the resolution of SAR images keeps increasing. Compared with middle- and low-resolution SAR images, the details of high-resolution SAR images become clearer, but at the same time the differences within homogeneous regions become larger while the differences between heterogeneous regions become smaller [4]. This prevents traditional segmentation methods from reaching the required accuracy on high-resolution SAR images [5]. High-resolution SAR image segmentation has therefore become a difficult and active topic. The differences among regions in a high-resolution SAR image exist not only in spectral features, but also in structural features such as boundary and texture [6]. To segment a high-resolution SAR image well, multiple structural features should be considered in the segmentation algorithm [7,8,9].
For example, Liu et al. [10] present a SAR image segmentation method using the contourlet transform and a support vector machine (SVM): the contourlet transform is used to define the energy, standard deviation and information entropy, these features construct a texture feature vector, and the SVM then segments the SAR image based on that vector. Xue et al. [11] propose a SAR image segmentation method based on fuzzy c-means (FCM) and the wavelet transform, in which a vector is composed of wavelet texture features and the gray levels of the filtered SAR image, and FCM then segments the SAR image on this basis. Hao et al. [12] present a SAR segmentation method based on a maximization of posterior marginals (MPM) algorithm with feature extraction and a context model; a Gabor wavelet and a texture descriptor extract the features, and the MPM of each region, combined with the context model, segments the SAR image. All the above methods use multiple features to segment an image, but they treat the roles of the features as identical in the segmentation procedure, which is not necessarily the case in practice. With the development of deep learning, Lv et al. [13] propose a feature learning method using a stacked contractive autoencoder (sCAE) to extract temporal change features from SAR images: an affiliated temporal change image is built from three different metrics; the simple linear iterative clustering algorithm generates superpixels; an sCAE network trained with the superpixel samples learns semantic change features; and the features encoded by the sCAE model are binary-segmented to create the change result map. Although deep learning can segment a SAR image well, it requires long training before segmentation, which reduces efficiency.
To extract more structural features that contribute to accurate segmentation and to explore the roles of the features in the segmentation procedure, this paper presents a feature-weighted SAR image segmentation method based on an energy function. Spectral, texture and boundary features are three basic features of a SAR image and are well suited to segmentation, so these three features of a pixel form a feature vector, and all the feature vectors form a feature set. To automatically determine the roles of the features in the segmentation procedure, weight variables are defined, and all the weight variables form a weight set. The feature and weight sets are then combined into a feature-weighted segmentation model by an energy function and a non-constrained Gibbs probability distribution. Further, a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is used to simulate from the segmentation model. Finally, the proposed method is tested on SAR images to demonstrate its effectiveness. Image segmentation is a prelude to further SAR image processing tasks such as feature extraction [14], object recognition [15] and classification [16], so the proposed method can also be used, for example, to detect oil dark spots and extract their features.
This paper is organized as follows. Section 2 presents the proposed algorithm. Section 3 details the results on simulated and real SAR images. Section 4 discusses the test results. Finally, Section 5 contains conclusions and perspectives for further research.

2. Materials and Methods

Given a SAR image f = {fs = f(rs, cs); s = 1, …, S}, where s is the pixel index; rs ∈ {1, …, m1} and cs ∈ {1, …, m2} are the row and column of pixel s, with m1 and m2 the numbers of rows and columns in the SAR image, respectively; (rs, cs) is the location of pixel s; fs is the intensity of pixel s and represents its spectral feature in this paper; S = m1m2 is the total number of pixels in the SAR image; and P = {(rs, cs); s = 1, …, S} is the set of locations of all the pixels in the SAR image (the image domain for short).

2.1. Curvelet Transform

The second generation curvelet transform is defined in both the continuous and the discrete domain; in image processing, the discrete representation is the most commonly used. The discrete curvelet transform takes a 2D image f as input in the form of a Cartesian array f(rs, cs). As shown in Equation (1), its outputs are a collection of digital curvelet coefficients CD(j, l, k); that is, the digital curvelet coefficients CD(j, l, k) are simply the inner products between the 2D image f and the digital curvelets φDj,l,k [17].
$$C^{D}(j,l,k) := \langle f, \varphi^{D}_{j,l,k} \rangle = \sum_{r_s = 1}^{m_1} \sum_{c_s = 1}^{m_2} f(r_s, c_s)\, \overline{\varphi^{D}_{j,l,k}(r_s, c_s)} \qquad (1)$$
where D stands for “digital”; ⟨·,·⟩ denotes the inner product; l is the orientation (angle) parameter; k = (k1, k2) ∈ Z² is the sequence of translation parameters; ⌈·⌉ is the ceiling operation; j ∈ {1, …, J} is the scale parameter, with J the number of scales. When j = 1, the coarse-scale coefficients CD(1, l, k) are low-frequency coefficients that carry the main information of the SAR image; when j = J, the finest-scale coefficients CD(J, l, k) are high-frequency coefficients that carry much of the detailed information of the SAR image, such as the speckle noise; when j ∈ {2, …, J − 1}, the detail coefficients CD(j, l, k) are middle-to-high-frequency coefficients that carry much of the boundary information of the SAR image [17].
There are two different digital implementations of the second generation curvelet transform; this paper uses the wrapping-based curvelet transform [17], whose architecture is roughly as follows.
(1) For a given SAR intensity image f = {fs = f(rs, cs); s = 1, …, S}, apply the 2D fast Fourier transform (FFT) to obtain the Fourier samples f̂[n1, n2], where n1 and n2 are frequency-domain variables with −m1/2 ≤ n1 < m1/2 and −m2/2 ≤ n2 < m2/2;
(2) For each scale j and angle l, multiply the frequency window Ũj,l[n1, n2] with the Fourier samples f̂[n1, n2] to form the product Ũj,l[n1, n2] f̂[n1, n2];
(3) Wrap this product around the origin to obtain its periodization W(Ũj,l f̂)[n1, n2], where (n1, n2) ∈ ρj and ρj = {(n1, n2): n1 ∈ [0, L1,j), n2 ∈ [0, L2,j)} is a rectangle with side lengths L1,j × L2,j near the origin, L1,j ≈ 2^j and L2,j ≈ 2^⌈j/2⌉;
(4) Apply the inverse 2D FFT to obtain the discrete coefficients CD(j, l, k).
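As a rough illustration of these four steps, the following minimal numpy sketch applies them for a single scale and angle. The frequency window is taken here as an arbitrary array argument rather than the true curvelet window Ũj,l of [17], so the sketch is schematic, not a faithful implementation:

```python
import numpy as np

def wrapped_coeffs(f, window, L1, L2):
    """Schematic wrapping-based transform for one scale j and angle l.
    `window` is a stand-in for the true curvelet frequency window of
    Candes et al. [17]; any array of f's shape (e.g., a 0/1 band-pass
    mask) will do for illustration. L1 ~ 2^j and L2 ~ 2^ceil(j/2) are
    the side lengths of the rectangle near the origin."""
    F = np.fft.fft2(f)                           # step 1: 2D FFT
    prod = window * F                            # step 2: window x samples
    n1 = np.arange(f.shape[0]) % L1              # step 3: wrap the product
    n2 = np.arange(f.shape[1]) % L2              #         around the origin
    wrapped = np.zeros((L1, L2), dtype=complex)
    np.add.at(wrapped, (n1[:, None], n2[None, :]), prod)
    return np.fft.ifft2(wrapped)                 # step 4: inverse 2D FFT
```

In practice, libraries implementing the fast discrete curvelet transform construct the windows Ũj,l internally for every scale and angle; only the four-step structure is shown here.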

2.2. Feature Extraction

The differences among homogeneous regions of a SAR image exist not only in spectral features, but also in structural features such as boundary and texture. To segment homogeneous regions accurately, spectral, texture and boundary features are used in the proposed segmentation method. The spectral feature is the original SAR image f, while the texture and boundary features are extracted by the curvelet transform.
Texture is a basic image feature and useful for segmentation. In this paper, the curvelet transform and the L1 norm are used to define an energy that is taken as the texture feature; the larger the energy, the more texture information there is [18]. The operation is as follows. First, the curvelet transform decomposes a sub-image of ct × ct pixels centered on pixel s into a series of curvelet coefficients. Then the energy of pixel s is computed from these coefficients [18],
$$x_{t_s} = \frac{1}{N_t} \sum_{j,l,k} \left| C^{D}(j,l,k) \right|^2 \qquad (2)$$
where Nt is the total number of curvelet coefficients. Following this procedure for every pixel, the texture feature of the SAR image xt = {xts; s = 1, …, S} is obtained.
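The per-pixel energy computation is straightforward; the sketch below substitutes a plain 2D FFT for the curvelet decomposition of each ct × ct sub-image (an assumption of this sketch, made so it runs self-contained), since only the energy calculation of Equation (2) is the point:

```python
import numpy as np

def texture_energy(image, ct=8):
    """Per-pixel texture energy in the sense of Eq. (2): the mean squared
    coefficient magnitude over the transform of the ct x ct sub-image
    centred on each pixel. A plain 2D FFT stands in for the curvelet
    decomposition here."""
    pad = ct // 2
    padded = np.pad(np.asarray(image, float), pad, mode="reflect")
    out = np.empty(np.shape(image))
    for r in range(out.shape[0]):
        for c in range(out.shape[1]):
            coeffs = np.fft.fft2(padded[r:r + ct, c:c + ct])
            out[r, c] = np.mean(np.abs(coeffs) ** 2)  # (1/N_t) * sum |C|^2
    return out
```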
Boundary is another basic feature and an important factor for dividing different homogeneous regions. As a common boundary detection algorithm, the Canny operator [19] extracts boundary features well in optical images. However, the Canny operator cannot accurately extract the boundary features of a SAR image because of the speckle noise. In this paper, the curvelet transform and a Canny operator together define the boundary feature. The operation is as follows. First, the curvelet transform decomposes the SAR image f into a series of curvelet coefficients. As the detail coefficients CD(j, l, k) (j ∈ {2, …, J − 1}) are middle-to-high-frequency coefficients carrying much of the boundary information of the SAR image, they are left unchanged; the Canny operator is applied only to the curvelet coefficients on the coarse and finest scales, and their non-boundary coefficients are set to 0. That is, all boundary curvelet coefficients are kept, and all non-boundary curvelet coefficients on those scales are set to 0. Finally, the modified coefficients are reconstructed to obtain the boundary feature of the SAR image xb = {xbs; s = 1, …, S},
$$x_b = \sum_{j,l,k} C^{D}(j,l,k)\, \varphi^{D}_{j,l,k} \qquad (3)$$
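A schematic of this masking step, assuming scikit-image's Canny detector is available and representing the decomposition as a coarse-to-fine list of 2D coefficient arrays (an illustrative layout, not the paper's):

```python
import numpy as np
from skimage.feature import canny  # requires scikit-image

def mask_non_boundary(subbands):
    """Coefficient masking behind Eq. (3). `subbands` is an assumed
    coarse-to-fine list of 2D coefficient arrays. Detail (middle) scales
    pass through unchanged; on the coarse and finest scales only
    coefficients on Canny-detected edges survive, the rest become 0.
    Feeding the result to the inverse transform gives x_b."""
    out = []
    last = len(subbands) - 1
    for j, c in enumerate(subbands):
        if 0 < j < last:
            out.append(c)                             # detail scales: unchanged
        else:
            edges = canny(np.abs(c).astype(float))    # boundary map on |coeffs|
            out.append(np.where(edges, c, 0))         # zero non-boundary coeffs
    return out
```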
In summary, the feature set of the SAR image is composed of the original spectral feature and the extracted texture and boundary features, that is, x = {f, xt, xb} = {xs; s = 1, …, S} = {xgs; g = 1, …, G, s = 1, …, S}, where xs is the feature vector of pixel s, g is the feature index and xgs is the gth feature value of pixel s. x can be considered a realization of a random feature field X = {Xs; s = 1, …, S} defined on the image domain P, where Xs = {Xgs; g = 1, …, G} is the feature vector of pixel s and Xgs is its gth feature variable.

2.3. Energy-Based Segmentation Model with Weighted Features

To automatically determine the roles of the multiple features in the segmentation procedure, a weight variable ωgys is assigned to each variable Xgs of a pixel's feature vector Xs. All the weight variables form a weight set, ω = {ωgys; g = 1, …, G, s = 1, …, S}, where ys is the label of pixel s.
Because of the speckle noise in a SAR image, traditional pixel-based segmentation methods do not perform well in homogeneous regions. To improve the segmentation accuracy there, a sub-region is taken as the processing unit: the image domain is partitioned by a geometric partitioning technique, and region-based statistical models are built on the partitioned image domain. As the principle of regular tessellation is simple and easy to implement, it is used to partition P into a set of rectangular regular blocks (blocks for short), P = {Pi; i = 1, …, I}, where i is the block index; I is the total number of blocks and is treated as a random variable; Pi is the ith block, whose side lengths are multiples of two, with a minimum block size of 2 × 2 pixels. The initial image domain is partitioned into S/m² blocks of m × m pixels (so the initial value of I is S/m²), where m ≥ 2 and m divides both image sides. Based on extensive experiments, the initial value m = 8 is used in this paper. The number and sizes of the blocks are changed later by the move that samples the number of blocks.
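A minimal sketch of this initial tessellation, assuming blocks are represented as (top row, left column, height, width) tuples, which is a representation chosen for illustration:

```python
def initial_blocks(m1, m2, m=8):
    """Initial regular tessellation: the image domain P is cut into
    S/m^2 square blocks of m x m pixels (m = 8 initially; later moves
    split and merge blocks down to the 2 x 2 minimum)."""
    assert m1 % m == 0 and m2 % m == 0, "m must divide both image sides"
    return [(r, c, m, m) for r in range(0, m1, m) for c in range(0, m2, m)]
```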
On the partitioned image domain, the blocks are the processing units, meaning that all pixels in a block share the same label and the same weights. The weight set can therefore be rewritten as ω = {ωgyi; g = 1, …, G, i = 1, …, I} = {ωgo; g = 1, …, G, o = 1, …, O}, where ωgo is the weight variable of label o on the gth feature, O is the total number of classes and is regarded as known, and yi is the label of block Pi.
All the labels form a realization of a label field, y = {yi; i = 1, …, I}, which represents the attribution of blocks to different classes and corresponds to a segmentation of the feature set x. This paper models the label field with an energy function on the neighborhood relationships [20]; it can be expressed as:
$$U_y(y, I) = \sum_{i=1}^{I} \left\{ -\gamma \left[ \sum_{P_r \in N_{P_i}} 2\,\delta(y_i, y_r) - 1 \right] \right\} \qquad (4)$$
where γ is a constant controlling the neighborhood dependence between pairs of neighboring blocks; NPi = {Pr; Pi ~ Pr} is the set of Pi's neighboring blocks, with i, r ∈ {1, …, I} and i ≠ r, where Pi ~ Pr are neighbors if and only if they share a mutual boundary; and the indicator function δ(yi, yr) equals 1 if yi = yr and 0 otherwise.
The weighted feature model Ux(x, ω, I) can be written as:
$$U_x(x, \omega, I) = \sum_{i=1}^{I} U_x(x_i, \omega_i) = \sum_{i=1}^{I} \sum_{g=1}^{G} U_x(x_{gi}, \omega_{gi}) \qquad (5)$$
where
$$U_x(x_{gi}, \omega_{gi}) = V(x_{gi}, x_{g y_i}) - \log(\omega_{g y_i}) \qquad (6)$$
where xgyi = {xgi; i ∈ Pyi}, with Pyi the set of blocks carrying the label yi; V(xgi, xgyi) is the heterogeneous energy function, defined by the K–S distance [18]; it can be expressed as
$$V(x_{gi}, x_{g y_i}) = d_{KS}(x_{gi}, x_{g y_i}) = \max_h \left| \hat{F}_{x_{gi}}(h) - \hat{F}_{x_{g y_i}}(h) \right| \qquad (7)$$
where dKS is the K–S distance, i.e., the maximum distance between F̂xgi and F̂xgyi, the empirical cumulative distributions of xgi and xgyi, respectively, which can be written as [21]:
$$\hat{F}_{x_{gi}}(h) = \frac{1}{n_1}\, \#\{ s \mid x_{gs} \le h \} \qquad (8)$$

$$\hat{F}_{x_{g y_i}}(h) = \frac{1}{n_2}\, \#\{ s \mid x_{g y_i} \le h \} \qquad (9)$$
where # returns the number of elements in a set; n1 and n2 are the total numbers of elements in xgi and xgyi, respectively; and h indexes the feature values. For an H-bit image, h ∈ {0, …, 2^H − 1}.
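In code, the empirical CDFs and K–S distance of Equations (7)–(9) can be computed directly; the sketch below assumes 8-bit feature values:

```python
import numpy as np

def ks_distance(a, b, H=8):
    """Eqs. (7)-(9): Kolmogorov-Smirnov distance between the empirical
    CDFs of two feature samples, evaluated at the grey levels
    h = 0, ..., 2^H - 1 (H = 8 for an 8-bit image)."""
    a = np.sort(np.asarray(a).ravel())
    b = np.sort(np.asarray(b).ravel())
    h = np.arange(2 ** H)
    Fa = np.searchsorted(a, h, side="right") / a.size  # #{x <= h} / n1
    Fb = np.searchsorted(b, h, side="right") / b.size  # #{x <= h} / n2
    return float(np.abs(Fa - Fb).max())
```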
In statistical segmentation, a complete segmentation model not only models the features of the image but also describes the attribution of pixels to different classes [22]. To fully exploit the roles of the label field, the feature field and the weight set in the segmentation procedure, a global energy function for image segmentation is defined from the label model and the weighted feature model; it can be expressed as:
$$U(x, \omega, y, I) = U_y(y, I) + U_x(x, \omega, I) = \sum_{i=1}^{I} \left\{ -\gamma \left[ \sum_{P_r \in N_{P_i}} 2\,\delta(y_i, y_r) - 1 \right] \right\} + \sum_{i=1}^{I} \sum_{g=1}^{G} \left[ V(x_{gi}, x_{g y_i}) - \log(\omega_{g y_i}) \right] \qquad (10)$$
In summary, solving for ω, y and I given x amounts to:
$$(\hat{\omega}, \hat{y}, \hat{I}) = \arg\min_{\omega,\, y,\, I} U(x, \omega, y, I) \qquad (11)$$
To incorporate the solution of Equation (11) into the Bayesian statistical framework, a non-constrained Gibbs probability distribution is used to describe the global energy function,
$$G(x, \omega, y, I) = \frac{1}{Z} \exp\{ -U(x, \omega, y, I) \} = \frac{1}{Z} \prod_{i=1}^{I} \left( \exp\left\{ \gamma \left[ \sum_{P_r \in N_{P_i}} 2\,\delta(y_i, y_r) - 1 \right] \right\} \times \prod_{g=1}^{G} \omega_{g y_i} \exp\left[ -V(x_{gi}, x_{g y_i}) \right] \right) \qquad (12)$$
where Z is a normalization constant.
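Putting Equations (4)–(10) together, the global energy of a candidate configuration can be evaluated as in the following sketch; the data layout (per-block feature arrays, a G × O weight array, adjacency lists) is an assumption for illustration, and ks_distance is the sketch given above:

```python
import numpy as np

def global_energy(feats, labels, weights, neighbors, gamma=1.0):
    """Global energy of Eq. (10) on a block partition. Assumed layout:
    feats[i][g]    1D array of feature-g values inside block i
    labels[i]      class label y_i of block i (0-based)
    weights[g][o]  weight of feature g for class o
    neighbors[i]   indices of blocks sharing a boundary with block i"""
    U = 0.0
    G = len(feats[0])
    for i, y in enumerate(labels):
        # label-field term, Eq. (4): -gamma * (sum_r 2*delta(y_i, y_r) - 1)
        U -= gamma * (sum(2.0 * (labels[r] == y) for r in neighbors[i]) - 1.0)
        # weighted-feature term, Eqs. (5)-(7)
        same = [r for r, yr in enumerate(labels) if yr == y]
        for g in range(G):
            pooled = np.concatenate([feats[r][g] for r in same])
            U += ks_distance(feats[i][g], pooled) - np.log(weights[g][y])
    return U
```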

2.4. Simulation

According to Equation (12), let the total parameter vector be Θ = (Y, I, ω). To solve for its values, it is necessary to simulate from the energy-based segmentation model with weighted features, and an RJMCMC algorithm is designed to generate samples of Θ. In each iteration, a new candidate Θ* for Θ is drawn from a proposal distribution with density q(Θ, Θ*) (assume the dimension of Θ* is higher than that of Θ). u is a random vector that accomplishes the transition from (Θ, u) to Θ* with dimension matching, that is, |Θ| + |u| = |Θ*|. The probability of accepting the candidate Θ* can be computed as [23]:
$$a(\Theta, \Theta^*) = \min\left\{ 1,\; \frac{p(\Theta^* \mid X)\, r(\Theta^*)}{p(\Theta \mid X)\, r(\Theta)\, q(u)} \left| \frac{\partial \Theta^*}{\partial (\Theta, u)} \right| \right\} \qquad (13)$$
where q(u) is the density function of u; r(Θ) and r(Θ*) are the probabilities of choosing the given move type in states Θ and Θ*, respectively; and the Jacobian |∂Θ*/∂(Θ, u)| arises from the change of variables from (Θ, u) to Θ*.
(1) Sampling the label field. A block Pi with corresponding label yi is chosen at random. To update the label, a new label yi* ≠ yi is drawn at random from {1, …, O}. The acceptance probability can be written as:
$$a_o(y_i, y_i^*) = \min\{1, R_o\} \qquad (14)$$
where
$$R_o = \frac{\prod_{g=1}^{G} \omega_{g y_i^*} \exp\left[ -V(x_{gi}, x_{g y_i^*}) \right]}{\prod_{g=1}^{G} \omega_{g y_i} \exp\left[ -V(x_{gi}, x_{g y_i}) \right]} \times \frac{\prod_{i=1}^{I} \exp\left\{ \gamma \left[ \sum_{P_r \in N_{P_i}} 2\,\delta(y_i^*, y_r) - 1 \right] \right\}}{\prod_{i=1}^{I} \exp\left\{ \gamma \left[ \sum_{P_r \in N_{P_i}} 2\,\delta(y_i, y_r) - 1 \right] \right\}} \qquad (15)$$
A random number z is drawn uniformly from [0, 1], that is, z ~ U[0, 1]; if z ≤ ao(yi, yi*), the move is accepted and yi becomes yi*; otherwise, the move is rejected and yi is unchanged.
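A compact sketch of this move follows. For clarity it recomputes the full energy and uses the identity Ro = exp(−(U* − U)), which holds for a Gibbs target; Equation (15) evaluates only the terms that actually change, which is far cheaper:

```python
import numpy as np

rng = np.random.default_rng(0)

def label_move(labels, energy_fn, O):
    """Move type (1), Eqs. (14)-(15): re-label one randomly chosen block.
    energy_fn(labels) should return the global energy U of Eq. (10)."""
    i = rng.integers(len(labels))
    old = labels[i]
    U_old = energy_fn(labels)
    labels[i] = rng.choice([o for o in range(O) if o != old])  # propose y_i*
    if rng.random() > min(1.0, np.exp(U_old - energy_fn(labels))):
        labels[i] = old                                        # reject: restore
    return labels
```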
(2) Sampling the number of blocks. This move is achieved by splitting or merging blocks, where a splitting move splits one block into two blocks with different class labels. The operation comprises: (i) a block Pi is chosen at random in the current partition of the image domain P = {P1, …, Pi, …, PI}, with corresponding label o; (ii) if Pi contains more than four pixels and its number of rows or columns is a multiple of two, Pi can be split, and the feasible splitting types are determined under the constraint of the minimum block size (a sketch of the feasible splits follows Equation (18)); (iii) one splitting is chosen at random and splits Pi into Pi* and PI+1*, whose labels are re-allocated as yi and yi* with yi ≠ yi*, respectively. The new partition of the image domain becomes P* = {P1, …, Pi*, …, PI, PI+1*}, and the acceptance probability of the splitting move can be expressed as:
$$a_{f_I}(P, P^*) = \min\{1, R_{f_I}\} \qquad (16)$$
where
$$R_{f_I} = \frac{\prod_{i=1}^{I+1} \prod_{g=1}^{G} \omega_{g y_i^*} \exp\left[ -V(x_{gi}, x_{g y_i^*}) \right]}{\prod_{i=1}^{I} \prod_{g=1}^{G} \omega_{g y_i} \exp\left[ -V(x_{gi}, x_{g y_i}) \right]} \times \frac{\prod_{i=1}^{I+1} \exp\left\{ \gamma \left[ \sum_{P_r \in N_{P_i}} 2\,\delta(y_i^*, y_r) - 1 \right] \right\}}{\prod_{i=1}^{I} \exp\left\{ \gamma \left[ \sum_{P_r \in N_{P_i}} 2\,\delta(y_i, y_r) - 1 \right] \right\}} \times \frac{r_{h_{I+1}}(\Theta^*)}{r_{f_I}(\Theta)\, q(u)} \left| \frac{\partial \Theta^*}{\partial (\Theta, u)} \right| \qquad (17)$$
where rfI = fI/I and fI is the probability of choosing a split proposal; rhI+1 = hI+1/(I + 1) and hI+1 is the probability of choosing a merge proposal; Θ* = (Y*, I + 1, ω) and Θ = (Y, I, ω); Y* = {Y1, …, Yi*, …, YI, YI+1*} and Y = {Y1, …, Yi, …, YI}; u = YI+1*; and the Jacobian term in Equation (17) equals 1.
A random number z is drawn uniformly from [0, 1]; if z ≤ afI(P, P*), the move is accepted and P becomes P*; otherwise, the move is rejected and P is unchanged.
The merging move is designed in tandem with the splitting move, so its acceptance probability can be expressed as:
$$a_{h_{I+1}}(P, P^*) = \min\{1,\; 1/R_{f_I}\} \qquad (18)$$
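The paper states the splitting constraints but not the exact set of splitting types; the sketch below assumes blocks are halved along an even side, which satisfies the stated constraints (more than four pixels, even side, 2 × 2 minimum):

```python
def split_candidates(block):
    """Feasible splits of a block for move type (2), under the assumption
    that a split cuts the block in half along one even side. Blocks are
    (top row, left column, height, width) tuples as in the earlier sketch."""
    r, c, h, w = block
    cands = []
    if h * w > 4:
        if h % 2 == 0 and h >= 4:   # horizontal cut into top/bottom halves
            cands.append([(r, c, h // 2, w), (r + h // 2, c, h // 2, w)])
        if w % 2 == 0 and w >= 4:   # vertical cut into left/right halves
            cands.append([(r, c, h, w // 2), (r, c + w // 2, h, w // 2)])
    return cands
```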
(3) Sampling weights. Two blocks Pi and Pii are chosen at random, with labels yi = o and yii = o*, o ≠ o*. A feature g is then drawn at random from {1, …, G}, and the weight variables ωgo and ωgo* are sampled: a new weight ω*go is drawn uniformly from [0, ωgo], the paired new weight is ω*go* = ωgo + ωgo* − ω*go, and all other weight variables are unchanged. The acceptance probability can be written as:
$$a_\omega(\omega, \omega^*) = \min\{1, R_\omega\} \qquad (19)$$
where
$$R_\omega = \frac{\prod_{P_i \in P_o} \omega^*_{go} \exp\left[ -V(x_{gi}, x_{go}) \right]}{\prod_{P_i \in P_o} \omega_{go} \exp\left[ -V(x_{gi}, x_{go}) \right]} \times \frac{\prod_{P_i \in P_{o^*}} \omega^*_{go^*} \exp\left[ -V(x_{gi}, x_{go^*}) \right]}{\prod_{P_i \in P_{o^*}} \omega_{go^*} \exp\left[ -V(x_{gi}, x_{go^*}) \right]} \qquad (20)$$
where Po = {Pi : yi = o} and Po* = {Pi : yi = o*}.
A random number z is drawn uniformly from [0, 1]; if z ≤ aω(ω, ω*), the move is accepted and ω becomes ω*; otherwise, the move is rejected and ω is unchanged.
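A sketch of the weight move in the same full-energy-difference form as the label move above; the G × O weight array layout is an assumption of the sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

def weight_move(weights, energy_fn):
    """Move type (3), Eqs. (19)-(20): redistribute weight mass between two
    classes on one randomly chosen feature, keeping their sum fixed.
    energy_fn(weights) should return the global energy U of Eq. (10)."""
    G, O = weights.shape
    g = rng.integers(G)
    o, o_star = rng.choice(O, size=2, replace=False)
    proposal = weights.copy()
    total = weights[g, o] + weights[g, o_star]
    proposal[g, o] = rng.uniform(0.0, weights[g, o])  # new w*_{go} ~ U[0, w_go]
    proposal[g, o_star] = total - proposal[g, o]      # the pair keeps its mass
    dU = energy_fn(proposal) - energy_fn(weights)
    if rng.random() <= min(1.0, np.exp(-dU)):
        weights[:] = proposal                         # accept in place
    return weights
```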
In summary, Figure 1 shows the flowchart of the proposed segmentation framework.

3. Results

Given a multi-look SAR image whose pixel intensities are characterized by a Gamma distribution [24], the shape parameter α equals its number of looks. Since α is treated as a random variable in this paper, the mean µα was set to the number of looks. For a Gamma distribution with parameters α and β, the product α × β equals its mean, so µα × µβ = E(α)E(β) = E(α × β) (the last equality holds because α and β are independent, with E(·) the mean operator) was taken as 128 = 256/2 (the midpoint of the 256 grey levels), since the pixel intensities of a grey-scale image vary between 0 and 255. The shape parameter α and scale parameter β listed in Table 1 are therefore assumed to follow Gaussian distributions with means and standard deviations (µα, σα) and (µβ, σβ), where µα = 4, σα = 2, µβ = 32 and σβ = 24. Figure 2 shows a template image and a simulated SAR image; the simulated SAR image (Figure 2b) is generated from the template (Figure 2a) and the parameters listed in Table 1.
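Under these assumptions, a simulated SAR image like Figure 2b can be generated from a class template as in the following sketch; the clipping to [0, 255] and the positivity guard on the drawn parameters are choices of the sketch, not stated in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_sar(template, mu_a=4.0, sd_a=2.0, mu_b=32.0, sd_b=24.0):
    """Simulated SAR image in the spirit of Table 1: for each class in the
    template, draw a Gamma shape/scale pair from N(mu_a, sd_a) and
    N(mu_b, sd_b) (so E[alpha]E[beta] = 128), then sample every pixel of
    that class from the resulting Gamma law."""
    out = np.empty(template.shape)
    for cls in np.unique(template):
        shape = max(rng.normal(mu_a, sd_a), 0.1)  # keep parameters positive
        scale = max(rng.normal(mu_b, sd_b), 0.1)
        mask = template == cls
        out[mask] = rng.gamma(shape, scale, size=mask.sum())
    return np.clip(out, 0, 255)
```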
Figure 3a,b show the texture and boundary features extracted by the curvelet transform. To verify the advantage of the curvelet transform in feature extraction, the 2D Haar wavelet transform was used as a comparison algorithm to extract texture and boundary features by the same principle; these results are shown in Figure 3c,d. Comparing the texture features (Figure 3a,c), they appear very similar. For a further comparison, the energy histograms of the two transforms are shown in Figure 4. The shapes of the histograms are similar, but the energy ranges differ: 0.25 × 10^4 to 6 × 10^4 for the curvelet transform versus 0.15 × 10^4 to 2.25 × 10^4 for the wavelet transform. The larger the energy, the more image information there is, so the curvelet texture feature contains more information and the curvelet transform extracts texture features more effectively. Comparing Figure 3b,d, the boundary features extracted by the curvelet transform are more complete. Figure 3 and Figure 4 thus demonstrate that the features extracted by the curvelet transform are more abundant and complete, and that the curvelet transform extracts texture and boundary features better.
For the simulated SAR image, each pixel carries its intensity, which is taken as its spectral feature, and the spectral features of all pixels form the original spectral feature. The extracted features and the original spectral feature form a curvelet feature set and a wavelet feature set, respectively. The feature-weighted energy segmentation method was tested on these feature sets: Figure 5a1–c1 show the visual results based on curvelet features and Figure 5a2–c2 those based on wavelet features. Figure 5a1,a2 show the regular tessellations, with each partitioned block drawn in a random color (the colors in Figure 5a1 and a2 differ). Figure 5b1,b2 show the segmentation results, and the extracted outlines overlaid on the original images are shown in Figure 5c1,c2. In Figure 5b2 some blocks cross the boundaries, and the extracted outlines in Figure 5c2 do not match the real boundaries precisely. This illustrates that the proposed approach based on curvelet features segments the simulated SAR image better.
To quantitatively evaluate the segmentation results of the simulated SAR image (Figure 5b1,b2), their confusion matrices, square arrays whose entries give the number of pixels assigned to a particular class in the segmentation result relative to the number of pixels of that class in the reference image (Figure 2a), were computed. From them, the producer's accuracy, user's accuracy, overall accuracy and Kappa coefficient can be computed as [25]:
$$\text{Producer's accuracy}_q = \frac{n_{qq}}{n_{+q}} \qquad (21)$$

$$\text{User's accuracy}_o = \frac{n_{oo}}{n_{o+}} \qquad (22)$$

$$\text{Overall accuracy} = \frac{1}{S} \sum_{o=1}^{O} n_{oo} \qquad (23)$$

$$\text{Kappa coefficient} = \frac{S \sum_{o=1}^{O} n_{oo} - \sum_{o=1}^{O} n_{o+}\, n_{+o}}{S^2 - \sum_{o=1}^{O} n_{o+}\, n_{+o}} \qquad (24)$$
where no+ is the sum of noq over q ∈ {1, …, O}; n+q is the sum of noq over o ∈ {1, …, O}; and noq denotes the number of pixels assigned to class o in the segmentation result and to class q in the reference image. The producer's and user's accuracies measure individual class accuracies, while the overall accuracy and Kappa coefficient measure the global accuracy [25].
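These four measures follow directly from the confusion matrix; a compact numpy sketch, assuming integer class labels 0, …, O − 1:

```python
import numpy as np

def accuracy_metrics(seg, ref, O):
    """Eqs. (21)-(24) from the confusion matrix n[o, q]: rows index the
    segmented class o, columns the reference class q."""
    n = np.zeros((O, O))
    np.add.at(n, (seg.ravel(), ref.ravel()), 1)
    S = n.sum()
    producers = np.diag(n) / n.sum(axis=0)          # n_qq / n_+q
    users = np.diag(n) / n.sum(axis=1)              # n_oo / n_o+
    overall = np.trace(n) / S
    chance = np.sum(n.sum(axis=1) * n.sum(axis=0))  # sum_o n_o+ * n_+o
    kappa = (S * np.trace(n) - chance) / (S ** 2 - chance)
    return producers, users, overall, kappa
```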
Table 2 lists the quantitative evaluation of the two segmentation results. All the segmentation accuracies of Figure 5b1 were 100% and its Kappa coefficient was 1.000, whereas for Figure 5b2 the producer's accuracy of class 3 and the user's accuracy of class 2 were below 95.0% and the Kappa coefficient was 0.947. Comparing the two evaluations, all the segmentation accuracies and the Kappa coefficient of Figure 5b1 were higher. Together with the qualitative results above, this demonstrates that the proposed method based on curvelet features segments a simulated SAR image better.
To explore the roles of the multiple features in the segmentation procedure, weight variables of all the classes on the multiple features are defined, and their values are solved by the iterative processing. Figure 6 shows the evolution of the features' weight values for all the classes: Figure 6a1,a2 show the weight values of the spectral feature for the four classes, Figure 6b1,b2 those of the texture feature and Figure 6c1,c2 those of the boundary feature. In Figure 6, red, green, blue and black points represent the weight values of each feature for classes 1, 2, 3 and 4, respectively. All the weight values quickly converge to the approximate values listed in Table 3, in which the numbers 1–4 represent the four classes of the simulated SAR image and correspond to the numbers 1–4 in Figure 2b. The greater the weight value, the more important the role of its feature, so the proposed method can automatically determine the roles of the features in the segmentation procedure. From Table 3, the total weight values of the three curvelet features are 0.37, 0.32 and 0.31, and those of the three wavelet features are 0.36, 0.33 and 0.31. The greater the total weight value of a feature, the more important its role. Both feature sets follow the same rule: the spectral feature plays the most important role and the boundary feature the least important. However, the total weight values differ slightly, as do the per-class weight values of the same feature: the two total weight values of the spectral feature are 0.37 and 0.36, with the weights of classes 2–4 assigned differently, and the two total weight values of the boundary feature are the same while their per-class weights all differ. Since the weight values directly affect the segmentation results, and the segmentation based on curvelet features is known to be better, Figure 3, Figure 4, Figure 5 and Figure 6 and Table 2 and Table 3 together illustrate that the proposed method based on curvelet features assigns the class weights on the three features more reasonably.
Curvelet features were then used in segmentation tests on four RADARSAT-I/II images of 128 × 128 pixels (Figure 7). Figure 7a is a RADARSAT-I image of a coastal scene with VV polarization and 30 m spatial resolution covering about 14.75 km²; it reveals three types of sea-ice structure which, from bright to dark, represent hard ice, melt ice and water. Figure 7b is a RADARSAT-II 4-look image with HH polarization and 25 m resolution covering 10.24 km²; it reveals three types of area which, from bright to dark, represent hard ice, melt ice and water. Figure 7c is a RADARSAT-II image with HV polarization and 25 m resolution covering 10.24 km²; it reveals three types of area which, from bright to dark, represent living, field and water areas. Figure 7d is a RADARSAT-I 4-look image with VV polarization and 50 m resolution covering 40.96 km²; it reveals four types of sea-ice structure (from left to right: melt ice, melt-hard ice, hard ice and an ice–water mixture) in Ungava Bay, Quebec, Canada.
The proposed extraction methods were used to extract the texture and boundary features of these images: Figure 8a1–d1 show the texture features and Figure 8a2–d2 the boundary features. The extracted boundary features are complete and the texture features good, further verifying that the curvelet transform extracts the texture and boundary features of real SAR images well. The extracted texture and boundary features and the original spectral feature form the feature set of each image, on which the proposed method was tested. Figure 8a3–d3 show the segmentation results, and the extracted outlines overlaid on the original images are shown in Figure 8a4–d4. The extracted outlines match the real boundaries of the four real SAR images well, verifying that the proposed method segments them well.
To assess the segmentation results of the proposed approach quantitatively, the common measures above (producer's accuracy, user's accuracy, overall accuracy and Kappa coefficient) were computed against the templates of the real SAR images (Figure 9) and are listed in Table 4. Only four accuracies were below 91.7%, the overall accuracies were above 95.0% and the Kappa coefficients were at least 0.916, further demonstrating that the proposed method is feasible and effective.
Figure 10 shows the evolution of the features' weight values for all the classes in the real SAR images: Figure 10a1–a4 show the weight values of the spectral feature, Figure 10b1–b4 those of the texture feature and Figure 10c1–c4 those of the boundary feature. Red, green, blue and black points represent the weight values of each feature for classes 1, 2, 3 and 4, respectively. The weight values quickly converge to the approximate values listed in Table 5, from which the total weight values of the three features are 0.28, 0.36 and 0.36 for Figure 7a; 0.25, 0.36 and 0.39 for Figure 7b; 0.24, 0.36 and 0.40 for Figure 7c; and 0.36, 0.34 and 0.30 for Figure 7d. Hence, for Figure 7a the texture and boundary features play equal roles and the spectral feature is the least important; for Figure 7b and Figure 7c the boundary feature plays the most important role and the spectral feature the least important; and for Figure 7d the spectral feature plays the most important role and the boundary feature the least important. Figure 8 and Figure 10, together with Table 4 and Table 5, illustrate that the proposed method can not only explore the roles of the multiple features in the segmentation procedure, but also segment the real SAR images well.
To illustrate the advantage of the proposed method, two comparison methods were chosen. Method 1 is the proposed method without weights: the roles of the same three features (spectral, texture and boundary) are treated as equal in the segmentation procedure, which may reduce the segmentation accuracy because of feature redundancy. Method 2 is a VGG16-based fully convolutional network (FCN) [26], which replaces the fully connected layers of VGG16 with convolutional layers and fuses high- and low-dimensional feature information from multiple layers; being a pixel-based method, however, it does not consider the relations among pixels. Figure 11 shows the segmentation results of the comparison methods: Figure 11a1–e1 are the results of the proposed method without weights, and Figure 11a2–e2 those of the FCN. In Figure 11a1–e1, some incorrectly segmented pixels appear in the homogeneous regions and across the boundaries, most visibly in Figure 11a1,e1. Comparing these results with those of the proposed method shows that the proposed method segments the SAR images better.
Table 5 indicates that the proposed method, by investigating the roles of the features in the segmentation procedure, improves the segmentation results. In Figure 11a2–e2, the boundaries between regions and the outer boundaries of the images are very blurry: for example, the segmented boundaries between regions in Figure 11a2 and the outer boundaries in Figure 11d2,e2. In addition, incorrectly segmented pixels appear inside the regions, most obviously on the right of Figure 11a2. This suggests that the segmentation results of the FCN are influenced by the speckle noise of the SAR image, as it is a pixel-based method. Comparing all the segmentation results shows that those of the proposed method are better, demonstrating its superiority.
To further demonstrate the advantage of the proposed method, the segmentation results of the comparison methods were evaluated quantitatively. Table 6 lists their producer's accuracies, user's accuracies, overall accuracies and Kappa coefficients. The overall accuracies and Kappa coefficients of the proposed method are clearly higher than those of the comparison methods.

4. Discussion

In this paper, a feature-weighted SAR image segmentation method based on an energy function is presented.
To explore the pattern of the features' roles in the segmentation procedure, a series of experiments was conducted (Figure 6 and Figure 10). The weight values quickly converged to their approximate values and then remained unchanged during the iterations, which shows that the proposed approach can automatically determine the roles of all the features in the segmentation procedure.
Figure 5 and Figure 8 show that the proposed approach segments simulated and real SAR images well; in particular, the segmentation result of the simulated SAR image contains no incorrectly segmented blocks. Figure 3 and Figure 4 show that the features extracted by the curvelet transform carry more abundant and complete image information, demonstrating the advantage of the curvelet transform in feature extraction. Comparing the segmentation results of the simulated SAR image obtained with curvelet and wavelet features shows that the more image information a feature contains, the better the segmentation result, proving that the extracted features affect the segmentation results. Finally, comparing Figure 5, Figure 8 and Figure 11 shows that the proposed method segments SAR images better, further demonstrating its superiority.

5. Conclusions

To extract more structural features that help segment a SAR image accurately, and to explore the roles of the features in the segmentation procedure, this paper proposed a block- and energy-based segmentation method with weighted features for SAR images. The curvelet transform was used to construct the boundary and texture features, which were incorporated into the statistical segmentation model in a weighted way, yielding an energy segmentation model with weighted features. The proposed method was tested on simulated and real SAR images; the results show that it can not only automatically determine the roles of all the features in the segmentation procedure, but also segment SAR images well. The method requires the number of classes to be given a priori, which is difficult to determine manually for a SAR image owing to its speckle noise. In future work, the number of classes will be treated as a variable, and a move that samples the number of classes will be designed within the RJMCMC algorithm to solve for its value.

Author Contributions

Conceptualization, Y.W.; writing—original draft preparation, Y.W.; writing—review and editing, Y.W., G.Q.Z.; visualization, Y.W.; supervision, Y.W. and H.T.Y.; project administration, Y.W.; funding acquisition, Y.W., G.Q.Z. and H.T.Y.

Funding

This research was funded by the National Natural Science Foundation of China, grant numbers 41431179, 41664002, 41801071 and 41801030; the Guangxi Innovative Development Grand Grant, grant number GuikeAA18118038; the Guangxi Natural Science Foundation, grant numbers 2015GXNSFDA139032, 2017GXNSFDA198016 and 2018GXNSFBA281075; the Foundation of Guilin University of Technology, grant numbers GUTQDJJ2018065 and GUTQDJJ2017095; the National Key Research and Development Program of China, grant number 2016YFB0502501; and the BaGuiScholars program of the provincial government of Guangxi (Guoqing Zhou).

Acknowledgments

We would like to thank Y. Li for his ideas and suggestions in relation to the experiments.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Jensen, J.R. Introductory Digital Image Processing: A Remote Sensing Perspective, 3rd ed.; Prentice Hall: Upper Saddle River, NJ, USA, 2006. [Google Scholar]
  2. Richards, J.A.; Jia, X. Remote Sensing Digital Image Analysis: An Introduction, 4th ed.; Springer: Berlin, Germany, 2006. [Google Scholar]
  3. Cao, Y.F.; Sun, H.; Xu, X. An unsupervised segmentation method based on MPM for SAR images. IEEE Geosci. Remote Sens. Lett. 2005, 2, 55–58. [Google Scholar] [CrossRef]
  4. Zhang, J.T.; Zhang, L.M. A watershed algorithm combining spectral and texture information for high resolution remote sensing image segmentation. Geomat. Inf. Sci. Wuhan Univ. 2017, 42, 449–455. [Google Scholar]
  5. Dell'Acqua, F.; Gamba, P.; Ferrari, A.; Palmason, J.A.; Benediktsson, J.A.; Arnason, K. Exploiting spectral and spatial information in hyperspectral urban data with high resolution. IEEE Geosci. Remote Sens. Lett. 2004, 1, 322–326. [Google Scholar] [CrossRef]
  6. Chen, Y.X.; Qin, K.; Gan, S.Z.; Wu, T. Structural feature modeling of high resolution remote sensing images using directional spatial correlation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 1727–1731. [Google Scholar] [CrossRef]
  7. Xue, X.R.; Wang, H.F.; Xiang, F.; Wang, J.P. A new method of SAR image segmentation based on FCM and wavelet transform. In Proceedings of the 2012 5th International Congress on Image and Signal Processing, Chongqing, China, 16–18 October 2012. [Google Scholar]
  8. Dey, S.; Goswami, R.S. A morphological segmentation and curvelet features extraction on text region classification using SVM. In Proceedings of the 2015 IEEE International Conference on Computational Intelligence and Computing Research (ICCIC), Madurai, India, 10–12 December 2015. [Google Scholar]
  9. Rahmani, M.; Akbarizadeh, G. Unsupervised feature learning based on sparse coding and spectral clustering for segmentation of synthetic aperture radar images. IET Comput. Vis. 2015, 9, 629–638. [Google Scholar] [CrossRef]
  10. Liu, Z.; Fan, X.W.; Lv, F.Y. SAR Image Segmentation using contourlet and Support Vector Machine. In Proceedings of the 2009 Fifth International Conference on Natural Computation, Tianjin, China, 14–16 August 2009. [Google Scholar]
  11. Wang, M.; Zhou, S.D.; Bai, H.; Ma, N.; Ye, S. SAR water image segmentation based on GLCM and wavelet textures. In Proceedings of the 2010 6th International Conference on Wireless Communications Networking and Mobile Computing (WiCOM), Chengdu, China, 23–25 September 2010. [Google Scholar]
  12. Hao, B.; Zhang, X.R.; Li, N. MPM SAR Image Segmentation Using Feature Extraction and Context Model. IEEE Geosci. Remote Sens. Lett. 2012, 9, 1041–1045. [Google Scholar]
  13. Lv, N.; Chen, C.; Qiu, T.; Sangaiah, A.K. Deep learning and superpixel feature extraction based on Contractive Autoencoder for change detection in SAR images. IEEE Trans. Ind. Inform. 2018, 14, 5530–5538. [Google Scholar] [CrossRef]
  14. Zhao, Q.H.; Li, Y.; Liu, Z.G. SAR image segmentation using Voronoi tessellation and Bayesian inference applied to dark spot feature extraction. Sensors 2013, 13, 14484–14499. [Google Scholar] [CrossRef] [PubMed]
  15. Chen, Y.H.; Zhu, L.; Yuille, A.; Zhang, H.J. Unsupervised learning of Probabilistic Object Models (POMs) for object classification, segmentation, and recognition using knowledge propagation. IEEE Trans. Pattern Anal. 2009, 31, 1747–1761. [Google Scholar] [CrossRef] [PubMed]
  16. Wu, Y.H.; Ji, K.F.; Yu, W.X.; Su, Y. Region-based classification of Polarimetric SAR images using Wishart MRF. IEEE Geosci. Remote Sens. Lett. 2008, 5, 668–672. [Google Scholar] [CrossRef]
  17. Candes, E.; Demanet, L.; Donoho, D.; Ying, L.X. Fast discrete curvelet transform. Multiscale Model. Sim. 2006, 5, 861–899. [Google Scholar] [CrossRef]
  18. Rahulkar, A.D.; Jadhav, D.V.; Holambe, R.S. Fast discrete curvelet transform based anisotropic feature extraction for IRIS recognition. ICTACT J. Image Video Process 2010, 11, 69–75. [Google Scholar] [CrossRef]
  19. Zhang, Y.; Rockett, P.I. The Bayesian operating point of the Canny edge detector. IEEE Trans. Image Process. 2006, 15, 3409–3416. [Google Scholar] [CrossRef]
  20. Duan, Y.P.; Liu, F.; Jiao, L.C. Sketching model and higher order neighborhood Markov random field-based SAR image segmentation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 1686–1690. [Google Scholar] [CrossRef]
  21. Kervrann, C.; Heitz, F. A Markov random field model-based approach to unsupervised texture segmentation using local and global spatial statistics. IEEE Trans. Image Process. 1995, 4, 856–862. [Google Scholar] [CrossRef] [PubMed]
  22. Zhao, Q.H.; Zhang, H.Y.; Li, Y. Regionalized image segmentation using Kolmogorov-Smirnov statistics. J. Image Graph. 2015, 20, 678–686. [Google Scholar]
  23. Green, P.J. Reversible jump Markov chain Monte Carlo computation and Bayesian model determination. Biometrika 1995, 82, 711–732. [Google Scholar] [CrossRef]
  24. Li, Y.; Li, J.; Chapman, M.A. Segmentation of SAR intensity imagery with a Voronoi tessellation, Bayesian inference, and Reversible Jump MCMC algorithm. IEEE Trans. Geosci. Remote Sens. 2010, 48, 1872–1881. [Google Scholar] [CrossRef]
  25. Congalton, R.G.; Green, K. Assessing the Accuracy of Remotely Sensed Data: Principles and Practices; CRC Press: Boca Raton, FL, USA, 2008. [Google Scholar]
  26. Long, J.; Shelhamer, E.; Darrell, T. Fully convolutional networks for semantic segmentation. IEEE Trans. Pattern Anal. 2017, 39, 640–651. [Google Scholar]
Figure 1. Flowchart of the proposed segmentation framework.
Figure 2. Template and simulated synthetic aperture radar (SAR) images: (a) Template image, (b) simulated SAR image.
Figure 3. Texture and boundary features of a simulated SAR image: (a) Texture feature by using the curvelet transform, (b) boundary feature by using the curvelet transform, (c) texture feature by using the wavelet transform, (d) boundary feature by using the wavelet transform.
Figure 4. Energy histograms of a simulated SAR image: (a) Curvelet, (b) wavelet.
Figure 5. Visual evaluation results of a simulated SAR image: (a1,a2) Regular tessellation, (b1,b2) segmentation result, (c1,c2) extracted outlines on the original image.
Figure 6. Changes of features' weight values for all the classes by using curvelet and wavelet feature sets: (a1,a2) Feature 1, (b1,b2) feature 2, (c1,c2) feature 3.
Figure 7. Four real SAR images (a–d).
Figure 8. Results of real SAR images: (a1–d1) Texture features, (a2–d2) boundary features, (a3–d3) segmentation results, (a4–d4) overlaying the extracted outlines on the original images.
Figure 9. Templates of real SAR images: (a–d) Template images.
Figure 10. Changes of features' weight values for all the classes: (a1–a4) Feature 1, (b1–b4) feature 2, (c1–c4) feature 3.
Figure 11. Segmentation results of comparison methods: (a1–e1) The proposed method without weights, (a2–e2) FCN-based method.
Table 1. Gamma parameters of a simulated SAR image.

Parameter      1         2         3         4
α            4.0664    7.6891    6.3832    2.2705
β           33.7712   24.1027   13.5704   48.3081
Table 2. Quantitative evaluation of a simulated SAR image.

Feature    Producer's Accuracy (%)     User's Accuracy (%)        Overall         Kappa
           1      2      3      4      1      2      3      4     Accuracy (%)    Coefficient
Curvelet   100    100    100    100    100    100    100    100   100             1.000
Wavelet    96.9   98.2   91.0   98.0   95.1   94.0   97.8   97.4  96.0            0.947
Table 3. Approximate feature weight values for a simulated SAR image.

Feature    Spectral                   Texture                    Boundary
           1     2     3     4        1     2     3     4        1     2     3     4
Curvelet   0.12  0.07  0.12  0.06     0.10  0.06  0.11  0.05     0.10  0.06  0.10  0.05
Wavelet    0.12  0.06  0.08  0.10     0.11  0.05  0.08  0.09     0.11  0.05  0.07  0.08
Table 4. Quantitative evaluation of the real SAR images.

Image   Producer's Accuracy (%)      User's Accuracy (%)         Overall         Kappa
        1      2      3      4       1      2      3      4      Accuracy (%)    Coefficient
7(a)    95.3   96.3   94.2   –       96.1   96.8   92.9   –      95.7            0.918
7(b)    91.7   94.4   98.0   –       87.3   95.9   98.8   –      95.4            0.928
7(c)    94.1   98.5   96.0   –       87.2   89.9   99.3   –      96.3            0.916
7(d)    97.7   86.5   95.5   98.0    93.9   95.5   96.3   98.6   96.1            0.938
Table 5. Approximate feature weight values for the real SAR images.

Image   Spectral                  Texture                   Boundary
        1     2     3     4       1     2     3     4       1     2     3     4
7(a)    0.01  0.16  0.11  –       0.01  0.22  0.13  –       0.01  0.22  0.13  –
7(b)    0.12  0.05  0.08  –       0.14  0.07  0.15  –       0.15  0.07  0.17  –
7(c)    0.12  0.05  0.07  –       0.17  0.08  0.11  –       0.20  0.08  0.12  –
7(d)    0.02  0.03  0.11  0.20    0.02  0.03  0.10  0.19    0.02  0.03  0.08  0.17
Table 6. Quantitative evaluation of the comparison methods.

Method     Image   Producer's Accuracy (%)      User's Accuracy (%)         Overall         Kappa
                   1      2      3      4       1      2      3      4      Accuracy (%)    Coefficient
Method 1   2(b)    95.9   97.1   97.1   100     95.8   97.5   96.8   100    97.5            0.966
           7(a)    95.5   93.7   92.8   –       93.2   96.2   88.2   –      93.7            0.882
           7(b)    85.0   90.7   98.0   –       84.2   91.4   97.8   –      92.8            0.886
           7(c)    84.2   97.7   93.6   –       74.3   84.1   99.1   –      93.6            0.857
           7(d)    92.8   89.0   97.2   91.4    95.0   94.2   84.3   98.3   92.9            0.898
Method 2   2(b)    77.9   85.0   75.1   90.8    84.9   72.1   78.0   96.6   82.2            0.763
           7(a)    91.3   88.2   63.9   –       75.4   84.7   79.8   –      82.4            0.661
           7(b)    81.3   80.1   70.2   –       51.9   76.6   98.2   –      76.1            0.637
           7(c)    68.4   58.6   90.7   –       42.3   72.2   92.6   –      82.9            0.608
           7(d)    64.9   25.6   87.9   96.3    96.8   60.2   65.5   87.6   79.0            0.690
