An Energy-Based SAR Image Segmentation Method with Weighted Feature

To extract more structural features, which contribute to segmenting a synthetic aperture radar (SAR) image accurately, and to explore their roles in the segmentation procedure, this paper presents an energy-based SAR image segmentation method with weighted features. To segment a SAR image precisely, multiple structural features are incorporated into a block- and energy-based segmentation model in a weighted way. In this paper, the multiple features of a pixel, comprising the spectral feature obtained from the original SAR image and the texture and boundary features extracted by a curvelet transform, form a feature vector. All the pixels' feature vectors form a feature set of the SAR image. To automatically determine the roles of the multiple features in the segmentation procedure, weight variables are assigned to them; all the weight variables form a weight set. The image domain is then partitioned into a set of blocks by regular tessellation. Afterwards, an energy function and a non-constrained Gibbs probability distribution are used to combine the feature and weight sets to build a block-based, feature-weighted energy segmentation model on the partitioned image domain. Further, a reversible jump Markov chain Monte Carlo (RJMCMC) algorithm is designed to simulate from the segmentation model; three move types are designed according to the model. Finally, the proposed method was tested on SAR images, and the quantitative and qualitative results demonstrate its effectiveness.


Introduction
Image segmentation is a hot topic in the data processing of synthetic aperture radar (SAR) images [1,2]. In traditional image segmentation methods, homogeneous regions are segmented according to the spectral features of pixels [3]. With the rapid development of SAR technology, the resolution of SAR images is getting higher and higher. Compared with middle- and low-resolution SAR images, the details of high-resolution SAR images become clearer, but the differences within homogeneous regions become larger while the differences between heterogeneous regions become smaller [4]. This causes traditional segmentation methods to fall short of the accuracy required for high-resolution SAR images [5]. Therefore, high-resolution SAR image segmentation has become a difficult and hot topic. The differences among homogeneous regions in a high-resolution SAR image exist not only in spectral features, but also in structural features such as boundary and texture features [6]. To segment a high-resolution SAR image well, multiple structural features can be considered in the segmentation algorithm [7-9].
For example, Liu et al. [10] present a SAR image segmentation method using the contourlet transform and a support vector machine (SVM). This method utilizes the contourlet transform to define the energy, standard deviation and information entropy, and these measures are used to construct the texture feature.

The digital curvelet coefficients of a SAR image f can be written as

C_D(j, l, k) = <f, φ_{j,l,k}^D> (1)

where D stands for "digital"; <·> represents the inner product; l is the orientation (angle) parameter; k = (k_1, k_2) ∈ Z^2 is the sequence of translation parameters; ⌈·⌉ stands for the ceiling operation; j ∈ {1, …, J} is the scale parameter and J is the number of scales. When j = 1, the coarse-scale coefficients C_D(1, l, k) are low-frequency coefficients and include the main information of the SAR image; when j = J, the finest coefficients C_D(J, l, k) are high-frequency coefficients and include much detailed information of the SAR image, such as the information of speckle noise; when j ∈ {2, …, J − 1}, the detail coefficients C_D(j, l, k) are middle-high-frequency coefficients and include much of the boundary information of the SAR image [17]. There are two different digital implementations of the second-generation curvelet transform; this paper uses the wrapping-based curvelet transform [17].

Feature Extraction
The differences among homogeneous regions of a SAR image exist not only in spectral features, but also in structural features such as boundary and texture features. To segment homogeneous regions accurately, spectral, texture and boundary features are all used in the proposed segmentation method. In this paper, the spectral feature is the original SAR image f, and the texture and boundary features are extracted by the curvelet transform.
Texture is a basic image feature and is useful for segmentation. In this paper, the curvelet transform and the L1 norm are used to define an energy that is taken as the texture feature; the larger the energy, the more texture information there is [18]. The operation is as follows. First, the curvelet transform is used to decompose a sub-image of c_t × c_t pixels centered on pixel s, obtaining a series of curvelet coefficients. Then the L1 norm is used to define the energy of pixel s, which can be computed as [18]

x_ts = (1/N_t) Σ_{j,l,k} |C_D(j, l, k)| (2)

where N_t is the total number of curvelet coefficients. Following this procedure for every pixel, the texture feature of the SAR image x_t = {x_ts; s = 1, …, S} is obtained.

Boundary is another basic feature and an important factor for dividing different homogeneous regions. As a common boundary detection algorithm, the Canny operator [19] extracts boundary features well in optical images. However, the Canny operator cannot accurately extract the boundary features of a SAR image, owing to the effect of speckle noise. In this paper, the curvelet transform and the Canny operator are used together to define the boundary feature. The operation is as follows.
Remote Sens. 2019, 11, 1169

Firstly, the curvelet transform is used to decompose the SAR image f, obtaining a series of curvelet coefficients. As the detail coefficients C_D(j, l, k) (j ∈ {2, …, J − 1}) are middle-high-frequency coefficients, they include much of the boundary information of the SAR image. Therefore, the detail coefficients are left unchanged, and the Canny operator is used only to extract the boundaries of the curvelet coefficients on the coarse and finest scales, whose non-boundary coefficients are set to 0. That is, all boundary curvelet coefficients are unchanged and all non-boundary curvelet coefficients are set to 0. Finally, all the curvelet coefficients are reconstructed to obtain the boundary feature of the SAR image x_b = {x_bs; s = 1, …, S}.

In summary, a feature set of the SAR image is composed of the original spectral feature and the extracted texture and boundary features, that is, x = {x_s; s = 1, …, S}, where x_s = {x_gs; g = 1, …, G} is the feature vector of pixel s, g is the feature index and x_gs is the g-th feature value of pixel s. x can be considered a realization of a random feature field X = {X_s; s = 1, …, S} defined on the image domain P, where X_s = {X_gs; g = 1, …, G} is the feature vector of pixel s and X_gs is the g-th feature variable of pixel s.
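As a toy illustration of the texture-energy feature in Equation (2), the sketch below uses a single-level 2D Haar wavelet as a simple stand-in for the curvelet transform (a real curvelet decomposition needs a dedicated library); the patch values and function names are illustrative assumptions, not the paper's implementation.

```python
# Sketch of the texture-energy feature: the L1 norm of detail coefficients,
# normalised by their number, using a single-level 2D Haar transform as a
# stand-in for the curvelet transform. Patch size and values are illustrative.

def haar2d(patch):
    """Single-level 2D Haar transform; returns (LL, LH, HL, HH) sub-bands."""
    n = len(patch)
    rows = []
    for r in patch:  # transform rows: averages then differences
        avg = [(r[2*i] + r[2*i+1]) / 2 for i in range(n // 2)]
        dif = [(r[2*i] - r[2*i+1]) / 2 for i in range(n // 2)]
        rows.append(avg + dif)
    out = []
    for c in zip(*rows):  # transform columns of the row-transformed image
        avg = [(c[2*i] + c[2*i+1]) / 2 for i in range(n // 2)]
        dif = [(c[2*i] - c[2*i+1]) / 2 for i in range(n // 2)]
        out.append(avg + dif)
    coeffs = list(zip(*out))  # back to row-major order
    h = n // 2
    LL = [row[:h] for row in coeffs[:h]]
    LH = [row[h:] for row in coeffs[:h]]
    HL = [row[:h] for row in coeffs[h:]]
    HH = [row[h:] for row in coeffs[h:]]
    return LL, LH, HL, HH

def texture_energy(patch):
    """Energy of a patch: mean absolute value (L1 norm / count) of the
    detail coefficients, in the spirit of Equation (2)."""
    _, LH, HL, HH = haar2d(patch)
    details = [v for band in (LH, HL, HH) for row in band for v in row]
    return sum(abs(v) for v in details) / len(details)

# A flat patch has zero detail energy; a textured one does not.
flat = [[10.0] * 4 for _ in range(4)]
textured = [[0.0, 8.0, 0.0, 8.0] for _ in range(4)]
print(texture_energy(flat), texture_energy(textured))
```

The higher energy of the textured patch mirrors the paper's rule that larger energy indicates more texture information.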

Energy-based Segmentation Model with Feature Weighted
In order to automatically determine the roles of the multiple features in the segmentation procedure, a weight variable ω_{gy_s} is assigned to each variable X_gs of the pixel's feature vector X_s. All the weight variables form a weight set, that is, ω = {ω_{gy_s}; g = 1, …, G, s = 1, …, S}, where y_s is the label of pixel s.
Owing to the effect of speckle noise in a SAR image, traditional pixel-based segmentation methods cannot segment homogeneous regions well. To improve the segmentation accuracy in homogeneous regions, a sub-region is taken as the processing unit: the image domain is partitioned by a geometric partitioning technique, and region-based statistical models are built on the partitioned image domain. As the principle of regular tessellation is simple and easy to implement, it is used to partition P into a set of rectangular regular blocks (blocks for short), that is, P = {P_i; i = 1, …, I}, where i is the block index; I is the total number of blocks, considered a random variable; and P_i is the i-th block. The number of pixels along each side of a block is a multiple of two, and the minimum block is 2 × 2. The initial image domain is partitioned into S/m^2 blocks, so the initial value of I is S/m^2, with every block containing m × m pixels, where m is at least two and m must divide both the row and column counts of the image. Based on many experiments, the initial value m = 8 was selected in this paper. The number and sizes of the blocks are changed by the move of sampling the number of blocks described below.
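The initial regular tessellation described above can be sketched as follows; the function name and the 16 × 16 toy domain are illustrative assumptions:

```python
# Minimal sketch of the initial regular tessellation: the image domain is
# partitioned into I = S / m^2 square blocks of m x m pixels; m must divide
# both the row and column counts.

def regular_tessellation(rows, cols, m):
    """Return a list of blocks; each block is the list of its (r, c) pixels."""
    assert rows % m == 0 and cols % m == 0, "m must divide the image size"
    blocks = []
    for br in range(0, rows, m):
        for bc in range(0, cols, m):
            blocks.append([(r, c) for r in range(br, br + m)
                                  for c in range(bc, bc + m)])
    return blocks

blocks = regular_tessellation(16, 16, 8)   # initial m = 8, as in the paper
print(len(blocks))                         # I = S / m^2 = 256 / 64 = 4
```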
On the partitioned image domain, the blocks are taken as processing units, which means the pixels in a block share the same label and the same weights. Therefore, the weight set can be rewritten as ω = {ω_{gy_i}; g = 1, …, G, i = 1, …, I} = {ω_go; g = 1, …, G, o = 1, …, O}, where ω_go is the weight variable of label o on the g-th feature, O is the total number of classes (regarded as known) and y_i is the label of block P_i.
All the labels form a realization of a label field, that is, y = {y_i; i = 1, …, I}, which represents the attribution of blocks to different classes and corresponds to a segmentation of the feature set x. This paper uses an energy function on the neighborhood relationship to build the label-field model [20], which can be expressed as

U_y(y, I) = −γ Σ_{i=1}^{I} Σ_{P_r ∈ NP_i} δ(y_i, y_r)

where γ is a constant controlling the neighborhood dependence between a pair of neighboring blocks; NP_i = {P_r; P_i ~ P_r} is the set of P_i's neighboring blocks, with i, r ∈ {1, …, I} and i ≠ r; P_i ~ P_r are neighbors if and only if they share a mutual boundary; and the indicator function δ(y_i, y_r) = 1 if y_i = y_r and 0 otherwise.
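A minimal sketch of such a neighborhood energy, assuming a Potts-style form in which each pair of equal-labeled neighboring blocks lowers the energy by γ (the neighbor list and γ value are illustrative assumptions):

```python
# Sketch of a neighborhood label energy: equal labels on neighboring blocks
# lower the energy, encouraging spatially smooth segmentations.

def label_energy(labels, neighbours, gamma=1.0):
    """U_y(y) = -gamma * sum over neighbour pairs (i, r) of delta(y_i, y_r)."""
    u = 0.0
    for i, r in neighbours:          # each unordered neighbour pair once
        if labels[i] == labels[r]:   # delta(y_i, y_r) = 1
            u -= gamma
    return u

# Four blocks in a 2 x 2 layout; pairs sharing a boundary are neighbours.
neighbours = [(0, 1), (0, 2), (1, 3), (2, 3)]
print(label_energy([1, 1, 2, 2], neighbours, gamma=0.5))  # two equal pairs
```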
The weighted-feature model U_x(x, ω, I) can be written as

U_x(x, ω, I) = Σ_{i=1}^{I} Σ_{g=1}^{G} ω_{gy_i} V(x_gi, x_{gy_i})

where x_gi = {x_gs; s ∈ P_i} is the set of g-th feature values in block P_i; x_{gy_i} = {x_gi; P_i ∈ P_{y_i}}, with P_{y_i} the set of blocks labeled y_i; and V(x_gi, x_{gy_i}) is a heterogeneous energy function defined by the Kolmogorov-Smirnov (K-S) distance [18], which can be expressed as

V(x_gi, x_{gy_i}) = d_KS(F̂_{x_gi}, F̂_{x_gy_i}) = max_h |F̂_{x_gi}(h) − F̂_{x_gy_i}(h)|

where d_KS is the K-S distance, that is, the maximum distance between F̂_{x_gi} and F̂_{x_gy_i}, the cumulative sampling distributions of x_gi and x_gy_i, respectively, which can be written as [21]

F̂_{x_gi}(h) = #{x ∈ x_gi; x ≤ h} / n_1, F̂_{x_gy_i}(h) = #{x ∈ x_gy_i; x ≤ h} / n_2

where # is an operation that returns the number of elements in a set; n_1 and n_2 are the total numbers of elements in x_gi and x_gy_i, respectively; and h is the index of feature values; for an H-bit image, h ∈ {0, …, 2^H − 1}.
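The K-S distance between two empirical CDFs on the grey-level grid can be sketched as follows (H = 8, i.e., 256 levels, is an illustrative assumption):

```python
# Sketch of the Kolmogorov-Smirnov distance used as the heterogeneity
# energy V: the maximum gap between two empirical CDFs evaluated on the
# grey-level grid h = 0 .. 2^H - 1 (H = 8 assumed here for illustration).

def empirical_cdf(values, levels=256):
    """F-hat(h) = #{x <= h} / n for each grey level h."""
    n = len(values)
    return [sum(1 for v in values if v <= h) / n for h in range(levels)]

def ks_distance(a, b, levels=256):
    """Maximum absolute difference between the two empirical CDFs."""
    Fa, Fb = empirical_cdf(a, levels), empirical_cdf(b, levels)
    return max(abs(x - y) for x, y in zip(Fa, Fb))

print(ks_distance([10, 10, 20], [10, 10, 20]))   # identical samples -> 0.0
print(ks_distance([0, 0, 0], [255, 255, 255]))   # disjoint samples  -> 1.0
```

Identical samples give zero heterogeneity energy, while completely disjoint grey-level distributions give the maximum energy of one, matching the intended role of V.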
In statistical segmentation, a complete segmentation model not only models the features of the image but also describes the attribution of pixels to different classes [22]. To fully exploit the roles of the label field, the feature field and the weight set in the segmentation procedure, a global energy function of image segmentation is defined by the label model and the weighted-feature model:

U(x, ω, y, I) = U_y(y, I) + U_x(x, ω, I)

In summary, the process of solving for ω, y and I given x can be written as

(ω̂, ŷ, Î) = arg min_{ω, y, I} U(x, ω, y, I) (11)

To incorporate the solving process of Equation (11) into the Bayesian statistical framework, a non-constrained Gibbs probability distribution is used to describe the global energy function,

p(x, ω, y, I) = (1/Z) exp{−U(x, ω, y, I)} (12)

where Z is a normalization constant.

Simulation
According to Equation (12), let the total parameter vector be Θ = (Y, I, ω). To solve for its values, it is necessary to simulate from the energy-based segmentation model with weighted features.
An RJMCMC algorithm is designed to generate samples of Θ. In each iteration, a new candidate Θ* for Θ is drawn from a proposal distribution with density q(Θ, Θ*) (assume the dimension of Θ* is higher than that of Θ). u is a random vector defined to accomplish the transition from (Θ, u) to Θ* with dimension matching, that is, |Θ| + |u| = |Θ*|. The probability of accepting the candidate Θ* can be computed as [23]

a(Θ, Θ*) = min{1, [p(Θ*) r(Θ*)] / [p(Θ) r(Θ) q(u)] · |∂Θ* / ∂(Θ, u)|}

where q(u) is the density function of u; r(Θ) and r(Θ*) are the probabilities of a given move type in the states Θ and Θ*, respectively; and the Jacobian |∂Θ*/∂(Θ, u)| arises from the change of variables from (Θ, u) to Θ*.
(1) Sampling the label field. A block P_i with corresponding label y_i is randomly chosen. To update the label, a new label y_i* ≠ y_i is randomly drawn from {1, …, O}. As this move does not change the dimension, the acceptance probability reduces to

a(y_i, y_i*) = min{1, exp[U(x, ω, y, I) − U(x, ω, y*, I)]}

where y* is y with y_i replaced by y_i*. A random number z is uniformly drawn from [0, 1]; if z ≤ a(y_i, y_i*), the move is accepted and y_i becomes y_i*; otherwise, the move is rejected and y_i is unchanged.
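A minimal sketch of this fixed-dimension label move as a Metropolis update; `total_energy` is a placeholder for the global energy U, and the toy energy and class count below are illustrative assumptions:

```python
# Sketch of the label-sampling move: propose a new label for a chosen block
# and accept with probability min(1, exp(-(U_new - U_old))).
import math
import random

def sample_label(labels, i, num_classes, total_energy, rng):
    """One Metropolis update of y_i; returns the (possibly new) label list."""
    old = labels[i]
    new = rng.choice([o for o in range(1, num_classes + 1) if o != old])
    proposal = labels[:i] + [new] + labels[i + 1:]
    a = min(1.0, math.exp(total_energy(labels) - total_energy(proposal)))
    return proposal if rng.random() <= a else labels

# Toy energy: penalise every block whose label differs from 1.
energy = lambda y: float(sum(1 for v in y if v != 1))
rng = random.Random(0)
y = [2, 2, 2, 2]
for _ in range(200):
    y = sample_label(y, rng.randrange(4), 2, energy, rng)
print(y)
```

Energy-lowering proposals are always accepted, while energy-raising ones are accepted only with probability exp(-ΔU), so the chain explores the Gibbs distribution of Equation (12).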
(2) Sampling the number of blocks. This is achieved by splitting or merging blocks, where the splitting move splits a block into two blocks with different class labels. The operation includes: (i) a block P_i is randomly chosen in the current partition of the image domain P = {P_1, …, P_i, …, P_I}, with corresponding label o; (ii) if P_i contains more than four pixels and the number of its rows or columns is a multiple of two, P_i can be split; under the constraint of the minimum block, the admissible splitting types are determined; (iii) one splitting is randomly chosen and splits P_i into P_i* and P_{I+1}*, whose labels are re-allocated as y_i and y_i*, with y_i ≠ y_i*, respectively. The new partition of the image domain becomes P* = {P_1, …, P_i*, …, P_I, P_{I+1}*}, and the acceptance probability of the splitting move can be expressed as

a_fI(P, P*) = min{1, [p(Θ*) r_{h(I+1)}] / [p(Θ) r_fI q(u)]} (17)

where r_fI = f_I / I and f_I is the probability of choosing a split proposal; r_{h(I+1)} = h_{I+1} / (I + 1) and h_{I+1} is the probability of choosing a merge proposal; Θ* = (Y*, I + 1, ω) and Θ = (Y, I, ω); and the Jacobian term in Equation (17) is equal to 1.
A random number z is uniformly drawn from [0, 1]; if z ≤ a_fI(P, P*), the move is accepted and P becomes P*; otherwise, the move is rejected and P is unchanged.
A merging move is designed in tandem with a splitting move, so the acceptance probability of the merging move can be expressed as

a_{h(I+1)}(P*, P) = min{1, [p(Θ) r_fI q(u)] / [p(Θ*) r_{h(I+1)}]}

(3) Sampling weights. Two blocks P_i and P_ii are randomly chosen, with corresponding labels y_i = o and y_ii = o*, o ≠ o*. Then a feature g is randomly drawn from {1, …, G}, and the weight variables ω_go and ω_go* are sampled: the new weight variable ω*_go is randomly drawn from [0, ω_go], another new weight variable ω*_go* = ω_go + ω_go* − ω*_go, and the other weight variables are unchanged. As the total weight is preserved, the dimension does not change and the acceptance probability can be written as

a(ω, ω*) = min{1, exp[U(x, ω, y, I) − U(x, ω*, y, I)]}

A random number z is uniformly drawn from [0, 1]; if z ≤ a(ω, ω*), the move is accepted and ω becomes ω*; otherwise, the move is rejected and ω is unchanged.
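The weight proposal above preserves the pair's total weight, which keeps the dimension fixed; a minimal sketch (the function name and numeric values are illustrative assumptions):

```python
# Sketch of the weight-sampling proposal: a new weight is drawn uniformly
# from [0, w_go], and the partner weight absorbs the difference, so the
# pair's total weight is preserved.
import random

def propose_weights(w_go, w_go_star, rng):
    """Return the proposed pair (w_go', w_go*') with an unchanged sum."""
    new_go = rng.uniform(0.0, w_go)
    new_go_star = w_go + w_go_star - new_go   # keeps the sum unchanged
    return new_go, new_go_star

rng = random.Random(42)
a, b = propose_weights(0.4, 0.2, rng)
print(round(a + b, 10))   # the total weight 0.6 is conserved
```

Conserving the pairwise total lets the sampler re-balance the importance of one feature across two classes without disturbing the rest of the weight set.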
In summary, Figure 1 shows the flowchart of the proposed segmentation framework.


Results
Given a multi-look SAR image in which the pixel intensities are characterized by a Gamma distribution [24], α equals the number of looks. In this paper, since α was considered a random variable, the value µ_α was set to the number of looks. For a Gamma distribution with parameters α and β, the product α × β equals its mean. Then the value µ_α × µ_β = E(α)E(β) = E(α × β) (the last equality holds since α and β are independent, where E(·) is the mean operator) was taken as 128 = 256/2 (i.e., the midpoint of the 256 grey levels), since the pixel intensities of a grey-scale image vary in the range 0 to 255. Therefore, the shape parameter α and scale parameter β listed in Table 1 are assumed to satisfy Gaussian distributions with means and standard deviations (µ_α, σ_α) and (µ_β, σ_β), respectively, where µ_α = 4, σ_α = 2, µ_β = 32 and σ_β = 24. Figure 2 shows an image template and a simulated SAR image, where the simulated SAR image (Figure 2b) is generated from the image template (Figure 2a) and the parameters listed in Table 1.
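A minimal sketch of this simulation scheme, using Python's standard Gamma sampler; the clipping of the Gaussian draws to stay positive is an added assumption, not from the paper:

```python
# Sketch of simulating multi-look SAR intensities for one region: pixels are
# Gamma(alpha, beta)-distributed, with alpha and beta drawn from Gaussians
# (mu_alpha = 4, sigma_alpha = 2, mu_beta = 32, sigma_beta = 24), so that
# E(alpha) * E(beta) = 128, the midpoint of the 256 grey levels.
import random

def simulate_region(n_pixels, rng):
    alpha = max(rng.gauss(4.0, 2.0), 0.1)    # shape, clipped to stay valid
    beta = max(rng.gauss(32.0, 24.0), 0.1)   # scale, clipped to stay valid
    return [rng.gammavariate(alpha, beta) for _ in range(n_pixels)]

rng = random.Random(7)
pixels = simulate_region(10000, rng)
print(len(pixels), min(pixels) >= 0.0)
```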
Figure 3a,b respectively show the texture and boundary features extracted using the curvelet transform. To verify the advantage of the curvelet transform in feature extraction, the 2D Haar wavelet transform was used as a comparison algorithm to extract texture and boundary features by the same principle; the results are shown in Figure 3c,d. Comparing the texture features (Figure 3a,c), they are very similar. To compare them further, the energy histograms of the two transforms are shown in Figure 4. The shapes of the energy-histogram distributions are similar, but the energy ranges differ: the curvelet energies span 0.25 × 10^4 to 6 × 10^4, while the wavelet energies span 0.15 × 10^4 to 2.25 × 10^4. The larger the energy, the more image information there is; therefore, the curvelet texture feature contains more information, and the curvelet transform extracts texture features more effectively. Comparing Figure 3b,d, the boundary features extracted by the curvelet transform are more complete. From Figures 3 and 4, it can be concluded that the curvelet-extracted features are more abundant and complete, and that the curvelet transform extracts texture and boundary features better.
For the simulated SAR image, every pixel shows its intensity information. In this paper, the intensity information of a pixel is considered its spectral feature, so the spectral features of all the pixels form the original spectral feature. The extracted features and the original spectral feature form a curvelet feature set and a wavelet feature set, respectively. The weighted-feature energy segmentation method was tested on these feature sets: Figure 5a1-c1 show the visual evaluation results based on curvelet features, and Figure 5a2-c2 show those based on wavelet features. Figure 5a1,a2 are the results of regular tessellation; every partitioned block is shown in a random color, and the colors of the partitioned blocks in Figure 5a1 and a2 differ. Figure 5b1,b2 show the segmentation results. The outlines were extracted and overlaid on the original images, shown in Figure 5c1,c2. From Figure 5, it can be seen that some blocks cross the boundaries in Figure 5b2, and the extracted outlines do not match the real boundaries precisely in Figure 5c2. This illustrates that the proposed approach based on curvelet features segments the simulated SAR image better.
To quantitatively evaluate the segmentation results of the simulated SAR image (Figure 5b1,b2), their confusion matrices, which are square arrays of numbers set out in rows and columns expressing the number of pixels assigned to a particular class in the segmentation result relative to the number of pixels assigned to a particular class in the reference image (Figure 2a), were obtained. Then the producer's accuracy, user's accuracy, overall accuracy and Kappa coefficient can be computed as [25]

Producer's accuracy_q = n_qq / n_{+q} (21)

User's accuracy_o = n_oo / n_{o+} (22)

Overall accuracy = (Σ_o n_oo) / N (23)

Kappa = (N Σ_o n_oo − Σ_o n_{o+} n_{+o}) / (N^2 − Σ_o n_{o+} n_{+o}) (24)

where n_oq denotes the number of pixels classified into class o in the segmentation result and class q in the reference image; n_{o+} is the sum of n_oq over q ∈ {1, …, O}; n_{+q} is the sum of n_oq over o ∈ {1, …, O}; and N is the total number of pixels. The producer's and user's accuracies are used to determine individual class accuracies, while the overall accuracy and Kappa coefficient are used to determine the global accuracy [25]. Table 2 lists the quantitative evaluation of the two segmentation results.
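These four measures can be sketched from a confusion matrix as follows; the toy 2 × 2 matrix is an illustrative assumption:

```python
# Sketch of the accuracy measures computed from a confusion matrix n[o][q]
# (rows: segmented class o, columns: reference class q).

def accuracies(n):
    """Return (producer's, user's, overall accuracy, Kappa coefficient)."""
    O = len(n)
    total = sum(sum(row) for row in n)
    row_sum = [sum(n[o]) for o in range(O)]                        # n_{o+}
    col_sum = [sum(n[o][q] for o in range(O)) for q in range(O)]   # n_{+q}
    producer = [n[q][q] / col_sum[q] for q in range(O)]
    user = [n[o][o] / row_sum[o] for o in range(O)]
    overall = sum(n[o][o] for o in range(O)) / total
    # Chance agreement; Kappa rescales overall accuracy against it.
    expected = sum(row_sum[o] * col_sum[o] for o in range(O)) / total ** 2
    kappa = (overall - expected) / (1 - expected)
    return producer, user, overall, kappa

conf = [[90, 10],
        [ 5, 95]]
producer, user, overall, kappa = accuracies(conf)
print(overall)   # (90 + 95) / 200 = 0.925
```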
From Table 2, all the segmentation accuracies of Figure 5b1 were 100% and its Kappa coefficient was 1.000; in the quantitative evaluation of Figure 5b2, the producer's accuracy of class 3 and the user's accuracy of class 2 were below 95.0%, and the Kappa coefficient was 0.947. Comparing the two quantitative evaluations, all the segmentation accuracies and the Kappa coefficient of Figure 5b1 were higher. From the above quantitative and qualitative evaluation results, it can be concluded that the proposed method based on curvelet features segments the simulated SAR image better.

To explore the roles of the multiple features in the segmentation procedure, weight variables of all the classes on the multiple features are defined, and their values are solved by the iterative process. In Figure 6, a red point represents the weight value of every feature for class 1, a green point for class 2, a blue point for class 3 and a black point for class 4. From Figure 6, all the weight values quickly converge to the approximate values listed in Table 3. In Table 3, numbers 1-4 represent the four classes of the simulated SAR image and correspond to numbers 1-4 in Figure 2b. The greater the weight value, the more important the role of its feature; thus the proposed method can automatically determine the roles of the features in the segmentation procedure. From Table 3, the total weight values of the three curvelet features were 0.37, 0.32 and 0.31, and those of the three wavelet features were 0.36, 0.33 and 0.31. The greater the total weight value of a feature, the more important its role.
Therefore, both feature sets follow the same rule: the spectral feature plays the most important role and the boundary feature the least. However, the total weight values differ slightly, and the weight values of the four classes for the same feature also differ slightly. For example, the two total weight values of the spectral feature were 0.37 and 0.36, and their weight values for classes 2-4 were assigned differently. As another example, the two total weight values of the boundary feature were the same, but their weight values for the individual classes differed. The weight values directly affect the segmentation results, and the segmentation result of the proposed method based on curvelet features is better. Therefore, from Figures 3-6 and Tables 2 and 3, the proposed method based on curvelet features assigns the weights of all the classes on the three features more reasonably.
In summary, curvelet features were used in the segmentation tests of four RADARSAT-I/II images of 128 × 128 pixels (shown in Figure 7).
Figure 7a presents a RADARSAT-I image of a coastal scene with VV polarization and a spatial resolution of 30 m; the scene covers about 14.75 km^2 and reveals three types of sea-ice structure, from bright to dark: hard ice, melt ice and water. Figure 7b presents a RADARSAT-II 4-look image with HH polarization and a spatial resolution of 25 m; the scene covers 10.24 km^2 and reveals three types of area, from bright to dark: hard ice, melt ice and water. Figure 7c shows a RADARSAT-II image with HV polarization and a spatial resolution of 25 m; the scene covers 10.24 km^2 and reveals three types of area, from bright to dark: living area, field area and water area. Figure 7d presents a RADARSAT-I 4-look image with VV polarization and a spatial resolution of 50 m; the scene covers 40.96 km^2 and reveals four types of sea-ice structure (from left to right: melt ice, melt-hard ice, hard ice and an ice-water mixture) in Ungava Bay, Quebec, Canada.
The proposed extraction methods were used to extract their texture and boundary features: Figure 8a1-d1 show the texture features and Figure 8a2-d2 the boundary features. From them, it can be seen that the extracted boundary features are complete and the texture features are well extracted; this further verifies that the curvelet transform can extract the texture and boundary features of real SAR images well. The extracted texture and boundary features and the original spectral feature form each image's feature set, and the proposed method was tested on these feature sets of real SAR images. Figure 8a3-d3 show the segmentation results, and their outlines were extracted and overlaid on the original images, shown in Figure 8a4-d4.
From them, it can be seen that the extracted boundary features were complete, and the extracted texture features were well. Further, it could be verified that curvelet transform could extract the texture and boundaries features of the real SAR images well. The extracted texture, boundary features and the original spectral feature form their own feature sets. The proposed method was tested on these feature sets of real SAR images.  In order to assess the segmented results obtained by the proposed approach quantitatively, some common measures, including the producer's accuracy, user's accuracy, overall accuracy and Kappa coefficient, were computed by the temples of real SAR images (see Figure 9). Table 4 lists them. From Table 4, it can be seen that only four accuracies were below 91.7%, overall accuracies were up to 95.0% and Kappa coefficients were higher than or equal to 0.916. Further, it could be demonstrated that the proposed method was feasible and effective. In order to assess the segmented results obtained by the proposed approach quantitatively, some common measures, including the producer's accuracy, user's accuracy, overall accuracy and Kappa coefficient, were computed by the temples of real SAR images (see Figure 9). Table 4 lists them. From Table 4, it can be seen that only four accuracies were below 91.7%, overall accuracies were up to 95.0% and Kappa coefficients were higher than or equal to 0.916. Further, it could be demonstrated that the proposed method was feasible and effective.    Figure 10, a red point represents the weight value of every feature for class 1, a green point represents the weight value of every feature for class 2, a blue point represents the weight value of every feature for class 3 and a black point represents the weight value of every feature for class 4. From Figure 10, it can be seen that the weight values quickly drive their approximate values listed in Table 5. 
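The construction of a per-pixel feature set described above can be sketched as follows. This is a minimal illustration: local variance and gradient magnitude are used as simple stand-ins for the paper's curvelet-derived texture and boundary features, and the function name and window size are hypothetical.

```python
import numpy as np

def feature_set(image, win=3):
    """Build a per-pixel feature vector set [spectral, texture, boundary].

    The paper derives texture and boundary features from a curvelet
    transform; here local variance and gradient magnitude are simple
    stand-ins so that only the structure of the feature set is shown.
    """
    img = np.asarray(image, dtype=float)
    pad = win // 2
    padded = np.pad(img, pad, mode="edge")

    # texture stand-in: local variance over a win x win neighbourhood
    texture = np.empty_like(img)
    for i in range(img.shape[0]):
        for j in range(img.shape[1]):
            texture[i, j] = padded[i:i + win, j:j + win].var()

    # boundary stand-in: gradient magnitude of the spectral values
    gy, gx = np.gradient(img)
    boundary = np.hypot(gx, gy)

    # one 3-vector per pixel: (spectral, texture, boundary)
    return np.stack([img, texture, boundary], axis=-1)
```

All pixels' feature vectors together form the feature set of the image, matching the feature-set construction used throughout the paper.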
From Tables 4 and 5, it can be seen that the proposed method can not only explore the roles of the multiple features in the segmentation procedure, but also segment the real SAR images well.
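For reference, all of the accuracy measures used in Tables 4 and 6 (producer's accuracy, user's accuracy, overall accuracy and the Kappa coefficient) can be derived from a single confusion matrix. A minimal sketch (the function name is an assumption, not from the paper):

```python
import numpy as np

def accuracy_metrics(confusion):
    """Compute segmentation accuracy measures from a confusion matrix.

    confusion[i, j] = number of pixels of reference class i assigned
    to class j by the segmentation.
    """
    confusion = np.asarray(confusion, dtype=float)
    total = confusion.sum()
    diag = np.diag(confusion)
    row_sums = confusion.sum(axis=1)   # reference (template) totals
    col_sums = confusion.sum(axis=0)   # segmented totals

    producers = diag / row_sums        # per-class producer's accuracy
    users = diag / col_sums            # per-class user's accuracy
    overall = diag.sum() / total       # overall accuracy
    expected = (row_sums * col_sums).sum() / total**2  # chance agreement
    kappa = (overall - expected) / (1.0 - expected)
    return producers, users, overall, kappa
```

For example, a two-class confusion matrix [[90, 10], [5, 95]] gives an overall accuracy of 0.925 and a Kappa coefficient of 0.85.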
To illustrate the advantages of the proposed method, two comparison methods were chosen. Method 1 was the proposed method without weights: the roles of the three features (spectral, texture and boundary) were treated as equal in the segmentation procedure, which may reduce the segmentation accuracy because of feature redundancy. The other method was a VGG16 net-based fully convolutional network (FCN) [26], which replaces the fully connected layers of the VGG16 net with convolutional layers and fuses multilayer high- and low-dimensional feature information; however, it is a pixel-based method and does not consider the relations among pixels.
Figure 11 shows the segmentation results of the comparison methods: Figure 11a1-e1 are the results of the proposed method without weights, and Figure 11a2-e2 are the results of the FCN. From Figure 11a1-e1, it can be seen that some incorrectly segmented pixels existed within the homogeneous regions and across the boundaries; this phenomenon is particularly evident in Figure 11a1,e1. Comparing these results with those of the proposed method shows that the proposed method segments the SAR images better; together with Table 5, this illustrates that investigating the roles of the features in the segmentation procedure improves the segmentation results.
From Figure 11a2-e2, it can be seen that the boundaries between regions and the outer boundaries of the images were very blurry; for example, the segmented boundaries between regions in Figure 11a2 and the outer boundaries in Figure 11d2,e2 were blurry. In addition, incorrectly segmented pixels existed within the regions, most obviously on the right of Figure 11a2. This suggests that the segmentation results of the FCN may be influenced by the speckle noise of the SAR image, as it is a pixel-based method. Compared with these results, the segmentation results of the proposed method were better, which further demonstrates its superiority.
To further demonstrate the advantage of the proposed method, the segmentation results of the comparison methods were evaluated quantitatively. Table 6 lists their producer's accuracies, user's accuracies, overall accuracies and Kappa coefficients. Comparing the quantitative results of the proposed method with those of the comparison methods, it is evident that the overall accuracies and Kappa coefficients of the proposed method were higher.

Discussion
In this paper, a feature-weighted SAR image segmentation method based on an energy function was presented.
To explore the roles of all the features in the segmentation procedure, a series of experiments was conducted (see Figures 6 and 9). From them, it can be found that the weight values quickly converged to their approximate values and remained unchanged during the iterations, which proves that the proposed approach can automatically determine the roles of all the features in the segmentation procedure.
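A single weight-update step of the kind that drives this convergence can be sketched as below. This is a simplified illustration, not the paper's full model: the energy here contains only weighted squared distances of feature vectors to class means (the paper's energy also includes block and label interaction terms), and the function names, simplex normalisation and proposal step size are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def energy(weights, features, labels, means):
    """Simplified weighted-feature energy: each feature's squared
    distance to its class mean is scaled by that feature's weight."""
    diff = (features - means[labels]) ** 2      # (n_pixels, n_features)
    return float((diff * weights).sum())

def sample_weights(weights, features, labels, means, step=0.05):
    """One Metropolis move on the weight set: perturb, renormalise,
    accept with probability exp(-dU) under the Gibbs distribution."""
    proposal = np.abs(weights + rng.normal(0.0, step, size=weights.shape))
    proposal /= proposal.sum()                  # keep weights on the simplex
    dU = (energy(proposal, features, labels, means)
          - energy(weights, features, labels, means))
    if dU < 0 or rng.random() < np.exp(-dU):
        return proposal                          # accept the move
    return weights                               # reject: keep old weights
```

Iterating such moves shifts weight toward the features whose class means fit the data best, which is the behaviour observed in Figures 6 and 10: the weights settle quickly and then remain nearly unchanged.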
From Figures 5 and 8, it can be found that the proposed approach segments both simulated and real SAR images well; in particular, the segmentation result of the simulated SAR image contained no incorrectly segmented blocks. From Figures 3 and 4, it can be found that the image information of the features extracted by the curvelet transform is more abundant and complete, which demonstrates the advantage of the curvelet transform in feature extraction. Comparing the segmentation results of the simulated SAR image obtained by the proposed approach with curvelet features against those obtained with wavelet features, it can be found that the more image information a feature contains, the better the segmentation result, which proves that the extracted features affect the segmentation results. Comparing Figures 5, 8 and 11, it can be found that the proposed method segments the SAR images better, which further demonstrates its superiority.

Conclusions
To extract more structural features that can contribute to segmenting a SAR image accurately, and to explore the roles of these features in the segmentation procedure, this paper proposed a block- and energy-based segmentation method with weighted features for SAR images. In this paper, the curvelet transform was used to construct boundary and texture features, and they were incorporated into the statistical segmentation model in a weighted way, that is, as an energy segmentation model with weighted features. The proposed method was tested on simulated and real SAR images, and the results showed that it could not only automatically determine the roles of all the features in the segmentation procedure, but also segment SAR images well. This method requires the number of classes to be given a priori; for a SAR image, it is difficult to determine the number of classes manually, a difficulty that derives from the speckle noise of the SAR image. In future work, the unknown number of classes in a SAR image will be treated as a variable, and a move that samples the number of classes will be designed in the RJMCMC algorithm to estimate its value.