A Local Neighborhood Robust Fuzzy Clustering Image Segmentation Algorithm Based on an Adaptive Feature Selection Gaussian Mixture Model

Since the fuzzy local information C-means (FLICM) segmentation algorithm cannot account for the impact of different features on clustering segmentation results, a local fuzzy clustering segmentation algorithm based on a feature selection Gaussian mixture model is proposed. First, a membership-degree constraint on the spatial distance was added to the local information function. Second, feature saliency was introduced into the objective function, and the optimal expression of the objective function was solved using the Lagrange multiplier method. Neighborhood weighting information was then added to the iteration expression of the classification membership degree, yielding a local fuzzy clustering segmentation algorithm based on feature selection. The improved algorithm, the fuzzy C-means with spatial constraints (FCM_S) algorithm, and the original FLICM algorithm were then used to cluster and segment images corrupted by Gaussian noise, salt-and-pepper noise, multiplicative noise, and mixed noise. The peak signal-to-noise ratio and error rate of the segmentation results were compared, along with the iteration time and the number of iterations needed for the objective function to converge. In summary, the improved algorithm significantly improved noise suppression under strong noise interference, improved operational efficiency, facilitated remote sensing image capture under strong noise interference, and promoted the development of robust anti-noise fuzzy clustering algorithms.


Image Segmentation Algorithms
Existing image segmentation methods are mainly divided into the following categories: edge-based methods, region-based methods, and methods based on a specific theory. Cluster segmentation, as a typical unsupervised segmentation method, has attracted the attention of many scholars and has been widely used and studied in many fields [1,2].
Clustering algorithms can be divided into hard partition clustering algorithms and soft partition clustering algorithms. When hard partition clustering algorithms are used for image segmentation, their principle is to divide an image directly according to the similarity of pixels in qualities such as gray level, color, and texture. The optimal partition is obtained by minimizing an objective function.
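As a concrete illustration of hard partition clustering, a minimal k-means sketch on gray levels (a generic example, not one of the paper's algorithms) alternates hard assignment with centroid updates to minimize the within-cluster squared error:

```python
import numpy as np

def kmeans_gray(pixels, k=2, iters=20, seed=0):
    """Minimal hard k-means on 1-D gray levels (generic illustration)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(pixels, size=k, replace=False).astype(float)
    for _ in range(iters):
        # hard assignment: each pixel belongs to exactly one (nearest) center
        labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
        # move each center to the mean of the pixels assigned to it
        for j in range(k):
            if np.any(labels == j):
                centers[j] = pixels[labels == j].mean()
    return labels, centers
```

Because every pixel belongs to exactly one cluster, small perturbations (noise) can flip a pixel's label outright, which is one motivation for the soft (fuzzy) partitions discussed next.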

Fuzzy Clustering Algorithm Based on Feature Selection
At present, research on clustering analysis focuses on the scalability of clustering methods, the validity of clustering for complex shapes and data types, high-dimensional clustering analysis technology, and clustering methods for mixed data. Among these, high-dimensional data clustering is a difficult problem, and traditional clustering algorithms struggle with high-dimensional data. For example, high-dimensional sample spaces contain large numbers of invalid clustering features, and the Euclidean distance used as the distance measure in the FCM algorithm [13] cannot account for the correlations among feature dimensions in high-dimensional space. The problem of high-dimensional data is mainly dealt with using feature transformation and feature selection. Methods based on feature selection can effectively reduce the dimensionality and have been widely applied. A subspace-based clustering image segmentation method has been proposed in the literature: by defining search strategies and evaluation criteria, the features effective for clustering are screened, and the original data sets are clustered in different subspaces to reduce storage and computation costs [14].
Existing supervised feature selection methods achieve dimensionality reduction but at the cost of operational efficiency. To achieve clustering segmentation with adaptive feature selection, a similarity measurement method for high-dimensional data has been proposed in the literature that takes into account the correlation between high-dimensional spatial features and effectively reduces the impact of the "curse of dimensionality" on high-dimensional data. However, there is a lack of theoretical guidance on how to select the similarity measurement criteria. To avoid an exhaustive combinatorial search and to apply the method to unsupervised learning, the concept of feature saliency has been proposed in the literature. Considering the influence of different features on the clustering results, the Gaussian mixture model is used for clustering analysis to improve the performance of the algorithm [15].
The fuzzy Gaussian mixture models (FGMMs) algorithm replaces the Euclidean distance of the FCM algorithm with the Gaussian mixture model, which can more accurately fit multipeak data and achieves better segmentation of noiseless complex images. Traditional fuzzy C-means clustering treats the different features of samples equally and ignores the important influence of key features on clustering results, which leads to a difference between the clustering segmentation results and the real classification results. According to the theory of feature selection, the concept of feature saliency is used to assume that the saliency of sample features obeys a probability distribution, and clustering analysis is carried out using the Gaussian mixture model. Ju and Liu [16] proposed an online feature selection method based on fuzzy clustering (OFSBFCM), and a fuzzy C-means clustering method combined with a Gaussian mixture model with feature selection using Kullback-Leibler (KL) divergence (FSFCM) has also been proposed [16,17].
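The role of the Gaussian mixture model in such methods can be illustrated with a minimal two-component EM sketch in one dimension (generic EM, not the FGMM or FSFCM formulation; the deterministic min/max initialization is an illustrative choice):

```python
import numpy as np

def gmm_em_1d(x, iters=50):
    """EM for a two-component 1-D Gaussian mixture (generic illustration)."""
    mu = np.array([x.min(), x.max()], dtype=float)   # deterministic init (illustrative)
    var = np.array([x.var(), x.var()])
    pi = np.array([0.5, 0.5])
    for _ in range(iters):
        # E-step: posterior responsibility of each component for each point
        dens = pi * np.exp(-(x[:, None] - mu) ** 2 / (2 * var)) / np.sqrt(2 * np.pi * var)
        r = dens / dens.sum(axis=1, keepdims=True)
        # M-step: re-estimate mixing weights, means, and variances
        nk = r.sum(axis=0)
        pi = nk / len(x)
        mu = (r * x[:, None]).sum(axis=0) / nk
        var = (r * (x[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var
```

The responsibilities `r` play the same role as fuzzy memberships: each point belongs to every component with some probability, which is what lets feature saliency be folded into the model as a further probabilistic weight.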
Sensors 2020, 20, 2391 3 of 25
In short, the advantages of the GMM-based fuzzy clustering algorithm with feature selection are as follows: (1) By using the Gaussian mixture model as a distance measure and accurately fitting multipeak data, it can manage complexly structured data sample sets that the FCM algorithm cannot. (2) The Gaussian mixture algorithm with feature selection assumes that the different features of the samples play different roles in pattern analysis, with some features playing a decisive role. This overcomes the limitation of the FCM algorithm, which treats the different features of samples equally for clustering analysis and ignores the important influence of key features on the clustering results, leading to a gap between the clustering results and the real classification results. (3) KL divergence regularization can be widely applied to the clustering analysis of class-imbalanced data.
The problems of the GMM-based fuzzy clustering algorithm with feature selection are as follows: (1) Additional parameters need to be adjusted, which increases the running time of the algorithm.
(2) Like the FCM algorithm, it clusters each pixel individually without considering the influence of spatial neighborhood pixels on the central pixel. As a result, the algorithm is not robust against different types of image noise.

FLICM Algorithm
The FCM algorithm uses the fuzzy membership degree and a dissimilarity measure to construct the objective function; it finds the membership degrees and clustering centers that minimize the objective function during the iteration process to realize sample classification. Its structure is simple and easy to implement, and convergence is fast. However, it does not consider the interference of neighborhood information on the central pixel, and the results of segmenting images with noise interference are unsatisfactory. To improve the robustness of the algorithm against noise, Chen et al. [17] proposed the neighborhood-mean and neighborhood-median fuzzy C-means algorithms FCM_S1 and FCM_S2. Later, Krinidis et al. [18,19] proposed the fuzzy local information C-means segmentation algorithm (FLICM), which combines neighborhood pixel spatial information, gray information, and fuzzy classification information to improve the anti-noise performance of the algorithm. Its objective function expression is as follows [20,21]. Specifically, x_i1, . . . , x_iD represent the different attributes of the ith sample; C is the number of clusters; z_ij denotes the fuzzy membership of the ith pixel in the jth category; the clustering centers are v_j (j = 1, 2, . . . , C); d_iβ is the Euclidean distance of the spatial position between pixel x_i and the neighboring pixel x_β; and N_i represents the set of neighborhood pixels x_β of pixel x_i, where the neighborhood window size is 3 × 3 or 5 × 5. The optimal iteration expressions of the classification membership degree and the clustering center are as follows [22,23]. The FLICM algorithm does not strictly follow the Lagrange multiplier method in solving the optimal expression of the objective function. Furthermore, it runs for too long and can fall into local minima.
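For reference, the FLICM objective function and its update expressions, which are omitted by the extraction above, take the following standard form (as given in Krinidis and Chatzis's original FLICM paper; m is the fuzzy weighting exponent):

```latex
J_m = \sum_{i=1}^{N}\sum_{j=1}^{C}\left[\, z_{ij}^{m}\,\lVert x_i - v_j\rVert^{2} + G_{ij} \,\right],
\qquad
G_{ij} = \sum_{\substack{\beta \in N_i \\ \beta \neq i}}
\frac{1}{d_{i\beta} + 1}\,(1 - z_{\beta j})^{m}\,\lVert x_\beta - v_j\rVert^{2},
```

with the alternating updates

```latex
z_{ij} = \left[\sum_{k=1}^{C}
\left(\frac{\lVert x_i - v_j\rVert^{2} + G_{ij}}{\lVert x_i - v_k\rVert^{2} + G_{ik}}\right)^{\frac{1}{m-1}}\right]^{-1},
\qquad
v_j = \frac{\sum_{i=1}^{N} z_{ij}^{m}\, x_i}{\sum_{i=1}^{N} z_{ij}^{m}}.
```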
To solve these problems, the unconstrained expression of the objective function is solved using the Lagrange multiplier method as follows [24,25]. The partial derivatives of J_M with respect to the membership degree z_ij and the clustering center v_j are obtained and set to 0; by solving Equations (6) and (7), the solution is obtained. Compared with the iteration expressions in the literature, the iteration formula of the clustering centers needs to consider the central pixel values x_i; furthermore, the neighborhood pixels x_β and the degree of classification membership also have some influence on the clustering center v_j. To accurately compare the influence of the neighborhood pixels on the central pixels, this section uses the neighborhood spatial classification membership z_βj to constrain the Euclidean distance d_iβ of the spatial position between pixel x_i and pixel x_β, and redefines the fuzzy factor G_ij [26,27]. The FLICM algorithm introduces neighborhood spatial information into the objective function to enhance its anti-noise performance; however, it treats the different features of the samples equally for clustering analysis, ignoring the important impact of key features on the clustering results, which leads to unsatisfactory segmentation results. In this section, the idea of feature selection is introduced into the improved FLICM algorithm, and KL divergence is introduced as a regularization term to realize the feature selection constraint, giving a new objective function [28,29]. Here, d_ij is the weighted Euclidean distance between the ith sample and the center µ_j of class j, and d_iβ is the Euclidean distance of the spatial position between pixel x_i and pixel x_β.
s_ijl is the influence degree of the lth characteristic attribute x_il of the ith sample on the jth class; ε_l is the mean value of the lth feature over all samples; ρ_l is the weight factor of the lth feature attribute of the samples; and G_ij is the fuzzy factor.
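A sketch of how a fuzzy factor of this kind can be computed over a 3 × 3 neighborhood is given below. The exact way the neighbor membership z_βj constrains the spatial distance in the redefined G_ij is not recoverable from the text, so the weight 1/(z_βj · d_iβ + 1) is an assumed form:

```python
import numpy as np

def fuzzy_factor(img, z, v, i0, j0, cls, m=2.0):
    """FLICM-style fuzzy factor for pixel (i0, j0) and class `cls` (sketch).

    z has shape (H, W, C); the neighbor membership z[bi, bj, cls] is used
    to weight the spatial distance, approximating the membership-degree
    constraint described in the text (assumed form).
    """
    H, W = img.shape
    g = 0.0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # the central pixel itself is excluded
            bi, bj = i0 + di, j0 + dj
            if 0 <= bi < H and 0 <= bj < W:
                d_spatial = np.hypot(di, dj)  # Euclidean spatial distance d_ib
                w = 1.0 / (z[bi, bj, cls] * d_spatial + 1.0)  # assumed weight
                g += w * (1.0 - z[bi, bj, cls]) ** m * (img[bi, bj] - v[cls]) ** 2
    return g
```

Neighbors that already agree with class `cls` (membership near 1) contribute little, while disagreeing neighbors far from the class center contribute strongly, which is what pulls noisy pixels toward their neighborhood's class.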
In the literature, the membership degree has been obtained strictly according to the Lagrange multiplier method from an unconstrained solution of the objective function, but the clustering center is computed directly from the traditional fuzzy C-means cluster center expression, which is not strictly derived via the Lagrange method; this results in an inconsistency between Equation (4) and the clustering objective function. In this section, the clustering objective function is optimized strictly using the Lagrange multiplier method, and the iterative optimization expressions are solved. The process is as follows [30,31]. First, the partial derivative of the objective function with respect to s_ijl is found and set to zero. The unconstrained expression of the objective function obtained using the Lagrange multiplier method is then given. Next, the partial derivative of this expression with respect to z_ij is found; bringing the local fuzzy factor G_ij into the formula, setting it equal to zero, and applying the membership constraints, the iteration expression of the membership degree z_ij is solved by introducing Equation (15) into Equation (17). The partial derivatives of the objective function with respect to µ_jl and ε_l are then found and set to zero, which yields the expression for ε_l. Finally, for the objective function with respect to ρ_l, the partial derivative is obtained and set to 0, giving its iterative expression.
Using the Lagrange multiplier method, the partial derivative of the objective function with respect to π_j is set to 0, and the iterative expression of π_j is obtained from the resulting formula.

Postprocessing Method of the Clustering Membership Degree
To further enhance the robustness against noise, neighborhood weighting information is added to the iteration expression of the membership degree. Combined with the idea of the non-MRF (Markov random field) spatially constrained Gaussian model in the literature, this section constructs a neighborhood weighting function by using the classification membership degree and post-processing the clustering membership degree. The function sorts the classification membership degrees of the neighborhood pixels in ascending order and takes the corresponding median as a probability, which is expressed as follows [32,33]. The neighborhood window size is 3 × 3 or 5 × 5 for the classification memberships of the neighborhood pixels, and N_i represents the set of classification membership degrees of the neighborhood pixels. According to the Bayesian theorem, the weight factor of the neighborhood information function is added to Equation (18), and the new expression of the membership degree is given in Equation (27). In this equation, α is the weight factor; a value of 2.0 is usually chosen. Its function is similar to the fuzzy weighting factor m in the traditional fuzzy C-means clustering objective function.
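The post-processing step can be sketched as follows. The exact weighting and renormalization are assumptions: H_ij is taken as the median of the 3 × 3 neighborhood memberships, and the new membership is taken proportional to z_ij · H_ij^α:

```python
import numpy as np

def postprocess_membership(z, alpha=2.0):
    """Re-weight memberships by the median of each 3x3 neighborhood (sketch).

    z has shape (H, W, C). H_ij is the neighborhood median of z[..., j];
    the new membership is proportional to z_ij * H_ij**alpha (assumed form),
    renormalized so the memberships sum to 1 over the classes.
    """
    H, W, C = z.shape
    out = np.zeros_like(z)
    for i in range(H):
        for j in range(W):
            for c in range(C):
                i0, i1 = max(i - 1, 0), min(i + 2, H)
                j0, j1 = max(j - 1, 0), min(j + 2, W)
                # median of the sorted neighborhood memberships, used as a probability
                med = np.median(z[i0:i1, j0:j1, c])
                out[i, j, c] = z[i, j, c] * med ** alpha
            out[i, j] /= out[i, j].sum()  # restore the sum-to-one constraint
    return out
```

A pixel whose own membership disagrees with its neighborhood (e.g., an isolated noise point) is pulled toward the class its neighbors favor, while homogeneous regions are left essentially unchanged.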
The improved membership degree of the sample classification has the following properties [34,35]: (1) The neighborhood-weighted membership still satisfies the membership constraints. (2) The membership degree of the current pixel x_i in class j is proportional to the probability that the neighborhood pixels x_β belong to class j.
As this probability increases, the membership degree increases. Conversely, as the probability that the neighborhood pixels belong to class j tends to zero, the membership degree of the current pixel x_i in class j decreases. In addition, ϕ_ij = (H_ij)^α, and taking the derivative shows that the weighted membership degree is a monotonically increasing function of the neighborhood information.
Since ϕ_ij is monotonically increasing, using ϕ_ij to constrain the classification membership degree improves the performance of the sample classification to a certain extent and enhances the robustness of the algorithm against noise. To achieve image segmentation, the local fuzzy clustering algorithm based on feature selection needs to solve the iterative optimization expressions. The detailed steps are as follows [36,37]:
Step 1: Transform the image pixel values into sample eigenvectors x_i (i = 1, 2, . . . , N), where N is the total number of pixels and C is the number of clusters.
The termination condition threshold is δ, the maximum iteration number is τ max , the regularization parameter is λ, and the feature selection parameter is γ.
Step 5: Use Equation (15) to calculate the eigenweight function s ijl .
Step 6: Calculate the membership function z ij using Equation (28).
If max{|z_ij(t+1) − z_ij(t)|} < δ is satisfied, the iteration stops; otherwise, the iteration returns to step 4.
Step 9: The image pixels are classified and segmented according to the principle of the maximum membership degree using the z ij values obtained when the algorithm's iterations have been completed.
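The overall iteration can be sketched as below. Since the improved update formulas are not fully recoverable here, standard FCM membership and center updates stand in for the intermediate steps; the initialization, convergence test, and max-membership classification follow the steps above:

```python
import numpy as np

def fcm_loop(x, C=2, m=2.0, delta=1e-4, tau_max=300):
    """Skeleton of the iterative clustering loop (FCM updates as stand-ins)."""
    N = len(x)
    # Step 1: initialize centers (evenly spaced samples) and uniform memberships
    v = x[np.linspace(0, N - 1, C, dtype=int)].astype(float)
    z = np.full((N, C), 1.0 / C)
    for _ in range(tau_max):
        d2 = (x[:, None] - v[None, :]) ** 2 + 1e-12        # squared distances
        z_new = (1.0 / d2) ** (1.0 / (m - 1))
        z_new /= z_new.sum(axis=1, keepdims=True)          # membership update
        v = (z_new.T ** m @ x) / (z_new ** m).sum(axis=0)  # center update
        if np.abs(z_new - z).max() < delta:                # termination test
            z = z_new
            break
        z = z_new
    return z.argmax(axis=1), v, z                          # max-membership labels
```

In the full algorithm, the membership update would additionally include the fuzzy factor G_ij, the feature weights s_ijl, and the neighborhood post-processing described above; this skeleton only shows the control flow.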

Experimental Results and Analysis
To verify the segmentation performance and anti-noise ability of the improved algorithm, high-resolution remote sensing images containing common ground objects (such as forest, farmland, bare land, and grassland), synthetic images, standard images, and high-resolution medical images were selected, as shown in Figure 1. The improved algorithm and the FCM_S, FLICM, kernel-weighted FLICM (KWFLICM), and local data and membership relative entropy-based FCM (LDMREFCM) algorithms were used to segment gray images with different noises [36,37]. The peak signal-to-noise ratio (PSNR) and the misclassification rate (MCR) were used to compare the segmentation performance and anti-noise performance of the algorithms [38,39].
Generally, the MCR is used to quantitatively evaluate the performance of segmentation algorithms; it is defined as the number of misclassified pixels divided by the total number of pixels.
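Both evaluation metrics are standard and can be computed as follows (assuming 8-bit images with a peak value of 255):

```python
import numpy as np

def mcr(seg, truth):
    """Misclassification rate: fraction of wrongly labeled pixels."""
    return np.mean(seg != truth)

def psnr(img, ref, peak=255.0):
    """Peak signal-to-noise ratio in dB between an image and a reference."""
    mse = np.mean((img.astype(float) - ref.astype(float)) ** 2)
    return float('inf') if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)
```

A lower MCR means the partition is closer to the ground truth; a higher PSNR means less residual noise in the segmented image.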

Segmentation Performance Test
Gaussian noise was added to two remote sensing images with a mean value of 0 and variances of 57 and 80, and to the image containing four artificial categories, the brain CT (computed tomography) image, and the camera image with a mean value of 0 and variances of 140 and 161. The numbers of clusters were set to 3, 4, 2, and 2. The results of the FLICM, FCM_S, LDMREFCM, and KWFLICM algorithms and the improved algorithm were compared. The original images are shown in Figure 1, and the experimental results are shown in Figures 2-5(b-f). The error rates and PSNRs of the segmentation results are shown in Tables 1 and 2, and the iteration times and numbers of iterations are shown in Table 3 [40,41]. The efficiency of the algorithms was compared using the running time after convergence and the number of iterations n. A Dell OptiPlex 360 (Intel Core 4, 8 GB of memory) running Windows 7 with the MATLAB 2013a (MathWorks, Natick, MA, USA) programming environment comprised the evaluation platform. The maximum number of iterations τ_max was set to 300. The cluster numbers C were chosen to be 2, 3, and 4 depending on the image. The regularization parameter and the feature selection parameter were set to λ = 10^3 and γ = 10^3, respectively. The iteration threshold was δ = 10^−4, and the neighborhood window size was set to 3 × 3.
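The Gaussian noise settings can be reproduced along these lines (a numpy sketch; reading the reported variance values as standard deviations of the added noise is an assumption):

```python
import numpy as np

def add_gaussian_noise(img, sigma, seed=0):
    """Add zero-mean Gaussian noise with standard deviation `sigma`,
    clipped back to the 8-bit range."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(float) + rng.normal(0.0, sigma, size=img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```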


Test Result
Comparing the segmentation results of the five algorithms in Figures 2-5 for the four images with different degrees of Gaussian noise interference, we can see that the segmentation results of the FCM_S, FLICM, and LDMREFCM algorithms still contained many noise points, the results of the KWFLICM algorithm contained fewer noise points, and the improved algorithm produced the fewest noise points. Table 1 shows that the improved algorithm had the highest peak signal-to-noise ratio of the five algorithms, which shows that it had the strongest resistance to Gaussian noise. Table 2 shows that the error rate of the improved algorithm was the smallest of all the algorithms, which shows that its segmentation results were closer to the ideal segmentation and that it had a better segmentation performance. Comparing the PSNR and iteration time of each algorithm in Table 3, the average PSNR of the improved algorithm was 0.7 dB higher than that of the KWFLICM algorithm, and its average iteration time was 500 s less [42,43]. The iteration times of the FCM_S and FLICM algorithms were the lowest, but their PSNRs were 2-5 dB below that of the improved algorithm; the anti-noise ability of the FLICM and FCM_S methods was poor. Combining the PSNR test results and the iteration times, the improved algorithm had the better anti-Gaussian-noise segmentation performance.

Segmentation Performance Test
In this experiment, 20% and 40% salt-and-pepper noise were added to two remote sensing images, respectively, while 40% and 30% salt-and-pepper noise were added to brain CT images and images containing four artificial categories, respectively. The experimental results are shown in Figures 6-9. The number of clusters was set to 3, 4, 2, and 2. The PSNRs and error rates are shown in Tables 4 and 5, respectively, and the iterative operation time and number of iterations are shown in Table 6.
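Salt-and-pepper corruption at a given density can be generated as follows (a common convention that splits the density evenly between salt and pepper, which is an assumption about the paper's setup):

```python
import numpy as np

def add_salt_pepper(img, density, seed=0):
    """Corrupt a fraction `density` of pixels: half to 0 (pepper), half to 255 (salt)."""
    rng = np.random.default_rng(seed)
    noisy = img.copy()
    r = rng.random(img.shape)
    noisy[r < density / 2] = 0            # pepper
    noisy[r > 1 - density / 2] = 255      # salt
    return noisy
```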


Test Result
Comparing the results of image segmentation with salt-and-pepper noise in Figures 6-9, we can see that the FCM_S and FLICM algorithms took neighborhood information into account and suppressed some of the salt-and-pepper noise, but under high noise interference their segmentation results contained a large amount of residual noise compared with the improved algorithm. As seen from the segmentation results of the artificial image in Figures 6-9, the LDMREFCM algorithm produced false segmentation, while the KWFLICM algorithm and the improved algorithm removed a large number of noise points. From the PSNR and error rate (ERR) test results in Tables 4 and 5, along with the iteration times in Table 6, it can be concluded that the LDMREFCM, KWFLICM, and improved algorithms had a significantly greater noise suppression ability than the FCM_S and FLICM algorithms. Table 6 shows that the iteration time of the improved algorithm was the lowest. Although the PSNR of the improved algorithm was 0.7 dB less than that of the KWFLICM algorithm [44,45], its iteration time was 300 s less; for the brain CT image segmentation test results in Table 6, the PSNR was likewise 0.7 dB less than that of the KWFLICM algorithm, but the iteration time was 45 s less. In summary, the proposed algorithm showed superior performance compared with the FCM_S, FLICM, KWFLICM, and LDMREFCM algorithms, suppressing a large amount of salt-and-pepper noise with a faster iteration speed.

Segmentation Performance Test
Multiplicative noise was added to the remote sensing image, the medical image, and the man-made image with a mean value of 0 and mean variances of 80, 114, 140, and 161. The number of clusters was set to 3, 4, 2, and 2. The experimental results are shown in Figures 10-13. The error rate of the segmentation results is shown in Tables 7 and 8. The iteration times and number of iterations of the algorithms are shown in Table 9 [46][47][48].
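Multiplicative (speckle) noise is commonly modeled as I' = I + I·n with zero-mean Gaussian n; the relative sigma below is illustrative only, as the paper's reported intensity values cannot be mapped to it directly:

```python
import numpy as np

def add_speckle(img, sigma, seed=0):
    """Multiplicative (speckle) noise: I' = I + I * n, n ~ N(0, sigma)."""
    rng = np.random.default_rng(seed)
    n = rng.normal(0.0, sigma, size=img.shape)
    noisy = img.astype(float) * (1.0 + n)
    return np.clip(noisy, 0, 255).astype(np.uint8)
```

Unlike additive Gaussian noise, the perturbation scales with the local intensity, which is why bright regions appear noisier in the corrupted images.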




Test Result
Comparing the results of image segmentation with multiplicative noise in Figures 10-13, we can see that the FCM_S and FLICM algorithms took neighborhood information into account and suppressed part of the multiplicative noise, and the KWFLICM and LDMREFCM algorithms removed a large number of noise points; compared with the other algorithms, the improved algorithm contained the fewest noise points, and the edges of its segmentation results were continuous and smooth [49]. As shown in Table 7, the PSNR of the improved algorithm was the largest, which proves that it was more robust against multiplicative noise. Comparing the error rates of the segmentation results of each algorithm in Table 8 shows that the results of the improved algorithm were closer to the ideal segmentation and had a better segmentation performance. Combined with the comparison of iteration times in Table 9, the segmentation performance and PSNR of the KWFLICM algorithm were lower than those of the improved algorithm, and the iteration time of the improved algorithm was much shorter. In conclusion, the improved algorithm not only guaranteed good robustness against noise but also reduced the iteration time and improved the operational efficiency of the algorithm.

Segmentation Performance Test
To test the segmentation efficiency of the algorithm, several real remote sensing images of different sizes were selected for segmentation. Table 10 compares the segmentation times on the real remote sensing images of different sizes (Figure 14a-g, with sizes of 256 × 256, 532 × 486, 350 × 290, 500 × 500, 590 × 490, 700 × 680, and 1024 × 768, respectively); the bold values are the optimal values. It can be seen that the segmentation efficiency of the first four comparison algorithms was lower on each real remote sensing image, and the larger the image, the longer the segmentation time; the improved algorithm achieved shorter segmentation times on real remote sensing images of all sizes, and its segmentation efficiency was much higher than that of the other algorithms.
The above analysis shows that the algorithm proposed in this paper has high efficiency, and it has certain practical significance and reference value for large-scale remote sensing image processing in practical applications.

Segmentation Test of Remote Sensing Images Disturbed Using Mixed Noise
Three remote sensing images, including farmland, a stadium, and a river (Figure 15), were segmented and tested by adding Gaussian noise (mean value was 0, mean square deviation was 25) and salt-and-pepper noise of different intensities (5%, 10%, and 30%). The number of clusters was set to 2, 3, and 2, and the segmentation results are shown in Figures 16-18.


Compared with the other algorithms, the improved algorithm was more suitable for the needs of image segmentation disturbed by mixed salt-and-pepper and Gaussian noise, as shown in Tables 11 and 12.