Kernel Probabilistic K-Means Clustering

Kernel fuzzy c-means (KFCM) is a significantly improved version of fuzzy c-means (FCM) for processing linearly inseparable datasets. However, for fuzzification parameter m = 1, the KFCM problem cannot be solved by Lagrangian optimization. To solve this problem, an equivalent model, called kernel probabilistic k-means (KPKM), is proposed here. The novel model relates KFCM to kernel k-means (KKM) in a unified mathematical framework. Moreover, the proposed KPKM can be addressed by the active gradient projection (AGP) method, a nonlinear programming technique for problems with linear equality and linear inequality constraints. To accelerate the AGP method, a fast AGP (FAGP) algorithm was designed. The proposed FAGP uses a maximum-step strategy to estimate the step length and an iterative method to update the projection matrix. Experiments demonstrated the effectiveness of the proposed method through a performance comparison of KPKM with KFCM, KKM, FCM, and k-means. Experiments showed that the proposed KPKM is able to find nonlinearly separable structures in synthetic datasets. Ten real UCI datasets were used in this study, and KPKM achieved better clustering performance on at least six of them. The proposed fast AGP requires less running time than the original AGP, reducing running time by 76-95% on real datasets.

Generally, k-means minimizes the sum of squared Euclidean distances between each sample point and its nearest clustering center [1]. One variant of k-means is kernel k-means (KKM) [30][31][32][33], which is able to find nonlinearly separable structures by using the kernel function method. Another variant of k-means is fuzzy c-means (FCM) [2], which determines partitions by computing the membership degree of each data point to each cluster; the higher the membership degree, the greater the possibility of the data point belonging to the cluster. Although FCM is more flexible in applications [11][12][13][14][15][16], it is primarily suitable for linearly separable datasets. Kernel fuzzy c-means (KFCM) [34] is a significantly improved version of fuzzy c-means for clustering linearly inseparable datasets. However, the problem of KFCM with fuzzification parameter m = 1 cannot be solved by existing methods.
To solve the special case of KFCM at m = 1, a novel model called kernel probabilistic k-means (KPKM) is proposed. In fact, KPKM is a nonlinear programming model with linear equality and linear inequality constraints, and it is equivalent to KFCM at m = 1. In theory, the proposed KPKM can be solved by the active gradient projection (AGP) method [35,36]. Since the AGP method may take too much time on large datasets, we further propose a fast AGP (FAGP) algorithm to solve KPKM more efficiently. Moreover, we report experiments demonstrating its effectiveness compared with KFCM, KKM, FCM, and KM.
The paper is organized as follows: Section 2 reviews previous work. The KPKM model is proposed in Section 3. Section 4 proposes a solution for KPKM. Section 5 presents descriptions and analyses of the experiments. Conclusions and future work are presented in Section 6.

Background and Related Work
There is a large body of work related to this paper, mainly involving k-means, fuzzy c-means, kernel k-means, and kernel fuzzy c-means.
K-means minimizes the sum of squared Euclidean distances between each sample point and its nearest cluster center. It first chooses initial cluster centers randomly or manually, then partitions the dataset into clusters (each data point belongs to the cluster whose center is nearest to it), and computes the mean of each cluster as its new center. K-means repeatedly updates centers and clusters until convergence; a minimal sketch is given below. FCM follows the same idea as k-means but introduces a membership degree $w_{ij}$ and a fuzzification parameter m into the objective function. The higher $w_{ij}$ is, the more likely the i-th data point belongs to the j-th cluster. K-means and FCM are partition-based clustering algorithms, and partition-based clustering algorithms usually cannot cluster linearly inseparable datasets. The kernel method maps a linearly inseparable dataset into a linearly separable space, so kernel k-means and kernel FCM, which use a kernel function, can cluster linearly inseparable datasets.
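For concreteness, here is a minimal NumPy sketch of this alternation; the function and parameter names (kmeans, n_iter, seed) are ours, not from the paper.

```python
import numpy as np

def kmeans(X, K, n_iter=100, seed=0):
    """Plain k-means: alternate nearest-center assignment and mean update."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=K, replace=False)]  # random initial centers
    for _ in range(n_iter):
        # assign each point to the cluster whose center is nearest
        d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = d2.argmin(axis=1)
        # recompute each center as the mean of its cluster (keep old center if empty)
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(K)])
        if np.allclose(new_centers, centers):  # converged
            break
        centers = new_centers
    return labels, centers
```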

K-Means and Fuzzy C-Means
Let $X = \{x_i \mid x_i \in \mathbb{R}^D,\ 1 \le i \le L\}$ represent a dataset. K-means divides it into $K$ clusters.
If $\omega_j$ denotes the j-th cluster, we have $X = \bigcup_{j=1}^{K} \omega_j$ and $\omega_i \cap \omega_j = \emptyset$ for all $1 \le i \ne j \le K$. Using $L_j = |\omega_j|$ to stand for the number of elements in $\omega_j$, whose center is $c_j$, k-means can be described as minimizing the objective function

$$J_{KM} = \sum_{j=1}^{K} \sum_{x_i \in \omega_j} \|x_i - c_j\|^2,$$

with $L = \sum_{j=1}^{K} L_j$. By using $C$ instead of $K$ and assigning a membership degree $w_{ij}$ to the i-th data point in the j-th cluster for $1 \le i \le L$ and $1 \le j \le C$, the fuzzy c-means clustering model can be formulated as

$$\min_{W,\, c}\ J_{FCM} = \sum_{i=1}^{L} \sum_{j=1}^{C} w_{ij}^m \|x_i - c_j\|^2 \quad \text{s.t.} \quad \sum_{j=1}^{C} w_{ij} = 1,$$

where $m > 1$, and $w_{ij}$ and $c_j$ are computed alternately [2] as

$$w_{ij} = \Bigg[\sum_{k=1}^{C} \bigg(\frac{\|x_i - c_j\|}{\|x_i - c_k\|}\bigg)^{\frac{2}{m-1}}\Bigg]^{-1}, \qquad c_j = \frac{\sum_{i=1}^{L} w_{ij}^m x_i}{\sum_{i=1}^{L} w_{ij}^m}.$$
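The following sketch implements one round of these alternating updates, assuming Euclidean distance; the small constant guarding against division by zero when a point coincides with a center is our choice, not the paper's.

```python
import numpy as np

def fcm_step(X, centers, m):
    """One alternating FCM update: memberships from centers, then centers from memberships."""
    # squared distances d2[i, j] = ||x_i - c_j||^2, with a tiny epsilon for stability
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2) + 1e-12
    # w_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1)); note the exponent fails at m = 1
    inv = d2 ** (-1.0 / (m - 1.0))
    W = inv / inv.sum(axis=1, keepdims=True)
    # c_j = sum_i w_ij^m x_i / sum_i w_ij^m
    Wm = W ** m
    centers = (Wm.T @ X) / Wm.sum(axis=0)[:, None]
    return W, centers
```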

Kernel K-Means and Kernel Fuzzy C-Means
To improve the performance of k-means and fuzzy c-means on linearly inseparable datasets, we may develop their kernel versions by choosing a feature mapping $\phi(\cdot): \mathbb{R}^D \to \mathcal{H}$ from data points to a kernel Hilbert space [37]. Though $\phi$ is usually unknown, it must satisfy

$$\langle \phi(x), \phi(y) \rangle = K(x, y),$$

where $K(x, y)$ is a kernel function. Commonly used kernel functions are presented in Table 1.

Table 1. Commonly used kernel functions.

Linear kernel:                          $K(x, y) = \langle x, y \rangle$
Gaussian Radial Basis Function kernel:  $K_{gau}(x, y) = \exp(-\|x - y\|^2 / (2\sigma^2))$
Laplace Radial Basis Function kernel:   $K_{lap}(x, y) = \exp(-\|x - y\| / \sigma)$
Polynomial kernel:                      $K_{pol}(x, y) = (x \cdot y + \beta)^{\alpha}$
Sigmoid kernel:                         $K_{tan}(x, y) = \tanh(\alpha x \cdot y + \beta)$

In Table 1, $\langle x, y \rangle = x \cdot y$ denotes the inner product of x and y, and $\sigma, \alpha, \beta$ are parameters of the kernels. The objective function of kernel k-means is defined as

$$J_{KKM} = \sum_{j=1}^{K} \sum_{x_i \in \omega_j} \|\phi(x_i) - c_j\|^2, \qquad c_j = \frac{1}{L_j} \sum_{x_i \in \omega_j} \phi(x_i).$$

Expanding the squared distance with the kernel function, we have

$$\|\phi(x_i) - c_j\|^2 = K(x_i, x_i) - \frac{2}{L_j} \sum_{x_l \in \omega_j} K(x_i, x_l) + \frac{1}{L_j^2} \sum_{x_l \in \omega_j} \sum_{x_m \in \omega_j} K(x_l, x_m),$$

so the centers never need to be computed explicitly. Let $w_{ij}$ represent the membership degree of the i-th point belonging to the j-th class. Likewise, we can define the kernel FCM clustering model as

$$\min_{W,\, c}\ J_{KFCM} = \sum_{i=1}^{L} \sum_{j=1}^{C} w_{ij}^m \|\phi(x_i) - c_j\|^2 \quad \text{s.t.} \quad \sum_{j=1}^{C} w_{ij} = 1,$$

where

$$c_j = \frac{\sum_{i=1}^{L} w_{ij}^m \phi(x_i)}{\sum_{i=1}^{L} w_{ij}^m}.$$

The membership degree is computed via

$$w_{ij} = \Bigg[\sum_{k=1}^{C} \bigg(\frac{\|\phi(x_i) - c_j\|^2}{\|\phi(x_i) - c_k\|^2}\bigg)^{\frac{1}{m-1}}\Bigg]^{-1}.$$

A sketch of these kernel computations follows.
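Below is a sketch of the Gaussian Gram matrix and the implicit-center distance computation of kernel k-means; the function names are ours.

```python
import numpy as np

def gaussian_kernel(X, sigma=1.0):
    """Gram matrix of the Gaussian RBF kernel K(x, y) = exp(-||x - y||^2 / (2 sigma^2))."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2 * sigma ** 2))

def kkm_distances(Kmat, labels, K):
    """||phi(x_i) - c_j||^2 for all i, j, using only kernel values (centers stay implicit)."""
    L = len(Kmat)
    D = np.empty((L, K))
    for j in range(K):
        idx = np.flatnonzero(labels == j)
        Lj = len(idx)
        D[:, j] = (np.diag(Kmat)
                   - 2.0 * Kmat[:, idx].sum(axis=1) / Lj
                   + Kmat[np.ix_(idx, idx)].sum() / Lj ** 2)
    return D
```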

Kernel Probabilistic K-Means
In this section, the kernel probabilistic k-means (KPKM) model is proposed. We first review the problem. When m = 1, the KFCM model reduces to the special case

$$\min_{W,\, c}\ J = \sum_{i=1}^{L} \sum_{j=1}^{K} w_{ij} \|\phi(x_i) - c_j\|^2 \quad \text{s.t.} \quad \sum_{j=1}^{K} w_{ij} = 1.$$

This special case cannot be solved by the Lagrangian optimization used for m > 1, because the membership update above involves the exponent $\frac{1}{m-1} = \frac{1}{0}$, which is undefined at m = 1. In this paper, we use constrained optimization methods to solve this problem; note, however, that the partial derivative of the objective with respect to $w_{ij}$ is $\|\phi(x_i) - c_j\|^2$, which does not contain $w_{ij}$, so setting it to zero cannot determine the memberships either.
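To make the failure at m = 1 concrete, the KFCM membership update and its one-sided limit (assuming a unique nearest center) can be written as follows; this is a standard observation, stated here for completeness:

```latex
w_{ij} = \left[\sum_{k=1}^{K}\left(\frac{\|\phi(x_i)-c_j\|^{2}}{\|\phi(x_i)-c_k\|^{2}}\right)^{\frac{1}{m-1}}\right]^{-1},
\qquad
\lim_{m\to 1^{+}} w_{ij} =
\begin{cases}
1, & j = \arg\min_{k}\,\|\phi(x_i)-c_k\|^{2},\\
0, & \text{otherwise},
\end{cases}
```

so the memberships collapse toward hard assignments as m approaches 1 from above, while the update itself is undefined exactly at m = 1.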
In order to solve the problem, we impose probability constraints on the memberships, redefining the membership degrees $w_{ij}$ as probabilities $p_{ij}$ for $1 \le i \le L$ and $1 \le j \le K$. Finally, we have

$$\min_{P}\ J(P) = \sum_{i=1}^{L} \sum_{j=1}^{K} p_{ij} \|\phi(x_i) - c_j\|^2, \qquad c_j = \frac{\sum_{i=1}^{L} p_{ij} \phi(x_i)}{\sum_{i=1}^{L} p_{ij}},$$

$$\text{s.t.} \quad \sum_{j=1}^{K} p_{ij} = 1, \quad p_{ij} \ge 0,$$

where the probability vector is $P = (p_{11}, \ldots, p_{1K}, \ldots, p_{L1}, \ldots, p_{LK})^T$. This constrained model is the proposed kernel probabilistic k-means. The objective can be evaluated purely in kernel terms, as sketched below.
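A minimal sketch of evaluating the KPKM objective from a Gram matrix, assuming P is stored as an L x K array whose rows sum to 1 (our layout choice, not the paper's):

```python
import numpy as np

def kpkm_F(Kmat, P):
    """F[i, j] = ||phi(x_i) - c_j||^2 with c_j the probability-weighted mean in feature space."""
    s = P.sum(axis=0)                            # s_j = sum_i p_ij
    cross = Kmat @ P                             # cross[i, j] = sum_l p_lj K(x_i, x_l)
    quad = np.einsum('lj,lm,mj->j', P, Kmat, P)  # sum_{l,m} p_lj p_mj K(x_l, x_m)
    return np.diag(Kmat)[:, None] - 2 * cross / s + quad / s ** 2

def kpkm_objective(Kmat, P):
    """J(P) = sum_{i,j} p_ij ||phi(x_i) - c_j||^2."""
    return float((P * kpkm_F(Kmat, P)).sum())
```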
The proposed KPKM is a soft clustering method. Here $p_{ij}$ ($0 \le p_{ij} \le 1$) is the probability of the i-th data point belonging to the j-th cluster; the higher $p_{ij}$ is, the more likely the i-th data point belongs to the j-th cluster. In KKM, the membership degree takes only two values (0 and 1). In the proposed KPKM, $p_{ij} \in [0, 1]$, although at a final solution $p_{ij}$ may still equal 0 or 1.

A Fast Solution to KPKM
KPKM is a nonlinear programming problem with linear equality and linear inequality constraints, so in theory it can be solved by the active gradient projection (AGP) method [35,36]. In this section, on the basis of AGP, we first calculate the gradient of the objective function of KPKM, and then design a fast AGP algorithm to solve the KPKM model.
The fast AGP has two advantages: iteratively updating the projection matrix and estimating the maximum step length.

Gradient Calculation
For convenience, we define $F_{kj}$ as the squared feature-space distance of the k-th point to the j-th center, expanded in kernel terms:

$$F_{kj} = \|\phi(x_k) - c_j\|^2 = K(x_k, x_k) - \frac{2}{\sum_{l} p_{lj}} \sum_{l=1}^{L} p_{lj} K(x_k, x_l) + \frac{1}{\big(\sum_{l} p_{lj}\big)^2} \sum_{l=1}^{L} \sum_{m=1}^{L} p_{lj} p_{mj} K(x_l, x_m).$$

Applying the chain rule to the KPKM objective, the derivative of $J$ with respect to $p_{kj}$ splits into $F_{kj}$ plus terms that pass through $c_j$. Because $c_j$ is the probability-weighted mean of the mapped points, $\sum_{k} p_{kj} (\phi(x_k) - c_j) = 0$, so the terms through $c_j$ cancel, and we finally get

$$\frac{\partial J}{\partial p_{kj}} = F_{kj},$$

and the gradient $\nabla J(P) = (F_{11}, \ldots, F_{1K}, \ldots, F_{L1}, \ldots, F_{LK})^T$, laid out to match $P$.
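Reusing kpkm_F and kpkm_objective from the earlier sketch, the gradient is simply F itself, and the cancellation argument can be checked numerically; the finite-difference test below is our sanity check, not the paper's.

```python
import numpy as np

def kpkm_gradient(Kmat, P):
    """dJ/dp_ij = F_ij: the terms through c_j cancel because c_j is the weighted mean."""
    return kpkm_F(Kmat, P)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(size=(12, 2))
    Kmat = np.exp(-((X[:, None] - X[None]) ** 2).sum(-1))  # Gaussian Gram matrix
    P = rng.dirichlet(np.ones(3), size=12)                 # rows sum to 1
    G = kpkm_gradient(Kmat, P)
    eps, (i, j) = 1e-6, (4, 1)
    Pp = P.copy(); Pp[i, j] += eps
    num = (kpkm_objective(Kmat, Pp) - kpkm_objective(Kmat, P)) / eps
    print(abs(num - G[i, j]))  # should be small (~1e-5 or below)
```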

Fast AGP
In the constraints of KPKM, there are L linear equalities and K×L linear inequalities:

$$\sum_{j=1}^{K} p_{ij} = 1 \ (1 \le i \le L), \qquad p_{ij} \ge 0 \ (1 \le i \le L,\ 1 \le j \le K).$$

Let $\varphi = K \times L$, and let $I_{\varphi \times \varphi}$ be the identity matrix of size $\varphi \times \varphi$. Define two matrices, the inequality matrix A and the equality matrix E, such that each row of A corresponds to one and only one inequality and each row of E corresponds to one and only one equality: with P laid out as above, $A = I_{\varphi \times \varphi}$, and E is the $L \times \varphi$ matrix whose i-th row has ones exactly on the positions of $p_{i1}, \ldots, p_{iK}$. Accordingly, the KPKM constraints can be simply expressed as

$$AP \ge 0, \qquad EP = \mathbf{1}.$$

Let $P^{(0)}$ stand for a randomly initialized probability vector, and $P^{(k)}$ for the probability vector at iteration k. The rows of the inequality matrix A can be broken into two groups: one active, the other inactive. The active group $A_1^{(k)}$ is composed of all inequalities that hold with equality at $P^{(k)}$, whereas the inactive group $A_2^{(k)}$ is composed of the remaining inequalities. At iteration k, the active matrix $N^{(k)}$ is defined as

$$N^{(k)} = \begin{pmatrix} A_1^{(k)} \\ E \end{pmatrix}.$$

When $N = N^{(k)}$ is not a square matrix, we can construct its projection matrix $G^{(k)}$ and the corresponding orthogonal projection matrix $Q^{(k)}$ as follows:

$$G^{(k)} = N^T (N N^T)^{-1} N, \qquad Q^{(k)} = I - G^{(k)}.$$

Suppose that n from $A_2^{(k-1)}$ becomes the newly active row vector at iteration k, satisfying $n^T P^{(k-1)} > 0$ and $n^T P^{(k)} = 0$. According to matrix theory [38], we can more efficiently compute $G^{(k)}$ and $Q^{(k)}$ by the rank-one updates

$$G^{(k)} = G^{(k-1)} + \frac{(Q^{(k-1)} n)(Q^{(k-1)} n)^T}{n^T Q^{(k-1)} n}, \qquad Q^{(k)} = Q^{(k-1)} - \frac{(Q^{(k-1)} n)(Q^{(k-1)} n)^T}{n^T Q^{(k-1)} n}.$$

Furthermore, we can compute the projected gradient by

$$d^{(k)} = -Q^{(k)} \nabla J(P^{(k)}).$$

Using $d^{(k)}$, we update the probability vector as

$$P^{(k+1)} = P^{(k)} + t^{(k)} d^{(k)},$$

where $t^{(k)}$ is the step length. Usually, $t^{(k)}$ is chosen as a small number. For fast convergence, we instead estimate the maximum feasible step length, as described below; a sketch of these matrix constructions follows.
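A sketch of these constructions under our point-major layout of P (K entries per point); building E, the projection matrices, and the rank-one update:

```python
import numpy as np

def equality_matrix(L, K):
    """E: L x (K*L); row i has ones on p_i1..p_iK, so E @ P == 1 for every point."""
    E = np.zeros((L, K * L))
    for i in range(L):
        E[i, i * K:(i + 1) * K] = 1.0
    return E

def projection_matrices(N):
    """G projects onto the row space of the active matrix N; Q = I - G onto its null space."""
    G = N.T @ np.linalg.inv(N @ N.T) @ N
    return G, np.eye(N.shape[1]) - G

def add_active_row(G, Q, n):
    """Rank-one update when a new row n becomes active, avoiding a fresh matrix inverse."""
    q = Q @ n
    denom = float(n @ q)  # > 0 when n is independent of the current active rows
    return G + np.outer(q, q) / denom, Q - np.outer(q, q) / denom
```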
Since every inequality has the form $p_{ij} \ge 0$, the largest step that keeps $P^{(k)} + t\, d^{(k)}$ feasible is

$$t^{(k)} = \min \left\{ \frac{p_{ij}^{(k)}}{-d_{ij}^{(k)}} \;\middle|\; d_{ij}^{(k)} < 0 \right\},$$

taken over the coordinates that decrease along $d^{(k)}$. Figure 1 compares this maximum step length with a small fixed step length. When $N = N^{(k)}$ is a square matrix, it is invertible, and $G^{(k)} = N^T (N N^T)^{-1} N = I$, so $Q^{(k)} = 0$ and the projected gradient vanishes. In this case, a new descent direction is computed as follows: (1) compute the multiplier vector

$$q^{(k)} = (N N^T)^{-1} N \, \nabla J(P^{(k)});$$

(2) break $q^{(k)}$ into two parts $q_1^{(k)}$ and $q_2^{(k)}$, where the size of $q_1^{(k)}$ equals the number of rows of $A_1^{(k)}$ and the size of $q_2^{(k)}$ equals the number of rows of E. If every component of $q_1^{(k)}$ is nonnegative, $P^{(k)}$ satisfies the KKT conditions and the algorithm stops; otherwise, the row of $A_1^{(k)}$ with the most negative component is removed from the active set, and the projection matrices and $d^{(k)}$ are recomputed as above. The above fast AGP solution to KPKM is outlined in Algorithm 1. Compared with the original AGP, the fast AGP has two advantages: iteratively updating the projection matrix (via the rank-one updates above) and estimating the maximum step length (sketched below).
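The two ingredients specific to fast AGP, sketched under the same conventions (A is the identity, so only coordinates of P moving toward zero bound the step); the multiplier test follows standard active-set gradient projection, under our reading of the procedure above:

```python
import numpy as np

def max_step(P, d, eps=1e-12):
    """Largest t with P + t*d >= 0: only coordinates decreasing along d can block the move."""
    shrink = d < -eps
    if not np.any(shrink):
        return np.inf  # no inequality constrains the move
    return float(np.min(P[shrink] / -d[shrink]))

def multiplier_test(N, grad, n_active_ineq):
    """When the projected gradient vanishes: return None at a KKT point, otherwise the
    index of the active inequality row (most negative multiplier) to release."""
    q = np.linalg.solve(N @ N.T, N @ grad)  # multiplier estimates (N N^T)^{-1} N grad
    q1 = q[:n_active_ineq]                  # components belonging to active inequalities
    return None if np.all(q1 >= 0) else int(np.argmin(q1))
```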

Analysis of Complexity
In this section, the computational complexities of the traditional AGP algorithm and the proposed FAGP algorithm are analyzed. Let $T_A$ and $T_F$ represent the iteration numbers of AGP and FAGP, respectively. Among all matrix computations, the projection matrix G is the most expensive. In the AGP algorithm, computing G from scratch requires $O(K^3 L^3)$, so the total computational complexity of AGP is $O(T_A K^3 L^3)$. In the FAGP algorithm, G is maintained by the rank-one update, which avoids computing the matrix inverse $(N N^T)^{-1}$ and requires $O(K^3 L^2)$, so the total computational complexity of FAGP (i.e., Algorithm 1) is $O(T_F K^3 L^2)$.

Experimental Results
In order to evaluate the performance of the proposed KPKM model (equivalent to KFCM at m = 1) solved by the FAGP algorithm, we conducted extensive experiments on one synthetic dataset, ten UCI datasets (http://archive.ics.uci.edu/ml (accessed on 8 March 2021)), and the MNIST dataset (http://yann.lecun.com/exdb/mnist/ (accessed on 8 March 2021)). These datasets are detailed in Sections 5.1-5.3. In Section 5.1, we use a synthetic dataset to compare KPKM using a Gaussian kernel with KPKM using a linear kernel. In Sections 5.2 and 5.3, we compare the proposed KPKM with KFCM, KKM, FCM, and KM to evaluate the performance of KPKM solved by the proposed fast AGP. Sections 5.4 and 5.5 evaluate the descent stability and convergence speed of the proposed FAGP, respectively. In the experiments, we implemented our own MATLAB code for KPKM, KFCM, and KKM, with the two built-in functions sparse and full employed for matrix optimization. We called MATLAB's built-in functions fcm and kmeans for FCM and KM, respectively.
All experiments were carried out on a PC with an Intel(R) Core(TM) i7-4790 CPU at 3.60 GHz and 8.00 GB RAM, running Windows 7 and MATLAB 2015a.

The Experiment on the Synthetic Dataset
In this experiment, we analyzed the influence of the Gaussian radial basis function kernel and the linear kernel on KPKM when clustering the synthetic dataset shown in Figure 2. The dataset contains 300 points: 100 from a disc and 200 from a surrounding ring, two linearly inseparable classes. The result is displayed in Figure 3: KPKM with the Gaussian radial basis function kernel clusters the data perfectly, whereas KPKM with the linear kernel cannot. A hypothetical generator for data of this shape is sketched below.
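The radii and noise scales below are our guesses for illustration, not the paper's actual parameters:

```python
import numpy as np

def disc_and_ring(n_disc=100, n_ring=200, seed=0):
    """Hypothetical disc-plus-ring dataset of the same size as Figure 2's."""
    rng = np.random.default_rng(seed)
    disc = rng.normal(scale=0.3, size=(n_disc, 2))        # inner disc
    theta = rng.uniform(0, 2 * np.pi, n_ring)
    r = rng.normal(loc=2.0, scale=0.1, size=n_ring)       # surrounding ring
    ring = np.column_stack([r * np.cos(theta), r * np.sin(theta)])
    X = np.vstack([disc, ring])
    y = np.r_[np.zeros(n_disc, int), np.ones(n_ring, int)]
    return X, y
```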

Experiment on Ten UCI Datasets
In this experiment, we compare the clustering performance of KPKM with that of KFCM, KKM, FCM, and KM in terms of three measures: normalized mutual information (NMI), adjusted Rand index (ARI), and v-measure (VM) [39,40]. NMI normalizes the mutual information score to lie between 0 (no mutual information) and 1 (perfect correlation):

$$NMI(U, V) = \frac{MI(U, V)}{\sqrt{H(U)\, H(V)}},$$

where $H(U)$ is the entropy of the partition U and the mutual information is

$$MI(U, V) = \sum_{i} \sum_{j} \frac{|U_i \cap V_j|}{L} \log \frac{L \, |U_i \cap V_j|}{|U_i| \, |V_j|},$$

with $|U_i|$ the number of samples in cluster $U_i$. ARI is an adjusted similarity measure between two partitions:

$$ARI = \frac{RI - E(RI)}{\max(RI) - E(RI)},$$

where RI is the ratio of data points clustered correctly to all data points and E(RI) is the expectation of RI. VM is the harmonic mean of homogeneity (each cluster is a subset of a single class) and completeness (each class is a subset of a single cluster):

$$VM = \frac{2 \cdot h \cdot c}{h + c}.$$

The higher the NMI, ARI, and VM, the better the performance; a sketch of computing the three measures follows below. Table 2 describes the ten UCI datasets by name, code, number of instances, number of classes, and number of dimensions. A set of empirically chosen parameters was selected: m = 2 for KFCM and m = 1.3 for FCM. With the Gaussian radial basis function kernel, we also selected an appropriate σ for each dataset (shown in Table 3), and we report the clustering results in Table 4, with the better results in each case highlighted in bold. (The performance of many methods depends on their parameters [41]. Our experiments showed that, even with the same kernel function, the most appropriate parameters usually differ across algorithms, so we tuned each method separately, guided by experience, to give each its best clustering performance.) We ran every algorithm 10 times and report average results.
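One convenient way to compute the three measures is scikit-learn; note that its NMI normalization is selectable, and "geometric" matches the square-root form above:

```python
from sklearn.metrics import (normalized_mutual_info_score,
                             adjusted_rand_score, v_measure_score)

def evaluate(labels_true, labels_pred):
    """NMI (geometric normalization), ARI, and VM for one clustering result."""
    return {
        "NMI": normalized_mutual_info_score(labels_true, labels_pred,
                                            average_method="geometric"),
        "ARI": adjusted_rand_score(labels_true, labels_pred),
        "VM":  v_measure_score(labels_true, labels_pred),
    }
```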

Experiment on the MNIST Dataset
Moreover, we defined a CWSS kernel based on complex wavelet structural similarity (CWSS) [42], a similarity coefficient that reflects the structural content of an image. Using CWSS(x, y) to denote the similarity between two images x and y, we built the CWSS kernel from this similarity with a scale parameter σ, setting σ = 5 for KFCM and σ = 1 for both KPKM and KKM; additionally, m = 1.3 was set for KFCM. We present the results in Table 5, with the examples of Figure 4 correspondingly displayed in Figure 5. From Table 5, we can see that KPKM outperformed KFCM and KKM in terms of NMI, ARI, and VM. From Figure 5, we can observe that both KPKM and KKM found ten clusters, whereas KFCM found only seven, although the number of clusters was set to ten. A sketch of building a Gram matrix from such a similarity appears below.
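Since the exact functional form of the CWSS kernel is not reproduced here, the sketch below shows one plausible way to turn a CWSS-style similarity into a σ-parameterized Gram matrix; cwss is an assumed external function, and exp(-(1 - s)/σ) is our illustrative choice, not the paper's definition.

```python
import numpy as np

def cwss_kernel_matrix(images, cwss, sigma=1.0):
    """Gram matrix from a pairwise similarity; `cwss(a, b)` (e.g., a CW-SSIM
    implementation returning a value in [0, 1], with cwss(a, a) == 1) is assumed given."""
    L = len(images)
    Kmat = np.empty((L, L))
    for i in range(L):
        for j in range(i, L):
            s = cwss(images[i], images[j])
            Kmat[i, j] = Kmat[j, i] = np.exp(-(1.0 - s) / sigma)  # one plausible form
    return Kmat
```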

Experiment for Descent Stability
In this experiment, we used the 10 UCI datasets to evaluate the descent stability of FAGP. AGP selects a small step length to converge; if it selects a large step length, the objective function value may descend with oscillation or even fail to converge. FAGP estimates a maximum step length at each iteration to speed up convergence. Does this have any serious influence on convergence? We ran the proposed FAGP on the 10 UCI datasets to check. As shown in Figure 6 (x-axis: iteration number; y-axis: objective function value), FAGP descends stably, without oscillation, at each iteration.

Convergence Speed Comparison between FAGP and AGP
In this experiment, we compared the convergence speeds of FAGP and AGP by their running times on the 10 UCI datasets, with η denoting the ratio of FAGP's running time to AGP's. FAGP and AGP used the same initialization in each case. The results are presented in Tables 6 and 7 ("-" means the running time was too long to obtain the final clustering results); the proposed FAGP ran faster than AGP on all six datasets for which AGP completed. For the Iris and Seeds datasets, FAGP used less than 10% of the running time of AGP. FAGP also required fewer iterations than AGP. For large datasets, the running time of AGP was prohibitive, but the proposed fast AGP could still obtain the final clustering results.

Conclusions
In this paper, a novel clustering model, KPKM, was proposed. KPKM solves the problem of KFCM at m = 1, which cannot be solved by existing methods. The traditional AGP method can solve the proposed KPKM, but its efficiency is low, so a fast AGP was proposed to speed it up. The proposed fast AGP uses a maximum-step-length strategy to reduce the iteration number and an iterative method to update the projection matrix. The experimental results demonstrated that the fast AGP is able to solve KPKM while requiring less running time than AGP (4.22-27.90% of AGP's running time on real UCI datasets). The convergence of the proposed method was also analyzed experimentally. Additionally, the KPKM model produced better overall clustering results than the other models, including KFCM, KKM, FCM, and KM, obtaining the best clustering results on at least 6 of the 10 real UCI datasets.
As future work, the proposed KPKM will be evaluated with other kernels in a variety of applications. For large datasets, the proposed method still has some disadvantages, so our next project will include further speeding up AGP on large datasets.

Abbreviations
The following abbreviations and symbols are used in this manuscript:

X — dataset
x_i — i-th data point of X
ϕ(·) — feature mapping from data points to kernel Hilbert space
K_lap(·,·) — Laplace radial basis function kernel
K_gau(·,·) — Gaussian radial basis function kernel
K_pol(·,·) — polynomial kernel
K_tan(·,·) — sigmoid kernel
ω_j — j-th cluster
L — number of elements in X
L_j — number of elements in ω_j
c_j — center of ω_j
K, C — number of clusters
W — membership degree matrix
w_ij — membership degree of the i-th data point in the j-th cluster
A — inequality matrix
E — equality matrix
P — probability vector (with entries p_ij)
I — identity matrix
p_ij — probability of the i-th data point belonging to the j-th cluster
G — projection matrix
Q — orthogonal projection matrix
N — active matrix
n — row vector of N
d — projected gradient
t — step length