Article

A Clustering Algorithm based on Feature Weighting Fuzzy Compactness and Separation

Yuan Zhou, Hong-fu Zuo and Jiao Feng
1 College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, 29 Yudao Street, Nanjing 210016, China
2 College of Electronic and Information Engineering, Nanjing University of Information Science and Technology, 219 Ningliu Road, Nanjing 210044, China
* Author to whom correspondence should be addressed.
Algorithms 2015, 8(2), 128-143; https://doi.org/10.3390/a8020128
Submission received: 22 January 2015 / Revised: 7 April 2015 / Accepted: 9 April 2015 / Published: 13 April 2015

Abstract

Aiming at improving the well-known fuzzy compactness and separation (FCS) algorithm, this paper proposes a new clustering algorithm based on feature weighting fuzzy compactness and separation (WFCS). In view of the contribution of features to clustering, the proposed algorithm introduces feature weighting into the objective function. We first formulate the membership and feature weighting, analyze the membership of data points falling on the crisp boundary, and then give the adjustment strategy. The proposed WFCS is validated on both simulated and real datasets. The experimental results demonstrate that the proposed WFCS has the characteristics of both hard clustering and fuzzy clustering, and outperforms many existing clustering algorithms with respect to three metrics: the Rand Index, the Xie-Beni Index and the Within-Between (WB) Index.

1. Introduction

Clustering partitions data so that similar data points belong to the same cluster while dissimilar data points belong to different clusters [1,2,3]. The fuzzy C-means (FCM) algorithm is a classical pattern recognition method [4], and many FCM-type clustering algorithms have been proposed [5,6]. However, these clustering techniques ignore the between-cluster separation because they partition data points only by minimizing the distances between data points and cluster centers (i.e., the within-cluster compactness). Therefore, Wu et al. proposed the fuzzy compactness and separation (FCS) algorithm [7], which assigns a crisp boundary to each cluster so that hard memberships and fuzzy memberships can co-exist in the clustering results.
For high-dimensional dataset clustering, the features of the data are assigned weights that indicate their importance. A major problem of un-weighted clustering algorithms lies in treating all features equally in the clustering process. Therefore, many contributions attempt to weight features with various methods and to optimize the FCM-type algorithms [8,9,10,11,12,13]. Frigui and Nasraoui [8] proposed the simultaneous clustering and attribute discrimination algorithm, in which clustering and feature weighting are performed simultaneously in an unsupervised manner; Wang et al. [9] showed that the weight assignment can be learned with the gradient descent technique; Jing et al. proposed EWkmeans [10], which minimizes the within-cluster compactness and maximizes the negative weight entropy to stimulate more features to contribute to the identification of a cluster; Wang et al. [11] presented a new fuzzy C-means algorithm with variable weighting (WFCM) for high-dimensional data analysis; Wang et al. [12] put forward a feature weighting fuzzy clustering algorithm integrating rough sets and shadowed sets (WSRFCM); Deng et al. [13] introduced the between-cluster separation into EWkmeans and proposed the enhanced soft subspace clustering (ESSC) algorithm. WFCM and WSRFCM employ only the within-cluster compactness while updating the membership matrix and feature weights. ESSC uses a parameter η to balance the within-cluster compactness and between-cluster separation. However, negative values may be produced in the membership matrix if the balancing parameter is too large. Therefore, to avoid negative membership values, η could be set to zero, in which case ESSC degrades to EWkmeans.
In the real world, some data points belong to a cluster strictly (i.e., hard clustering) and others belong to a cluster ambiguously (i.e., fuzzy clustering). To maximize the between-cluster separation and minimize the within-cluster compactness, we propose a new feature weighting fuzzy compactness and separation (WFCS) algorithm that fuses hard clustering and fuzzy clustering. The rest of this paper is organized as follows. Section 2 introduces both the FCS and the WFCS algorithms, addresses the flaw of FCS and discusses the adjustment of the membership and feature weighting of WFCS. The proposed algorithm is evaluated in Section 3. Finally, the paper is concluded and future work is discussed in Section 4.
Table 1 illustrates the main symbols that appear in the following formulas.

2. The FCS and WFCS Algorithms

In this section, the FCS algorithm is reviewed and data points on the crisp boundary are discussed. Then we present the WFCS algorithm, derive the formulas for the membership and feature weights, and give the adjustment strategy for these formulas.
$X = \{x_1, x_2, \ldots, x_n\}$ is a dataset in an s-dimensional Euclidean space $\mathbb{R}^s$, and X̄ denotes the grand mean of X.
Table 1. Symbols list.

Symbol   Description
n        the number of data points
c        the number of clusters
s        the number of features
x_j      the j-th data point, x_j ∈ ℝ^s
a_i      the i-th cluster center, a_i ∈ ℝ^s
μ_ij     the membership of the j-th data point belonging to the i-th cluster
m        the fuzzy exponent
ω_k      the k-th feature weight
α        the feature weighting exponent
η        the parameter to control the influence of the between-cluster separation

2.1. FCS Algorithm [7]

The fuzzy within-cluster compactness S_FW and the fuzzy between-cluster separation S_FB are defined as:
$$S_{FW} = \sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij}^{m} \|x_j - a_i\|^2 \qquad (1)$$
$$S_{FB} = \sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij}^{m} \|a_i - \bar{X}\|^2 \qquad (2)$$
The objective function is formulated as:
$$J_{FCS} = S_{FW} - \eta S_{FB} = \sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij}^{m} \|x_j - a_i\|^2 - \sum_{i=1}^{c} \eta_i \sum_{j=1}^{n} \mu_{ij}^{m} \|a_i - \bar{X}\|^2, \quad \text{s.t.}\ \mu_{ij} \in [0,1],\ \sum_{i=1}^{c} \mu_{ij} = 1 \qquad (3)$$
where η = {η_1, ..., η_c}.
In Equation (3), η_i ||a_i − X̄||² represents the crisp kernel size of the i-th cluster (a 2-dimensional diagram is shown in Figure 1). The parameter η_i guarantees that no two crisp kernels overlap [7] and is given by:
$$\eta_i = \frac{\beta}{4} \cdot \frac{\min_{i' \neq i} \|a_{i'} - a_i\|^2}{\max_{t} \|a_t - \bar{X}\|^2} \qquad (4)$$
where 0 ≤ β ≤ 1 and t = 1, ..., c.
By minimizing J_FCS, we have:
$$a_i = \frac{\sum_{j=1}^{n} \mu_{ij}^{m} x_j - \eta_i \sum_{j=1}^{n} \mu_{ij}^{m} \bar{X}}{\sum_{j=1}^{n} \mu_{ij}^{m} - \eta_i \sum_{j=1}^{n} \mu_{ij}^{m}} \qquad (5)$$
$$\mu_{ij} = \frac{\left(\|x_j - a_i\|^2 - \eta_i \|a_i - \bar{X}\|^2\right)^{-\frac{1}{m-1}}}{\sum_{t=1}^{c}\left(\|x_j - a_t\|^2 - \eta_t \|a_t - \bar{X}\|^2\right)^{-\frac{1}{m-1}}} \qquad (6)$$
According to Equations (5) and (6), dataset X can be partitioned into c clusters by iteratively updating the cluster centers and membership values.
A data point in the i-th crisp kernel belongs to the i-th cluster strictly, which is called hard clustering. However, if a data point falls on the crisp boundary (see Figure 1), the term ||x_j − a_i||² − η_i ||a_i − X̄||² is zero and, according to Equation (6), the membership value μ_ij becomes infinite. Hence the FCS algorithm fails in this case.
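For concreteness, the FCS update rules in Equations (4)–(6) can be sketched as follows. This is an illustrative NumPy sketch under our own array-shape conventions, not the authors' reference implementation, and it only covers the fuzzy (outside-kernel) case:

```python
import numpy as np

def fcs_eta(A, X_bar, beta):
    """Crisp kernel parameter eta_i of Equation (4)."""
    d2 = np.sum((A[:, None, :] - A[None, :, :]) ** 2, axis=2)   # pairwise ||a_i - a_i'||^2
    np.fill_diagonal(d2, np.inf)                                # exclude i' = i from the min
    denom = np.max(np.sum((A - X_bar) ** 2, axis=1))            # max_t ||a_t - X_bar||^2
    return beta / 4.0 * d2.min(axis=1) / denom

def fcs_update(X, U, eta, m):
    """One sweep of Equations (5) and (6): new centers, then new memberships."""
    X_bar = X.mean(axis=0)
    Um = U ** m                                                 # (c, n)
    Um_sum = Um.sum(axis=1, keepdims=True)                      # (c, 1)
    A = (Um @ X - eta[:, None] * Um_sum * X_bar) / ((1.0 - eta)[:, None] * Um_sum)  # Eq. (5)
    D = np.sum((X[None, :, :] - A[:, None, :]) ** 2, axis=2) \
        - (eta * np.sum((A - X_bar) ** 2, axis=1))[:, None]     # ||x_j-a_i||^2 - eta_i ||a_i-X_bar||^2
    # D <= 0 marks points inside or on a crisp kernel; FCS hard-assigns those,
    # and a point exactly on the boundary (D = 0) is where Equation (6) breaks down.
    U_new = D ** (-1.0 / (m - 1.0))
    return A, U_new / U_new.sum(axis=0, keepdims=True)          # Eq. (6)
```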
Figure 1. Illustration of the crisp kernel.

2.2. WFCS Algorithm

2.2.1. The Principle of WFCS

Aiming at clustering data more reasonably, we introduce feature weights into FCS. Firstly, we define the feature weighting fuzzy within-cluster matrix S_WFW and between-cluster matrix S_WFB as follows:
$$S_{WFW} = \sum_{i=1}^{c}\sum_{j=1}^{n}\sum_{k=1}^{s} \mu_{ij}^{m} \omega_k^{\alpha} \|x_{jk} - a_{ik}\|^2 \qquad (7)$$
$$S_{WFB} = \sum_{i=1}^{c}\sum_{j=1}^{n}\sum_{k=1}^{s} \eta_i \mu_{ij}^{m} \omega_k^{\alpha} \|a_{ik} - \bar{X}_k\|^2 \qquad (8)$$
We extend the formula of η i as:
$$\eta_i = \frac{\beta}{4} \cdot \frac{\min_{i' \neq i} \sum_{k=1}^{s} \omega_k^{\alpha} \|a_{i'k} - a_{ik}\|^2}{\max_{t} \sum_{k=1}^{s} \omega_k^{\alpha} \|a_{tk} - \bar{X}_k\|^2} \qquad (9)$$
Based on Equations (7) and (8), the objective function is:
$$J_{WFCS} = \sum_{i=1}^{c}\sum_{j=1}^{n}\sum_{k=1}^{s} \mu_{ij}^{m} \omega_k^{\alpha} \|x_{jk} - a_{ik}\|^2 - \sum_{i=1}^{c}\sum_{j=1}^{n}\sum_{k=1}^{s} \eta_i \mu_{ij}^{m} \omega_k^{\alpha} \|a_{ik} - \bar{X}_k\|^2 \qquad (10)$$
Hence, WFCS can be formulated as an optimization problem which can be expressed as:
$$\begin{cases} \min J_{WFCS} \\ \text{s.t.}\ \sum_{i=1}^{c} \mu_{ij} = 1,\ \sum_{k=1}^{s} \omega_k = 1 \end{cases} \qquad (11)$$
Equation (11) can be solved via Lagrange multipliers. The Lagrangian function is given by:
$$L = \sum_{i=1}^{c}\sum_{j=1}^{n}\sum_{k=1}^{s} \mu_{ij}^{m} \omega_k^{\alpha} \|x_{jk} - a_{ik}\|^2 - \sum_{i=1}^{c}\sum_{j=1}^{n}\sum_{k=1}^{s} \eta_i \mu_{ij}^{m} \omega_k^{\alpha} \|a_{ik} - \bar{X}_k\|^2 + \sum_{j=1}^{n} \lambda_j \left(\sum_{i=1}^{c} \mu_{ij} - 1\right) - \gamma \left(\sum_{k=1}^{s} \omega_k - 1\right) \qquad (12)$$
Setting the partial derivatives of L with respect to μ_ij, ω_k, λ_j and γ to zero, we have:
$$a_{ik} = \frac{\sum_{j=1}^{n} \mu_{ij}^{m} (x_{jk} - \eta_i \bar{X}_k)}{\sum_{j=1}^{n} \mu_{ij}^{m} (1 - \eta_i)} \qquad (13)$$
$$\omega_k = \frac{\left(\sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij}^{m} \left(\|x_{jk} - a_{ik}\|^2 - \eta_i \|a_{ik} - \bar{X}_k\|^2\right)\right)^{\frac{1}{1-\alpha}}}{\sum_{t=1}^{s}\left(\sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij}^{m} \left(\|x_{jt} - a_{it}\|^2 - \eta_i \|a_{it} - \bar{X}_t\|^2\right)\right)^{\frac{1}{1-\alpha}}} \qquad (14)$$
$$\mu_{ij} = \frac{\left(\sum_{k=1}^{s} \omega_k^{\alpha} \left(\|x_{jk} - a_{ik}\|^2 - \eta_i \|a_{ik} - \bar{X}_k\|^2\right)\right)^{\frac{1}{1-m}}}{\sum_{t=1}^{c}\left(\sum_{k=1}^{s} \omega_k^{\alpha} \left(\|x_{jk} - a_{tk}\|^2 - \eta_t \|a_{tk} - \bar{X}_k\|^2\right)\right)^{\frac{1}{1-m}}} \qquad (15)$$
According to Equations (13)–(15), dataset X can be partitioned into c clusters by iteratively updating a , ω and μ .
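A compact sketch of the update rules in Equations (13)–(15) is given below. It is our own illustrative code under the paper's notation; note that the raw Δ quantities it uses may be non-positive, which is exactly the situation that the adjustment strategies of Section 2.2.2 handle:

```python
import numpy as np

def wfcs_updates(X, U, eta, m, alpha):
    """One pass of Equations (13)-(15). X: (n, s) data, U: (c, n) memberships, eta: (c,)."""
    X_bar = X.mean(axis=0)
    Um = U ** m                                                    # (c, n)
    Um_sum = Um.sum(axis=1, keepdims=True)                         # (c, 1)

    # Equation (13): per-feature cluster centers a_ik
    A = (Um @ X - eta[:, None] * Um_sum * X_bar) / ((1.0 - eta)[:, None] * Um_sum)

    # Per-feature deviations ||x_jk - a_ik||^2 - eta_i ||a_ik - X_bar_k||^2, shape (c, n, s)
    Dev = (X[None, :, :] - A[:, None, :]) ** 2 - eta[:, None, None] * ((A - X_bar) ** 2)[:, None, :]

    # Equation (14): feature weights from Delta_k = sum_i sum_j mu_ij^m Dev_ijk
    Delta_k = np.einsum('cn,cns->s', Um, Dev)
    W = Delta_k ** (1.0 / (1.0 - alpha))
    W = W / W.sum()

    # Equation (15): memberships from Delta_ij = sum_k w_k^alpha Dev_ijk
    Delta_ij = Dev @ (W ** alpha)                                  # (c, n)
    U_new = Delta_ij ** (1.0 / (1.0 - m))
    U_new = U_new / U_new.sum(axis=0, keepdims=True)
    return A, W, U_new
```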
We note here that the objective functions of both WFCS and ESSC include the within-cluster compactness and the between-cluster separation. However, in ESSC the parameter η is assigned a value at the beginning of the iteration procedure and remains fixed; furthermore, if η = 0, ESSC degrades to the entropy weighting clustering algorithm without the between-cluster information. In the proposed WFCS, by contrast, η is calculated automatically from the between-cluster information and will not be zero as long as the parameter β ≠ 0.

2.2.2. The Adjustment Strategies

(1) Adjustment of ω k
Let
$$\Delta_k = \sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij}^{m} \left(\|x_{jk} - a_{ik}\|^2 - \eta_i \|a_{ik} - \bar{X}_k\|^2\right) \qquad (16)$$
then Equation (14) can be written as:
$$\omega_k = \frac{\Delta_k^{\frac{1}{1-\alpha}}}{\sum_{t=1}^{s} \Delta_t^{\frac{1}{1-\alpha}}} \qquad (17)$$
If the value of Δ_k is zero, the k-th feature has exactly the same effect on all clusters, so ω_k should be zero.
Here, Δ_k is the grand fuzzy distance between the data points and the crisp kernels on the k-th feature. Hence, Δ_k is non-negative when the distribution of data points is balanced, and so is ω_k. On the contrary, Δ_k may be negative when the distribution of data points is imbalanced, and ω_k could then be negative. Consequently, some adjustment has to be made. Let Δ_p ∈ {Δ_k | Δ_k ≤ 0, k = 1, ..., s}. The projection function may be expressed as:
$$\Delta_p' = P(\Delta_p) = \Delta_p - \min_{t}(\Delta_t) + \min_{\Delta_q > 0}(\Delta_q) \qquad (18)$$
where t = 1, ..., s and q = 1, ..., s.
After the adjustment, the feature weighting can be given by Equation (14).
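The projection of Equation (18) amounts to shifting the non-positive Δ_k values. A small illustrative sketch (the helper name is our own; `delta` holds the Δ_k values of Equation (16)):

```python
import numpy as np

def project_feature_deltas(delta):
    """Equation (18): shift non-positive Delta_k so that every entry becomes
    strictly positive before the feature weights are computed via Equation (17)."""
    delta = np.asarray(delta, dtype=float).copy()
    nonpos = delta <= 0
    if nonpos.any() and (delta > 0).any():
        smallest_positive = delta[delta > 0].min()        # min over Delta_q > 0
        delta[nonpos] = delta[nonpos] - delta.min() + smallest_positive
    return delta
```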
(2) Adjustment of μ i j
Let
$$\Delta_{ij} = \sum_{k=1}^{s} \omega_k^{\alpha} \left(\|x_{jk} - a_{ik}\|^2 - \eta_i \|a_{ik} - \bar{X}_k\|^2\right) \qquad (19)$$
then Equation (15) can be presented as:
$$\mu_{ij} = \frac{\Delta_{ij}^{\frac{1}{1-m}}}{\sum_{t=1}^{c} \Delta_{tj}^{\frac{1}{1-m}}} \qquad (20)$$
If x_j falls on the i-th crisp boundary, Δ_ij = 0 and, according to Equation (20), the membership value of x_j becomes infinite. In fact, the membership of x_j should be fuzzier than that of data points inside the crisp kernel, yet greater than that of data points lying outside the crisp kernel. Based on the discussion above, we define the projection function of Equation (21):
$$\Delta' = P(\Delta) = \Delta + \min_{\Delta_{tq} > 0}(\Delta_{tq}) \qquad (21)$$
where Δ ∈ {Δ_ij | Δ_ij ≤ 0, i = 1, ..., c, j = 1, ..., n}, t = 1, ..., c and q = 1, ..., n.
After the adjustment, μ i j can be given by Equation (15).
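Analogously, the boundary adjustment of Equation (21) shifts the non-positive Δ_ij by the smallest positive Δ before the memberships are recomputed with Equation (20). A sketch under the same assumptions as above:

```python
import numpy as np

def project_membership_deltas(delta_ij):
    """Equation (21): for data points with Delta_ij <= 0 (on or inside a crisp boundary),
    add the smallest positive Delta so that Equation (20) yields finite memberships that
    are still larger than those of points lying outside the crisp kernel."""
    delta_ij = np.asarray(delta_ij, dtype=float).copy()   # shape (c, n)
    nonpos = delta_ij <= 0
    if nonpos.any() and (delta_ij > 0).any():
        smallest_positive = delta_ij[delta_ij > 0].min()  # min over Delta_tq > 0
        delta_ij[nonpos] = delta_ij[nonpos] + smallest_positive
    return delta_ij
```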

2.2.3. The Implementation of WFCS

Step 1. Choose β, m, α and the iterative error threshold ε. Assign a random membership partition matrix {μ_ij^(0)} and random values between 0 and 1 to η^(0). Set the initial iteration counter l = 1.
Step 2. Update a_i^(l) with μ_ij^(l−1) and η_i^(l−1) according to Equation (13);
Step 3. Update ω_k^(l) with μ_ij^(l−1), a_i^(l) and η_i^(l−1) based on Equations (14) and (18);
Step 4. Update μ_ij^(l) with ω_k^(l), a_i^(l) and η_i^(l−1) according to Equations (15) and (21);
Step 5. Compute η_i^(l) with β and a_i^(l) according to Equation (9);
Step 6. Set l = l + 1 and return to Step 2 until convergence has been reached.
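Putting Steps 1–6 together, the iteration can be sketched as the loop below. It reuses the `wfcs_updates`, `project_feature_deltas` and `project_membership_deltas` helpers sketched earlier and a simple random initialization; it is an illustration of the procedure rather than the authors' implementation, and a complete version would apply the two projections to the Δ values inside the update step:

```python
import numpy as np

def wfcs(X, c, beta=1.0, m=2.0, alpha=2.0, eps=1e-6, max_iter=100, seed=None):
    """Minimal WFCS loop following Steps 1-6 of Section 2.2.3."""
    rng = np.random.default_rng(seed)
    n, s = X.shape
    X_bar = X.mean(axis=0)

    # Step 1: random membership partition matrix and random eta in (0, 1)
    U = rng.random((c, n))
    U /= U.sum(axis=0, keepdims=True)
    eta = rng.random(c)

    A_prev = None
    for _ in range(max_iter):
        # Steps 2-4: update centers, feature weights and memberships (Eqs (13)-(15));
        # the projections of Eqs (18) and (21) would be applied to the Deltas inside.
        A, W, U = wfcs_updates(X, U, eta, m, alpha)

        # Step 5: recompute eta_i from Equation (9) with feature-weighted distances
        wd2 = np.sum(W ** alpha * (A[:, None, :] - A[None, :, :]) ** 2, axis=2)
        np.fill_diagonal(wd2, np.inf)
        eta = beta / 4.0 * wd2.min(axis=1) / np.max(np.sum(W ** alpha * (A - X_bar) ** 2, axis=1))

        # Step 6: stop once the cluster centers no longer move
        if A_prev is not None and np.max(np.abs(A - A_prev)) < eps:
            break
        A_prev = A
    return A, W, U
```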

3. Performance Evaluation and Analysis

In this section, the proposed WFCS algorithm is evaluated in a large number of experiments on simulated and real datasets. The real datasets include eight UCI benchmark datasets [14] and a CFM56-type engine dataset with measurement noise collected from an air company (named ENGINE; the ENGINE data can be provided by sending an email to the corresponding author). To obtain the simulated data, aero-engine gas path data with Gaussian noise (named LTT) was generated by simulation software developed by the Laboratory of Thermal Turbomachines at the National Technical University of Athens (downloaded from http://www.ltt.mech.ntua.gr/index.php/softwaremn/teachesmn). The ENGINE and LTT datasets describe the aero-engine's states, including healthy states and degraded states. In these experiments, all datasets are normalized into (0, 1) [13].
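Min-max scaling of each feature, as used for the normalization step here, can be written in a couple of lines (a generic sketch, not the authors' preprocessing script):

```python
import numpy as np

def minmax_normalize(X):
    """Rescale every feature (column) of X into the unit interval.
    Assumes no constant columns (otherwise the denominator would be zero)."""
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    return (X - X_min) / (X_max - X_min)
```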
First, the dataset information, validation criteria and parameter settings are described. Then, the properties of WFCS are investigated based on the experimental results on the Iris dataset. Finally, a detailed comparison with three other feature weighting fuzzy clustering algorithms (ESSC, WFCM, WSRFCM) and one un-weighted fuzzy clustering algorithm (FCS) is performed.

3.1. Datasets Information, Validation Criteria and Experimental Setting

The information of the 10 datasets is summarized in Table 2.
Table 2. Summary of 10 datasets.

Datasets        Size of Dataset   Number of Dimensions   Number of Clusters
Australian      690               14                     2
Balance-scale   625               4                      3
Breast Cancer   569               30                     2
Heart           270               13                     2
Iris            150               4                      3
Pima            768               8                      2
Vehicle         846               18                     4
Wine            178               13                     3
ENGINE          186               3                      2
LTT             300               3                      2
The Rand index (RI) [15], the Xie-Beni index (XB) [16] and the Within-Between index (WB) [17], a recently proposed criterion, are used to evaluate the performance of the proposed WFCS algorithm. The RI index evaluates the accuracy of the partition: the higher the value, the higher the accuracy. The XB and WB indexes evaluate the within-cluster compactness and between-cluster separation: the smaller the XB and WB values, the better the clustering result.
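For reference, the Rand index can be computed by counting the point pairs on which the ground-truth partition and the clustering result agree (a standard formulation sketched below, not tied to the authors' evaluation code):

```python
from itertools import combinations

def rand_index(labels_true, labels_pred):
    """Rand index: fraction of point pairs on which the two partitions agree
    (both put the pair in one cluster, or both put it in different clusters)."""
    agree = total = 0
    for i, j in combinations(range(len(labels_true)), 2):
        same_true = labels_true[i] == labels_true[j]
        same_pred = labels_pred[i] == labels_pred[j]
        agree += int(same_true == same_pred)
        total += 1
    return agree / total
```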
The parameter settings are: α < 0 or α > 1 [18], m > 1, ε = 10⁻⁶ and β ∈ {0.005, 0.05, ..., 1}. The parameter values used in the experiments are tabulated in Table 3, based on the best clustering results in terms of the means and standard deviations of the RI index. Each algorithm was run 10 times. All experiments were implemented on a computer with a 2.5 GHz CPU and 8 GB RAM.
Table 3. Parameter values for 10 datasets.

Datasets        WFCS           ESSC            WSRFCM   WFCM   FCS
                β      α       γ       η       α        α      β
Australian      1      −7      1000    0.9     −2       −7     1
Breast Cancer   1      −9      5       0.5     −10      −10    1
Balance-scale   0.01   4       100     0.7     −6       −5     0.05
Heart           0.005  2       100     0       −5       −10    0.5
Iris            1      2       1       0.01    2        2      1
Pima            0.5    −9      100     0.2     −6       −9     1
Vehicle         1      2       50      0.01    4        −10    1
Wine            1      −1      50      0.01    −1       −1     1
ENGINE          1      2       1       0       −10      2      1
LTT             1      −2      1       0       −1       −10    1

3.2. Property Analysis of WFCS

Figure 2 demonstrates the original distribution of the Iris dataset and the clustering results of the five algorithms. As shown in Figure 2a, the Iris dataset contains three clusters of 50 data points each, where each cluster refers to a type of iris plant. It is obvious that Cluster1 is linearly separable from the other two, while the latter two overlap. Hence, it is more reasonable for data points in Cluster1 to be hard clustered than to be fuzzy clustered.
(1) Clustering performance
Figure 2 shows that the clustering results of the feature weighted clustering algorithms (WFCS, ESSC, WFCM and WSRFCM) are similar to the distribution of the original data (shown in Figure 2a). Data points in Cluster1 are recognized very well by all five algorithms. Moreover, most data points in Cluster2 and Cluster3 are recognized by the four feature weighted algorithms. In Figure 2f, it is obvious that some data points in Cluster3 are misclassified into Cluster2 by FCS.
The cluster centers obtained by the five algorithms differ from each other. The distances between the Cluster1, Cluster2 and Cluster3 centers obtained by the five algorithms are shown in Figure 3.
For WFCS, ESSC and FCS, which integrate the within-cluster compactness and the between-cluster separation, the distances between the centers of the overlapping Cluster2 and Cluster3 are larger than those of WSRFCM and WFCM. However, although FCS obtains the largest distance, it cannot partition the data points belonging to Cluster2 and Cluster3 correctly because it does not include feature weighting.
Figure 2. (a) The original data distribution; (b) The clustering results of weighting fuzzy compactness and separation algorithm (WFCS); (c) The clustering results of enhanced soft subspace clustering algorithm (ESSC); (d) The clustering results of the feature weighting fuzzy clustering algorithm integrating rough sets and shadowed sets (WSRFCM); (e) The clustering results of the feature weighting fuzzy c-means algorithm (WFCM); (f) The clustering results of the fuzzy compactness and separation algorithm (FCS).
Figure 3. The distance between three cluster centers.
Since different clustering algorithms have different objective functions, we introduce the iteration function $F(t) = \sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij} \|x_j - a_i\|^2 - \sum_{i=1}^{c}\sum_{j=1}^{n} \mu_{ij} \|a_i - \bar{X}\|^2$ in order to evaluate the convergence of each algorithm.
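The iteration function F(t) can be evaluated directly from the current membership matrix and cluster centers (a small sketch under the paper's notation):

```python
import numpy as np

def iteration_function(X, U, A):
    """F(t) = sum_i sum_j mu_ij ||x_j - a_i||^2 - sum_i sum_j mu_ij ||a_i - X_bar||^2."""
    X_bar = X.mean(axis=0)
    within = np.sum(U * np.sum((X[None, :, :] - A[:, None, :]) ** 2, axis=2))
    between = np.sum(U * np.sum((A - X_bar) ** 2, axis=1)[:, None])
    return within - between
```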
Figure 4 shows the convergence curves of the five algorithms.
Figure 4. Convergence of the five algorithms.
As shown in Figure 4, all five convergence curves descend quickly in the first two iterations and vary slowly after three iterations. A smaller number of iterations indicates a higher convergence speed. Overall, WFCS has a higher convergence speed; the convergence speed of WFCM is lower than that of WFCS and ESSC, while FCS has the lowest convergence speed.
(2) Hard clustering
Figure 5 shows the fuzzy membership values for Cluster1 of the 150 data points in WFCS when β is 1, 0.5, 0.05 and 0.005, respectively. When the membership value equals 1, the data point is hard clustered into Cluster1; when it equals 0, the data point is hard clustered into one of the other two clusters.
In Figure 5a–c, there are 50, 31 and 12 data points hard clustered into Cluster1, respectively. In Figure 5d, all membership values are smaller than 1, so all data points are fuzzy clustered into Cluster1. As seen in Figure 5, the membership values become fuzzier as β becomes smaller. Hence, WFCS has the characteristics of both hard clustering and fuzzy clustering.
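The number of hard-clustered points reported for Figure 5 can be read directly from the converged membership matrix (an illustrative snippet; the matrix `U` and the tolerance are our own assumptions):

```python
import numpy as np

def count_hard_points(U, cluster=0, tol=1e-9):
    """Count data points whose membership to the given cluster is numerically 1,
    i.e., points that WFCS hard-clusters into that cluster."""
    return int(np.sum(U[cluster] > 1.0 - tol))
```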

3.3. Clustering Evaluation

The best RI indexes of the five algorithms are presented in Table 4.
Figure 5. Fuzzy membership value on the first cluster with different β (a) β = 1 ; (b) β = 0.5 ; (c) β = 0.05 ; (d) β = 0.005 .
It is evident from Table 4 that WFCS demonstrates the best performance except for the Breast Cancer, Vehicle and ENGINE datasets. The performances of WFCM and WSRFCM are mostly comparable to or better than those of ESSC and FCS. Even though FCS is not a feature weighted clustering algorithm, it achieves the best clustering result for the Wine dataset. Table 5 and Table 6 list the XB and WB index values of the five algorithms, respectively. Comparing Table 4, Table 5 and Table 6, we find that the best clustering performance indicated by RI does not always correspond to the smallest XB or WB index value. Therefore, no single algorithm is always superior to the others on all datasets.
The average performances of the five algorithms are shown in Figure 6.
Figure 6. The average performances of the five algorithms.
Table 4. The best clustering results obtained for the 10 datasets with rand index (RI).

Dataset          WFCS              ESSC              WSRFCM            WFCM              FCS
                 mean     std      mean     std      mean     std      mean     std      mean     std
Australian       0.7336   0.0000   0.7162   0.1134   0.7302   0.0033   0.7265   0.0569   0.6995   0.0125
Breast Cancer    0.8721   0.0009   0.8779   0.0057   0.8630   0        0.8600   0.0000   0.8627   0.0000
Balance-scale    0.6427   0.0758   0.6389   0.0287   0.6101   0.0586   0.6099   0.0578   0.6201   0.0662
Heart            0.7163   0.0000   0.7114   0.0019   0.7120   0.0023   0.6939   0.0000   0.6833   0.0000
Iris             0.9495   0.0000   0.9195   0.0000   0.9495   0.0081   0.9495   0.0000   0.8679   0.0000
Pima             0.5841   0.0000   0.5564   0.0005   0.5698   0.0044   0.5837   0.0009   0.5576   0.0000
Vehicle          0.6654   0.0025   0.6539   0.0028   0.6778   0.0006   0.6581   0.0038   0.6528   0.0000
Wine             0.9551   0.0000   0.9398   0.0095   0.9324   0.0000   0.9398   0.0000   0.9551   0.0000
ENGINE           0.8600   0.0000   0.7823   0.0005   0.7693   0.0067   0.8600   0.0000   0.7903   0.0000
LTT              0.9671   0.0000   0.96     0.0000   0.9543   0.0000   0.9671   0.0000   0.9607   0.0000
Table 5. Xie-Beni (XB) index of algorithms.

Dataset          WFCS              ESSC              WSRFCM            WFCM              FCS
                 mean     std      mean     std      mean     std      mean     std      mean     std
Australian       0.0400   0.0007   0.7194   0.3455   1.4636   0.8407   1.2056   1.0488   0.1995   0.0267
Breast Cancer    0.3216   0.0021   0.2961   0.0344   0.4288   0.0903   0.3270   0.0003   0.1094   0.0000
Balance-scale    0.4435   0.0000   0.7051   0.0248   0.6970   0.0287   0.7392   0.0725   2.8475   0.0979
Heart            0.1593   0.0081   0.4033   0.0982   0.7942   0.7522   0.6348   0.5030   0.2267   0.0000
Iris             0.0844   0.0019   0.0861   0.0059   0.2700   0.0226   0.1964   0.0000   0.2922   0.0000
Pima             0.1443   0.0002   0.4942   0.0235   0.7610   0.1977   0.5955   0.0406   0.4759   0.0000
Vehicle          0.2532   0.0000   0.2601   0.0917   0.8538   0.0372   0.5480   0.0047   3.2949   0.0097
Wine             0.2577   0.0034   0.3970   0.0009   0.6775   0.0640   0.4987   0.0000   0.4061   0.0000
ENGINE           0.1699   0.0030   0.1836   0.0685   0.3755   0.0295   0.2130   0.0217   0.1267   0.0000
LTT              0.1019   0.0014   0.1075   0.0983   0.3299   0.0029   0.2131   0.0000   0.2105   0.0000
In Figure 6, we can see that WFCS obtained the best mean values of RI (0.7946) and XB (0.1976) with the smallest standard deviations (0.0079 and 0.0021, respectively) over the 10 datasets. WFCM, WSRFCM and ESSC perform similarly in terms of RI. It can be seen that the feature weighting clustering algorithms are superior to the un-weighted one. The average XB and WB index values of WFCS, ESSC and FCS are smaller than those of WFCM and WSRFCM, which demonstrates that these three algorithms, which integrate the between-cluster separation and the within-cluster compactness, can partition data points more reasonably.
Table 6. Within-Between (WB) index of algorithms.

Dataset          WFCS              ESSC              WSRFCM            WFCM              FCS
                 mean     std      mean     std      mean     std      mean     std      mean     std
Australian       0.0551   0.0000   0.1730   0.1313   0.6312   0.0011   0.7261   0.4054   0.5971   0.2144
Breast Cancer    0.3849   0.0010   0.3843   0.0084   0.5388   0.0000   0.6035   0.0000   0.4510   0.0000
Balance-scale    2.1924   0.7932   3.6567   0.1983   5.8354   0.0911   6.1232   0.1686   5.6191   0.7135
Heart            1.1191   0.0000   1.1191   0.0991   2.3297   0.0012   3.3005   0.0000   1.8087   0.0000
Iris             0.0729   0.0079   0.3300   0.0097   0.8423   0.6452   0.6224   0.0311   0.5366   0.0000
Pima             0.2828   0.0012   0.6195   0.0003   1.0043   0.0627   0.4897   0.0073   0.4832   0.0000
Vehicle          0.1908   0.0010   0.2850   0.0087   0.5000   0.0149   0.4128   0.0000   0.6351   0.0000
Wine             0.9821   0.0038   0.9934   0.0350   2.0174   0.0028   1.4317   0.0000   0.8599   0.0000
ENGINE           0.4866   0.0079   0.8500   0.0060   1.6315   0.0056   1.6072   2.3408   0.8484   0.0000
LTT              0.9206   0.0075   1.6253   0.0281   3.0179   0.0106   1.9023   0.0002   1.8894   0.0000

4. Conclusions

In this paper, a fuzzy clustering algorithm based on FCS is proposed that maximizes the feature weighting between-cluster separation and minimizes the feature weighting within-cluster compactness. Two adjustment formulations are derived for adjusting the values of μ_ij and ω_k, respectively. With the proposed WFCS, the problem of the membership of data points lying on the crisp boundary is solved. Experimental results show that the proposed WFCS generally outperforms the four existing clustering algorithms (FCS, WFCM, WSRFCM and ESSC).
The proposed algorithm can handle linear datasets, whereas high-dimensional nonlinear data has not been considered in this paper. In the future, we will employ the kernel methodology [19,20,21] to analyze high-dimensional nonlinear data.

Acknowledgments

This work is supported by the National Natural Science Foundation of China (Grant Nos. 61079013 and 61203273) and the Natural Science Foundation of Jiangsu Province (Grant No. BK2011737).

Author Contributions

All authors contributed equally.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Hartigan, J. Clustering Algorithms; Wiley: New York, NY, USA, 1975.
2. Jain, A.K.; Murty, M.N.; Flynn, P.J. Data clustering: A review. ACM Comput. Surv. 1999, 31, 264–323.
3. Xu, R.; Wunsch, D. Survey of clustering algorithms. IEEE Trans. Neural Netw. 2005, 16, 645–678.
4. Bezdek, J.C. Pattern Recognition with Fuzzy Objective Function Algorithms; Plenum Press: New York, NY, USA, 1981.
5. Wei, L.M.; Xie, W.X. Rival checked fuzzy c-means algorithm. Acta Electron. Sin. 2000, 28, 63–66.
6. Fan, J.L.; Zhen, W.Z.; Xie, W.X. Suppressed fuzzy c-means clustering algorithm. Pattern Recognit. Lett. 2003, 24, 1607–1612.
7. Wu, K.L.; Yu, J.; Yang, M.S. A novel fuzzy clustering algorithm based on a fuzzy scatter matrix with optimality tests. Pattern Recognit. Lett. 2005, 26, 639–652.
8. Frigui, H.; Nasraoui, O. Unsupervised learning of prototypes and feature weights. Pattern Recognit. 2004, 37, 567–581.
9. Wang, X.; Wang, Y.; Wang, L. Improving fuzzy c-means clustering based on feature-weight learning. Pattern Recognit. Lett. 2004, 25, 1123–1132.
10. Jing, L.; Ng, M.K.; Huang, J.Z. An entropy weighting k-means algorithm for subspace clustering of high-dimensional sparse data. IEEE Trans. Knowl. Data Eng. 2007, 19, 1026–1041.
11. Wang, Q.; Ye, Y.; Huang, J.Z. Fuzzy k-means with variable weighting in high dimensional data analysis. In Proceedings of the Ninth International Conference on Web-Age Information Management (WAIM '08), Hunan, China, 2008; pp. 365–372.
12. Wang, L.; Wang, J. Feature weighting fuzzy clustering integrating rough sets and shadowed sets. Int. J. Pattern Recognit. Artif. Intell. 2012, 26, 1769–1776.
13. Deng, Z.; Choi, K.S.; Chung, F.L.; Wang, S.T. Enhanced soft subspace clustering integrating within-cluster and between-cluster information. Pattern Recognit. 2010, 43, 767–781.
14. Lichman, M. UCI Machine Learning Repository; University of California: Irvine, CA, USA, 2013.
15. Rand, W.M. Objective criteria for the evaluation of clustering methods. J. Am. Stat. Assoc. 1971, 66, 846–850.
16. Xie, X.L.; Beni, G. A validity measure for fuzzy clustering. IEEE Trans. Pattern Anal. Mach. Intell. 1991, 13, 841–847.
17. Zhao, Q.; Fränti, P. WB-index: A sum-of-squares based index for cluster validity. Data Knowl. Eng. 2014, 92, 77–89.
18. Huang, J.Z.; Ng, M.K.; Rong, H.; Li, Z.C. Automated variable weighting in k-means type clustering. IEEE Trans. Pattern Anal. Mach. Intell. 2005, 27, 657–668.
19. Filippone, M.; Camastra, F.; Masulli, F.; Rovetta, S. A survey of kernel and spectral methods for clustering. Pattern Recognit. 2008, 41, 176–190.
20. Huang, H.C.; Chuang, Y.Y.; Chen, C.S. Multiple kernel fuzzy clustering. IEEE Trans. Fuzzy Syst. 2012, 20, 120–134.
21. Lin, K.P. A novel evolutionary kernel intuitionistic fuzzy c-means clustering algorithm. IEEE Trans. Fuzzy Syst. 2014, 22, 1074–1087.
