Abstract
Semi-Nonnegative Matrix Factorization (Semi-NMF), as a variant of NMF, inherits the merit of parts-based representation from NMF and is able to process mixed sign data, which has attracted extensive attention. However, standard Semi-NMF still suffers from the following limitations. First, Semi-NMF fits the data in a Euclidean space and thus ignores the geometrical structure of the data. Second, Semi-NMF does not incorporate discriminative information in the learned subspace. Finally, the basis learned by Semi-NMF is not necessarily parts-based, because there are no explicit constraints to ensure a parts-based representation. To address these issues, in this paper, we propose a novel Semi-NMF algorithm, called Group sparsity and Graph regularized Semi-Nonnegative Matrix Factorization with Discriminability (GGSemi-NMFD). GGSemi-NMFD adds a graph regularization term to Semi-NMF, which can well preserve the local geometrical information of the data space. To obtain discriminative information, approximate orthogonality constraints are imposed on the learned subspace. In addition, $\ell_{2,1}$-norm constraints are adopted for the basis matrix, which encourage the basis matrix to be row sparse. Experimental results on six datasets demonstrate the effectiveness of the proposed algorithm.
1. Introduction
Nonnegative Matrix Factorization (NMF) [1] is a useful data representation technique for finding compact and low dimensional representations of data. NMF decomposes the nonnegative data matrix $X$ into a basis matrix $U$ and an encoding matrix $V$ whose product can approximate the original data matrix, $X \approx UV$. Due to the nonnegativity constraint, NMF only allows additive combinations, which leads to a parts-based representation. The parts-based representation is consistent with the psychological intuition of combining parts to form a whole, so NMF has been widely used in data mining and pattern recognition problems. The nonnegativity constraints distinguish NMF from many other traditional matrix factorization algorithms, such as Principal Component Analysis (PCA) [2], Independent Component Analysis (ICA) [3] and Singular Value Decomposition (SVD). However, the major limitation of NMF is that it cannot deal with mixed sign data.
To address this limitation of NMF while inheriting all its merits, Ding et al. [4] proposed Semi-Nonnegative Matrix Factorization (Semi-NMF), which can handle mixed sign entries in the data matrix $X$. Specifically, Semi-NMF only imposes nonnegativity constraints on the encoding matrix $V$ and allows mixed signs in both the data matrix $X$ and the basis matrix $U$. This allows Semi-NMF to learn a new representation from any signed data and extends the range of applications of NMF ideas.
Numerous studies [5,6,7] have demonstrated that data are usually sampled from a probability distribution whose support lies on or near a submanifold of the ambient space. Manifold learning algorithms such as ISOMAP [5], Locally Linear Embedding (LLE) [6] and Laplacian Eigenmaps [7] have been proposed to detect the hidden manifold structure. All these algorithms rely on the local invariance idea [8], i.e., nearby points are very likely to have similar embeddings. If the geometrical structure is utilized, the learning performance can be evidently enhanced.
On the other hand, discriminative information is very important in computer vision and pattern recognition. Usually, exploiting label information in the framework of NMF makes it possible to obtain discriminative information. For example, Liu et al. [9] proposed Constrained Nonnegative Matrix Factorization (CNMF), which imposes the label information on the objective function as hard constraints. Li et al. [10] developed a semi-supervised robust structured NMF, which exploits a block-diagonal structure in the framework of NMF. Unfortunately, under the unsupervised scenario, no label information is available. However, through a reformulation of the scaled indicator matrix, we find that the learned subspace exhibits an approximate orthogonality property that characterizes its discriminability. By adding approximate orthogonality constraints to the new representation, we can acquire some discriminative information in the learned subspace.
Donoho and Stodden [11] theoretically proved that NMF cannot guarantee decomposing an object into parts. In other words, NMF may be unable to produce a parts-based representation for some datasets. To ensure a parts-based representation, sparsity constraints have been introduced into NMF. Hoyer [12] proposed sparseness constrained NMF, which adds sparseness penalties on the basis and encoding matrices and obtains sparser representations than standard NMF. However, such elementwise sparsity regularization cannot guarantee that all the data vectors are sparse in the same features [13], so it is not suitable for feature selection. To settle this issue, Nie et al. [14] proposed a robust feature selection method emphasizing joint $\ell_{2,1}$-norm minimization on both the loss function and the regularization. Yang et al. [15], Hou et al. [16] and Gu et al. [17] used the $\ell_{2,1}$-norm regularization in discriminative feature selection, sparse regression and subspace learning, respectively. The $\ell_{2,1}$-norm regularization is regarded as a powerful model for sparse feature selection and has attracted increasing attention [14,15,18].
The goal of this paper is to preserve the local geometrical structure of mixed sign data and to characterize the discriminative information in the learned subspace under the framework of Semi-NMF. In addition, we encourage the basis matrix to be group sparse, which helps retain the important basis vectors and remove the irrelevant ones. We propose a novel algorithm, called Group sparsity and Graph regularized Semi-Nonnegative Matrix Factorization with Discriminability (GGSemi-NMFD), for data representation. Graph regularization [19] has been introduced to encode the local structure of non-negative data in the framework of NMF; we apply it to preserve the intrinsic geometric structure of mixed sign data in the framework of Semi-NMF. In addition, discriminative information is also very important in pattern recognition. To incorporate the discriminative information of the data, we add approximate orthogonality constraints on the learned latent subspace and thus improve the performance of Semi-NMF in clustering tasks. We further constrain the learned basis matrix to be row sparse. This is inspired by the intuition that different dimensions of the basis vectors have different importance. For model optimization, we develop an effective iterative updating scheme for GGSemi-NMFD. Experimental results on six real-world datasets demonstrate the effectiveness of our approach.
To summarize, it is worthwhile to highlight three aspects of the proposed method here:
1. While the standard Semi-NMF models the data in the Euclidean space, GGSemi-NMFD exploits the intrinsic geometrical information of the data distribution and adds it as a regularization term. Hence, our algorithm is especially applicable when the data are sampled from a submanifold of a high-dimensional ambient space.
2. To incorporate the discriminative information of the data, we add approximate orthogonality constraints on the learned subspace. By adding these constraints, our algorithm can have more discriminative power than the standard Semi-NMF.
3. Our algorithm adds $\ell_{2,1}$-norm constraints on the basis matrix, which can shrink some rows of the basis matrix to zero, making the basis matrix suitable for feature selection. By preserving the group sparse structure in the basis matrix, our algorithm can acquire more flexible and meaningful semantics.
The remainder of this paper is organized as follows: Section 2 presents a brief overview of related works. Section 3 introduces our GGSemi-NMFD algorithm and the optimization scheme. Experimental results on six real-world datasets are presented in Section 4. Finally, we draw the conclusion in Section 5.
2. Related Work
In this section, we briefly review some works that are closely related to ours.
2.1. NMF
Given a non-negative data matrix $X \in \mathbb{R}_{+}^{M \times N}$, whose columns are data points, the goal of NMF is to find a non-negative basis matrix $U \in \mathbb{R}_{+}^{M \times K}$ and a non-negative encoding matrix $V \in \mathbb{R}_{+}^{K \times N}$ whose product can well approximate the non-negative data matrix, $X \approx UV$. Here, K denotes the desired reduced dimension.
The least squares objective function of NMF is formulated as follows:
$$\min_{U \ge 0,\, V \ge 0} \|X - UV\|_F^2, \qquad (1)$$
where $\|\cdot\|_F$ denotes the Frobenius norm.
It is clear that Equation (1) is not convex when both $U$ and $V$ are taken as variables. However, it is convex in $U$ when $V$ is fixed and vice versa. Lee and Seung [20] presented the following iterative multiplicative updating rules:
$$U_{ik} \leftarrow U_{ik}\,\frac{(XV^T)_{ik}}{(UVV^T)_{ik}}, \qquad V_{kj} \leftarrow V_{kj}\,\frac{(U^TX)_{kj}}{(U^TUV)_{kj}}. \qquad (2)$$
Using the above updating rules, we can find a locally optimal solution of Equation (1).
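For concreteness, the following minimal NumPy sketch illustrates the multiplicative updating rules in Equation (2). It is an illustrative sketch rather than a reference implementation; the function name, the random initialization and the small constant eps added to the denominators are our own choices.

```python
import numpy as np

def nmf_multiplicative(X, K, n_iter=200, eps=1e-10, seed=0):
    """Minimal NMF via Lee-Seung multiplicative updates: X (M x N) ~ U (M x K) V (K x N)."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    U = rng.random((M, K))
    V = rng.random((K, N))
    for _ in range(n_iter):
        # Update U: U <- U * (X V^T) / (U V V^T)
        U *= (X @ V.T) / (U @ V @ V.T + eps)
        # Update V: V <- V * (U^T X) / (U^T U V)
        V *= (U.T @ X) / (U.T @ U @ V + eps)
    return U, V
```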
2.2. Semi-NMF
One limitation of NMF is that it cannot handle mixed sign data. To settle this issue, Ding et al. [4] proposed Semi-Nonnegative Matrix Factorization (Semi-NMF). Specifically, Semi-NMF relaxes the non-negativity constraints on the data matrix $X$ and the basis matrix $U$ and only imposes non-negativity on the encoding matrix $V$. In this way, Semi-NMF can process mixed sign matrices while inheriting the merits of NMF. The objective function of Semi-NMF is written as follows:
$$\min_{V \ge 0} \|X - UV\|_F^2. \qquad (3)$$
To solve Equation (3), Ding et al. [4] proposed the following updating rules:
$$U = XV^T(VV^T)^{-1}, \qquad V_{ij} \leftarrow V_{ij}\,\sqrt{\frac{\big[(U^TX)^{+} + (U^TU)^{-}V\big]_{ij}}{\big[(U^TX)^{-} + (U^TU)^{+}V\big]_{ij}}}, \qquad (4)$$
where we separate the positive and negative parts of a matrix $A$ as:
$$A^{+}_{ij} = \frac{|A_{ij}| + A_{ij}}{2}, \qquad A^{-}_{ij} = \frac{|A_{ij}| - A_{ij}}{2}. \qquad (5)$$
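The following sketch (our own, using a pseudo-inverse for numerical safety and a small eps in the denominator) illustrates the alternating Semi-NMF updates of Equations (4) and (5):

```python
import numpy as np

def split_pos_neg(A):
    """Separate a matrix into its positive and negative parts, A = A_pos - A_neg."""
    return (np.abs(A) + A) / 2.0, (np.abs(A) - A) / 2.0

def semi_nmf(X, K, n_iter=200, eps=1e-10, seed=0):
    """Semi-NMF: X (mixed sign) ~ U (mixed sign) V (nonnegative), following Equations (4)-(5)."""
    rng = np.random.default_rng(seed)
    M, N = X.shape
    U = rng.standard_normal((M, K))
    V = rng.random((K, N))
    for _ in range(n_iter):
        # Closed-form update of the basis: U = X V^T (V V^T)^{-1}
        U = X @ V.T @ np.linalg.pinv(V @ V.T)
        UtX_p, UtX_n = split_pos_neg(U.T @ X)
        UtU_p, UtU_n = split_pos_neg(U.T @ U)
        # Multiplicative update of the nonnegative encoding
        V *= np.sqrt((UtX_p + UtU_n @ V + eps) / (UtX_n + UtU_p @ V + eps))
    return U, V
```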
3. Model
In this section, we propose a novel algorithm, called Group sparsity and Graph regularized Semi-Nonnegative Matrix Factorization with Discriminability (GGSemi-NMFD), which considers the group sparsity of the basis matrix, better preserves the local geometric structure of the data, and incorporates discriminative information in the learned subspace.
3.1. Graph Regularized Semi-NMF
Spectral graph theory [21] and manifold learning theory [7] have demonstrated that the local geometrical structure can be effectively modeled through a nearest neighbor graph on a scatter of data points. For each data point $x_j$, we find its k nearest neighbors and put an edge between $x_j$ and its neighbors in the adjacency matrix $W$. Thus, we can use the following graph regularization term to measure the smoothness of the low-dimensional representation:
$$R = \frac{1}{2}\sum_{j,l=1}^{N}\|v_j - v_l\|^2\, W_{jl} = \mathrm{Tr}(VDV^T) - \mathrm{Tr}(VWV^T) = \mathrm{Tr}(VLV^T), \qquad (6)$$
where $v_j$ denotes the j-th column of $V$ (i.e., the new representation of $x_j$), $D$ is a diagonal matrix whose entries are the column sums of $W$, $D_{jj} = \sum_{l} W_{jl}$, and $L = D - W$ is the graph Laplacian.
There are many ways to define the adjacency weight matrix $W$; here, we use 0-1 weighting, since it is simple and effective. It is defined as:
$$W_{jl} = \begin{cases} 1, & \text{if } x_j \in N_k(x_l) \text{ or } x_l \in N_k(x_j), \\ 0, & \text{otherwise}, \end{cases}$$
where $N_k(x_j)$ denotes the set of k nearest neighbors of data point $x_j$. Combining the graph regularization term with the standard Semi-NMF objective function, we obtain graph regularized Semi-NMF, which can be written as follows:
$$\min_{V \ge 0} \|X - UV\|_F^2 + \lambda\,\mathrm{Tr}(VLV^T), \qquad (7)$$
where the regularization parameter $\lambda \ge 0$ controls the smoothness of the new representation.
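As an illustration, the following sketch builds the 0-1 weighted k-nearest-neighbor graph and evaluates the regularization term $\mathrm{Tr}(VLV^T)$. The helper names and the symmetrization of $W$ via an elementwise maximum are our own conventions, assumed only for exposition:

```python
import numpy as np

def knn_graph(X, k=5):
    """0-1 weighted kNN graph on the columns of X (each column is a data point)."""
    N = X.shape[1]
    # Pairwise squared Euclidean distances between data points
    sq = np.sum(X ** 2, axis=0)
    dist = sq[:, None] + sq[None, :] - 2.0 * (X.T @ X)
    W = np.zeros((N, N))
    for j in range(N):
        neighbors = np.argsort(dist[j])[1:k + 1]  # skip the point itself
        W[j, neighbors] = 1.0
    W = np.maximum(W, W.T)          # edge if either point is a kNN of the other
    D = np.diag(W.sum(axis=1))      # degree matrix
    L = D - W                       # graph Laplacian
    return W, D, L

def graph_regularizer(V, L):
    """Smoothness term Tr(V L V^T) of the low-dimensional representation V (K x N)."""
    return np.trace(V @ L @ V.T)
```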
In [19], Cai et al. proposed Graph regularized Non-negative Matrix Factorization (GNMF), which considers local invariance in the framework of NMF. The major difference between GGSemi-NMFD and GNMF is that GGSemi-NMFD constructs the data graph for any signed data, whereas GNMF constructs the data graph only for non-negative data. What is more, GNMF ignores discriminative information and cannot guarantee parts-based representations. Therefore, GGSemi-NMFD extends GNMF and has some novel properties, which will be presented in detail in the following subsections.
3.2. Discriminative Constraints
If we can obtain the discriminative information hidden in the data, it will benefit learning a better representation. To achieve this, we follow the works in [22,23], where an indicator matrix is used. First, we introduce the indicator matrix $P \in \{0,1\}^{N \times K}$, where $P_{ij} = 1$ if the i-th data point belongs to the j-th group and $P_{ij} = 0$ otherwise. Then, the scaled indicator matrix $F$ can be defined as follows:
$$F = P(P^TP)^{-1/2}, \qquad (8)$$
where the j-th column of $F$ is
$$f_j = \big(0, \dots, 0, \underbrace{1/\sqrt{n_j}, \dots, 1/\sqrt{n_j}}_{n_j}, 0, \dots, 0\big)^T, \qquad (9)$$
and $n_j$ is the number of samples in the j-th group (for notational simplicity, the samples are assumed to be ordered by group). If the new representation resembles such a scaled indicator matrix, it will be discriminative. Unfortunately, under the unsupervised scenario, we do not have the label information in advance. However, we note that the scaled indicator matrix is strictly orthogonal:
$$F^TF = (P^TP)^{-1/2}\,P^TP\,(P^TP)^{-1/2} = I_K,$$
where $I_K$ is a $K \times K$ identity matrix. Hence, the new representation $V$ should also satisfy $VV^T \approx I_K$. However, the strict orthogonality constraint is too restrictive. Therefore, we relax it and only require $V$ to be approximately orthogonal, i.e., we minimize
$$\|VV^T - I_K\|_F^2. \qquad (10)$$
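As an illustration, the following sketch (with our own helper names, assuming integer cluster labels in {0, ..., K-1}) constructs the scaled indicator matrix of Equations (8) and (9) and evaluates the relaxed orthogonality penalty of Equation (10):

```python
import numpy as np

def scaled_indicator(labels, K):
    """Scaled cluster indicator matrix F (N x K) with F^T F = I_K, as in Equations (8)-(9)."""
    N = len(labels)
    P = np.zeros((N, K))
    P[np.arange(N), labels] = 1.0
    return P / np.sqrt(P.sum(axis=0, keepdims=True))

def orthogonality_penalty(V):
    """Relaxed discriminative constraint ||V V^T - I_K||_F^2 on the representation V (K x N)."""
    K = V.shape[0]
    return np.linalg.norm(V @ V.T - np.eye(K), 'fro') ** 2
```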
3.3. Group Sparse Constraints
Usually, the basis matrix $U$ contains redundant and irrelevant components. Removing the non-significant ones and keeping the important ones leads to learning a better representation. To achieve this aim, we introduce a third regularization term to distinguish the importance of different dimensions of the basis vectors. Specifically, we encourage the significant dimensions of the basis vectors to take non-zero values and the non-significant ones to be zero. Motivated by [14,15], we add an $\ell_{2,1}$-norm constraint on the basis matrix $U$, which propels some rows of $U$ towards zero. Thus, we can retain the important dimensions of the basis vectors (i.e., rows with non-zero values) and remove the unimportant ones (i.e., rows with zero values). The $\ell_{2,1}$-norm is defined as follows:
$$\|U\|_{2,1} = \sum_{j=1}^{M}\sqrt{\sum_{k=1}^{K}U_{jk}^2} = \sum_{j=1}^{M}\|u^j\|_2, \qquad (11)$$
where $u^j$ represents the j-th row of $U$, which reflects the importance of the j-th feature dimension in the learned basis.
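A one-line NumPy implementation of Equation (11) may make the row-sparsity effect concrete (the function name is our own):

```python
import numpy as np

def l21_norm(U):
    """l2,1-norm of U: the sum of the Euclidean norms of its rows (Equation (11))."""
    return np.sum(np.sqrt(np.sum(U ** 2, axis=1)))
```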
3.4. Objective Function
By integrating Equations (7), (10) and (11), the overall loss function of GGSemi-NMFD is defined as:
$$\min_{U,\, V \ge 0} \mathcal{O} = \|X - UV\|_F^2 + \lambda\,\mathrm{Tr}(VLV^T) + \beta\,\|VV^T - I_K\|_F^2 + \gamma\,\|U\|_{2,1}, \qquad (12)$$
where $\lambda$, $\beta$ and $\gamma$ are non-negative regularization parameters. Parameter $\lambda$ controls the smoothness of the learned representation; parameter $\beta$ controls the orthogonality of $V$; and parameter $\gamma$ controls the degree of row sparsity of the basis matrix $U$.
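For clarity, a minimal NumPy sketch of how the value of the loss in Equation (12) can be evaluated for given factors is shown below (the function and argument names are our own; L denotes the precomputed graph Laplacian):

```python
import numpy as np

def ggseminmfd_objective(X, U, V, L, lam, beta, gamma):
    """Value of the GGSemi-NMFD loss in Equation (12) for U (M x K) and V (K x N)."""
    K = V.shape[0]
    fit = np.linalg.norm(X - U @ V, 'fro') ** 2                      # reconstruction error
    smooth = lam * np.trace(V @ L @ V.T)                             # graph regularization
    ortho = beta * np.linalg.norm(V @ V.T - np.eye(K), 'fro') ** 2   # approximate orthogonality
    sparse = gamma * np.sum(np.sqrt(np.sum(U ** 2, axis=1)))         # l2,1 row sparsity of the basis
    return fit + smooth + ortho + sparse
```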
3.5. Optimization
In this section, we give the solution to Equation (12). As can be seen, the objective function in Equation (12) is not convex in $U$ and $V$ jointly, so we cannot obtain a closed-form global solution. In the following, we present an alternating optimization scheme, which reaches a locally optimal solution. For ease of presentation, we denote the objective function in Equation (12) by $\mathcal{O}$.
3.5.1. Updating Rule for
Optimizing Equation (12) with respect to $U$ is equivalent to optimizing:
$$\min_{U}\ \|X - UV\|_F^2 + \gamma\,\|U\|_{2,1}, \qquad (13)$$
since the remaining terms of Equation (12) do not depend on $U$.
Inspired by [14], the derivative of the objective function with respect to $U$ is as follows:
$$\frac{\partial \mathcal{O}}{\partial U} = -2XV^T + 2UVV^T + 2\gamma QU, \qquad (14)$$
where $Q$ is a diagonal matrix with $Q_{jj} = \frac{1}{2\|u^j\|_2}$. Letting $\frac{\partial \mathcal{O}}{\partial U} = 0$, we get the following updating rule for $U$: since $Q$ is diagonal, each row of $U$ can be updated in closed form as
$$u^j = x^j V^T\big(VV^T + \gamma Q_{jj} I_K\big)^{-1}, \qquad (15)$$
where $x^j$ denotes the j-th row of $X$, and $Q$ is recomputed from the updated $U$ at each iteration.
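The following sketch illustrates one way to implement this row-wise update. It reflects our own reading of Equation (15) rather than necessarily the authors' exact implementation; eps guards against division by zero when a row of $U$ vanishes:

```python
import numpy as np

def update_U(X, U, V, gamma, eps=1e-10):
    """Row-wise closed-form update of the basis U obtained by setting dO/dU = 0 (Equation (15))."""
    K = V.shape[0]
    VVt = V @ V.T
    XVt = X @ V.T
    row_norms = np.sqrt(np.sum(U ** 2, axis=1)) + eps
    q = 1.0 / (2.0 * row_norms)                    # diagonal entries of the reweighting matrix Q
    U_new = np.empty_like(U)
    for j in range(U.shape[0]):
        # u^j = x^j V^T (V V^T + gamma * Q_jj * I_K)^{-1}; the matrix is symmetric, so solve() suffices
        U_new[j] = np.linalg.solve(VVt + gamma * q[j] * np.eye(K), XVt[j])
    return U_new
```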
3.5.2. Updating Rule for
Let $\Psi = [\psi_{ij}]$ be the Lagrange multiplier matrix for the constraint $V_{ij} \ge 0$. Keeping only the part of $\mathcal{O}$ that is related to $V$, the Lagrange function is defined as:
$$\mathcal{L} = \|X - UV\|_F^2 + \lambda\,\mathrm{Tr}(VLV^T) + \beta\,\|VV^T - I_K\|_F^2 + \mathrm{Tr}(\Psi V^T). \qquad (16)$$
The partial derivative of $\mathcal{L}$ with respect to $V$ is:
$$\frac{\partial \mathcal{L}}{\partial V} = -2U^TX + 2U^TUV + 2\lambda VL + 4\beta\,(VV^TV - V) + \Psi. \qquad (17)$$
By using the Karush–Kuhn–Tucker condition, i.e., $\psi_{ij}V_{ij} = 0$, we get the following equation:
$$\big(-2U^TX + 2U^TUV + 2\lambda VL + 4\beta VV^TV - 4\beta V\big)_{ij}\, V_{ij} = 0, \qquad (18)$$
where we separate the positive and negative parts of a matrix $A$ as in Equation (5), i.e., $A^{+}_{ij} = \frac{|A_{ij}| + A_{ij}}{2}$ and $A^{-}_{ij} = \frac{|A_{ij}| - A_{ij}}{2}$.
Then, we obtain the following multiplicative updating rule:
$$V_{ij} \leftarrow V_{ij}\,\sqrt{\frac{\big[(U^TX)^{+} + (U^TU)^{-}V + \lambda VW + 2\beta V\big]_{ij}}{\big[(U^TX)^{-} + (U^TU)^{+}V + \lambda VD + 2\beta VV^TV\big]_{ij}}}. \qquad (19)$$
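A minimal NumPy sketch of the multiplicative update in Equation (19) follows (function and argument names are ours; W and D are the adjacency and degree matrices of the kNN graph, and eps avoids division by zero). Alternating between update_U and update_V until the objective value stabilizes yields the overall GGSemi-NMFD procedure:

```python
import numpy as np

def update_V(X, U, V, W, D, lam, beta, eps=1e-10):
    """Multiplicative update of the nonnegative encoding V (Equation (19))."""
    pos = lambda A: (np.abs(A) + A) / 2.0
    neg = lambda A: (np.abs(A) - A) / 2.0
    UtX, UtU = U.T @ X, U.T @ U
    numer = pos(UtX) + neg(UtU) @ V + lam * (V @ W) + 2.0 * beta * V
    denom = neg(UtX) + pos(UtU) @ V + lam * (V @ D) + 2.0 * beta * (V @ V.T @ V) + eps
    return V * np.sqrt(numer / denom)
```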
4. Experimental Section
To demonstrate the effectiveness of GGSemi-NMFD, we carried out extensive experiments on six public datasets: ORL, YALE, UMIST, Ionosphere, USPST and Waveform. All statistical significance tests were performed using Student's t-tests with a significance level of 0.05. All NMF-based methods use random initialization.
4.1. Datasets and Metrics
In our experiments, we use six datasets that are widely used as benchmarks in the clustering literature. The statistics of these datasets are summarized in Table 1.
Table 1.
Statistics of the datasets.
ORL: The ORL face dataset contains face images of 40 distinct persons. Each person has ten different images, taken at different times, for a total of 400 images. All images are cropped to 32 × 32 pixel grayscale images, and we reshape each of them into a 1024-dimensional vector.
YALE: The YALE face database contains 165 grayscale images in GIF format of 15 individuals. There are 11 images per subject, one per facial expression or configuration. All images are cropped to 32 × 32 pixel grayscale images, and we reshape each of them into a 1024-dimensional vector.
UMIST: The UMIST face database contains 575 images of 20 individuals. All images are cropped to 28 × 23 pixel grayscale images, and we reshape each of them into a 644-dimensional vector.
Ionosphere: Ionosphere is from the UCI repository. It was collected by a radar system consisting of a phased array of 16 high-frequency antennas with a total transmitted power on the order of 6.4 kilowatts. The dataset consists of 351 instances with 34 numeric attributes.
USPST: The USPST dataset comes from the USPS handwritten digit system, and each image in USPST is presented at a resolution of 16 × 16 pixels. It is the test split of USPS.
Waveform: Waveform is obtainable at the UCI repository. It has three categories with 21 numerical attributes and 2746 instances.
We use clustering performance to evaluate the effectiveness of the data representation. Clustering Accuracy (ACC) and Normalized Mutual Information (NMI) are two widely used metrics for clustering performance, defined as follows:
$$\mathrm{ACC} = \frac{\sum_{i=1}^{N}\delta\big(g_i, \mathrm{map}(c_i)\big)}{N}, \qquad \mathrm{NMI}(C, C') = \frac{\mathrm{MI}(C, C')}{\max\big(H(C), H(C')\big)},$$
where $c_i$ and $g_i$ are the cluster labels of item i in the clustering result and in the ground truth, respectively; $\delta(x, y)$ equals 1 if $x = y$ and equals 0 otherwise; and $\mathrm{map}(\cdot)$ is the permutation mapping function that maps each cluster label $c_i$ to the equivalent cluster label in the ground truth. $H(C)$ denotes the entropy of cluster set C. $\mathrm{MI}(C, C')$ is the mutual information between C and the ground-truth partition $C'$:
$$\mathrm{MI}(C, C') = \sum_{c_i \in C,\ c'_j \in C'} p(c_i, c'_j)\,\log\frac{p(c_i, c'_j)}{p(c_i)\,p(c'_j)},$$
where $p(c_i)$ is the probability that a randomly selected item belongs to cluster $c_i$, and $p(c_i, c'_j)$ is the joint probability that a randomly selected item belongs to $c_i$ and $c'_j$ simultaneously. If C and $C'$ are identical, $\mathrm{NMI}(C, C') = 1$; $\mathrm{NMI}(C, C') = 0$ when the two cluster sets are completely independent.
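For reproducibility, the two metrics can be computed as in the following sketch (our own; it uses the Hungarian algorithm from SciPy for the permutation mapping and scikit-learn for NMI, which are implementation conveniences rather than part of the original paper):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.metrics import normalized_mutual_info_score

def clustering_accuracy(y_true, y_pred):
    """ACC: best one-to-one mapping between predicted clusters and ground-truth labels."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    overlap = np.zeros((len(classes), len(classes)), dtype=int)
    for i, c_pred in enumerate(classes):
        for j, c_true in enumerate(classes):
            overlap[i, j] = np.sum((y_pred == c_pred) & (y_true == c_true))
    row, col = linear_sum_assignment(-overlap)   # maximize the number of matched items
    return overlap[row, col].sum() / len(y_true)

def clustering_nmi(y_true, y_pred):
    """NMI between the predicted clustering and the ground-truth partition."""
    return normalized_mutual_info_score(y_true, y_pred)
```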
4.2. Compared Algorithms
To demonstrate how the clustering performance can be improved by our method, we compare it with the following popular clustering algorithms:
- Traditional K-means clustering algorithm (Kmeans).
- Kmeans clustering in the Principal Component Analysis (PCA) subspace [2].
- Nonnegative Matrix Factorization (NMF) [1].
- Semi-Nonnegative Matrix Factorization (Semi-NMF) [4].
- Graph regularized Non-negative Matrix Factorization (GNMF) [19].
- Our proposed Group sparsity and Graph regularized Semi-Nonnegative Matrix Factorization with Discriminability (GGSemi-NMFD).
4.3. Parameter Settings
Baseline methods have several parameters to be tuned. To compare these methods fairly, we perform a grid search in the parameter space of each method and record the best average results.
For the ORL, YALE and UMIST datasets, we set K, the dimension of the latent space, to the number of true classes of the dataset [19] for all NMF-based methods. For the Ionosphere dataset, since its number of classes is very small (only two), we set K to a larger value for the NMF-based methods. We applied the compared methods to learn a new representation, and then Kmeans was adopted for data clustering on the new representation. For a given cluster number, 10 test runs were conducted on different classes of data randomly chosen from the dataset.
For GNMF, the number of nearest neighbors for constructing the data graph is set by a grid search according to [24], and the graph regularization parameter is also chosen by a grid search.
For GGSemi-NMFD, the neighborhood size k is selected from a candidate grid. We also set $\lambda$, $\beta$ and $\gamma$ by searching over a parameter grid. With more careful parameter tuning, even better clustering performance could be achieved.
Note that there is no parameter selection for Kmeans, PCA, NMF and Semi-NMF, given the number of clusters.
In the following section, we repeat each clustering experiment 10 times and compute the mean and the standard error. Additionally, we report the best average result for each method.
4.4. Performance Comparison
Table 2, Table 3, Table 4, Table 5, Table 6 and Table 7 show the clustering results on the ORL, YALE, UMIST, Ionosphere, USPST and Waveform datasets, respectively.
Table 2.
Clustering performance on ORL.
Table 3.
Clustering performance on YALE.
Table 4.
Clustering performance on UMIST.
Table 5.
Clustering performance on Ionosphere.
Table 6.
Clustering performance on USPST.
Table 7.
Clustering performance on Waveform.
The experiments reveal some important points.
- The NMF-based methods, including NMF, Semi-NMF, GNMF and GGSemi-NMFD, outperform the PCA and Kmeans methods, which demonstrates the merit of the parts-based representation in discovering the hidden factors.
- On nonnegative datasets, NMF demonstrates somewhat superior performance over Semi-NMF.
- For nonnegative datasets, methods considering the local geometrical structure of data, such as GNMF and GGSemi-NMFD, significantly outperform NMF and Semi-NMF, which suggests the importance of exploiting the intrinsic geometric structure of data.
- When a dataset contains mixed signs, NMF and GNMF cannot be applied. Semi-NMF tends to outperform Kmeans and PCA, which indicates the advantage of the parts-based representation in finding the hidden matrix factors even for mixed sign data.
- Regardless of the dataset, our GGSemi-NMFD always achieves the best performance. This shows that by leveraging the power of parts-based representation, graph Laplacian regularization, group sparse constraints and discriminative information simultaneously, GGSemi-NMFD can learn a more compact and meaningful representation.
4.5. Parameter Study
GGSemi-NMFD has four parameters: $\lambda$, $\beta$, $\gamma$ and the number of nearest neighbors k. Parameter $\lambda$ weights the graph Laplacian term; parameter $\beta$ controls the orthogonality of the learned representation; parameter $\gamma$ controls the degree of sparsity of the basis matrix; and k controls the complexity of the graph. We investigated their influence on the performance of GGSemi-NMFD by varying one parameter at a time while fixing the others. For each specific setting, we ran GGSemi-NMFD 10 times, and the average performance was recorded.
The results are shown in Figure 1, Figure 2, Figure 3 and Figure 4 for ORL, YALE, UMIST and Ionosphere, respectively (results for USPST and Waveform were similar to those for Ionosphere). We found that the four parameters exhibit the same behavior: as a parameter is increased from a very small value, the performance curves first rise and then descend. This indicates that, when assigned proper values, the graph Laplacian, approximate orthogonality and sparseness constraints, as well as the number of nearest neighbors, are indeed helpful for learning a better representation. Accordingly, for each of the ORL, YALE, UMIST and Ionosphere datasets, we set $\lambda$, $\beta$ and $\gamma$ to the values at which the corresponding curves peak. For the number of nearest neighbors k, we observe from the results that GGSemi-NMFD consistently outperforms the best baseline algorithms on the four datasets over a range of moderate values of k.
Figure 1.
Influence of different parameter settings on the performance of GGSemi-NMFD on the ORL dataset: (a) varying $\lambda$ while fixing $\beta$, $\gamma$ and k; (b) varying $\beta$ while fixing $\lambda$, $\gamma$ and k; (c) varying $\gamma$ while fixing $\lambda$, $\beta$ and k; (d) varying k while fixing $\lambda$, $\beta$ and $\gamma$.
Figure 2.
Influence of different parameter settings on the performance of GGSemi-NMFD on the YALE dataset: (a) varying $\lambda$ while fixing $\beta$, $\gamma$ and k; (b) varying $\beta$ while fixing $\lambda$, $\gamma$ and k; (c) varying $\gamma$ while fixing $\lambda$, $\beta$ and k; (d) varying k while fixing $\lambda$, $\beta$ and $\gamma$.
Figure 3.
Influence of different parameter settings on the performance of GGSemi-NMFD on the UMIST dataset: (a) varying $\lambda$ while fixing $\beta$, $\gamma$ and k; (b) varying $\beta$ while fixing $\lambda$, $\gamma$ and k; (c) varying $\gamma$ while fixing $\lambda$, $\beta$ and k; (d) varying k while fixing $\lambda$, $\beta$ and $\gamma$.
Figure 4.
Influence of different parameter settings on the performance of GGSemi-NMFD on the Ionosphere dataset: (a) varying $\lambda$ while fixing $\beta$, $\gamma$ and k; (b) varying $\beta$ while fixing $\lambda$, $\gamma$ and k; (c) varying $\gamma$ while fixing $\lambda$, $\beta$ and k; (d) varying k while fixing $\lambda$, $\beta$ and $\gamma$.
4.6. Convergence Analysis
The updating rules for minimizing the objective function of GGSemi-NMFD in Equation (12) are iterative, and it can be proven that these rules are convergent. Figure 5a–d shows the convergence curves of GGSemi-NMFD on the ORL, YALE, UMIST and Ionosphere datasets, respectively. In each figure, we plot the objective function value on a log scale (blue line) together with the objective function values of the subsequent two iterations (green line) to assess the convergence of GGSemi-NMFD. As can be seen, the multiplicative updating rules for GGSemi-NMFD converge very quickly, usually within dozens of iterations.

Figure 5.
Convergence analysis of GGSemi-NMFD on: (a) ORL; (b) YALE; (c) UMIST; and (d) Ionosphere. The y-axes for objective function values are in the log scale.
5. Conclusions
In this work, we proposed Group sparsity and Graph regularized Semi-Nonnegative Matrix Factorization with Discriminability (GGSemi-NMFD), a novel latent representation learning algorithm for any signed data. GGSemi-NMFD learns a semantic latent subspace of items by exploiting the graph Laplacian, discriminative information and sparse constraints simultaneously. The graph Laplacian term encourages items of the same category to be near each other. Approximate orthogonality constraints are introduced to incorporate discriminative information into the learned subspace. Another novel property of GGSemi-NMFD is that it allows each dimension of the basis matrix to be related or unrelated to the new representation by imposing the $\ell_{2,1}$-norm penalty on the basis matrix $U$. Therefore, GGSemi-NMFD is able to learn a richer and more flexible semantic latent subspace. We proposed an efficient optimization method for GGSemi-NMFD and validated it on six real-world datasets. Experimental results indicate that GGSemi-NMFD is effective and outperforms the baselines significantly. In future work, we will investigate the multi-view case [25], which can learn a more accurate representation from multi-view data.
Acknowledgments
This research was supported by the National High-tech R&D Program of China (863 Program) (No. 2014AA015201), Changjiang Scholars and Innovative Research Team in University of Ministry of Education of China (Grant No. IRT_17R87) and the Program for Changjiang Scholars and Innovative Research Team in University (No. IRT13090). The content of the information does not necessarily reflect the position or the policy of the Government, and no official endorsement should be inferred.
Author Contributions
Peng Luo wrote the paper and performed the experiments; Jinye Peng conceived and designed the experiments and analyzed the data.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
- Abdi, H.; Williams, L.J. Principal component analysis. Wiley Interdiscip. Rev. Comput. Stat. 2010, 2, 433–459.
- Hyvärinen, A.; Karhunen, J.; Oja, E. Independent Component Analysis; John Wiley & Sons: Hoboken, NJ, USA, 2004; Volume 46.
- Ding, C.H.; Li, T.; Jordan, M.I. Convex and semi-nonnegative matrix factorizations. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 32, 45–55.
- Tenenbaum, J.B.; De Silva, V.; Langford, J.C. A global geometric framework for nonlinear dimensionality reduction. Science 2000, 290, 2319–2323.
- Roweis, S.T.; Saul, L.K. Nonlinear dimensionality reduction by locally linear embedding. Science 2000, 290, 2323–2326.
- Belkin, M.; Niyogi, P. Laplacian eigenmaps and spectral techniques for embedding and clustering. In Proceedings of the International Conference on Neural Information Processing Systems: Natural and Synthetic, Vancouver, BC, Canada, 3–8 December 2001; pp. 585–591.
- Hadsell, R.; Chopra, S.; LeCun, Y. Dimensionality reduction by learning an invariant mapping. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, New York, NY, USA, 17–22 June 2006; Volume 2, pp. 1735–1742.
- Liu, H.; Wu, Z.; Li, X.; Cai, D.; Huang, T.S. Constrained nonnegative matrix factorization for image representation. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 1299–1311.
- Li, Z.; Tang, J.; He, X. Robust structured nonnegative matrix factorization for image representation. IEEE Trans. Neural Netw. Learn. Syst. 2017, PP, 1–14.
- Donoho, D.; Stodden, V. When does non-negative matrix factorization give a correct decomposition into parts? In Advances in Neural Information Processing Systems 16 (NIPS 2003); MIT Press: Cambridge, MA, USA, 2004.
- Hoyer, P.O. Non-negative matrix factorization with sparseness constraints. J. Mach. Learn. Res. 2004, 5, 1457–1469.
- Zhu, X.; Huang, Z.; Yang, Y.; Shen, H.T.; Xu, C.; Luo, J. Self-taught dimensionality reduction on the high-dimensional small-sized data. Pattern Recognit. 2013, 46, 215–229.
- Nie, F.; Huang, H.; Cai, X.; Ding, C. Efficient and robust feature selection via joint ℓ2,1-norms minimization. In Proceedings of the International Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–11 December 2010; pp. 1813–1821.
- Yang, Y.; Shen, H.T.; Ma, Z.; Huang, Z.; Zhou, X. ℓ2,1-norm regularized discriminative feature selection for unsupervised learning. In Proceedings of the International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011; pp. 1589–1594.
- Hou, C.; Nie, F.; Yi, D.; Wu, Y. Feature selection via joint embedding learning and sparse regression. In Proceedings of the International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011; pp. 1324–1329.
- Gu, Q.; Li, Z.; Han, J. Joint feature selection and subspace learning. In Proceedings of the International Joint Conference on Artificial Intelligence, Barcelona, Spain, 16–22 July 2011; pp. 1294–1299.
- Li, Z.; Liu, J.; Yang, Y.; Zhou, X.; Lu, H. Clustering-guided sparse structural learning for unsupervised feature selection. IEEE Trans. Knowl. Data Eng. 2014, 26, 2138–2150.
- Cai, D.; He, X.; Han, J.; Huang, T.S. Graph regularized nonnegative matrix factorization for data representation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1548–1560.
- Lee, D.D.; Seung, H.S. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems 13 (NIPS 2000); MIT Press: Cambridge, MA, USA, 2001; pp. 556–562.
- Chung, F.R. Spectral Graph Theory; American Mathematical Society: Providence, RI, USA, 1997.
- Yang, Y.; Xu, D.; Nie, F.; Yan, S.; Zhuang, Y. Image clustering using local discriminant models and global integration. IEEE Trans. Image Process. 2010, 19, 2761–2773.
- Ye, J.; Zhao, Z.; Wu, M. Discriminative K-means for clustering. In Proceedings of the Annual Conference on Advances in Neural Information Processing Systems 21, Vancouver, BC, Canada, 8–10 December 2008; pp. 1649–1656.
- Cai, D.; He, X.; Wu, X.; Han, J. Non-negative matrix factorization on manifold. In Proceedings of the Eighth IEEE International Conference on Data Mining, Pisa, Italy, 15–19 December 2008; pp. 63–72.
- Peng, L.; Peng, J.; Guan, Z.; Fan, J. Multi-view semantic learning for data representation. In Proceedings of the European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases, Porto, Portugal, 7–11 September 2015; pp. 367–382.
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).