Open Access
Molecules 2017, 22(12), 2131; doi:10.3390/molecules22122131
Article
A Robust Manifold Graph Regularized Nonnegative Matrix Factorization Algorithm for Cancer Gene Clustering
1 School of Information Science and Engineering, Central South University, Changsha 410083, China
2 School of Information Science and Engineering, Qufu Normal University, Rizhao 276826, China
* Correspondence: Tel.: +86-138-0731-7005
Received: 27 October 2017 / Accepted: 29 November 2017 / Published: 2 December 2017
Abstract
Detecting genes with similar expression patterns using clustering techniques plays an important role in gene expression data analysis. Non-negative matrix factorization (NMF) is an effective method for clustering analysis of gene expression data. However, NMF-based methods operate in the Euclidean space, which is usually inappropriate for revealing the intrinsic geometric structure of the data space. To overcome this shortcoming, Cai et al. proposed graph regularized non-negative matrix factorization (GNMF). Motivated by the topological structure exploited by the GNMF-based method, we propose an improved graph regularized non-negative matrix factorization that better captures the geometric structure of the data space. The resulting robust manifold non-negative matrix factorization (RM-GNMF) is designed for cancer gene clustering and enhances the robustness of the GNMF-based algorithm. We combine the ${l}_{2,1}$-norm NMF with spectral clustering and conduct wide-ranging experiments on three well-known datasets. Clustering results indicate that the proposed method outperforms the previous methods, demonstrating the application of the RM-GNMF-based method in cancer gene clustering.
Keywords:
robust; manifold; matrix factorization; gene clustering

1. Introduction
With the progressive implementation of human whole-genome and microarray technologies, it is possible to simultaneously observe the expressions of numerous genes in different tissue samples. By analyzing gene expression data, genes with varying expressions in tissues and their relationships may be identified to figure out the pathogenic mechanism of cancers based on genetic changes [1]. Recently, cancer classification based on gene expression data has become a hot research topic in bioinformatics.
Because the analysis of genome-wide expression patterns can provide unique perspectives on the structure of genetic networks, clustering techniques have been used to analyze gene expression data [2,3]. Cluster analysis is one of the most widespread statistical techniques for analyzing massive gene expression data. Its major task is to group genes with similar expressions in order to discover sets of genes with identical features or similar biological functions, so that researchers can acquire a deeper understanding of many biological phenomena such as gene function, development, cancer, and pharmacology [4].
Currently, it has been shown that non-negative matrix factorization (NMF) [5,6] is superior to hierarchical clustering (HC) and the self-organizing map (SOM) [7] when clustering cancer samples from gene expression data. Over the past few years, NMF-based methods have been used for statistically analyzing gene expression data for clustering [8,9,10,11,12,13]. The main idea is to approximately factorize a non-negative data matrix into a product of two non-negative matrices, ensuring that all elements of the factor matrices are non-negative. Consequently, NMF has attracted considerable attention, and various variants of the original NMF have been developed by modifying the objective function or the constraint conditions [14,15,16]. For instance, Cai et al. proposed graph regularized non-negative matrix factorization (GNMF), which exploits the neighboring geometric structure: a nearest-neighbor graph preserves the neighborhood information of the high-dimensional space in the low-dimensional space. GNMF reveals the intrinsic geometrical structure by incorporating a Laplacian regularization term [17], which is effective for solving clustering problems. Subsequently, sparse NMF [18] was proposed, which imposes sparsity constraints on the basis and coefficient matrices produced by NMF so that sparseness is reflected in the data representation. Non-smooth NMF [19] can achieve global or local sparseness [20] by making the basis and encoding matrices sparse simultaneously. To enhance the robustness of the GNMF-based method in gene clustering, we propose an improved robust manifold non-negative matrix factorization (RM-GNMF) that combines the ${l}_{2,1}$-norm and spectral clustering with Laplacian regularization, capturing the internal geometry of the data representations. This facilitates revealing the intrinsic geometric structure of the cancer gene data space.
2. The NMF-Based and GNMF-Based Method
The NMF-based method [5] is a linear, non-negative approximate data description for non-negative matrices. We consider an original matrix X of size $m\times n$, where m represents the number of data features and n represents the number of samples. The NMF method decomposes the matrix X into two non-negative matrices $W\in {R}^{m\times r}$ and $H\in {R}^{n\times r}$, i.e.,
$$X=W{H}^{T},$$
where $r\le \mathrm{min}(m,n)$. For a given decomposition $X=W{H}^{T}$, the n samples can be divided into r classes according to the matrix ${H}^{T}$. Each sample is assigned to the class with the highest metagene expression level, meaning that if ${H}_{ij}^{T}$ is the largest entry in column j, then sample j is assigned to class i.
Using the square of the Euclidean distance between X and $W{H}^{T}$ [21], we have the objective function of the NMF method
$${O}^{NMF}=\left|\right|X-W{H}^{T}{\left|\right|}^{2}=\sum _{ij}({x}_{ij}-\sum _{k=1}^{r}{w}_{ik}{h}_{jk}{)}^{2}.$$
According to the iterative update algorithms [6], the NMF-based method is performed on the basis of multiplicative update rules of W and H given by
$${w}_{ik}\leftarrow {w}_{ik}\frac{{\left(XH\right)}_{ik}}{{\left(W{H}^{T}H\right)}_{ik}},\phantom{\rule{1.em}{0ex}}{h}_{jk}\leftarrow {h}_{jk}\frac{{\left({X}^{T}W\right)}_{jk}}{{\left(H{W}^{T}W\right)}_{jk}}.$$
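As an illustration, the multiplicative updates in Equation (3) can be sketched in a few lines of NumPy. The function name, the random initialization, and the small `eps` guard against division by zero are our own choices, not part of the original algorithm.

```python
import numpy as np

def nmf_multiplicative(X, r, n_iter=300, eps=1e-10, seed=0):
    """Lee & Seung multiplicative updates: X (m x n) ~ W (m x r) @ H.T."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, r)) + 0.1  # strictly positive start
    H = rng.random((n, r)) + 0.1
    for _ in range(n_iter):
        W *= (X @ H) / (W @ (H.T @ H) + eps)
        H *= (X.T @ W) / (H @ (W.T @ W) + eps)
    return W, H

# Toy check on an exactly rank-3 non-negative matrix.
rng = np.random.default_rng(1)
X = rng.random((20, 3)) @ rng.random((3, 12))
W, H = nmf_multiplicative(X, r=3)
rel_err = np.linalg.norm(X - W @ H.T) / np.linalg.norm(X)
```

Because the updates are multiplicative and start from positive matrices, W and H stay non-negative throughout.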
To overcome this limitation of the NMF-based method, Cai et al. [17] proposed the GNMF-based method, in which an affinity graph is constructed to encode the geometrical information, and the matrix factorization is then carried out with respect to the graph structure. In contrast to the NMF-based method, it adds a graph regularization constraint, which retains the local sparse representation of the NMF-based method while preserving the similarity between the original data points after dimensionality reduction.
There are several weighting schemes, such as zero-one weighting, heat kernel weighting, and Gaussian weighting [17]. In what follows, we consider the zero-one weighting described as
$${Q}_{ij}=\left\{\begin{array}{cc}1,& \mathrm{if}\phantom{\rule{4pt}{0ex}}{x}_{i}\in {N}_{k}\left({x}_{j}\right)\phantom{\rule{4pt}{0ex}}\mathrm{or}\phantom{\rule{4pt}{0ex}}{x}_{j}\in {N}_{k}\left({x}_{i}\right),\hfill \\ 0,& \mathrm{otherwise}.\hfill \end{array}\right.$$
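A brute-force sketch of the zero-one weighting in Equation (4), with samples stored as columns. The helper name and the dense pairwise-distance computation are illustrative assumptions (a k-d tree would be preferable for large n).

```python
import numpy as np

def knn_zero_one_weights(X, k):
    """Symmetric zero-one weight matrix: Q_ij = 1 iff x_i is among the
    k nearest neighbours of x_j, or vice versa (columns of X = samples)."""
    n = X.shape[1]
    # Pairwise Euclidean distances between columns.
    D = np.linalg.norm(X[:, :, None] - X[:, None, :], axis=0)
    np.fill_diagonal(D, np.inf)          # a point is not its own neighbour
    nn = np.argsort(D, axis=0)[:k, :]    # k nearest neighbours per column
    Q = np.zeros((n, n))
    for j in range(n):
        Q[nn[:, j], j] = 1.0
    return np.maximum(Q, Q.T)            # the "or" condition -> symmetrise

# Four 1-D points forming two tight pairs.
X = np.array([[0.0, 0.1, 5.0, 5.1]])
Q = knn_zero_one_weights(X, k=1)
```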
Based on the weight matrix Q, we obtain the objective function of the GNMF method given by
$${O}^{GNMF}=||X-W{H}^{T}||^{2}+\lambda Tr\left({H}^{T}LH\right),$$
where $Tr(\cdot )$ denotes the trace of a matrix and $L=D-Q$. D is a diagonal matrix whose entries are the column (or, equivalently, row) sums of Q, ${D}_{jj}={\sum}_{l}{Q}_{jl}$ [22]. The regularization parameter $\lambda \ge 0$ controls the smoothness of the new representation. Minimizing the objective function ${O}^{GNMF}$ with iterative algorithms yields the updating rules
$${w}_{ik}\leftarrow {w}_{ik}\frac{{\left(XH\right)}_{ik}}{{\left(W{H}^{T}H\right)}_{ik}},\phantom{\rule{1.em}{0ex}}{h}_{jk}\leftarrow {h}_{jk}\frac{{({X}^{T}W+\lambda QH)}_{jk}}{{(H{W}^{T}W+\lambda DH)}_{jk}}.$$
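The GNMF updates can be sketched as follows. The graph term enters the H-update through Q in the numerator and D in the denominator, following Cai et al. [17]; all names, the toy graph, and the initialization are illustrative.

```python
import numpy as np

def gnmf(X, Q, r, lam=0.1, n_iter=300, eps=1e-10, seed=0):
    """GNMF multiplicative updates with graph weight matrix Q (n x n)."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    D = np.diag(Q.sum(axis=1))           # degree matrix of the graph
    W = rng.random((m, r)) + 0.1
    H = rng.random((n, r)) + 0.1
    for _ in range(n_iter):
        W *= (X @ H) / (W @ (H.T @ H) + eps)
        H *= (X.T @ W + lam * Q @ H) / (H @ (W.T @ W) + lam * D @ H + eps)
    return W, H

rng = np.random.default_rng(2)
X = rng.random((20, 3)) @ rng.random((3, 12))   # non-negative toy data
Q = np.ones((12, 12)) - np.eye(12)              # fully connected toy graph
W, H = gnmf(X, Q, r=3, lam=0.1)
```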
3. The RM-GNMF-Based Method for Gene Clustering
So far, we have described the NMF-based method and GNMF-based method. In what follows, we seek RM-GNMF gene clustering by making use of the combination of ${l}_{2,1}$-norm and spectral clustering with the Laplacian regularization.
3.1. The ${l}_{2,1}$-Norm
The ${l}_{2,1}$-norm of a matrix was initially employed as a rotational-invariant ${l}_{1}$-norm [23] and has been used for multi-task learning [24,25] and tensor factorization [26]. Instead of the ${l}_{2}$-norm-based loss function, which is sensitive to outliers, we resort to the ${l}_{2,1}$-norm-based loss function and regularization [23], for which convergence has been proved.
To overcome the drawbacks of the NMF-based method and enhance the robustness of the GNMF-based method, we employ the ${l}_{2,1}$-norm for the matrix factorization in the RM-GNMF-based method. For a non-negative matrix X of size $m\times n$, the ${l}_{2,1}$-norm of X is defined as
$$||X|{|}_{2,1}=\sum _{i=1}^{n}||{x}_{i}|{|}_{2},$$
where the data vectors ${x}_{i}$ are arranged in columns, and the ${l}_{2,1}$-norm first calculates the ${l}_{2}$-norm of each column vector. The matrix factorization task then becomes
$$\underset{H\ge 0}{\mathrm{min}}||X-W{H}^{T}|{|}_{2,1}.$$
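The ${l}_{2,1}$-norm of Equation (7) is a one-liner in NumPy (columns are the data vectors):

```python
import numpy as np

def l21_norm(X):
    """||X||_{2,1}: sum of the l2-norms of the columns of X."""
    return np.linalg.norm(X, axis=0).sum()

# Columns (3,4) and (0,0): norms 5 and 0, so the l2,1-norm is 5.
val = l21_norm(np.array([[3.0, 0.0], [4.0, 0.0]]))
```

Unlike the squared Frobenius loss, errors enter linearly per sample, so a single outlier column cannot dominate the objective quadratically.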
3.2. Spectral-Based Manifold Learning for the Constrained GNMF
The spectral method is a classical analytic and algebraic tool in mathematics, widely used for low-dimensional representation and clustering of high-dimensional data [27,28]. A relational matrix describing the similarity of each pair of data points is defined for the given sample dataset, and the eigenvalues and eigenvectors of this matrix are calculated. Appropriate eigenvectors are then selected to obtain a low-dimensional embedding of the data. Several matrices are defined on a given graph, such as the adjacency matrix, the Laplacian matrix, the degree matrix, and so on [22].
Based on the spectra of matrices associated with the graph, spectral theory further reveals the information contained in the graph and establishes a connection between discrete space and continuous space through techniques of geometry, analysis, and algebra. It has a wide range of applications in manifold learning. In this section, the p-nearest-neighbor method is used to establish the relationship between each data point and its neighborhood.
For a data matrix $X\in {R}^{m\times n}$, we treat each column of X as a data point and each data point as a vertex. A p-nearest-neighbor graph G can be constructed with n vertices. Then the symmetric weight matrix $Q\in {R}^{n\times n}$ is generated, where the element ${q}_{ij}$ denotes the weight of the edge joining vertices i and j, given by
$${q}_{ij}=\left\{\begin{array}{cc}1,& \mathrm{if}\phantom{\rule{4pt}{0ex}}{x}_{i}\in {N}_{p}\left({x}_{j}\right)\phantom{\rule{4pt}{0ex}}\mathrm{or}\phantom{\rule{4pt}{0ex}}{x}_{j}\in {N}_{p}\left({x}_{i}\right),\hfill \\ 0,& \mathrm{otherwise},\hfill \end{array}\right.$$
where ${N}_{p}\left({x}_{i}\right)$ denotes the set of p-nearest neighbors of ${x}_{i}$. The matrix Q clearly represents the affinity between the data points.
There is a manifold assumption: if two data points ${x}_{i}$ and ${x}_{j}$ are close in the intrinsic geometric structure of the data distribution, then their representations under a new basis will also be close [29]. We therefore define the relationship as
$$\underset{{x}_{p}}{\mathrm{min}}\sum _{ij}||{x}_{i}-{x}_{j}|{|}^{2}{q}_{ij},$$
where the ${x}_{i}$ here denote the mapped (new) representations of the original data points. The degree matrix D is a diagonal matrix with ${d}_{ii}={\sum}_{j}{q}_{ij}$; clearly, ${d}_{ii}$ is the sum of all the similarities related to ${x}_{i}$. The graph Laplacian matrix is then given by
$$L=D-Q.$$
The graph embedding can be written as
$$\begin{array}{ccc}\underset{x}{\mathrm{min}}\sum _{ij}\left|\right|{x}_{i}-{x}_{j}{\left|\right|}^{2}{q}_{ij}\hfill & =& \underset{X}{\mathrm{min}}tr\left(X(D-Q){X}^{T}\right)\hfill \\ & =& \underset{X}{\mathrm{min}}tr\left(XL{X}^{T}\right).\hfill \end{array}$$
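The trace identity in Equation (12) can be checked numerically; note that the pairwise sum equals $2\,tr(XL{X}^{T})$, the constant factor of 2 being irrelevant to the minimization. The snippet below is a sketch with random symmetric affinities.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.random((3, 6))                     # columns are data points
Q = rng.random((6, 6))
Q = (Q + Q.T) / 2                          # symmetric affinity matrix
D = np.diag(Q.sum(axis=1))                 # degree matrix
L = D - Q                                  # graph Laplacian, Eq. (11)

# Left-hand side: weighted sum of squared pairwise distances.
lhs = sum(Q[i, j] * np.sum((X[:, i] - X[:, j]) ** 2)
          for i in range(6) for j in range(6))
# Right-hand side: trace form, up to the factor of 2.
rhs = 2 * np.trace(X @ L @ X.T)
```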
In the RM-GNMF-based method, we combine the GNMF-based method with spectral clustering, resulting in the ${l}_{2,1}$-norm constrained GNMF
$${O}^{RMGNMF}=||X-W{H}^{T}|{|}_{2,1}+\lambda Tr\left({H}^{T}LH\right),$$
where $\lambda \ge 0$ is the regularization parameter. We resort to the augmented Lagrange multiplier (ALM) method to solve this problem.
Introducing an auxiliary variable $Z=X-W{H}^{T}$, we rewrite ${O}^{RMGNMF}$ in Equation (13) as
$$\underset{W,H,Z}{\mathrm{min}}||Z|{|}_{2,1}+\alpha Tr\left({H}^{T}LH\right),$$
subject to the constraints $Z-X+W{H}^{T}=0$ and ${H}^{T}H=I$. We then define the augmented Lagrangian function
$${L}_{\mu}(Z,W,H,\mathsf{\Lambda})=||Z|{|}_{2,1}+Tr\left[{\mathsf{\Lambda}}^{T}(Z-X+W{H}^{T})\right]+\frac{\mu}{2}||Z-X+W{H}^{T}|{|}_{F}^{2}+\alpha Tr\left({H}^{T}LH\right),$$
subject to the constraint ${H}^{T}H=I$, where $\mu $ is the penalty parameter (the step size of the update) and $\mathsf{\Lambda}$ is the Lagrangian multiplier.
To optimize Equation (15), we complete the square and rewrite the objective function as
$${L}_{\mu}(Z,W,H,\mathsf{\Lambda})=||Z|{|}_{2,1}+\frac{\mu}{2}||Z-X+W{H}^{T}+\frac{\mathsf{\Lambda}}{\mu}|{|}_{F}^{2}+\alpha Tr\left({H}^{T}LH\right),$$
subject to the constraint ${H}^{T}H=I$.
3.3. Computation of Z
For given W and H, the update of Z amounts to the following subproblem
$${Z}^{r+1}=\mathrm{arg}\underset{Z}{\mathrm{min}}||Z|{|}_{2,1}+\frac{\mu}{2}||Z-(X-{W}^{r}{\left({H}^{r}\right)}^{T}-\frac{{\mathsf{\Lambda}}^{r}}{\mu})|{|}_{F}^{2}.$$
We need the following lemma to solve this subproblem; see the Appendix for a detailed proof.
Lemma 1.
Given a matrix $W=[{w}_{1},\cdots ,{w}_{n}]\in {R}^{m\times n}$ and a positive scalar λ, ${Z}^{*}$ is the optimal solution of
$$\underset{Z}{\mathrm{min}}\frac{1}{2}||Z-W|{|}_{F}^{2}+\lambda ||Z|{|}_{2,1},$$
and the i-th column of ${Z}^{*}$ is given by
$${Z}^{*}(:,i)=\left\{\begin{array}{cc}\frac{||{w}_{i}||-\lambda}{||{w}_{i}||}{w}_{i},\phantom{\rule{4pt}{0ex}}& \mathit{if}\phantom{\rule{4pt}{0ex}}\lambda <||{w}_{i}||,\hfill \\ 0,& \mathit{otherwise}.\hfill \end{array}\right.$$
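Lemma 1 is a column-wise shrinkage operator: each column is scaled toward zero by $\mathrm{max}(0,1-\lambda /||{w}_{i}||)$. A sketch (the function name and the small floor on the norms are our additions):

```python
import numpy as np

def column_shrink(W, lam):
    """Closed-form minimiser of (1/2)||Z - W||_F^2 + lam * ||Z||_{2,1}:
    scale each column w_i by max(0, 1 - lam / ||w_i||_2)."""
    norms = np.linalg.norm(W, axis=0)
    scale = np.maximum(0.0, 1.0 - lam / np.maximum(norms, 1e-12))
    return W * scale

# Column (3,4) has norm 5 > lam, so it shrinks by 1 - 1/5 = 0.8;
# column (0.1, 0.1) has norm < lam, so it collapses to zero.
Z = column_shrink(np.array([[3.0, 0.1], [4.0, 0.1]]), lam=1.0)
```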
3.4. Computation of W and H
Given the other variables, we solve for the optimal W. The update of W amounts to solving
$${W}^{r+1}=\mathrm{arg}\phantom{\rule{4pt}{0ex}}\underset{W}{\mathrm{min}}\frac{\mu}{2}||{Z}^{r}-X+W{\left({H}^{r}\right)}^{T}+\frac{\mathsf{\Lambda}}{\mu}|{|}_{F}^{2}.$$
Let $M=X-{Z}^{r}-\frac{\mathsf{\Lambda}}{\mu}$. The problem in Equation (20) can then be rewritten as
$${W}^{r+1}=\mathrm{arg}\phantom{\rule{4pt}{0ex}}\underset{W}{\mathrm{min}}\frac{\mu}{2}||W{\left({H}^{r}\right)}^{T}-M|{|}_{F}^{2}.$$
Setting the partial derivative with respect to W to zero and using ${H}^{T}H=I$, we obtain
$${W}^{r+1}=M{H}^{r}.$$
Next, we derive the optimal H. Taking $W=MH$, the update problem of H can be expressed as
$${H}^{r+1}=\mathrm{arg}\underset{H}{\mathrm{min}}\frac{\mu}{2}||M-MH{H}^{T}|{|}_{F}^{2}+\alpha Tr\left({H}^{T}LH\right),$$
subject to the constraint ${H}^{T}H=I$. We have
$$\begin{array}{ccc}{H}^{r+1}\hfill & =& \mathrm{arg}\underset{H}{\mathrm{min}}||M-MH{H}^{T}|{|}_{F}^{2}+\frac{2\alpha}{\mu}Tr\left({H}^{T}LH\right)\hfill \\ & =& \mathrm{arg}\underset{H}{\mathrm{min}}Tr\left[{H}^{T}(-{M}^{T}M+\frac{2\alpha}{\mu}L)H\right].\hfill \end{array}$$
Therefore, the optimal ${H}^{r+1}$ is obtained by computing the eigenvectors of $-{M}^{T}M+\frac{2\alpha}{\mu}L$ associated with the k smallest eigenvalues:
$${H}^{r+1}=({h}_{1},\cdots ,{h}_{k}).$$
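A sketch of this H-update: take the k eigenvectors with the smallest eigenvalues of the symmetric matrix $-{M}^{T}M+\frac{2\alpha}{\mu}L$; they are orthonormal, so the constraint ${H}^{T}H=I$ holds automatically. The function name and the dense `eigh` solver are illustrative choices.

```python
import numpy as np

def update_H(M, L, alpha, mu, k):
    """H-update of Eq. (24): k smallest-eigenvalue eigenvectors, as columns."""
    A = -M.T @ M + (2 * alpha / mu) * L   # symmetric n x n matrix
    _, vecs = np.linalg.eigh(A)           # eigh returns ascending eigenvalues
    return vecs[:, :k]                    # H^{r+1} = (h_1, ..., h_k)

rng = np.random.default_rng(0)
M = rng.random((5, 8))
Q = rng.random((8, 8)); Q = (Q + Q.T) / 2
L = np.diag(Q.sum(axis=1)) - Q
H = update_H(M, L, alpha=0.05, mu=1e-3, k=3)
```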
3.5. Updating of $\mathsf{\Lambda}$ and $\mu $
The updates of $\mathsf{\Lambda}$ and $\mu $ can be described as
$${\mathsf{\Lambda}}^{r+1}={\mathsf{\Lambda}}^{r}+\mu ({Z}^{r+1}-X+{W}^{r}{\left({H}^{r}\right)}^{T}),$$
$${\mu}^{r+1}=p{\mu}^{r},$$
where p is the scaling factor of the penalty update. The detailed process of the RM-GNMF-based method is listed in Algorithm 1.
Algorithm 1: The RM-GNMF-based Algorithm
Input: the dataset $X=[{x}_{1},{x}_{2},\cdots ,{x}_{n}]\in {R}^{m\times n}$, a predefined number of clusters k, parameters $\mu $ and $\lambda $, the nearest-neighbor graph parameter p, and the maximum number of iterations ${t}_{max}$.
Initialization: $Z=\mathsf{\Lambda}=0$, ${W}_{0}\in {R}^{m\times k}$, ${H}_{0}\in {R}^{k\times n}$, $t=0$.
Repeat:
1. Fix the other variables and update Z by formula (17);
2. Fix the other variables and update W by $W=(X-Z-\frac{\mathsf{\Lambda}}{\mu})H$;
3. Update H by $H=U{V}^{T}$, where U and V are the left and right singular vectors of the SVD decomposition;
4. Fix the other variables and update $\mathsf{\Lambda}$ and $\mu $ by formulas (26) and (27);
5. $t=t+1$;
Until $t\ge {t}_{max}$.
Output: matrix $W\in {R}^{m\times k}$, matrix $H\in {R}^{k\times n}$.
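Putting the pieces together, Algorithm 1 can be sketched compactly as below. This is a hedged reading of the paper: we follow the sign convention of Equation (16) (so $M=X-Z-\mathsf{\Lambda}/\mu $) and the eigenvector form of the H-update from Equation (24) rather than the SVD phrasing of the algorithm box; the initialization, solver, and default parameter values are illustrative assumptions.

```python
import numpy as np

def rm_gnmf(X, Q, k, alpha=0.05, mu=1e-3, p=1.1, t_max=40, seed=0):
    """Compact sketch of the RM-GNMF ALM loop (Algorithm 1)."""
    m, n = X.shape
    L = np.diag(Q.sum(axis=1)) - Q            # graph Laplacian
    Z = np.zeros((m, n))
    Lam = np.zeros((m, n))                    # Lagrangian multiplier
    rng = np.random.default_rng(seed)
    H = np.linalg.qr(rng.random((n, k)))[0]   # H0 with H^T H = I
    W = rng.random((m, k))
    for _ in range(t_max):
        # Z-update: column-wise shrinkage of X - W H^T - Lam/mu (Lemma 1,
        # with shrinkage parameter 1/mu).
        V = X - W @ H.T - Lam / mu
        norms = np.maximum(np.linalg.norm(V, axis=0), 1e-12)
        Z = V * np.maximum(0.0, 1.0 - (1.0 / mu) / norms)
        # W-update: W = M H with M = X - Z - Lam/mu.
        M = X - Z - Lam / mu
        W = M @ H
        # H-update: k smallest-eigenvalue eigenvectors (Eq. (24)).
        _, vecs = np.linalg.eigh(-M.T @ M + (2 * alpha / mu) * L)
        H = vecs[:, :k]
        # Multiplier and penalty updates, Eqs. (26)-(27).
        Lam = Lam + mu * (Z - X + W @ H.T)
        mu = p * mu
    return W, H

rng = np.random.default_rng(3)
X = rng.random((10, 12))
Q = np.triu(rng.random((12, 12)) < 0.3, 1).astype(float)
Q = Q + Q.T                                   # symmetric 0/1 affinity
W, H = rm_gnmf(X, Q, k=2)
```

Each sample is then assigned to a cluster, e.g. from the dominant entry of the corresponding row of H (or by running k-means on the rows of H, as is common in spectral clustering).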
4. Results and Discussion
In this section, we evaluate the performance of the proposed method on the three gene expression datasets. We compare the RM-GNMF-based method with the NMF-based method [6], the ${l}_{2,1}$-NMF-based method [23], the LNMF-based method [20], and the GNMF-based method [17].
4.1. Datasets
To evaluate the performance of the proposed RM-GNMF algorithm, clustering experiments were conducted on several gene expression datasets of cancer patients. Three classical datasets were used in the experiments: leukemia [1], colon, and GLI_85 [30]. These gene expression datasets were downloaded from http://featureselection.asu.edu/datasets.php. The colon cancer dataset consists of the gene expression profiles of 2000 genes for 62 tissue samples, of which 40 are colon cancer tissues and 22 are normal tissues. The leukemia dataset consists of 7129 genes and 72 samples (47 ALL and 25 AML).
A brief description of the experimental datasets is given in Table 1. More detailed information on these datasets can be found in the relevant references, and the datasets are available for download from the website above.
4.2. Evaluation Metrics
For the sake of evaluating the clustering results, we use the clustering accuracy and normalized mutual information (NMI) to demonstrate the performance of the proposed algorithm.
Clustering accuracy can be calculated as
$$ACC=\frac{{\sum}_{i=1}^{n}\delta (map\left({c}_{i}\right),{l}_{i})}{n},$$
where ${c}_{i}$ is the cluster label of ${x}_{i}$, ${l}_{i}$ is the true class label of the i-th sample, n denotes the total number of samples, and $\delta (map\left({c}_{i}\right),{l}_{i})$ is a delta function: it equals 1 if $map\left({c}_{i}\right)={l}_{i}$ and 0 otherwise, where $map\left({c}_{i}\right)$ is the mapping function that maps the cluster label ${c}_{i}$ to the actual label ${l}_{i}$. We can find the best mapping by the Kuhn–Munkres method [31]. NMI can be described as
$$NMI=\frac{{\sum}_{i=1}^{N}{\sum}_{j=1}^{N}{n}_{i,j}\mathrm{log}\frac{n\phantom{\rule{0.166667em}{0ex}}{n}_{i,j}}{{n}_{i}{\widehat{n}}_{j}}}{\sqrt{\left({\sum}_{i=1}^{N}{n}_{i}\mathrm{log}\frac{{n}_{i}}{n}\right)\left({\sum}_{j=1}^{N}{\widehat{n}}_{j}\mathrm{log}\frac{{\widehat{n}}_{j}}{n}\right)}},$$
where ${n}_{i}$ is the size of the i-th cluster, ${\widehat{n}}_{j}$ is the size of the j-th class, ${n}_{i,j}$ is the number of data points in their intersection, and N denotes the number of clusters. We performed 100 runs for each target feature dimension and report the means of the ACC and NMI values as the experimental results.
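A sketch of the ACC computation in Equation (28). The paper finds the best mapping with the Kuhn–Munkres method [31]; since the datasets here have only two classes, the brute-force search over label permutations shown below is equivalent (and is an assumption of this sketch).

```python
import numpy as np
from itertools import permutations

def clustering_accuracy(labels_true, labels_pred):
    """ACC: fraction of samples correct under the best one-to-one mapping
    of cluster labels to class labels (brute force over permutations)."""
    labels_true = np.asarray(labels_true)
    labels_pred = np.asarray(labels_pred)
    classes = np.unique(labels_true)
    clusters = np.unique(labels_pred)
    best = 0.0
    for perm in permutations(classes):
        mapping = dict(zip(clusters, perm))
        mapped = np.array([mapping[c] for c in labels_pred])
        best = max(best, float(np.mean(mapped == labels_true)))
    return best

# Perfect clustering up to label swap, and a clustering with one mistake.
acc_swapped = clustering_accuracy([0, 0, 1, 1], [1, 1, 0, 0])
acc_partial = clustering_accuracy([0, 0, 1, 1], [1, 0, 0, 0])
```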
4.3. Parameter Selection
The RM-GNMF-based method involves two essential parameters, i.e., the regularization parameter $\lambda $ and the regularity coefficient $\mu $ determining the penalty for infeasibility.
We set the parameters $\lambda $ and $\mu $ in the ranges $\lambda \in \{0.001,0.005,0.01,0.05,0.1,0.5,1\}$ and $\mu \in \{{10}^{-1},{10}^{-2},{10}^{-3},{10}^{-4},{10}^{-5},{10}^{-6},{10}^{-7}\}$, and use cross-validation to obtain the best parameter values, $\lambda =0.05$ and $\mu ={10}^{-3}$. To analyze intuitively how the parameters $\lambda $ and $\mu $ of the RM-GNMF-based method influence clustering accuracy, Figure 1 shows the variation in clustering accuracy as the two parameters are modified; the three subgraphs correspond to the three gene expression datasets. As can be seen in Figure 1, $\mu ={10}^{-3}$ yields a higher ACC. As the regularization parameter $\lambda $ changes, the ACC varies relatively little, and the clustering accuracy is higher when $\lambda $ is smaller. Therefore, we set $\lambda =0.05$ and $\mu ={10}^{-3}$ in the follow-up experiments.
4.4. Clustering Results
In Table 2, we report the clustering results on the colon, GLI_85, and leukemia datasets. Reported values are the means of the clustering results over 100 runs of the different NMF methods.
It can be seen that the RM-GNMF-based method outperforms the original NMF-based method and achieves the best performance among the compared methods on all three datasets. The clustering accuracies of the RM-GNMF-based method are $66.13\%$, $75.29\%$, and $65.28\%$ for the colon, GLI_85, and leukemia datasets, respectively.
Our tests on several gene expression profiling datasets of cancer patients consistently indicate that the RM-GNMF-based method achieves significant improvements in comparison with the NMF-based method, the ${l}_{2,1}$-NMF-based method, the LNMF-based method, and the GNMF-based method, in terms of cancer prediction accuracy.
As shown in Figure 2, the RM-GNMF-based method consistently yields better clustering results than the other NMF-based methods on the three original datasets.
To demonstrate the robustness of our approach to data changes, we add uniform noise to the three gene expression datasets. A disturbed matrix ${Y}_{noise}$ is generated by adding independent uniform noise, defined as
$${Y}_{noise}=Y+r,$$
where Y is the original matrix, r is a random number drawn from a uniform distribution on the interval $[0,max]$, and $max$ is the maximum expression value of Y.
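A sketch of the perturbation in Equation (30). We interpret r as element-wise uniform noise on $[0,max]$, which is an assumption on our part, since the text only calls r "a random number".

```python
import numpy as np

def add_uniform_noise(Y, seed=0):
    """Y_noise = Y + r, with r drawn uniformly from [0, max(Y)]
    independently for each entry (element-wise interpretation)."""
    rng = np.random.default_rng(seed)
    r = rng.uniform(0.0, Y.max(), size=Y.shape)
    return Y + r

rng = np.random.default_rng(4)
Y = rng.random((4, 5))
Y_noise = add_uniform_noise(Y)
```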
The experimental results with added noise are shown in Figure 3. The clustering results of the RM-GNMF algorithm remain stable under the added noise, which shows that the RM-GNMF algorithm is robust.
To verify the results obtained by the algorithms in the experiments, we imported the clustering results of the compared methods into the STAC web platform to perform a statistical test (http://tec.citius.usc.es/stac/). We selected the non-parametric Friedman test for multiple groups, with a significance level of $0.05$. The analysis results are presented in Table 3 and Table 4.
From the above test results, it can be concluded that ${H}_{0}$ is rejected. Hence, we conclude that the clustering results of the five algorithms are significantly different.
5. Conclusions
We have proposed the RM-GNMF-based method with the ${l}_{2,1}$-norm and spectral-based manifold learning. This algorithm is suitable for cancer gene expression data clustering with an elegant geometric structure. Our tests on several gene expression profiling datasets of cancer patients consistently indicate that the RM-GNMF-based method achieves significant improvements in comparison with the NMF-based method, the ${l}_{2,1}$-NMF-based method, the LNMF-based method, and the GNMF-based method, in terms of cancer prediction accuracy and robustness.
Acknowledgments
This work was supported by the National Natural Science Foundation of China (Grant Nos. 61379153, 61572529, 61572284).
Author Contributions
Rong Zhu and Jin-Xing Liu conceived and designed the experiments; Rong Zhu performed the experiments; Rong Zhu and Yuan-Ke Zhang analyzed the data; Rong Zhu and Ying Guo wrote the paper.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
Lemma A1.
With the given matrix $W=[{w}_{1},\cdots ,{w}_{n}]\in {R}^{m\times n}$ and the positive scalar λ, ${Z}^{*}$ is the optimal solution of
$$\underset{Z}{\mathrm{min}}\frac{1}{2}||Z-W|{|}_{F}^{2}+\lambda ||Z|{|}_{2,1},$$
and the i-th column of ${Z}^{*}$ can be calculated as
$${Z}^{*}(:,i)=\left\{\begin{array}{cc}\frac{||{w}_{i}||-\lambda}{||{w}_{i}||}{w}_{i},\phantom{\rule{4pt}{0ex}}& \mathit{if}\phantom{\rule{4pt}{0ex}}\lambda <||{w}_{i}||,\hfill \\ 0,& \mathit{otherwise}.\hfill \end{array}\right.$$
Proof.
The objective function in Equation (A1) is equivalent to
$$\underset{Z}{\mathrm{min}}\frac{1}{2}\sum _{i=1}^{n}||{z}_{i}-{w}_{i}|{|}_{2}^{2}+\lambda \sum _{i=1}^{n}||{z}_{i}|{|}_{2},$$
which can be solved in a decoupled manner:
$$\underset{{z}_{i}}{\mathrm{min}}\frac{1}{2}||{z}_{i}-{w}_{i}|{|}_{2}^{2}+\lambda ||{z}_{i}|{|}_{2}.$$
Taking the derivative with respect to ${z}_{i}$, we note that
$$\frac{\partial ||{z}_{i}|{|}_{2}}{\partial {z}_{i}}=\left\{\begin{array}{cc}r,& \mathrm{if}\phantom{\rule{1.em}{0ex}}{z}_{i}=0,\hfill \\ \frac{{z}_{i}}{\sqrt{{z}_{i}^{T}{z}_{i}}},& \mathrm{otherwise},\hfill \end{array}\right.$$
where r is a subgradient vector with $||r|{|}_{2}\le 1$. For ${z}_{i}=0$, the optimality condition is
$$-{w}_{i}+\lambda r=0,$$
which holds when $\lambda \ge ||{w}_{i}||$. For ${z}_{i}\ne 0$, we get
$${z}_{i}-{w}_{i}+\lambda \frac{{z}_{i}}{\sqrt{{z}_{i}^{T}{z}_{i}}}=0.$$
Combining Equation (A6) with Equation (A7), we obtain
$${z}_{i}=\alpha {w}_{i},$$
where $\alpha =\frac{||{z}_{i}|{|}_{2}}{||{z}_{i}|{|}_{2}+\lambda}>0$. Plugging ${z}_{i}$ from Equation (A8) into Equation (A7), we solve for $\alpha $ and substitute it back into Equation (A8), which yields
$${z}_{i}=\left(1-\frac{\lambda}{||{w}_{i}||}\right){w}_{i}.$$
□
References
- Golub, T.R.; Slonim, D.K.; Tamayo, P.; Huard, C.; Gaasenbeek, M.; Mesirov, J.P.; Coller, H.; Loh, M.L.; Downing, J.R.; Caligiuri, M.A.; et al. Molecular classification of cancer: Class discovery and class prediction by gene expression monitoring. Science 1999, 286, 531–537. [Google Scholar] [CrossRef] [PubMed]
- Jiang, D.; Tang, C.; Zhang, A. Cluster analysis for gene expression data: Survey. IEEE Trans. Knowl. Data Eng. 2004, 16, 1370–1386. [Google Scholar] [CrossRef]
- Devarajan, K. Nonnegative matrix factorization: an analytical and interpretive tool in computational biology. PLoS Comput. Biol. 2008, 4, e1000029. [Google Scholar] [CrossRef] [PubMed]
- Luo, F.; Khan, L.; Bastani, F.; Yen, I.L.; Zhou, J. A dynamically growing self-organizing tree (DGSOT) for hierarchical clustering gene expression profiles. Bioinformatics 2004, 20, 2605–2617. [Google Scholar] [CrossRef] [PubMed]
- Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788. [Google Scholar] [PubMed]
- Lee, D.D.; Seung, H.S. Algorithms for non-negative matrix factorization. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2001; pp. 556–562. [Google Scholar]
- Brunet, J.P.; Tamayo, P.; Golub, T.R.; Mesirov, J.P. Metagenes and molecular pattern discovery using matrix factorization. Proc. Natl. Acad. Sci. USA 2004, 101, 4164–4169. [Google Scholar] [CrossRef] [PubMed]
- Li, T.; Ding, C.H. Nonnegative Matrix Factorizations for Clustering: A Survey. In Data Clustering: Algorithms and Applications; CRC Press: Boca Raton, FL, USA, 2013. [Google Scholar]
- Ding, C.; He, X.; Simon, H.D. On the equivalence of nonnegative matrix factorization and spectral clustering. In Proceedings of the 2005 SIAM International Conference on Data Mining; SIAM: Philadelphia, PA, USA, 2005; pp. 606–610. [Google Scholar]
- Kuang, D.; Ding, C.; Park, H. Symmetric nonnegative matrix factorization for graph clustering. In Proceedings of the 2012 SIAM International Conference on Data Mining; SIAM: Philadelphia, PA, USA, 2012; pp. 106–117. [Google Scholar]
- Akata, Z.; Thurau, C.; Bauckhage, C. Non-negative matrix factorization in multimodality data for segmentation and label prediction. In Proceedings of the 16th Computer vision winter workshop, Mitterberg, Austria, 2–4 February 2011. [Google Scholar]
- Liu, J.; Wang, C.; Gao, J.; Han, J. Multi-view clustering via joint nonnegative matrix factorization. In Proceedings of the 2013 SIAM International Conference on Data Mining; SIAM: Philadelphia, PA, USA, 2013; pp. 252–260. [Google Scholar]
- Singh, A.P.; Gordon, G.J. Relational learning via collective matrix factorization. In Proceedings of the 14th ACM SIGKDD international conference on Knowledge discovery and Data Mining, Las Vegas, Nevada, USA, 24–27 August 2008; pp. 650–658. [Google Scholar]
- Huang, Z.; Zhou, A.; Zhang, G. Non-negative matrix factorization: A short survey on methods and applications. In Computational Intelligence and Intelligent Systems; Springer: Berlin, Germany, 2012; pp. 331–340. [Google Scholar]
- Wang, Y.X.; Zhang, Y.J. Nonnegative matrix factorization: A comprehensive review. IEEE Trans. Knowl. Data Eng. 2013, 25, 1336–1353. [Google Scholar] [CrossRef]
- Kim, J.; He, Y.; Park, H. Algorithms for nonnegative matrix and tensor factorizations: A unified view based on block coordinate descent framework. J. Glob. Optim. 2014, 58, 285–319. [Google Scholar] [CrossRef]
- Cai, D.; He, X.; Han, J.; Huang, T.S. Graph regularized nonnegative matrix factorization for data representation. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 1548–1560. [Google Scholar] [PubMed]
- Kim, H.; Park, H. Sparse non-negative matrix factorizations via alternating non-negativity-constrained least squares for microarray data analysis. Bioinformatics 2007, 23, 1495–1502. [Google Scholar] [CrossRef] [PubMed]
- Pascual-Montano, A.; Carazo, J.M.; Kochi, K.; Lehmann, D.; Pascual-Marqui, R.D. Nonsmooth nonnegative matrix factorization (nsNMF). IEEE Trans. Pattern Anal. Mach. Intell. 2006, 28, 403–415. [Google Scholar] [CrossRef] [PubMed]
- Li, S.Z.; Hou, X.W.; Zhang, H.J.; Cheng, Q.S. Learning spatially localized, parts-based representation. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), Kauai, HI, USA, 8–14 December 2001. [Google Scholar]
- Paatero, P.; Tapper, U. Positive matrix factorization: A non-negative factor model with optimal utilization of error estimates of data values. Environmetrics 1994, 5, 111–126. [Google Scholar] [CrossRef]
- Chung, F.R. Spectral Graph Theory; Number 92; American Mathematical Soc.: Providence, RI, USA, 1997. [Google Scholar]
- Nie, F.; Huang, H.; Cai, X.; Ding, C.H. Efficient and robust feature selection via joint l2, 1-norms minimization. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2010; pp. 1813–1821. [Google Scholar]
- Argyriou, A.; Evgeniou, T.; Pontil, M. Multi-task feature learning. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2007; pp. 41–48. [Google Scholar]
- Guyon, I.; Weston, J.; Barnhill, S.; Vapnik, V. Gene selection for cancer classification using support vector machines. Mach. Learn. 2002, 46, 389–422. [Google Scholar] [CrossRef]
- Huang, H.; Ding, C. Robust tensor factorization using r_{1} norm. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Anchorage, AK, USA, 23–28 June 2008. [Google Scholar]
- Tao, H.; Hou, C.; Nie, F.; Zhu, J.; Yi, D. Scalable Multi-View Semi-Supervised Classification via Adaptive Regression. IEEE Trans. Image Process. 2017, 26, 4283–4296. [Google Scholar] [CrossRef] [PubMed]
- Shawe-Taylor, J.; Cristianini, N.; Kandola, J.S. On the concentration of spectral properties. In Advances in Neural Information Processing Systems; The MIT Press: Cambridge, MA, USA, 2002; pp. 511–517. [Google Scholar]
- Yin, M.; Gao, J.; Lin, Z. Laplacian regularized low-rank representation and its applications. IEEE Trans. Pattern Anal. Mach. Intell. 2016, 38, 504–517. [Google Scholar] [CrossRef] [PubMed]
- Li, J.; Cheng, K.; Wang, S.; Morstatter, F.; Robert, T.; Tang, J.; Liu, H. Feature Selection: A Data Perspective. arXiv, 2017. [Google Scholar]
- Lovász, L.; Plummer, M.D. Matching Theory; American Mathematical Soc.: Providence, RI, USA, 2009; Volume 367. [Google Scholar]
Table 1. Description of the gene expression datasets used in the experiments.

Data Sets | Instances | Features | Classes |
---|---|---|---|
Colon | 62 | 2000 | 2 |
GLI_85 | 85 | 22,283 | 2 |
Leukemia | 72 | 7070 | 2 |
Table 2.
Clustering results on different datasets. NMF: non-negative matrix factorization; GNMF: graph regularized non-negative matrices factorization; RM-GNMF: robust manifold non-negative matrix factorization; NMI: normalized mutual information.
Methods | Colon ACC | Colon NMI | GLI_85 ACC | GLI_85 NMI | Leukemia ACC | Leukemia NMI |
---|---|---|---|---|---|---|
NMF | 0.6290 | 0.0110 | 0.6088 | 0.1906 | 0.6389 | 0.0193 |
${L}_{2,1}$-NMF | 0.5323 | 0.0048 | 0.6088 | 0.1916 | 0.6328 | 0.0258 |
LNMF | 0.6129 | 0.0181 | 0.5294 | 0.0011 | 0.6250 | 0.0306 |
GNMF | 0.6290 | 0.0110 | 0.6000 | 0.1584 | 0.6389 | 0.0193 |
RM-GNMF | 0.6613 | 0.0220 | 0.7529 | 0.1925 | 0.6528 | 0.0369 |
Table 3. Friedman test result (significance level 0.05).

Statistic | p-Value | Result |
---|---|---|
7.00000 | 0.01003 | ${H}_{0}$ is rejected |
Table 4. Friedman test ranking of the algorithms.

Rank | Algorithm |
---|---|
1.33333 | LNMF |
2.33333 | NMF |
2.66667 | ${L}_{2,1}$-NMF |
3.66667 | GNMF |
5.00000 | RM-GNMF |
© 2017 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).