Article

Stable Analysis of Compressive Principal Component Pursuit

Qingshan You 1 and Qun Wan 2
1
School of Computer Science, Civil Aviation Flight University of China, Guanghan 618307, China
2
School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
*
Author to whom correspondence should be addressed.
Algorithms 2017, 10(1), 29; https://doi.org/10.3390/a10010029
Submission received: 5 January 2017 / Revised: 10 February 2017 / Accepted: 17 February 2017 / Published: 21 February 2017

Abstract

Compressive principal component pursuit (CPCP) recovers a target matrix that is a superposition of low-complexity structures from a small set of linear measurements. Previous works mainly focus on the analysis of existence and uniqueness. In this paper, we address its stability. We prove that the solution to the related convex program of CPCP gives an estimate that is stable to small entry-wise noise. We also provide numerical simulation results to support our analysis. The numerical results show that the solution to the related convex program is stable to small entry-wise noise under broad conditions.

1. Introduction

Recently, there has been rapidly increasing interest in recovering a target matrix that is a superposition of low-rank and sparse components from a small set of linear measurements. In many cases, this problem is closely related to matrix completion [1,2,3], which arises in a number of fields, such as medical imaging [4,5], seismology [6], computer vision [7,8], and Kalman filtering [9]. Mathematically, there exists a large-scale data matrix $M = L_0 + S_0$, where $L_0$ is a low-rank matrix and $S_0$ is a sparse matrix. One of the important problems here is how to extract the intrinsic low-dimensional structure from a small set of linear measurements. In a recent paper [10], E. J. Candès et al. proved that the low-rank and sparse components can be recovered, provided that the rank of the low-rank component is not too large and that the sparse component is reasonably sparse. More importantly, they proved that these two components can be recovered by solving a simple convex optimization problem. In [11], John Wright et al. generalized this problem to decomposing a matrix into multiple incoherent components:
$$\operatorname*{minimize}_{X_1,\ldots,X_\tau}\ \sum_{i=1}^{\tau}\lambda_i\|X_i\|_{(i)} \qquad \text{subject to}\qquad \sum_{i=1}^{\tau}X_i = M, \tag{1}$$
where the $\|X_i\|_{(i)}$ are norms that encourage various types of low-complexity structure. The authors also provide a sufficient condition that guarantees the existence and uniqueness of the solution of compressive principal component pursuit (CPCP). The result in [11] requires that the components are low-complexity structures.
However, in many applications, the observed measurements are corrupted by different kinds of noise, which may affect every entry of the data matrix. In order to further complete the theory developed in [11], it is necessary to study the stability of CPCP, i.e., to guarantee stable and accurate recovery in the presence of entry-wise noise. In this paper, we make an attempt in this direction. We denote by $M$ the observed matrix, which decomposes into multiple incoherent components, and assume that
$$M = \sum_{i=1}^{\tau}X_{i,0} + Z_0,$$
where the $X_{i,0}$ are the incoherent components and $Z_0$ is an independent and identically distributed (i.i.d.) noise term. We only assume that $\|Z_0\|_F \le \delta$ for some $\delta > 0$. In order to recover the unknown low-complexity structures, we suggest solving the following relaxed optimization problem:
$$\operatorname*{minimize}_{X_1,\ldots,X_\tau}\ \sum_{i=1}^{\tau}\lambda_i\|X_i\|_{(i)} \qquad \text{subject to}\qquad \Big\|\sum_{i=1}^{\tau}X_i - M\Big\|_F \le \delta. \tag{2}$$
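For instance, in the familiar low-rank-plus-sparse case ($\tau = 2$, with $\|\cdot\|_{(1)} = \|\cdot\|_*$, $\|\cdot\|_{(2)} = \|\cdot\|_1$, $\lambda_1 = 1$, and $\lambda_2 = \lambda$), Problem (2) specializes to the stable principal component pursuit program studied in [13]:
$$\operatorname*{minimize}_{L,\,S}\ \ \|L\|_* + \lambda\|S\|_1 \qquad \text{subject to}\qquad \|L + S - M\|_F \le \delta.$$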
In this paper, we prove that the solution of (2) is stable to small entry-wise noise. The rest of the paper is organized as follows. In Section 2, we introduce some notation and state the main result, which is proven in Section 3 and Section 4. In Section 3, we give two key lemmas that form an important part of the argument. In Section 4, the proof of Theorem 1 is given. We further provide numerical results in Section 5 and conclude the paper in Section 6.

2. Notations and Main Results

In this section, we first introduce some notation that will be used throughout this paper, and then state the main result.

2.1. Notations

We denote the operator norm of a matrix by $\|X\|$, the Frobenius norm by $\|X\|_F$, the nuclear norm by $\|X\|_*$, and the dual norm of $\|X\|_{(i)}$ by $\|X\|_{(i)}^*$. The Euclidean inner product between two matrices is defined by $\langle X, Y\rangle = \operatorname{trace}(X^*Y)$. Note that $\|X\|_F^2 = \langle X, X\rangle$. The Cauchy–Schwarz inequality gives $\langle X, Y\rangle \le \|X\|_F\|Y\|_F$, and it is well known that we also have $\langle X, Y\rangle \le \|X\|_{(i)}\|Y\|_{(i)}^*$ (e.g., [1,12]). We say that $\|\cdot\|_{(i)}$ majorizes the Frobenius norm if $\|X\|_{(i)} \ge \|X\|_F$ for all $X$. Linear transformations acting on the space of matrices are denoted by $\mathcal{P}_T X$; such an operator can be viewed as a (high-dimensional) matrix acting on the vectorized space. The operator norm of the operator is denoted by $\|\mathcal{P}_T\|$; it should be noted that $\|\mathcal{P}_T\| = \sup_{\|X\|_F = 1}\|\mathcal{P}_T X\|_F$.
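As a quick numerical illustration of these relations (not part of the analysis, and written with variable names of our own choosing), the following NumPy snippet checks the Cauchy–Schwarz inequality, the norm/dual-norm inequality for the (nuclear, operator) pair, and the fact that the nuclear norm majorizes the Frobenius norm on random matrices.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 4))
Y = rng.standard_normal((5, 4))

op_norm  = np.linalg.norm(X, 2)       # operator (spectral) norm ||X||
fro_norm = np.linalg.norm(X, 'fro')   # Frobenius norm ||X||_F
nuc_norm = np.linalg.norm(X, 'nuc')   # nuclear norm ||X||_*
inner    = np.trace(X.T @ Y)          # Euclidean inner product <X, Y> = trace(X^T Y)

# Cauchy-Schwarz: <X, Y> <= ||X||_F ||Y||_F
assert inner <= fro_norm * np.linalg.norm(Y, 'fro') + 1e-10
# Norm / dual-norm inequality for the (nuclear, operator) pair: <X, Y> <= ||X||_* ||Y||
assert inner <= nuc_norm * np.linalg.norm(Y, 2) + 1e-10
# The nuclear norm majorizes the Frobenius norm: ||X||_F <= ||X||_*
assert fro_norm <= nuc_norm + 1e-10
```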
For any matrix vector $x = [X_i]$, $i = 1, 2, \ldots, \tau$, where $X_i \in \mathbb{R}^{m\times n}$ is the $i$-th matrix, we consider two norms, defined by $\|x\|_\diamond := \sum_{i=1}^{\tau}\lambda_i\|X_i\|_{(i)}$ and $\|x\|_2 := \big(\sum_{i=1}^{\tau}\|X_i\|_F^2\big)^{1/2}$. In order to simplify the stability analysis of CPCP, we also define, for a matrix vector $x = [X_i]$, the common component $\mathcal{P}_\gamma(x) := [\Gamma_i]$ with $\Gamma_i = (\sum_{l=1}^{\tau}X_l)/\tau$ for $i = 1, 2, \ldots, \tau$, and the difference component $\mathcal{P}_{\gamma^\perp}(x) := [\Gamma_i^\perp]$ with $\Gamma_i^\perp = X_i - \Gamma_i$; we denote by $\gamma$ and $\gamma^\perp$ the corresponding subspaces (matrix vectors with all components equal, and matrix vectors whose components sum to zero, respectively). In order to analyze the behavior of the associated projection operators, we further define $\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(x) := [\mathcal{P}_{T_1}(X_1), \ldots, \mathcal{P}_{T_\tau}(X_\tau)]$.
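The following small sketch (our own illustration, with hypothetical dimensions) verifies numerically that $\mathcal{P}_\gamma$ and $\mathcal{P}_{\gamma^\perp}$ give an orthogonal decomposition of a matrix vector, i.e., $\|x\|_2^2 = \|\mathcal{P}_\gamma(x)\|_2^2 + \|\mathcal{P}_{\gamma^\perp}(x)\|_2^2$, which is the identity used later in the proof of Theorem 1.

```python
import numpy as np

rng = np.random.default_rng(1)
tau, m, n = 3, 4, 5
x = [rng.standard_normal((m, n)) for _ in range(tau)]   # matrix vector x = [X_1, ..., X_tau]

common = sum(x) / tau                                    # (X_1 + ... + X_tau) / tau
x_gamma      = [common.copy() for _ in range(tau)]       # P_gamma(x): common component
x_gamma_perp = [Xi - common for Xi in x]                 # P_{gamma^perp}(x): difference component

sq_norm = lambda v: sum(np.linalg.norm(V, 'fro') ** 2 for V in v)   # ||.||_2^2 of a matrix vector
# Pythagoras: ||x||_2^2 = ||P_gamma(x)||_2^2 + ||P_{gamma^perp}(x)||_2^2
assert np.isclose(sq_norm(x), sq_norm(x_gamma) + sq_norm(x_gamma_perp))
```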
We assume that the norms $\|\cdot\|_{(i)}$, $i = 1, 2, \ldots, \tau$, are decomposable. The definition of a decomposable norm is given below.
Definition 1 (Decomposable Norms).
If there exist a subspace $T$ and a matrix $Z$ satisfying
$$\partial\|\cdot\|(X) = \{\Lambda \mid \mathcal{P}_T\Lambda = Z,\ \|\mathcal{P}_{T^\perp}\Lambda\|^* \le 1\},$$
where $\|\cdot\|^*$ denotes the dual norm of $\|\cdot\|$ and $\mathcal{P}_{T^\perp}$ is nonexpansive with respect to $\|\cdot\|^*$, then we say that the norm $\|\cdot\|$ is decomposable at $X$.
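For concreteness, two standard examples (see, e.g., [10]): the nuclear norm is decomposable at a rank-$r$ matrix $X$ with compact SVD $X = U\Sigma V^T$, with
$$T = \{UA^T + BV^T : A \in \mathbb{R}^{n\times r},\ B \in \mathbb{R}^{m\times r}\}, \qquad Z = UV^T, \qquad \partial\|\cdot\|_*(X) = \{UV^T + W : \mathcal{P}_T W = 0,\ \|W\| \le 1\};$$
similarly, the $\ell_1$ norm is decomposable at a sparse matrix $S$ with support $\Omega$, with $T$ the set of matrices supported on $\Omega$ and $Z = \operatorname{sgn}(S)$.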
Definition 2 (Inexact Certificate).
We say $\Lambda$ is an $(\alpha,\beta)$-inexact certificate for a putative solution $(X_1, \ldots, X_\tau)$ to (1) with parameters $(\lambda_1, \ldots, \lambda_\tau)$ if, for each $i$ (with $(T_i, Z_i)$ the pair given by the decomposability of $\|\cdot\|_{(i)}$ at $X_i$), $\|\mathcal{P}_{T_i}\Lambda - \lambda_i Z_i\|_F \le \alpha$ and $\|\mathcal{P}_{T_i^\perp}\Lambda\|_{(i)}^* < \lambda_i\beta$.

2.2. Main Results

Pertaining to Problem (1), we have the following result.
Lemma 1
([11]). Assume there exists a feasible solution $x = (X_1, \ldots, X_\tau)$ to the optimization Problem (1). Suppose that each of the norms $\|\cdot\|_{(i)}$ is decomposable at $X_i$, and that each of the $\|\cdot\|_{(i)}$ majorizes the Frobenius norm. Then, $x$ is the unique optimal solution if $T_1, \ldots, T_\tau$ are independent subspaces with
$$\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| < \frac{1}{\tau-1} \quad \forall\, i \neq j,$$
and there exists an $(\alpha,\beta)$-inexact certificate $\hat{\Lambda}$ with
$$\beta + \frac{\alpha\,\tau}{1-(\tau-1)\max_{i\neq j}\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|}\times\frac{1}{\min_l\lambda_l} \le 1.$$
The main contribution of this paper is the stability analysis of the solution of CPCP; the main theorem of [13] can be regarded as a special case of our result (although the main idea of the proof is similar to that of [13], there are some important differences here). Next, we show that the solution of the related convex program (2) is stable to small entry-wise noise under broad conditions. The main result of this paper is stated below.
Theorem 1.
Assume $x_0 = (X_{1,0}, \ldots, X_{\tau,0})$ and $\hat{x} = (\hat{X}_1, \ldots, \hat{X}_\tau)$ are the solutions of the optimization Problems (1) and (2), respectively. Suppose that each of the norms $\|\cdot\|_{(i)}$ is decomposable at $X_{i,0}$, and that each of the $\|\cdot\|_{(i)}$ majorizes the Frobenius norm. Then, if $T_1, \ldots, T_\tau$ are independent subspaces with
$$\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| < \frac{1}{\tau-1} \quad \forall\, i \neq j,$$
and there exists an $(\alpha,\beta)$-inexact certificate $\hat{\Lambda}$ with
$$\beta + \frac{\alpha\,\tau}{1-(\tau-1)\max_{i\neq j}\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|}\times\frac{1}{\min_l\lambda_l} \le 1,$$
then for any $Z_0$ with $\|Z_0\|_F \le \delta$, the solution $\hat{x}$ to the convex program (2) obeys
$$\sum_{i=1}^{\tau}\|\hat{X}_i - X_{i,0}\|_F^2 \le C(n,\tau,\alpha,\beta)\,\delta^2,$$
where $C(n,\tau,\alpha,\beta)$ is a numerical constant depending only on $n, \tau, \alpha, \beta$.

3. Main Lemmas

In this section, we present two main lemmas that are used to prove Theorem 1. The first is stated in [11].
Lemma 2
([11]). Suppose $T_1, \ldots, T_\tau$ are independent subspaces of $\mathbb{R}^{m\times n}$ and $Z_1 \in T_1, \ldots, Z_\tau \in T_\tau$, under the other conditions of Lemma 1. Then, the equations
$$\mathcal{P}_{T_i}\Delta = \lambda_i Z_i - \mathcal{P}_{T_i}\Lambda, \quad i = 1, \ldots, \tau,$$
have a solution $\Delta \in T_1 + \cdots + T_\tau$ obeying
$$\|\Delta\|_F \le \alpha\sqrt{\frac{2\tau}{1-(\tau-1)\max_{i\neq j}\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|}}.$$
In order to bound the behavior of the norm $\|x\|_\diamond$ under perturbations, we have the first main lemma that is used to obtain Theorem 1.
Lemma 3.
Assume $\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| < \frac{1}{\tau-1}$ for all $i \neq j$. Suppose there exists an $(\alpha,\beta)$-inexact certificate $\hat{\Lambda}$ satisfying Lemma 1. Then, for any perturbation $h = [H_i]$ obeying $\sum_i H_i = 0$,
$$\|x_0 + h\|_\diamond \ge \|x_0\|_\diamond + \sum_{i=1}^{\tau}(\lambda_i - C_\alpha - \lambda_i\beta)\,\|\mathcal{P}_{T_i^\perp}H_i\|_{(i)},$$
where $C_\alpha := \alpha\sqrt{\frac{2\tau}{1-(\tau-1)\max_{i\neq j}\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|}}$. It is easy to see that, under the hypothesis of Lemma 1, the coefficients of $\|\mathcal{P}_{T_i^\perp}H_i\|_{(i)}$ satisfy $\lambda_i - C_\alpha - \lambda_i\beta > 0$.
Proof. 
According to the properties of convex functions, for any subgradients $z = [Z_i]$ with $Z_i \in \partial\|\cdot\|_{(i)}(X_{i,0})$, we can obtain
$$\|x_0 + h\|_\diamond \ge \|x_0\|_\diamond + \sum_{i=1}^{\tau}\lambda_i\langle Z_i, H_i\rangle.$$
Now, because the norm $\|\cdot\|_{(i)}$ is decomposable at $X_{i,0}$, there exist $\Lambda$, $Z_i$, $\alpha$, and $\beta$ obeying $\|\mathcal{P}_{T_i}\Lambda - \lambda_i Z_i\|_F \le \alpha$ and $\|\mathcal{P}_{T_i^\perp}\Lambda\|_{(i)}^* < \lambda_i\beta$. Let $\Delta_i := \mathcal{P}_{T_i}\Delta = \lambda_i Z_i - \mathcal{P}_{T_i}\Lambda \in T_i$ (see Lemma 2). Note that
$$\Lambda + \Delta_i + \mathcal{P}_{T_i^\perp}(\lambda_i Z_i - \Lambda) = \Lambda + \lambda_i Z_i - \mathcal{P}_{T_i}\Lambda + \mathcal{P}_{T_i^\perp}\lambda_i Z_i - \mathcal{P}_{T_i^\perp}\Lambda = \lambda_i Z_i + \mathcal{P}_{T_i^\perp}\lambda_i Z_i = \lambda_i Z_i,$$
where the last equality uses $Z_i \in T_i$. Using the above identity, we continue bounding $\sum_{i=1}^{\tau}\lambda_i\langle Z_i, H_i\rangle$:
$$\begin{aligned}
\sum_{i=1}^{\tau}\lambda_i\langle Z_i, H_i\rangle
&= \sum_{i=1}^{\tau}\big\langle \Lambda + \Delta_i + \mathcal{P}_{T_i^\perp}(\lambda_i Z_i - \Lambda),\, H_i\big\rangle\\
&= \sum_{i=1}^{\tau}\langle\Lambda, H_i\rangle + \sum_{i=1}^{\tau}\langle\mathcal{P}_{T_i}\Delta, H_i\rangle + \sum_{i=1}^{\tau}\big\langle\mathcal{P}_{T_i^\perp}(\lambda_i Z_i - \Lambda), H_i\big\rangle\\
&= \Big\langle\Lambda, \sum_{i=1}^{\tau}H_i\Big\rangle + \sum_{i=1}^{\tau}\langle\Delta, \mathcal{P}_{T_i}H_i\rangle + \sum_{i=1}^{\tau}\langle\lambda_i Z_i - \Lambda, \mathcal{P}_{T_i^\perp}H_i\rangle\\
&= \sum_{i=1}^{\tau}\langle\Delta, H_i\rangle - \sum_{i=1}^{\tau}\langle\Delta, \mathcal{P}_{T_i^\perp}H_i\rangle + \sum_{i=1}^{\tau}\langle\lambda_i Z_i - \Lambda, \mathcal{P}_{T_i^\perp}H_i\rangle\\
&\ge \sum_{i=1}^{\tau}\langle\lambda_i Z_i - \Lambda, \mathcal{P}_{T_i^\perp}H_i\rangle - \sum_{i=1}^{\tau}\|\Delta\|_F\,\|\mathcal{P}_{T_i^\perp}H_i\|_F\\
&\ge \sum_{i=1}^{\tau}\langle\lambda_i Z_i - \Lambda, \mathcal{P}_{T_i^\perp}H_i\rangle - \sum_{i=1}^{\tau}\|\Delta\|_F\,\|\mathcal{P}_{T_i^\perp}H_i\|_{(i)},
\end{aligned}$$
where we used $\sum_{i=1}^{\tau}H_i = 0$ (so that $\langle\Lambda,\sum_i H_i\rangle = \langle\Delta,\sum_i H_i\rangle = 0$), the Cauchy–Schwarz inequality, and the fact that $\|\cdot\|_{(i)}$ majorizes the Frobenius norm.
By the definition of duality, there exists $\hat{Z}_i \in \partial\|X_{i,0}\|_{(i)}$ with $\|\hat{Z}_i\|_{(i)}^* \le 1$ such that $\langle \hat{Z}_i, \mathcal{P}_{T_i^\perp}H_i\rangle = \|\mathcal{P}_{T_i^\perp}H_i\|_{(i)}$. Moreover, with the duality inequality $\langle X, Y\rangle \le \|X\|_{(i)}\|Y\|_{(i)}^*$ recalled in Section 2.1, we have
$$|\langle\Lambda, \mathcal{P}_{T_i^\perp}H_i\rangle| = |\langle\mathcal{P}_{T_i^\perp}\Lambda, \mathcal{P}_{T_i^\perp}H_i\rangle| \le \|\mathcal{P}_{T_i^\perp}\Lambda\|_{(i)}^*\,\|\mathcal{P}_{T_i^\perp}H_i\|_{(i)}.$$
Let $Z_i = \hat{Z}_i$. Then, we can obtain
$$\langle\lambda_i Z_i - \Lambda, \mathcal{P}_{T_i^\perp}H_i\rangle \ge \big(\lambda_i - \|\mathcal{P}_{T_i^\perp}\Lambda\|_{(i)}^*\big)\,\|\mathcal{P}_{T_i^\perp}H_i\|_{(i)}.$$
Combining the inequalities above, we can obtain
$$\|x_0 + h\|_\diamond \ge \|x_0\|_\diamond + \sum_{i=1}^{\tau}\big(\lambda_i - \|\Delta\|_F - \|\mathcal{P}_{T_i^\perp}\Lambda\|_{(i)}^*\big)\|\mathcal{P}_{T_i^\perp}H_i\|_{(i)} \ge \|x_0\|_\diamond + \sum_{i=1}^{\tau}(\lambda_i - C_\alpha - \lambda_i\beta)\,\|\mathcal{P}_{T_i^\perp}H_i\|_{(i)}.$$
Thus, Lemma 3 is established. ☐
To bound $\sum_i\|\hat{X}_i - X_{i,0}\|_F^2$, we also have to control the projection operator $\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(x)$. This is the purpose of the second main lemma used to obtain Theorem 1.
Lemma 4.
Assume that $\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| < \frac{1}{\tau-1}$ for all $i \neq j$. For any matrix vector $x = [X_i]$, we have
$$\|\mathcal{P}_{\gamma}(\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau})(x)\|_F^2 \ge \frac{1 - \max_i \frac{1}{2}\sum_{j\neq i}\big(\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| + \|\mathcal{P}_{T_j}\mathcal{P}_{T_i}\|\big)}{\tau}\,\|\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(x)\|_F^2.$$
It is easy to see that, under the hypothesis $\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| < \frac{1}{\tau-1}$ for all $i \neq j$, the constant $\frac{1 - \max_i \frac{1}{2}\sum_{j\neq i}(\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| + \|\mathcal{P}_{T_j}\mathcal{P}_{T_i}\|)}{\tau}$ is strictly greater than zero.
Proof. 
For any matrix vector $x = [X_i]$, we have $\mathcal{P}_\gamma(x) = [\Gamma_i]$, where $\Gamma_i = (\sum_{l=1}^{\tau}X_l)/\tau$. It is easy to see that $\|\mathcal{P}_\gamma(x)\|_F^2 = \frac{1}{\tau}\|\sum_{l=1}^{\tau}X_l\|_F^2$. Then, we have
$$\|\mathcal{P}_\gamma(\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau})(x)\|_F^2 = \frac{1}{\tau}\Big\|\sum_{i=1}^{\tau}\mathcal{P}_{T_i}X_i\Big\|_F^2 = \frac{1}{\tau}\sum_{i=1}^{\tau}\Big(\|\mathcal{P}_{T_i}X_i\|_F^2 + \sum_{j\neq i}\langle\mathcal{P}_{T_i}X_i, \mathcal{P}_{T_j}X_j\rangle\Big).$$
Note that
$$|\langle\mathcal{P}_{T_i}X_i, \mathcal{P}_{T_j}X_j\rangle| = |\langle\mathcal{P}_{T_i}X_i, \mathcal{P}_{T_i}\mathcal{P}_{T_j}X_j\rangle| \le \|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|\,\|\mathcal{P}_{T_i}X_i\|_F\,\|\mathcal{P}_{T_j}X_j\|_F.$$
Together with $\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| < \frac{1}{\tau-1}$ for all $i \neq j$, we have
$$\begin{aligned}
\|\mathcal{P}_\gamma(\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau})(x)\|_F^2
&\ge \frac{1}{\tau}\Big(\sum_{i=1}^{\tau}\|\mathcal{P}_{T_i}X_i\|_F^2 - \sum_{i=1}^{\tau}\sum_{j\neq i}\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|\,\|\mathcal{P}_{T_i}X_i\|_F\,\|\mathcal{P}_{T_j}X_j\|_F\Big)\\
&\ge \frac{1}{\tau}\sum_{i=1}^{\tau}\Big(\|\mathcal{P}_{T_i}X_i\|_F^2 - \sum_{j\neq i}\frac{\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|}{2}\big(\|\mathcal{P}_{T_i}X_i\|_F^2 + \|\mathcal{P}_{T_j}X_j\|_F^2\big)\Big)\\
&= \frac{1}{\tau}\sum_{i=1}^{\tau}\Big(1 - \frac{1}{2}\sum_{j\neq i}\big(\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| + \|\mathcal{P}_{T_j}\mathcal{P}_{T_i}\|\big)\Big)\|\mathcal{P}_{T_i}X_i\|_F^2\\
&\ge \frac{1 - \max_i\frac{1}{2}\sum_{j\neq i}\big(\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| + \|\mathcal{P}_{T_j}\mathcal{P}_{T_i}\|\big)}{\tau}\sum_{i=1}^{\tau}\|\mathcal{P}_{T_i}X_i\|_F^2\\
&= \frac{1 - \max_i\frac{1}{2}\sum_{j\neq i}\big(\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\| + \|\mathcal{P}_{T_j}\mathcal{P}_{T_i}\|\big)}{\tau}\,\|\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(x)\|_F^2,
\end{aligned}$$
where in the second inequality we have used the fact that $2xy \le x^2 + y^2$ for any real $x, y$. Therefore, Lemma 4 is established. ☐

4. Proof of Theorem 1

In this section, we provide the proof of Theorem 1. The proof is based on two elementary but important properties of $\hat{x}$, the solution of Problem (2). First, note that $x_0$ is also a feasible solution to Problem (2) and $\hat{x}$ is the optimal solution; therefore, $\|\hat{x}\|_\diamond \le \|x_0\|_\diamond$. Second, according to the triangle inequality, we can obtain
$$\Big\|\sum_{i=1}^{\tau}\hat{X}_i - \sum_{i=1}^{\tau}X_{i,0}\Big\|_F = \Big\|\Big(\sum_{i=1}^{\tau}\hat{X}_i - M\Big) - \Big(\sum_{i=1}^{\tau}X_{i,0} - M\Big)\Big\|_F \le \Big\|\sum_{i=1}^{\tau}\hat{X}_i - M\Big\|_F + \Big\|\sum_{i=1}^{\tau}X_{i,0} - M\Big\|_F \le 2\delta. \tag{4}$$
Let $\hat{x} = x_0 + h$, where $h = [H_i]$. According to the definition of the subspace $\gamma$, we write $h_\gamma := \mathcal{P}_\gamma(h)$ and $h_{\gamma^\perp} := \mathcal{P}_{\gamma^\perp}(h) = [H_{i,\gamma^\perp}]$ for short. Our main aim is to bound $\|h\|_2 = \|\hat{x} - x_0\|_2$, which can be rewritten as
$$\|h\|_2^2 = \|h_\gamma\|_2^2 + \|h_{\gamma^\perp}\|_2^2 = \|h_\gamma\|_2^2 + \|\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\|_2^2 + \|\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\|_2^2. \tag{5}$$
Combining with (4), we have
$$\|h_\gamma\|_2^2 = \sum_{i=1}^{\tau}\Big\|\frac{\sum_{j=1}^{\tau}H_j}{\tau}\Big\|_F^2 \le \frac{4\delta^2}{\tau}.$$
Therefore, it is necessary to bound the other two terms on the right-hand side of (5). We will bound the second and third terms, respectively.
The norm equivalence theorem tells us that any two norms on a finite-dimensional normed space are equivalent, which implies that there exist two constants $C(n,\tau) \ge c(n,\tau) > 0$ satisfying
$$c(n,\tau)\,\|x\|_2 \le \|x\|_\diamond \le C(n,\tau)\,\|x\|_2. \tag{6}$$
A. Estimate the third term of (5). Let $\Lambda$ be a dual certificate obeying Lemma 1. Then, using the triangle inequality, we have
$$\|x_0 + h\|_\diamond \ge \|x_0 + h_{\gamma^\perp}\|_\diamond - \|h_\gamma\|_\diamond. \tag{7}$$
Combining with Lemma 3 (applied to the perturbation $h_{\gamma^\perp}$, whose components sum to zero), we can obtain
$$\begin{aligned}
\|x_0 + h_{\gamma^\perp}\|_\diamond &\ge \|x_0\|_\diamond + \sum_{i=1}^{\tau}(\lambda_i - C_\alpha - \lambda_i\beta)\,\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_{(i)}\\
&\ge \|x_0\|_\diamond + \Big(1 - \frac{C_\alpha}{\min_i\lambda_i} - \beta\Big)\sum_{i=1}^{\tau}\lambda_i\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_{(i)}\\
&\ge \|x_0 + h\|_\diamond + \Big(1 - \frac{C_\alpha}{\min_i\lambda_i} - \beta\Big)\sum_{i=1}^{\tau}\lambda_i\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_{(i)},
\end{aligned}$$
where, to obtain the third inequality, we used the fact $\|\hat{x}\|_\diamond \le \|x_0\|_\diamond$. For simplicity, let
$$C_1(\alpha,\beta) := 1 - \frac{C_\alpha}{\min_i\lambda_i} - \beta > 0.$$
Therefore, we have
$$\|x_0 + h_{\gamma^\perp}\|_\diamond \ge \|x_0 + h\|_\diamond + C_1(\alpha,\beta)\sum_{i=1}^{\tau}\lambda_i\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_{(i)}.$$
Combining with (7), we can obtain
$$C_1(\alpha,\beta)\sum_{i=1}^{\tau}\lambda_i\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_{(i)} \le \|h_\gamma\|_\diamond.$$
Then
$$\sum_{i=1}^{\tau}\lambda_i\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_{(i)} \le C_2(\alpha,\beta)\,\|h_\gamma\|_\diamond, \tag{8}$$
where $C_2(\alpha,\beta) = 1/C_1(\alpha,\beta)$. We can now estimate the third term of (5). Using the triangle inequality, we have
$$\|\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\|_2 \le \sum_{i=1}^{\tau}\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_F \le \frac{1}{c(n,\tau)}\sum_{i=1}^{\tau}\lambda_i\|\mathcal{P}_{T_i^\perp}H_{i,\gamma^\perp}\|_{(i)} \le \frac{C_2(\alpha,\beta)}{c(n,\tau)}\,\|h_\gamma\|_\diamond \le C(n,\tau,\alpha,\beta)\,\delta,$$
where $C(n,\tau,\alpha,\beta) := \frac{2\,C_2(\alpha,\beta)\,C(n,\tau)}{c(n,\tau)\,\sqrt{\tau}}$. The second inequality is set up by (6); the third inequality is obtained by (8); the last one follows from (6) and the fact $\|h_\gamma\|_2 \le \frac{2\delta}{\sqrt{\tau}}$. Therefore, we can obtain
$$\|\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\|_2^2 \le C^2(n,\tau,\alpha,\beta)\,\delta^2, \tag{9}$$
which implies that the third term of (5) can be bounded by $C^2(n,\tau,\alpha,\beta)\,\delta^2$.
B. Estimate the second term of (5). According to Lemma 4, we can obtain
$$\|\mathcal{P}_\gamma(\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau})(h_{\gamma^\perp})\|_2^2 \ge \frac{1-\max_i\frac{1}{2}\sum_{j\neq i}\big(\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|+\|\mathcal{P}_{T_j}\mathcal{P}_{T_i}\|\big)}{\tau}\,\|\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\|_2^2 = \hat{C}(\tau,\alpha,\beta)\,\|\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\|_2^2,$$
where $\hat{C}(\tau,\alpha,\beta) := \frac{1-\max_i\frac{1}{2}\sum_{j\neq i}(\|\mathcal{P}_{T_i}\mathcal{P}_{T_j}\|+\|\mathcal{P}_{T_j}\mathcal{P}_{T_i}\|)}{\tau}$. Note that
$$0 = \mathcal{P}_\gamma(h_{\gamma^\perp}) = \mathcal{P}_\gamma\big(\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\big) + \mathcal{P}_\gamma\big(\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\big).$$
Therefore,
$$\|\mathcal{P}_\gamma\big(\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\big)\|_2 = \|\mathcal{P}_\gamma\big(\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\big)\|_2 \le \|\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\|_2.$$
Combining the previous two inequalities, we have
$$\|\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\|_2 \le \frac{1}{\sqrt{\hat{C}(\tau,\alpha,\beta)}}\,\|\mathcal{P}_\gamma\big(\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\big)\|_2 \le \frac{1}{\sqrt{\hat{C}(\tau,\alpha,\beta)}}\,\|\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\|_2 \le \frac{C(n,\tau,\alpha,\beta)}{\sqrt{\hat{C}(\tau,\alpha,\beta)}}\,\delta,$$
where $C(n,\tau,\alpha,\beta)$ is the constant from (9). Combining this with (9) and the bound on $\|h_\gamma\|_2^2$, we can obtain
$$\|h\|_2^2 = \|h_\gamma\|_2^2 + \|\mathcal{P}_{T_1}\times\cdots\times\mathcal{P}_{T_\tau}(h_{\gamma^\perp})\|_2^2 + \|\mathcal{P}_{T_1^\perp}\times\cdots\times\mathcal{P}_{T_\tau^\perp}(h_{\gamma^\perp})\|_2^2 \le \hat{C}(n,\tau,\alpha,\beta)\,\delta^2$$
for an appropriate numerical constant $\hat{C}(n,\tau,\alpha,\beta)$.
Therefore, Theorem 1 is established.
Remark 1.
If $\tau = 2$, then Theorem 1 reduces to the main result of [13].

5. Numerical Results

In this section, numerical experiments with various values of the parameter $\sigma$, the sparsity ratio $\rho_s$, and the rank $r$ are given. For each setting of the parameters, we show the average errors over 10 trials. Our implementation was realized in MATLAB. All the computational results were obtained on a desktop computer with a 2.27-GHz CPU (Intel(R) Core(TM) i3) and 2 GB of memory. Without loss of generality, we assume that $\tau = 2$. In [13], the authors verified this result with the Accelerated Proximal Gradient (APG) method by numerical experiments. In our numerical experiments, we show that the result also holds for Principal Component Pursuit by the Alternating Direction Method (PCP-ADM). In our simulations, the observed matrix is generated as $M = L_0 + S_0 + N_0$, where the rank-$r$ matrix $L_0$ is a product $L_0 = XY^T$, with $X$ and $Y$ being $m \times r$ and $n \times r$ matrices whose entries are independently sampled from a $\mathcal{N}(0, 1)$ distribution. Following the setup of PCP-ADM, we generate $S_0$ by choosing a support set $\Omega$ of size $k_s = \rho_s mn$ uniformly at random and setting $S_0 = \mathcal{P}_\Omega E$. The noise component $N_0$ is generated with entries independently sampled from a $\mathcal{N}(0, \sigma)$ distribution. Without loss of generality, we set $m = n = 200$ and $\rho_s = 0.01$; the other parameters required by PCP-ADM are set as in [10]. Here we briefly describe PCP-ADM. In [10], in order to stably recover $\hat{X} = (\hat{L}, \hat{S})$, the ADM method operates on the augmented Lagrangian
$$l(L, S, Y) = \|L\|_* + \lambda\|S\|_1 + \langle Y, M - L - S\rangle + \frac{\mu}{2}\|M - L - S\|_F^2.$$
The details of the PCP-ADM can be found in [14,15].
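A minimal sketch of the resulting iteration is given below. It is written in Python/NumPy purely for illustration (our experiments were run in MATLAB), and the default choices of $\lambda$ and $\mu$ follow the recommendations in [10] rather than values fixed by this paper.

```python
import numpy as np

def pcp_adm(M, lam=None, mu=None, tol=1e-7, max_iter=500):
    """Sketch of PCP-ADM: minimize ||L||_* + lam*||S||_1 with M ~ L + S."""
    m, n = M.shape
    lam = 1.0 / np.sqrt(max(m, n)) if lam is None else lam        # lambda suggested in [10]
    mu = m * n / (4.0 * np.abs(M).sum()) if mu is None else mu    # mu suggested in [10]
    soft = lambda X, t: np.sign(X) * np.maximum(np.abs(X) - t, 0.0)  # soft-thresholding operator

    S = np.zeros_like(M)
    Y = np.zeros_like(M)
    for _ in range(max_iter):
        # L-step: singular value thresholding of M - S + Y/mu at level 1/mu
        U, sig, Vt = np.linalg.svd(M - S + Y / mu, full_matrices=False)
        L = (U * np.maximum(sig - 1.0 / mu, 0.0)) @ Vt
        # S-step: entry-wise soft-thresholding at level lam/mu
        S = soft(M - L + Y / mu, lam / mu)
        # dual variable update
        Y = Y + mu * (M - L - S)
        # stopping criterion ||L + S - M||_F / ||M||_F <= tol
        if np.linalg.norm(M - L - S, 'fro') <= tol * np.linalg.norm(M, 'fro'):
            break
    return L, S
```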
In our simulations, the stopping criterion of the PCP-ADM algorithm is
$$\frac{\|L + S - M\|_F}{\|M\|_F} \le \text{tolerance},$$
or the maximum number of iterations ($k_{\max} = 500$) is reached. In order to estimate the errors, we use the root-mean-squared (RMS) errors $\|\hat{L} - L_0\|_F/n$ and $\|\hat{S} - S_0\|_F/n$ for the low-rank component and the sparse component, respectively. Figure 1 shows how the RMS errors vary with different values of $\sigma^2$. Note that the RMS error grows approximately linearly with the noise level in Figure 1. This verifies Theorem 1 numerically with PCP-ADM (the same phenomenon was observed in [13] with APG, which differs substantially from PCP-ADM in principle).
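For completeness, a sketch of the synthetic data generation and error evaluation described above is shown below. It is again only illustrative: the entries of $E$, the value of $\sigma$, and the variable names are assumptions made for the sketch, and pcp_adm refers to the routine sketched earlier in this section.

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 200
r, rho_s, sigma = 10, 0.01, 0.05              # rank, sparsity ratio, noise level (sigma illustrative)

# rank-r component L0 = X Y^T with i.i.d. N(0, 1) factor entries
L0 = rng.standard_normal((m, r)) @ rng.standard_normal((n, r)).T
# sparse component S0 = P_Omega(E) on a random support of size rho_s * m * n
S0 = np.zeros((m, n))
omega = rng.choice(m * n, size=int(rho_s * m * n), replace=False)
S0.flat[omega] = rng.choice([-1.0, 1.0], size=omega.size)   # E: random signs (an assumption)
# dense noise N0 and observed matrix M
N0 = sigma * rng.standard_normal((m, n))
M = L0 + S0 + N0

L_hat, S_hat = pcp_adm(M)                     # routine from the sketch above
rms_L = np.linalg.norm(L_hat - L0, 'fro') / n
rms_S = np.linalg.norm(S_hat - S0, 'fro') / n
print(f"RMS errors: low-rank {rms_L:.4f}, sparse {rms_S:.4f}")
```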

6. Conclusions

In this paper, we have investigated the stability of CPCP. Our main contribution is the proof of Theorem 1, which implies that the solution to the related convex program (2) is stable to small entry-wise noise under broad conditions. It is an extension of the result in [13], which only allows $\tau = 2$. Moreover, in the numerical experiments, we have investigated the performance of the PCP-ADM algorithm. Numerical results showed that it is stable to small entry-wise noise.

Acknowledgments

The author would like to thank the anonymous reviewers for their comments that helped to improve the quality of the paper. This research was supported by the National Natural Science Foundation of China (NSFC) under Grant U1533125, and the Scientific Research Program of the Education Department of Sichuan under Grant 16ZB0032.

Author Contributions

Qingshan You and Qun Wan contributed reagents/materials/analysis tools; Qingshan You wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Candès, E.J.; Recht, B. Exact matrix completion via convex optimization. Found. Comput. Math. 2009, 9, 717–772.
  2. Candès, E.J.; Plan, Y. Matrix completion with noise. Proc. IEEE 2010, 98, 925–936.
  3. Candès, E.J.; Tao, T. The power of convex relaxation: Near-optimal matrix completion. IEEE Trans. Inf. Theory 2010, 56, 2053–2080.
  4. Ellenberg, J. Fill in the blanks: Using math to turn lo-res datasets into hi-res samples. Wired 2010. Available online: https://www.wired.com/2010/02/ff_algorithm/all/1 (accessed on 26 January 2016).
  5. Chambolle, A.; Lions, P.-L. Image recovery via total variation minimization and related problems. Numer. Math. 1997, 76, 167–188.
  6. Claerbout, J.F.; Muir, F. Robust modeling of erratic data. Geophysics 1973, 38, 826–844.
  7. Zeng, B.; Fu, J. Directional discrete cosine transforms—A new framework for image coding. IEEE Trans. Circuits Syst. Video Technol. 2008, 18, 305–313.
  8. Elad, M.; Aharon, M. Image denoising via sparse and redundant representations over learned dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
  9. Rodger, J.A. Toward reducing failure risk in an integrated vehicle health maintenance system: A fuzzy multi-sensor data fusion Kalman filter approach for IVHMS. Expert Syst. Appl. 2012, 39, 9821–9836.
  10. Candès, E.J.; Li, X.; Ma, Y.; Wright, J. Robust principal component analysis? J. ACM 2011, 58, 11.
  11. Wright, J.; Ganesh, A.; Min, K.; Ma, Y. Compressive Principal Component Pursuit. Available online: http://yima.csl.illinois.edu/psfile/CPCP.pdf (accessed on 9 April 2012).
  12. Recht, B.; Fazel, M.; Parrilo, P. Guaranteed minimum-rank solutions of linear matrix equations via nuclear norm minimization. arXiv 2007, arXiv:0706.4138.
  13. Zhou, Z.; Li, X.; Wright, J.; Candès, E.J.; Ma, Y. Stable Principal Component Pursuit. arXiv 2010, arXiv:1001.2363v1.
  14. Yuan, X.; Yang, J. Sparse and low rank matrix decomposition via alternating direction method. Pac. J. Optim. 2009, 9, 167–180.
  15. Kontogiorgis, S.; Meyer, R. A variable-penalty alternating directions method for convex optimization. Math. Program. 1998, 83, 29–53.
Figure 1. Root-mean-squared (RMS) errors as a function of $\sigma^2$ with $r = 10$, $\rho_s = 0.01$, $n = 200$. PCP-ADM: Principal Component Pursuit by Alternating Direction Method.
