Abstract
We propose the Group Orthogonal Matching Pursuit (GOMP) algorithm to recover group sparse signals from noisy measurements. Under the group restricted isometry property (GRIP), we prove the instance optimality of the GOMP algorithm for any decomposable approximation norm, and we show the robustness of the GOMP under measurement error. Compared with the P-norm minimization approach, the GOMP is easier to implement, and the assumption of γ-decomposability is not required. The simulation results show that the GOMP is very efficient for group sparse signal recovery and significantly outperforms Basis Pursuit in both scalability and solution quality.
Keywords:
compressed sensing; group orthogonal matching pursuit; group sparse; group restricted isometry property; instance optimality; robustness; scalability
MSC:
41A65; 41A25; 94A15; 94A08; 94A12; 68P30
1. Introduction
The problem of recovering sparse signals from incomplete measurements has been studied extensively in the field of compressed sensing. Many important results have been obtained, see refs. [1,2,3,4,5]. In this article, we consider the problem of recovering group sparse signals. We first recall some basic notions and results.
Assume that x is an unknown signal, and the information we gather about x can be described by
$$y = \Phi x + e,$$
where Φ is the encoder matrix and y is the information vector, contaminated by a noise e whose level is bounded. To recover x from y, we use a decoder, which is a mapping from the measurement space back to the signal space, and we denote its output
as our approximation of x.
The ℓ1-norm minimization, also known as Basis Pursuit [2], is one of the most popular decoders. It is defined as follows:
$$\min_{z} \|z\|_1 \quad \text{subject to} \quad \|\Phi z - y\|_2 \le \varepsilon.$$
Candès and Tao in Ref. [6] introduced the restricted isometry property (RIP) of a measurement matrix as follows:
A matrix Φ is said to satisfy the RIP of order k if there exists a constant δ ∈ (0, 1) such that, for all k-sparse signals x,
$$(1-\delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta)\|x\|_2^2.$$
In particular, the minimum of all δ satisfying the above inequality is defined as the isometry constant δ_k.
Based on the RIP of the encoder matrix Φ, it has been shown that the solution to (2) recovers x exactly provided that x is sufficiently sparse [7]. We denote by Σ_k the set of all k-sparse signals, i.e.,
$$\Sigma_k = \{\, x : |\mathrm{supp}(x)| \le k \,\},$$
where supp(x) is the set of indices i for which x_i ≠ 0 and |supp(x)| is the cardinality of the set supp(x). The sparsity index of x, which measures its distance from Σ_k, is defined by
$$\sigma_k(x) := \min_{z \in \Sigma_k} \|x - z\|.$$
In Ref. [7], the authors obtained the estimation of the residual error
where the constants depend only on the encoder matrix Φ, not on x or e. In many recent papers on sparse recovery, the ℓ1-norm objective function in (2) has been replaced by several other norms.
At about the same time, the research community began to propose that the number of nonzero components of a signal might not be the only reasonable measure of the sparsity of a signal. Alternate notions under the broad umbrella of “group sparsity” and “group sparse recovery” began to appear. For more detail, one can see Ref. [8]. It has been shown that group sparse signals are used widely in EEG [9], wavelet image analysis [10], gene analysis [11], multi-channel image analysis [12] and other fields.
We next recall a result on the recovery of group sparse signals from Ref. [13]. In that paper, Ahsen and Vidyasagar estimated x from y by solving the following optimization problem:
where ‖·‖_P denotes the penalty norm. We begin by recalling some definitions from [13]. We denote the set {1, 2, …, n} by [n]. The group structure G = {G_1, …, G_L} is a partition of [n]; here, we impose on each group G_i the same assumption as in [13]. A signal x is said to be group k-sparse if its support set supp(x) is contained in a group k-sparse set, which is defined as follows:
Definition 1.
A subset S ⊆ [n] is said to be group k-sparse if S ⊆ ∪_{i ∈ Λ} G_i for some Λ ⊆ {1, …, L} with |Λ| ≤ k.
We use GkS to denote the set of all group k-sparse subsets of [n].
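As a simple illustration of Definition 1 (the numbers here form our own toy example): take n = 6 and the partition G = {G_1, G_2, G_3} with G_1 = {1, 2}, G_2 = {3, 4} and G_3 = {5, 6}. Then S = {1, 2, 5, 6} = G_1 ∪ G_3 is group 2-sparse, whereas T = {2, 3, 5} meets all three groups and is therefore group 3-sparse but not group 2-sparse.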
Definition 2.
We say a norm ‖·‖ on the signal space is decomposable if, whenever x and y are supported on disjoint subsets of [n], one has ‖x + y‖ = ‖x‖ + ‖y‖.
It is clear that the ℓ1-norm is decomposable. More examples of decomposable norms can be found in Ref. [13].
In the process of recovering an unknown signal x, the approximation norm on the signal space will simply be written as ‖·‖. We assume that the approximation norm is decomposable.
Definition 3.
If there exists such that for any with and are disjoint subsets of it is true that we say the norm is γ-decomposable.
Note that when γ = 1, γ-decomposability coincides with decomposability.
For any index set S, let x_S denote the part of the signal x consisting of the coordinates falling in S. The group k-sparse index and the optimal group k-sparse decomposition of a signal are defined, respectively, as follows:
Definition 4.
The group k-sparse index of a signal x with respect to the approximation norm ‖·‖ and the group structure G is defined as the error of the best approximation of x by a signal supported on a group k-sparse set, namely
$$\min_{S \in GkS} \|x - x_S\|.$$
For and a norm on if there exist for such that
and
then we call it an optimal group k-sparse decomposition of x.
Definition 5.
A matrix Φ is said to satisfy the group restricted isometry property (GRIP) of order k if there exists a constant δ ∈ (0, 1) such that
$$(1-\delta)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1+\delta)\|x\|_2^2$$
for all group k-sparse signals x. The minimum of all such δ is the group isometry constant of order k.
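For very small problems, the group isometry constant can be checked directly by enumerating all group k-sparse supports. The following brute-force routine is our own illustrative sketch (the function name, the NumPy representation of the group structure, and the exhaustive enumeration, which is exponential in the number of groups, are all our choices and are not part of the paper):

```python
import numpy as np
from itertools import combinations

def grip_constant(Phi, groups, k):
    """Brute-force estimate of the group RIP constant of order k.

    Phi    : (m, n) matrix
    groups : list of 1-D integer index arrays partitioning {0, ..., n-1}
    k      : order of the GRIP

    Returns the smallest delta such that
        (1 - delta) ||x||_2^2 <= ||Phi x||_2^2 <= (1 + delta) ||x||_2^2
    for every x supported on a union of at most k groups.
    """
    delta = 0.0
    for chosen in combinations(range(len(groups)), k):
        cols = np.concatenate([groups[g] for g in chosen])
        gram = Phi[:, cols].T @ Phi[:, cols]
        eigvals = np.linalg.eigvalsh(gram)          # ascending order
        delta = max(delta, 1.0 - eigvals[0], eigvals[-1] - 1.0)
    return delta
```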
Then, we are ready to state the result of Ahsen and Vidyasagar. We need the following constants:
and
Theorem 1.
Suppose that
1. The norm is decomposable.
2. The norm is γ-decomposable for some
3. The matrix Φ satisfies GRIP of order with constant
4. Suppose the “compressibility condition”
holds, then
Furthermore,
where
and
Recently, Ranjan and Vidyasagar presented sufficient conditions for (2) to achieve robust group sparse recovery by establishing a group robust null space property [14]. They derived residual error bounds in the ℓ_p-norm for 1 ≤ p ≤ 2.
In compressed sensing, an alternative decoder for (1) is Orthogonal Matching Pursuit (OMP), which was studied in the sparse approximation setting by J. Tropp in [15]. It is defined as Algorithm 1:
Algorithm 1 Orthogonal Matching Pursuit (OMP)
Input: measurement matrix Φ; measurement vector y.
Initialization: set the residual equal to y and the set of selected indices to be empty.
Iteration: repeat until a stopping criterion is met: select the column of Φ most correlated with the current residual, add its index to the selected set, compute the least-squares approximation of y from the selected columns, and update the residual.
Output: the sparse vector supported on the selected index set.
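For concreteness, a minimal NumPy sketch of the OMP loop described in Algorithm 1 is given below; the function name `omp`, the fixed iteration count k and the tolerance are our own illustrative choices rather than details taken from the paper:

```python
import numpy as np

def omp(Phi, y, k, tol=1e-12):
    """Minimal Orthogonal Matching Pursuit sketch.

    Phi : (m, n) measurement matrix, y : (m,) measurement vector,
    k   : number of iterations (target sparsity level).
    """
    m, n = Phi.shape
    residual = y.astype(float).copy()
    support = []
    x_hat = np.zeros(n)
    for _ in range(k):
        # Greedy step: index of the column most correlated with the residual.
        j = int(np.argmax(np.abs(Phi.T @ residual)))
        if j not in support:
            support.append(j)
        # Least-squares fit of y on the selected columns.
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[support] = coef
        residual = y - Phi @ x_hat
        if np.linalg.norm(residual) <= tol:   # stopping criterion
            break
    return x_hat, support
```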
The major advantage of the OMP algorithm is that it is easy to implement; see Refs. [16,17,18,19,20,21]. In [22], under the RIP, T. Zhang proved that OMP can exactly recover k-sparse signals within a number of iterations proportional to k, and that the procedure is also stable in ℓ2 under measurement noise. Based on this result, Xu showed that the OMP can achieve instance optimality under the RIP, cf. [23]. In [24], Cohen, Dahmen and DeVore extended T. Zhang’s result to the general context of k-term approximation from a dictionary in arbitrary Hilbert spaces. They showed that OMP generates near-best k-term approximations under a similar RIP condition.
In this article, we generalize the OMP to recover group sparse signals. Assume that the signal x is group k-sparse with the group structure G; then the number of non-zero groups of x is no more than k. We define
It is obvious that
The group support set of x with respect to G is defined as follows:
$$\mathrm{Gsupp}(x) := \{\, i : x_{G_i} \ne 0 \,\}.$$
Let be the number of distinct groups on which x is supported. For any the subvector (submatrix) can also be written as
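In code, the group support can be read off group by group. The small helper below is our own illustration (the name `group_support`, the list-of-index-arrays representation of the group structure and the numerical tolerance are assumptions, not notation from the paper):

```python
import numpy as np

def group_support(x, groups, tol=1e-12):
    """Indices of the groups on which x is (numerically) supported.

    groups : list of 1-D integer index arrays partitioning the coordinates of x.
    """
    return [i for i, g in enumerate(groups) if np.linalg.norm(x[g]) > tol]
```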
Now we propose the Group Orthogonal Matching Pursuit (GOMP) algorithm. Assume that the group structure G is given a priori. Let an initial feature set for the group sparse signal x be given, together with the maximum allowed sparsity M. The GOMP is defined as Algorithm 2:
Algorithm 2 Group Orthogonal Matching Pursuit (GOMP)
Input: encoding matrix Φ; measurement vector y; group structure G; an initial feature set; maximum allowed sparsity M.
Initialization: set the residual equal to y and the set of selected group indices equal to the initial feature set.
Iteration: repeat until a stopping criterion is met or the maximum allowed sparsity M is reached: select the group index that best matches the current residual, add it to the selected set, and update the residual.
Output: the group sparse approximation supported on the selected groups.
The GOMP algorithm begins by initializing the residual to y. At the lth iteration, we choose the one group index that matches the current residual best. The algorithm does not, however, carry out a full least-squares minimization over all of the group index sets that have already been selected; it updates the residual and proceeds to the next iteration.
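The following NumPy sketch implements a group-greedy loop of this kind. It should be read as illustrative only: the function and variable names are ours, and, unlike the description above, it refits the coefficients by least squares over all selected groups at every iteration, which is one common convention for group orthogonal matching pursuit and is consistent with the least-squares updates discussed in Section 3.

```python
import numpy as np

def gomp(Phi, y, groups, max_sparsity, init_groups=(), tol=1e-12):
    """Illustrative Group Orthogonal Matching Pursuit sketch.

    Phi          : (m, n) measurement matrix
    y            : (m,) measurement vector
    groups       : list of 1-D integer index arrays partitioning {0, ..., n-1}
    max_sparsity : maximum number of groups to select (M)
    init_groups  : optional initial set of group indices (the initial feature set)
    """
    n = Phi.shape[1]
    selected = list(init_groups)
    x_hat = np.zeros(n)
    residual = y - Phi @ x_hat
    while len(selected) < max_sparsity and np.linalg.norm(residual) > tol:
        # Greedy step: the group whose columns best match the current residual.
        scores = [np.linalg.norm(Phi[:, g].T @ residual) for g in groups]
        best = int(np.argmax(scores))
        if best in selected:          # no new group adds information
            break
        selected.append(best)
        # Refit by least squares over the coordinates of all selected groups.
        cols = np.concatenate([groups[g] for g in selected])
        coef, *_ = np.linalg.lstsq(Phi[:, cols], y, rcond=None)
        x_hat = np.zeros(n)
        x_hat[cols] = coef
        residual = y - Phi @ x_hat
    return x_hat, sorted(selected)
```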
We then study the efficiency of the GOMP. We show the instance optimality and robustness of the GOMP algorithm under the GRIP of the matrix Φ. We formulate them in the following theorem.
Theorem 2.
Suppose that the norm is decomposable. If the GRIP condition holds with a sufficiently small constant, then for any signal x and any perturbation e with bounded norm, the solution obeys
where
Furthermore,
where
When e = 0, from inequalities (6) and (8), we have
and
This implies the instance optimality of the GOMP. In other words, from the viewpoint of the approximation error, the GOMP is nearly optimal. The robustness of the GOMP means that the signal can be recovered stably across different dimensions, sparsity levels, numbers of measurements and noise scenarios (as can be observed from the experiments in Section 3). Compared with Basis Pursuit (BP), our proposed algorithm performs far better and runs considerably faster, especially on large-scale problems.
Based on Theorems 1 and 2, one can see that the assumption of γ-decomposability is not required for the GOMP. Instead, we develop some new techniques to analyze the error performance of the GOMP.
We remark that the results of Theorem 2 are rather general. They cover some important cases.
Firstly, if and we take a special partition of as
then the unknown sparse signals have their nonzero entries confined to a few blocks, i.e., the signals are block-sparse. It is obvious that our results include the case of block-sparse signals. In practice, block-sparse signals arise in many forms, such as multi-band signals, DNA arrays, radar pulse signals and the multiple-measurement-vector problem; see Refs. [25,26,27,28,29,30,31,32].
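As a concrete illustration of this block-sparse setting (our own example): split {1, …, n} with n = Ld into L consecutive blocks G_i = {(i − 1)d + 1, …, id}, i = 1, …, L. A signal whose nonzero entries are confined to at most k of these blocks is exactly a block k-sparse signal, and it is group k-sparse with respect to this partition.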
Secondly, applying Theorem 2 to the case of conventional sparsity, we obtain the following new results on the error estimates of the OMP.
Corollary 1.
(Conventional sparsity) Suppose that Φ satisfies the condition with Then,
and
where the constants are those defined in (6) and (8), specialized to the case of conventional sparsity.
The remainder of the paper is organized as follows. In Section 2, based on a corollary of Theorem 3, we prove Theorem 2; the proof of Theorem 3 is given in Appendix B. In Appendix A, we establish some preliminary results needed for the proof. In Section 3, we compare the Group Orthogonal Matching Pursuit (GOMP) with Basis Pursuit (BP) by numerical experiments. In Section 4, we draw the conclusions of our study and mention some other competitive algorithms which we will study in the future.
2. Proof of Theorem 2
The proof of Theorem 2 is based on an estimate of the residual error of the GOMP algorithm. To establish this estimate, we use the restricted gradient optimal constant for group sparse signals.
Definition 6.
Given and we define the restricted gradient optimal constant as the smallest non-negative value such that
for all with
Some estimates of this constant have been given in Ref. [22].
The following theorem is our estimate of the residual error of the GOMP algorithm. Its proof is quite technical and is given in Appendix B.
Theorem 3.
Let and If there exists s, such that
then when we have
and
Next, we use Theorem 3 in the case where the initial feature set is empty to prove the following corollary, which plays an important role in the proof of Theorem 2.
Corollary 2.
Let Let If the GRIP condition holds, then for we have
Proof.
Now, we prove Theorem 2 with the help of Corollary 2.
Proof of Theorem 2.
Let an optimal group k-sparse decomposition of x be given. Applying Corollary 2, we obtain that
where Noting that and we have
Combining (12) with (13), we conclude
We now estimate the term It is clear that
By the GRIP of the matrix we obtain
Using (5) and the decomposability of we have
Combining inequalities (14)–(17), we obtain
Inequalities (17) and (18) imply that
which leads to the bound in (6).
To derive inequality (8), we adopt a different strategy. Let Without loss of generality, we assume that Gsupp and . We construct a subset of as follows:
Note that . We picked 1 as an element of contains 2 if and only if . Inductively suppose we have constructed the set , then k is an element of if and only if
By this method, we can construct a unique subset . Then, by using the same method, we can construct a unique subset:
Inductively suppose we have constructed subsets , we can form a subset , then we can construct a unique subset . In this way, we can decompose as follows:
From the construction, we know that when , for ,
For any
So we have
which is to say, , so . When , we also have .
Using the Cauchy–Schwarz inequality and the decomposability of we have
3. Simulation Results
In this section, we test the performance of the GOMP and present the results of our experiments. We first describe the details relevant to our experiments in Section 3.1. We demonstrate the effectiveness of the GOMP in Section 3.2. In Section 3.3, we compare the GOMP with Basis Pursuit to show the efficiency and scalability of our algorithm.
3.1. Implementation
For all experiments, we considered the following model. Suppose that x is an unknown N-dimensional signal and we wish to recover it from the given data
$$y = \Phi x + e,$$
where Φ is a known measurement matrix with fewer rows than columns and e is a noise term. Since Φ has fewer rows than columns, its column vectors are linearly dependent, and the collection of these columns can be viewed as a redundant dictionary.
For arbitrary , define
and
where and Obviously, is a Hilbert space with the inner product .
In the experiments, we set the measurement matrix Φ to be a Gaussian matrix in which each entry is drawn independently from a Gaussian distribution. We then executed the GOMP with the data vector y.
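A minimal sketch of this measurement setup is given below; the problem sizes, the variance 1/(number of measurements) used for the Gaussian entries, and the variable names are illustrative assumptions of ours rather than the paper's exact settings:

```python
import numpy as np

rng = np.random.default_rng(0)
n_dim, n_meas = 1024, 512        # illustrative signal dimension and measurement count

# Gaussian measurement matrix with i.i.d. entries; variance 1/n_meas is one
# common normalization (assumed here, not taken from the paper).
Phi = rng.normal(0.0, 1.0 / np.sqrt(n_meas), size=(n_meas, n_dim))

x = np.zeros(n_dim)              # a toy sparse signal for illustration
x[rng.choice(n_dim, size=50, replace=False)] = rng.normal(size=50)

e = 0.01 * rng.normal(size=n_meas)   # e = 0 in the noiseless experiments
y = Phi @ x + e                      # data vector handed to the recovery algorithm
```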
To demonstrate the performance of signal-recovering algorithms, we use the mean square error (MSE) to measure the error between the real signal x and its approximant , which is defined as follows:
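A minimal implementation, assuming the usual definition of the MSE as the squared error averaged over the coordinates (the precise normalization used in the paper may differ):

```python
import numpy as np

def mse(x, x_hat):
    """Mean square error between the true signal x and its approximant x_hat."""
    x = np.asarray(x, dtype=float)
    x_hat = np.asarray(x_hat, dtype=float)
    return float(np.mean((x - x_hat) ** 2))
```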
For the Group Orthogonal Matching Pursuit (GOMP), we took the empty set as the initialization. In the experiments, we constructed input signals randomly by the following steps (a code sketch is given after the list):
- i. Given a group sparsity level K;
- ii. Produce a group structure randomly, with each index i belonging to exactly one group;
- iii. Randomly select a group K-sparse set of indices to serve as the support;
- iv. Produce a signal on this support by drawing its nonzero entries from the normal distribution.
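A sketch of steps i–iv in NumPy; the fixed block length used to build the group structure and the function name are our own simplifying assumptions:

```python
import numpy as np

def make_group_sparse_signal(n_dim, group_size, k, rng=None):
    """Randomly generate a group k-sparse test signal (steps i-iv).

    Returns the signal, the group structure (list of index arrays) and the
    indices of the groups forming the support.
    """
    rng = rng or np.random.default_rng()
    # ii. A group structure: consecutive blocks of equal length
    #     (a simplification; the paper draws the structure at random).
    groups = [np.arange(s, min(s + group_size, n_dim))
              for s in range(0, n_dim, group_size)]
    # iii. Randomly select k groups whose union will be the support.
    chosen = rng.choice(len(groups), size=k, replace=False)
    support = np.concatenate([groups[g] for g in chosen])
    # iv. Fill the support with values drawn from the normal distribution.
    x = np.zeros(n_dim)
    x[support] = rng.normal(size=support.size)
    return x, groups, sorted(chosen.tolist())
```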
Some examples of the randomly constructed group 50-sparse signals in different dimensions can be found in Figure 1.
Figure 1.
Examples of group 50-sparse signals in different dimensions: (a) An example of the group 50-sparse signal in dimension . (b) An example of the group 50-sparse signal in dimension . (c) An example of the group 50-sparse signal in dimension . (d) An example of the group 50-sparse signal in dimension .
We set Basis Pursuit (BP) as the baseline for comparison with the GOMP in order to analyze the latter’s properties. For the implementation of Basis Pursuit, we used the ℓ1-Magic toolbox (open-source code: https://candes.su.domains/software/l1magic/#code, accessed on 15 February 2023) developed by E. Candès and J. Romberg in [33].
3.2. Effectiveness of the GOMP
Figure 2 shows the performance of the GOMP for an input signal in dimension with group sparsity level and number of measurements under different noises, where the red line represents the original signal and the black squares represent the approximation. Figure 3 shows the performance of BP. The poor performance of BP is easily observed when noise is present.
Figure 2.
The recovery of an input signal via GOMP in dimension with group sparsity level and number of measurements under different noises: (a) The recovery of an input signal via GOMP under the noise . (b) The recovery of an input signal via GOMP under a Gaussian noise e from .
Figure 3.
The recovery of an input signal via BP in dimension with group sparsity level and number of measurements under different noises: (a) The recovery of an input signal via BP under the noise . (b) The recovery of an input signal via BP under a Gaussian noise e from .
In general, the above example shows that the GOMP is very effective for signal recovery, i.e., it can recover the group sparse signal exactly. We will analyze the performance of the GOMP further in terms of the mean square error by a comparison with BP in the next subsection.
Figure 4 describes the situation in dimension under the noise . It displays the relation between the percentage (out of 100 input signals) of supports that are recovered correctly and the number N of measurements. To some extent, it also shows how many measurements are necessary to recover the support of an input group K-sparse signal with high probability. If the percentage equals 100%, the supports of all 100 input signals were found, which means the support of the input signal can be exactly recovered. As expected, Figure 4 shows that when the group sparsity level K increases, the number N of measurements must also increase to guarantee signal recovery. Furthermore, we find that the GOMP recovers the correct support of the input signal with high probability.
Figure 4.
The percentage of the support of 100 input signals correctly recovered as a function of number N of Gaussian measurements for different group sparsity levels K in dimension .
3.3. Comparison with Basis Pursuit
In order to demonstrate the efficiency and robustness of the Group Orthogonal Matching Pursuit (GOMP), we implement the Group Orthogonal Matching Pursuit (GOMP) and the Basis Pursuit (BP) to recover the group sparse signals for comparison.
We implemented the GOMP and BP for recovering group sparse signals of a fixed dimension and number of measurements under different noises, and computed the average mean square error (MSE) and the average running time over 100 repetitions of the test. Figure 5 and Figure 6, respectively, show the MSE and the running time of the GOMP; both the error and the running time of the GOMP increase as the group sparsity level increases. The MSE of the GOMP remains very small in both the noiseless and the Gaussian-noise settings, which means that the GOMP can effectively recover the group sparse signal. Meanwhile, the computational complexity of the GOMP is also relatively low, judging from its running time. Figure 7 and Figure 8 show that the MSE and the running time of BP follow the same trend as those of the GOMP: the error and the running time increase with the sparsity level when the other parameters are fixed, while the MSE of BP is considerably higher than that of the GOMP in both settings. To facilitate the comparison, the detailed results behind Figure 5, Figure 6, Figure 7 and Figure 8 are also listed in Table 1 and Table 2.
Figure 5.
MSE and running time of the GOMP in dimension with number of measurements under the noise .
Figure 6.
MSE and running time of the GOMP in dimension with number of measurements under a Gaussian noise e from .
Figure 7.
MSE and running time of BP in dimension with number of measurements under the noise .
Figure 8.
MSE and running time of BP in dimension with number of measurements under a Gaussian noise e from .
Table 1.
The average MSE and running time (repeating 100 times) of the GOMP and BP in 1024 dimension with number of measurements under the noise .
Table 2.
The average MSE and running time (repeating 100 times) of the GOMP and BP in 1024 dimension with number of measurements under a Gaussian noise e from .
In terms of the mean square error, our algorithm clearly outperforms BP on both noiseless and noisy data. This trend is expected, because the solution of the GOMP is updated by solving least squares problems. Moreover, BP normally cannot exactly recover the real signal, since the produced solution may fail to be 0 on the indices outside the support, as observed during the experiments (see also Figure 3b). For the GOMP, the support of the solution is restricted to a group sparse set determined by the group indices selected at each iteration, which also contributes to its better performance compared with BP. Although the performance of the GOMP is affected by the group sparsity level and by noise, our method still attains low mean square errors and performs better than BP, showing the robustness and effectiveness of the proposed algorithm. We also observe that the GOMP runs much faster than BP, demonstrating the scalability and efficiency of our algorithm.
Furthermore, when it comes to large-scale group sparse signal recovery, the execution of BP occupies more memory and takes far more time to obtain the solution than our method (details can be found in Table 3 and Table 4 for the different noise scenarios). This is because the sizes of the least squares problems that the GOMP needs to solve are bounded by the sparsity level K, as seen in the definition of the GOMP in Section 1. At the same time, the GOMP still outperforms BP substantially in terms of MSE. In addition, we find that BP becomes slightly slower when noise is present, whereas the GOMP is not affected.
Table 3.
The average MSE and running time (repeating 100 times) of the GOMP and BP with number of measurements and group sparsity level under the noise .
Table 4.
The average MSE and running time (repeating 100 times) of the GOMP and BP with number of measurements and group sparsity level under a Gaussian noise e from .
4. Conclusions
We propose the Group Orthogonal Matching Pursuit (GOMP) to recover group sparse signals. We analyze the error of the GOMP algorithm in the process of recovering group sparse signals from noisy measurements, and we show the instance optimality and robustness of the GOMP under the group restricted isometry property (GRIP) of the encoder matrix. Compared with the P-norm minimization approach, the GOMP has two advantages. One is its easier implementation: it runs quickly and has low computational complexity. The other is that it does not require the concept of γ-decomposability. Furthermore, our simulation results show that the GOMP is very efficient for group sparse signal recovery and significantly outperforms Basis Pursuit in both scalability and solution quality.
On the other hand, there are several algorithms that we will study in the future, such as Sparsity Adaptive Matching Pursuit (SAMP) [34], Constrained Backtracking Matching Pursuit (CBMP) [35] and Group-based Sparse Representation-Joint Regularization (GSR-JR) [36], which could be potentially competitive with the GOMP.
Author Contributions
Conceptualization, all authors; methodology, all authors; software, all authors; validation, all authors; formal analysis, all authors; investigation, all authors; resources, all authors; data curation, all authors; writing—original draft preparation, all authors; writing—review and editing, all authors; visualization, all authors; supervision, all authors; project administration, all authors; funding acquisition, all authors. All authors have read and agreed to the published version of the manuscript.
Funding
The work is supported by the National Natural Science Foundation of China Grant No. 11671213 and No. 11701411.
Data Availability Statement
The data presented in this study are available upon request from the corresponding author.
Acknowledgments
The authors would like to thank the referees and the editors for their very useful suggestions, which significantly improved this paper.
Conflicts of Interest
The authors declare no conflict of interest.
Appendix A
In this appendix, we establish three lemmas which will be used in the proof of Theorem 3. The first one is concerned with the reduction of the residuals for the GOMP algorithm.
Lemma A1.
Let be the residual error sequences generated by the applied to and let If is not contained in then we have
Proof.
The proof of Lemma A1 is based on the ideas of Cohen, Dahmen and DeVore [24]. We may assume that Otherwise, inequality (A1) is trivially satisfied.
Denote
It is clear that Note that
which implies
Now, we estimate Since satisfies the GRIP of order with the isometry constant we have
which results in the following lower estimate
where the first inequality is obtained by (A3) and the Cauchy–Schwarz inequality. Using inequality (A4), we can continue to estimate (A2) and conclude
Therefore, in view of inequalities (A5) and (A1), it remains to prove that
which is equivalent to
We note that
This is the same as
On the one hand, by the GRIP of we have
On the other hand, we have
According to the greedy step of the GOMP, we further derive
where we have used the Cauchy–Schwarz inequality.
Corollary A1.
Assume that for and subsets If
and
then
Proof.
By Lemma A1, for we have either or inequality (A1) holds. Inequality (A1), along with , imply that either or
where we use the inequality 1 − t ≤ exp(−t) for t ≥ 0.
Therefore, for any and we have either or
We now prove this corollary by induction on If then we can set and consider When
we have by inequality (A11)
and hence
Note that this inequality also holds when since
Therefore, inequality (A10) always holds when
Assume that the corollary holds at for some That is, with
we have
Note that for we have
The second and third ones are technical lemmas. For , set
Lemma A2.
For all we have
Proof.
Let Then Therefore,
We continue to estimate (A14). Note that By the definition of and the GRIP of , we have
where we have used the mean value inequality in the second inequality. □
Lemma A3.
For all we have
Proof.
Notice that
By the definition of and the GRIP of , we derive
where we have used again the mean value inequality in the second inequality. □
Appendix B
Proof of Theorem 3.
We prove this theorem by induction on If then
Assume that the inequality holds with for some integer Then we consider the case of Without loss of generality, we assume for notational convenience that and for is arranged in descending order so that Let L be the smallest positive integer, such that for all we have
but
where Note that so Moreover, if the second inequality is always satisfied for all then we can simply take
We define
Then Lemma A2 implies that for
where
By (A16), we have
for Thus and It follows from Corollary A1 that for
and
we have
Combining (A17) with (A18), we obtain
If then
where we have used in the last inequality.
If then combining inequality (A20) with Lemma A3, we have
which implies that
Therefore, that is, Then we can take for and run the GOMP algorithm again. By induction, after another
GOMP iterations, we have
while Since
so for
and
we have
This finishes the induction step for the case
For the second inequality, combining the first inequality with Lemma A3, we have
Thus we complete the proof of this theorem. □
When the initial feature set is the empty set, we have the following corollary of Theorem 3.
Corollary A2.
If the GRIP condition holds, then we have for
and
where
References
- Foucart, S.; Rauhut, H. A Mathematical Introduction to Compressive Sensing; Springer: Berlin/Heidelberg, Germany, 2013. [Google Scholar]
- Cai, T.; Zhang, A. Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Trans. Inf. Theory 2014, 60, 122–132. [Google Scholar] [CrossRef]
- Günther, M.; Böhringer, S.; Wieczorek, D.; Würtz, R.P. Reconstruction of images from Gabor graphs with applications in facial image processing. Int. J. Wavelets Multiresolut. Inf. Process. 2015, 13, 1550019. [Google Scholar] [CrossRef]
- Ye, P.X.; Wei, X.J. Efficiency of weak greedy algorithms for m-term approximations. Sci. China. Math. 2016, 59, 697–714. [Google Scholar] [CrossRef]
- Yang, B.; Yang, C.; Huang, G. Efficient image fusion with approximate sparse representation. Int. J. Wavelets Multiresolut. Inf. Process. 2016, 14, 1650024. [Google Scholar] [CrossRef]
- Candès, E.; Kalton, N.J. The restricted isometry property and its implications for compressed sensing. C. R. Acad. Sci. Sér. I 2008, 346, 589–592. [Google Scholar] [CrossRef]
- Candès, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215. [Google Scholar] [CrossRef]
- Yuan, M.; Lin, Y. Model selection and estimation in regression with grouped variables. J. R. Stat. Ser. B 2006, 68, 49–67. [Google Scholar] [CrossRef]
- Ou, W.; Hämäläinen, M.S.; Golland, P. A distributed spatio-temporal EEG/MEG inverse solver. NeuroImage 2009, 44, 932–946. [Google Scholar] [CrossRef]
- Antoniadis, A.; Fan, J. Regularization of wavelet approximations. J. Amer. Stat. Assoc. 2001, 96, 939–967. [Google Scholar] [CrossRef]
- He, Z.; Yu, W. Stable feature selection for biomarker discovery. Comput. Biol. Chem. 2010, 34, 215–225. [Google Scholar] [CrossRef]
- Mishali, M.; Eldar, Y.C. Blind multiband signal reconstruction: Compressed sensing for analog signals. IEEE Trans. Signal Process. 2009, 57, 993–1009. [Google Scholar] [CrossRef]
- Ahsen, M.E.; Vidyasagar, M. Error bounds for compressed sensing algorithms with group sparsity: A unified approach. Appl. Comput. Harmon. Anal. 2017, 42, 212–232. [Google Scholar] [CrossRef]
- Ranjan, S.; Vidyasagar, M. Tight performance bounds for compressed sensing with conventional and group sparsity. IEEE Trans. Signal Process 2019, 67, 2854–2867. [Google Scholar] [CrossRef]
- Tropp, J.A. Greedy is good: Algorithmic results for sparse approximation. IEEE Trans. Inform. Theory 2004, 50, 2231–2242. [Google Scholar] [CrossRef]
- Lin, J.H.; Li, S. Nonuniform support recovery from noisy measurements by orthogonal matching pursuit. J. Approx. Theory 2013, 165, 20–40. [Google Scholar] [CrossRef]
- Mo, Q.; Shen, Y. A remark on the restricted isometry property in orthogonal matching pursuit. IEEE Trans. Inf. Theory 2012, 58, 3654–3656. [Google Scholar] [CrossRef]
- Dan, W. Analysis of orthogonal multi-matching pursuit under restricted isometry property. Sci. China Math. 2014, 57, 2179–2188. [Google Scholar] [CrossRef]
- Xu, Z.Q. The performance of orthogonal multi-matching pursuit under RIP. J. Comp. Math. 2015, 33, 495–516. [Google Scholar] [CrossRef]
- Shao, C.F.; Ye, P.X. Almost optimality of orthogonal super greedy algorithms for incoherent dictionaries. Int. J. Wavelets Multiresolut. Inf. Process. 2017, 15, 1750029. [Google Scholar] [CrossRef]
- Wei, X.J.; Ye, P.X. Efficiency of orthogonal super greedy algorithm under the restricted isometry property. J. Inequal. Appl. 2019, 124, 1–21. [Google Scholar] [CrossRef]
- Zhang, T. Sparse recovery with orthogonal matching pursuit under RIP. IEEE Trans. Inf. Theory 2011, 57, 6215–6221. [Google Scholar] [CrossRef]
- Xu, Z.Q. A remark about orthogonal matching pursuit algorithm. Adv. Adapt. Data Anal. 2012, 04, 1250026. [Google Scholar] [CrossRef]
- Cohen, A.; Dahmen, W.; DeVore, R. Orthogonal matching pursuit under the restricted isometry property. Constr. Approx. 2017, 45, 113–127. [Google Scholar] [CrossRef]
- Eldar, Y.C.; Mishali, M. Robust recovery of signals from a structured union of subspaces. IEEE Trans. Inf. Theory 2009, 55, 5302–5316. [Google Scholar] [CrossRef]
- Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef]
- Van Den Berg, E.; Friedlander, M.P. Theoretical and empirical results for recovery from multiple measurements. IEEE Trans. Inf. Theory 2010, 56, 2516–2527. [Google Scholar] [CrossRef]
- Eldar, Y.C.; Kuppinger, P.; Bölcskei, H. Block-sparse signals: Uncertainty relations and efficient recovery. IEEE Trans. Signal Process. 2010, 58, 3042–3054. [Google Scholar] [CrossRef]
- Lai, M.J.; Liu, Y. The null space property for sparse recovery from multiple measurement vectors. Appl. Comput. Harmon. Anal. 2011, 30, 402–406. [Google Scholar] [CrossRef]
- Zhang, J.; Wang, J.; Wang, W. A perturbation analysis of block-sparse compressed sensing via mixed l2/l1 minimization. Int. J. Wavelets, Multiresolut. Inf. Process. 2016, 14, 1650026. [Google Scholar] [CrossRef]
- Chen, W.G.; Ge, H.M. A sharp recovery condition for block sparse signals by block orthogonal multi-matching pursuit. Sci. China Math. 2017, 60, 1325–1340. [Google Scholar] [CrossRef]
- Wen, J.M.; Zhou, Z.C.; Liu, Z.L.; Lai, M.J.; Tang, X.H. Sharp sufficient conditions for stable recovery of block sparse signals by block orthogonal matching pursuit. Appl. Comput. Harmon. Anal. 2019, 47, 948–974. [Google Scholar] [CrossRef]
- Candès, E.; Romberg, J. l1-Magic: Recovery of Sparse Signals via Convex Programming. Available online: https://candes.su.domains/software/l1magic/#code (accessed on 15 February 2023).
- Do, T.T.; Gan, L.; Nguyen, N.; Tran, T.D. Sparsity adaptive matching pursuit algorithm for practical compressed sensing. In Proceedings of the 2008 42nd Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 26–29 October 2008; pp. 581–587. [Google Scholar]
- Bi, X.; Leng, L.; Kim, C.; Liu, X.; Du, Y.; Liu, F. Constrained Backtracking Matching Pursuit Algorithm for Image Reconstruction in Compressed Sensing. Appl. Sci. 2021, 11, 1435. [Google Scholar] [CrossRef]
- Wang, R.; Qin, Y.; Wang, Z.; Zheng, H. Group-Based Sparse Representation for Compressed Sensing Image Reconstruction with Joint Regularization. Electronics 2022, 11, 182. [Google Scholar] [CrossRef]