Article

Sparse Signal Recovery via Rescaled Matching Pursuit

Department of Applied Mathematics, School of Mathematical Sciences and LPMC, Nankai University, Tianjin 300071, China
* Author to whom correspondence should be addressed.
Axioms 2024, 13(5), 288; https://doi.org/10.3390/axioms13050288
Submission received: 16 March 2024 / Revised: 7 April 2024 / Accepted: 21 April 2024 / Published: 24 April 2024

Abstract

We propose the Rescaled Matching Pursuit (RMP) algorithm to recover sparse signals in high-dimensional Euclidean spaces. The RMP algorithm has lower computational complexity than other greedy-type algorithms, such as Orthogonal Matching Pursuit (OMP). We show that if the restricted isometry property is satisfied, then an upper bound on the error between the original signal and its approximation can be derived. Furthermore, we prove that the RMP algorithm can find the correct support of sparse signals from random measurements with a high probability. Our numerical experiments also verify this conclusion and show that the RMP is stable in the presence of noise. Therefore, the RMP algorithm is a suitable method for recovering sparse signals.

1. Introduction

Compressed sensing (or compressive sensing) [1] has become a very active research direction in the field of signal processing after the pioneering work by Candès and Tao (see [2]). It has been successfully applied in image compression, signal processing, medical imaging, and computer science (see [3,4,5,6,7]). Compressed sensing refers to the problem of recovering sparse signals from low-dimensional measurements (see [8]). In image processing, for example, one first compresses the image so that the resulting signal is sparse; its reconstruction then falls within the scope of compressed sensing.
Let $\mathbb{R}^d$ denote the $d$-dimensional Euclidean space. Consider a signal $x \in \mathbb{R}^d$. We say $x$ is a $k$-sparse vector if $\#\operatorname{supp}(x) \le k$. Here, $\operatorname{supp}(x)$ denotes the index set such that $x_i \ne 0$ for all $i \in \operatorname{supp}(x)$, and $\#A$ represents the cardinality of a set $A$. By compressed sensing theory, the recovery of a sparse signal can be mathematically described as
$$f = \Phi x, \qquad (1)$$
where $x$ is a $d$-dimensional $k$-sparse vector and $\Phi \in \mathbb{R}^{N \times d}$ is the measurement matrix with $N \ll d$. The goal of signal recovery is to construct an algorithm that approximates the unknown signal $x$ from the measurement matrix $\Phi$ and the given data $f = \Phi x$.
A well-known method in compressed sensing is basis pursuit (BP), which is also known as $\ell_1$ minimization (see [9]). It is defined as
$$\hat{x} = \arg\min_{x \in \mathbb{R}^d} \big\{ \|x\|_1 : \text{ subject to } f = \Phi x \big\}. \qquad (2)$$
By imposing a "restricted orthonormality hypothesis", which is far weaker than assuming orthonormality and is given in Definition 1 below, the $\ell_1$ minimization program (2) can recover $x$ exactly. Furthermore, the target sparse signal $x$ can be recovered with a high probability when $\Phi$ is a Gaussian random matrix.
Definition 1.
A matrix $\Phi$ is said to satisfy the restricted isometry property (RIP) at sparsity level $k$ with the constant $\delta_k \in (0,1)$ if
$$(1 - \delta_k)\|x\|_2^2 \le \|\Phi x\|_2^2 \le (1 + \delta_k)\|x\|_2^2$$
holds for all $k$-sparse vectors $x$. We say that the matrix $\Phi$ satisfies the restricted isometry property with parameters $(k, \delta)$ if $\delta_k < \delta$.
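The RIP constant cannot be computed efficiently in general, but the defining inequality can be probed numerically. The following is a minimal sketch assuming a NumPy environment; the function name `rip_ratio_range` and the Monte Carlo sampling strategy are our own illustrative choices, not part of the paper, and a sampled check of this kind can only suggest, never certify, a value of $\delta_k$.

```python
import numpy as np

def rip_ratio_range(Phi, k, trials=1000, seed=None):
    """Sample random k-sparse vectors x and record ||Phi x||_2^2 / ||x||_2^2.
    For a matrix satisfying the RIP at level k, every ratio lies in
    [1 - delta_k, 1 + delta_k]; sampling can only probe, not certify, this."""
    rng = np.random.default_rng(seed)
    N, d = Phi.shape
    ratios = []
    for _ in range(trials):
        support = rng.choice(d, size=k, replace=False)   # random k-sparse support
        x = np.zeros(d)
        x[support] = rng.standard_normal(k)
        ratios.append(np.linalg.norm(Phi @ x) ** 2 / np.linalg.norm(x) ** 2)
    return min(ratios), max(ratios)

# Example: a Gaussian matrix with i.i.d. N(0, 1/N) entries (cf. Definition 3 in Section 4)
N, d, k = 250, 1024, 40
Phi = np.random.default_rng(0).standard_normal((N, d)) / np.sqrt(N)
print(rip_ratio_range(Phi, k, trials=200))
```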
The $\ell_1$ minimization method and related algorithms based on solving (2) have been well studied, and they range widely in effectiveness and complexity (see [1,2,10,11,12]). It is known that a linear program can be solved in polynomial time using optimization software. However, the computation still takes a long time if the length of the signal is large.
Compared with the BP algorithm, greedy-type algorithms are easy to implement and have lower computational complexity and fast convergence. Thus, greedy-type algorithms are powerful methods for the recovery of signals and are increasingly used in that field; for example, see [13,14,15,16,17]. To recall this type of algorithm, we first recall some basic notions of sparse approximation in a Hilbert space $H$. We denote the inner product of $H$ by $\langle\cdot,\cdot\rangle$. The norm $\|\cdot\|$ on $H$ is defined by $\|f\| = \langle f, f\rangle^{1/2}$ for all $f \in H$. We call a set $\mathcal{D} \subset H$ a dictionary if the closure of $\operatorname{span}(\mathcal{D})$ is $H$, where $\operatorname{span}(\mathcal{D})$ is the linear space spanned by $\mathcal{D}$. We only consider normalized dictionaries, that is, $\|\varphi\| = 1$ for any $\varphi \in \mathcal{D}$. Let $\Sigma_m(\mathcal{D})$ denote the set of all elements which can be expressed as a linear combination of $m$ elements from $\mathcal{D}$:
$$\Sigma_m(\mathcal{D}) := \Big\{ g : g = \sum_{\varphi \in \Lambda} c_\varphi \varphi, \ \varphi \in \mathcal{D}, \ \#\Lambda \le m \Big\}.$$
We define the minimal error of the approximation of an element $f \in H$ by elements of $\Sigma_m(\mathcal{D})$ as
$$\sigma_m(f) := \sigma_m(f, \mathcal{D}) = \inf_{g \in \Sigma_m(\mathcal{D})} \|f - g\|.$$
Let a dictionary $\mathcal{D}$ be given. We define the collection of elements
$$\mathcal{A}_1^o(\mathcal{D}, M) := \Big\{ f \in H : f = \sum_{k \in \Lambda} c_k(f)\varphi_k, \ \varphi_k \in \mathcal{D}, \ \#\Lambda < \infty, \ \sum_{k \in \Lambda} |c_k(f)| \le M \Big\}.$$
Then, we define $\mathcal{A}_1(\mathcal{D}, M)$ to be the closure of $\mathcal{A}_1^o(\mathcal{D}, M)$ in $H$. Finally, we define the linear space $\mathcal{A}_1(\mathcal{D})$ by
$$\mathcal{A}_1(\mathcal{D}) := \bigcup_{M > 0} \mathcal{A}_1(\mathcal{D}, M)$$
and we equip $\mathcal{A}_1(\mathcal{D})$ with the norm
$$\|f\|_{\mathcal{A}_1(\mathcal{D})} := \inf\{ M : f \in \mathcal{A}_1(\mathcal{D}, M) \}$$
for all $f \in \mathcal{A}_1(\mathcal{D})$.
The simplest greedy algorithm in $H$ is known as the Pure Greedy Algorithm (PGA$(H, \mathcal{D})$). We recall its definition from [18].
Pure Greedy Algorithm (PGA($H, \mathcal{D}$))
Step 1: Set $f_0^{PGA} = 0$.
Step 2:
- If $f = f_{m-1}^{PGA}$, then stop and define $f_l^{PGA} = f_{m-1}^{PGA} = f$ for $l \ge m$.
- If $f \ne f_{m-1}^{PGA}$, then choose an element $\varphi_m \in \mathcal{D}$ satisfying
$$|\langle f - f_{m-1}^{PGA}, \varphi_m \rangle| = \sup_{\varphi \in \mathcal{D}} |\langle f - f_{m-1}^{PGA}, \varphi \rangle|.$$
Define the new approximation of $f$ as
$$f_m^{PGA} = \sum_{i=1}^{m} \langle f - f_{i-1}^{PGA}, \varphi_i \rangle \varphi_i$$
and proceed to Step 2.
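For readers who prefer code, the following is a minimal sketch of the PGA for the finite-dimensional case, assuming the Hilbert space is $\mathbb{R}^N$ with the usual inner product and the dictionary is stored as the unit-norm columns of a NumPy array; the function name `pga`, this matrix representation, and the small exit tolerance are illustrative choices, not part of the original description.

```python
import numpy as np

def pga(f, D, m):
    """Pure Greedy Algorithm sketch.
    D is an (N, n) array whose unit-norm columns form the dictionary,
    f is the target vector in R^N, and m is the number of iterations."""
    fm = np.zeros_like(f)
    for _ in range(m):
        r = f - fm
        if np.linalg.norm(r) < 1e-12:          # f is already reproduced exactly
            break
        inner = D.T @ r                        # <f - f_{m-1}, phi> for every phi
        j = int(np.argmax(np.abs(inner)))      # greedy selection of phi_m
        fm = fm + inner[j] * D[:, j]           # f_m = f_{m-1} + <r, phi_m> phi_m
    return fm
```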
In [19], DeVore and Temlyakov proposed the Orthogonal Greedy Algorithm (OGA$(H, \mathcal{D})$) in $H$ with a dictionary $\mathcal{D}$. We recall its definition.
Orthogonal Greedy Algorithm (OGA($H, \mathcal{D}$))
Step 1: Set $f_0^{OGA} = 0$.
Step 2:
- If $f = f_{m-1}^{OGA}$, then stop and define $f_l^{OGA} = f_{m-1}^{OGA} = f$ for $l \ge m$.
- If $f \ne f_{m-1}^{OGA}$, then choose an element $\varphi_m \in \mathcal{D}$ satisfying
$$|\langle f - f_{m-1}^{OGA}, \varphi_m \rangle| = \sup_{\varphi \in \mathcal{D}} |\langle f - f_{m-1}^{OGA}, \varphi \rangle|.$$
Define the new approximation of $f$ as
$$f_m^{OGA} = P_m(f),$$
where $P_m(f)$ is the best approximation of $f$ from the linear space $V_m := \operatorname{span}\{\varphi_1, \varphi_2, \ldots, \varphi_m\}$, and proceed to Step 2.
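A corresponding sketch of the OGA under the same finite-dictionary assumptions (unit-norm columns of a NumPy array) is given below; computing the projection $P_m(f)$ with a least squares solve is one of several possible implementations and is chosen here only for brevity.

```python
import numpy as np

def oga(f, D, m):
    """Orthogonal Greedy Algorithm sketch.
    Same conventions as the PGA sketch: D holds unit-norm dictionary
    elements as columns. The projection P_m(f) onto V_m (the span of the
    selected elements) is computed with a least squares solve."""
    selected = []
    fm = np.zeros_like(f)
    for _ in range(m):
        r = f - fm
        if np.linalg.norm(r) < 1e-12:
            break
        j = int(np.argmax(np.abs(D.T @ r)))    # greedy selection of phi_m
        if j not in selected:
            selected.append(j)
        coef, *_ = np.linalg.lstsq(D[:, selected], f, rcond=None)
        fm = D[:, selected] @ coef             # orthogonal projection of f onto V_m
    return fm
```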
In fact, the OGA is a modification of the PGA that improves the convergence rate. The convergence rate is one of the critical issues for the approximation power of an algorithm. If $f_m$ is the approximation of the target element $f$ obtained by applying a greedy algorithm after $m$ iterations, the efficiency of the algorithm can be measured by the decay of the sequence $\{\|f - f_m\|\}$. It has been shown that the PGA and the OGA achieve the convergence rates $m^{-1/6}$ and $m^{-1/2}$ for $\|f - f_m\|$, respectively, if the target element $f$ belongs to $\mathcal{A}_1(\mathcal{D}, M)$ (see [19]).
Theorem 1.
Suppose $\mathcal{D}$ is a dictionary of $H$. Then, for every $f \in \mathcal{A}_1(\mathcal{D}, M)$, the following inequality holds:
$$\|f - f_m^{PGA}\| \le M m^{-1/6}, \quad m = 1, 2, \ldots$$
Theorem 2.
Suppose $\mathcal{D}$ is a dictionary of $H$. Then, for every $f \in \mathcal{A}_1(\mathcal{D}, M)$, the following inequality holds:
$$\|f - f_m^{OGA}\| \le M m^{-1/2}, \quad m = 1, 2, \ldots$$
It is known from [20] that the convergence rate $m^{-1/2}$ cannot be improved. Thus, as a general result holding for all dictionaries, this convergence rate is optimal.
The fundamental problem of signal recovery is to find the support of the original target signal by using the model (1). Since $N$ is much smaller than the dimension $d$, the column vectors of the measurement matrix $\Phi$ are linearly dependent. Therefore, these vectors form a redundant system, which may be considered as a dictionary of $\mathbb{R}^N$. It is well known that $\mathbb{R}^N$ is a Hilbert space with the usual inner product
$$\langle x, y \rangle = \sum_{j=1}^{N} x_j y_j, \quad x, y \in \mathbb{R}^N.$$
Thus, signal recovery can be considered as an approximation problem with a dictionary in a Hilbert space. In particular, the approximation of the original signal is the solution of the following minimization problem:
$$\min_{\hat{x}_1, \ldots, \hat{x}_d \in \mathbb{R}} \Big\| f - \sum_{i=1}^{d} \hat{x}_i \varphi_i \Big\|_2,$$
where $\varphi_1, \ldots, \varphi_d$ denote the columns of $\Phi$. Naturally, one can adapt greedy approximation algorithms to handle the signal recovery problem. In compressed sensing, one such method is Orthogonal Matching Pursuit (OMP) (see [21,22]), which is an application of the OGA to the field of signal processing. Compared with BP, OMP is faster and easier to implement (see [17,23]). We recall the definition of the OMP from [23].
Orthogonal Matching Pursuit (OMP)
Input: An $N \times d$ matrix $\Phi$, $f \in \mathbb{R}^N$, the sparsity index $k$.
Step 1: Set $r_0 = f$, the index set $\Lambda_0 = \emptyset$, and $m = 1$.
Step 2: Define $\Lambda_m := \Lambda_{m-1} \cup \{i_m\}$ such that
$$|\langle r_{m-1}, \varphi_{i_m} \rangle| = \sup_{\varphi \in \Phi} |\langle r_{m-1}, \varphi \rangle|.$$
Then,
$$x^m = \arg\min_{z : \operatorname{supp}(z) \subset \Lambda_m} \|f - \Phi z\|_2, \qquad f_m^o = \Phi x^m,$$
and
$$r_m = f - f_m^o = f - \Phi x^m.$$
Step 3: Increment $m$, and return to Step 2 if $\#\Lambda_m < k$.
Output: If $\#\Lambda_m \ge k$, then output $\Lambda_m$ and $\hat{x} = x^m$.
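The steps above translate directly into the following sketch, again assuming NumPy and a matrix $\Phi$ whose columns play the role of the dictionary; the early-exit tolerance and the use of `numpy.linalg.lstsq` for the least squares step are illustrative choices, not part of the original formulation.

```python
import numpy as np

def omp(Phi, f, k):
    """Orthogonal Matching Pursuit sketch following Steps 1-3 above.
    Returns the index set Lambda and the estimate x_hat supported on Lambda."""
    N, d = Phi.shape
    r = f.copy()
    Lambda = []
    x_hat = np.zeros(d)
    while len(Lambda) < k:
        if np.linalg.norm(r) < 1e-12:                    # exact fit reached early
            break
        j = int(np.argmax(np.abs(Phi.T @ r)))            # most correlated column i_m
        if j not in Lambda:
            Lambda.append(j)
        coef, *_ = np.linalg.lstsq(Phi[:, Lambda], f, rcond=None)  # least squares on Lambda_m
        x_hat = np.zeros(d)
        x_hat[Lambda] = coef
        r = f - Phi @ x_hat                              # residual r_m = f - Phi x^m
    return Lambda, x_hat
```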
It is clear from the available literature that the OMP algorithm is the most popular greedy-type algorithm used for signal recovery. The OMP algorithm, with a high probability, can exactly recover each sparse signal by using a Gaussian or Bernoulli matrix (see [17]). It has also been shown that the OMP algorithm can stably recover sparse signals in $\ell_2$ under measurement noise (see [24]). Although the OMP algorithm is powerful, the computational complexity of the orthogonalization step is quite high, especially for large-scale problems. See [25] for the detailed computation and storage cost of the OMP. So, the main disadvantage of the OMP is its high computational cost. In particular, when the sparsity level $k$ of the target signal is large, OMP may not be a good choice, since the cost of orthogonalization increases quadratically with the number of iterations. Thus, in this case, it is natural to look for other greedy-type algorithms to reduce the computational cost.
In Section 2, we propose an algorithm called Rescaled Matching Pursuit (RMP) for signal recovery. This algorithm has lower computational complexity than the OMP, which means it can save resources such as storage space and computation time. In Section 3, we analyze the efficiency of the RMP algorithm under the RIP condition. In Section 4, we prove that the RMP algorithm can be used to find the correct support of a sparse signal from a random measurement matrix with a high probability. In Section 5, we use numerical experiments to verify this conclusion. In Section 6, we summarize our results.

2. Rescaled Matching Pursuit Algorithm

In [26], Petrova presented another modification of the PGA, called the Rescaled Pure Greedy Algorithm (RPGA). This modification is very simple: it merely rescales the approximation in each iteration of the PGA.
We recall the definition of the RPGA($H, \mathcal{D}$) from [26].
Rescaled Pure Greedy Algorithm (RPGA($H, \mathcal{D}$))
Step 1: Set $f_0 = 0$, $r_0 = f$.
Step 2:
- If $f = f_{m-1}$, then stop and define $f_k = f_{m-1} = f$ for $k \ge m$.
- If $f \ne f_{m-1}$, then choose an element $\varphi_m \in \mathcal{D}$ satisfying
$$|\langle r_{m-1}, \varphi_m \rangle| = \sup_{\varphi \in \mathcal{D}} |\langle r_{m-1}, \varphi \rangle|,$$
with
$$\lambda_m = \langle r_{m-1}, \varphi_m \rangle, \qquad \hat{f}_m := f_{m-1} + \lambda_m \varphi_m, \qquad s_m = \frac{\langle f, \hat{f}_m \rangle}{\|\hat{f}_m\|^2}.$$
Then, define the new approximation of $f$ as
$$f_m = s_m \hat{f}_m, \qquad r_m = f - f_m,$$
and proceed to Step 2.
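The following is a minimal sketch of the RPGA under the same finite-dictionary assumptions used for the PGA and OGA sketches above; the only extra work per iteration compared with the PGA is the scalar rescaling factor $s_m$, which is a one-dimensional projection of $f$ onto $\operatorname{span}\{\hat{f}_m\}$.

```python
import numpy as np

def rpga(f, D, m):
    """Rescaled Pure Greedy Algorithm sketch (same conventions as above).
    Each iteration performs one greedy selection, one PGA-type update, and
    one rescaling: the orthogonal projection of f onto span{f_hat_m}."""
    fm = np.zeros_like(f)
    for _ in range(m):
        r = f - fm
        if np.linalg.norm(r) < 1e-12:
            break
        inner = D.T @ r
        j = int(np.argmax(np.abs(inner)))      # greedy selection of phi_m
        lam = inner[j]                         # lambda_m = <r_{m-1}, phi_m>
        f_hat = fm + lam * D[:, j]             # intermediate approximation f_hat_m
        s = (f @ f_hat) / (f_hat @ f_hat)      # rescaling factor s_m
        fm = s * f_hat                         # f_m = s_m * f_hat_m
    return fm
```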
The following theorem on the convergence rate of the RPGA was also obtained in [26].
Theorem 3.
Suppose $\mathcal{D}$ is a dictionary of $H$. Then, for every $f \in \mathcal{A}_1(\mathcal{D})$, the RPGA satisfies the following inequality:
$$\|f - f_m\| \le \|f\|_{\mathcal{A}_1(\mathcal{D})} (m+1)^{-1/2}, \quad m = 0, 1, 2, \ldots$$
Notice that the RPGA only needs to solve a one-dimensional optimization problem at each step. Thus, the RPGA is simpler than the OGA. Together with Theorem 3, this shows that the RPGA is a greedy algorithm with minimal computational complexity that achieves the optimal convergence rate $m^{-1/2}$. Based on this result, the RPGA has been applied successfully to solve problems of convex optimization and regression; see [27,28,29]. In solving these two types of problems, the RPGA performs better than the OGA. The main reason is that the RPGA has a smaller computational cost than the OGA. For more detailed results about the RPGA, one can refer to [26,27,28,29,30]. Besides, in [31] the authors proposed the Super Rescaled Pure Greedy Learning Algorithm and investigated its behavior. The success of the RPGA inspires us to consider its application in the field of signal recovery.
In the present article, we design an algorithm based on the RPGA to recover sparse signals and analyze its performance. Suppose that $x$ is a $k$-sparse signal in $\mathbb{R}^d$ and $\Phi$ is an $N \times d$ matrix. We denote the columns of $\Phi$ by $\varphi_1, \ldots, \varphi_d$. Then, we design the Rescaled Matching Pursuit (RMP) for sparse signal recovery.
Rescaled Matching Pursuit (RMP)
Input: Measurement matrix $\Phi$, vector $f$, the sparsity index $k$.
Step 1: Set $r_0 = f$, $f_0 = 0$, the index set $\Lambda_0 = \emptyset$, and $m = 1$.
Step 2: Define $\Lambda_m := \Lambda_{m-1} \cup \{i_m\}$ such that
$$|\langle r_{m-1}, \varphi_{i_m} \rangle| = \sup_{\varphi \in \Phi} |\langle r_{m-1}, \varphi \rangle|.$$
Then,
$$f_m = s_m \hat{f}_m,$$
where $\theta_m = \langle r_{m-1}, \varphi_{i_m} \rangle$, $\hat{f}_m = f_{m-1} + \theta_m \varphi_{i_m}$, and $s_m = \frac{\langle f, \hat{f}_m \rangle}{\|\hat{f}_m\|_2^2}$. Next, update the residual
$$r_m = f - f_m.$$
Step 3: Increment $m$, and return to Step 2 if $\#\Lambda_m < k$.
Output: If $\#\Lambda_m \ge k$, then output $\Lambda_m$ and $\hat{x}$. It is easy to see that $\hat{x}$ has nonzero entries only at the components listed in $\Lambda_m$. If $i_{m_1} = i_{m_2} = \cdots = i_{m_s} = i_j$, then the value of $\hat{x}$ in component $i_j$ is given by
$$\hat{x}(i_j) = \sum_{l=1}^{s} \theta_{m_l} \prod_{i=m_l}^{k} s_i.$$
The main procedure of the algorithm can be divided into selecting the columns of $\Phi$ and constructing the approximation $f_m$. In Step 2, the approximation $f_m$ is obtained by solving a one-dimensional optimization problem. In fact, $f_m$ is the orthogonal projection of $f$ onto the one-dimensional space $\operatorname{span}\{\hat{f}_m\}$. The RMP algorithm is broadly similar to the OMP algorithm, but the computational complexity differs. For the OMP algorithm, the approximation $f_m^o := \Phi x^m$ is obtained by solving an $m$-dimensional least squares problem at the $m$-th iteration of Step 2, which has a total cost of $O(mN)$. For the RMP algorithm, as argued above, the approximation $f_m$ is obtained by solving a one-dimensional optimization problem at the $m$-th iteration of Step 2, which has a total cost of $O(N)$. It is not difficult to see that this computational cost is lower than that of most existing greedy algorithms, such as the OMP algorithm. That is, we can save a lot of resources, such as storage space and computation time, by using the RMP algorithm.
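For concreteness, the following is a minimal NumPy sketch of the RMP as described above; the coefficient bookkeeping via the vector `c` and the early-exit tolerance are our own illustrative choices. Since $f_m = s_m(f_{m-1} + \theta_m \varphi_{i_m})$, keeping a vector $c$ with $f_m = \Phi c$ and updating $c \leftarrow s_m (c + \theta_m e_{i_m})$ accumulates exactly the output coefficients $\hat{x}$.

```python
import numpy as np

def rmp(Phi, f, k):
    """Rescaled Matching Pursuit sketch following Steps 1-3 above.
    A coefficient vector c with f_m = Phi @ c is carried along, so the
    returned c equals the output estimate x_hat."""
    N, d = Phi.shape
    r = f.copy()
    fm = np.zeros(N)
    c = np.zeros(d)
    Lambda = set()
    while len(Lambda) < k:
        if np.linalg.norm(r) < 1e-12:                # exact fit reached early
            break
        inner = Phi.T @ r
        j = int(np.argmax(np.abs(inner)))            # selected index i_m
        Lambda.add(j)
        theta = inner[j]                             # theta_m = <r_{m-1}, phi_{i_m}>
        f_hat = fm + theta * Phi[:, j]               # f_hat_m = f_{m-1} + theta_m phi_{i_m}
        s = (f @ f_hat) / (f_hat @ f_hat)            # s_m = <f, f_hat_m> / ||f_hat_m||_2^2
        fm = s * f_hat                               # f_m = s_m * f_hat_m
        c = s * c
        c[j] += s * theta                            # c_m = s_m (c_{m-1} + theta_m e_{i_m})
        r = f - fm                                   # r_m = f - f_m
    return sorted(Lambda), c
```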

3. Efficiency of the RMP under the Restricted Isometry Property

Suppose that the matrix $\Phi$ satisfies the RIP condition defined in Definition 1. Then, we study the performance of the RMP for $k$-sparse signal recovery.
Under the RIP, we obtain the convergence rate of the error between the given data vector $f$ and its approximation $f_m$.
Theorem 4.
Assume that $x$ is an arbitrary $k$-sparse signal in $\mathbb{R}^d$ and the measurement matrix $\Phi \in \mathbb{R}^{N \times d}$ satisfies the RIP condition with parameters $(k, \delta)$. Then, for $f = \Phi x$, we have
$$\|f - f_m\|_2 \le \frac{k^{1/2}\|f\|_2}{1 - \delta}\,(m+1)^{-1/2}, \quad m = 0, 1, \ldots$$
Moreover, we derive an upper bound on the difference between the original signal $x$ and its estimate $\hat{x}$.
Theorem 5.
Assume that $x$ is an arbitrary $k$-sparse signal in $\mathbb{R}^d$ and the measurement matrix $\Phi \in \mathbb{R}^{N \times d}$ satisfies the RIP condition with parameters $(2k, \delta)$. Given the data $f = \Phi x$, if $\#\Lambda_L = k$ with $L \ge k$, we have
$$\|x - \hat{x}\|_2 \le \frac{k^{1/2}\|f\|_2}{(1-\delta)^{3/2}}\,(L+1)^{-1/2}.$$
To prove Theorems 4 and 5, we will use a lemma which was obtained in [32].
Lemma 1.
Let the positive constants $\ell$, $r$, and $B$ be given. Suppose $\{a_m\}$ and $\{r_m\}$ are finite or infinite sequences of positive real numbers such that
$$a_J \le B, \qquad a_m \le a_{m-1}\Big(1 - \frac{r_m}{r}\,a_{m-1}^{\ell}\Big), \quad m = J+1, J+2, \ldots$$
Then, $a_m$ has the following upper bound:
$$a_m \le \max\{1, 1/\ell\}\, r^{1/\ell} \Big( r B^{-\ell} + \sum_{k=J+1}^{m} r_k \Big)^{-1/\ell}, \quad m = J+1, J+2, \ldots$$
Proof of Theorem 4: 
It follows from the definitions of $\hat{f}_m$ and $\theta_m$ that
$$\|f - \hat{f}_m\|_2^2 = \langle f - \hat{f}_m, f - \hat{f}_m \rangle = \langle f - f_{m-1} - \theta_m \varphi_{i_m}, f - f_{m-1} - \theta_m \varphi_{i_m} \rangle = \|f - f_{m-1}\|_2^2 - 2\theta_m \langle f - f_{m-1}, \varphi_{i_m} \rangle + \theta_m^2 \|\varphi_{i_m}\|_2^2 = \|f - f_{m-1}\|_2^2 - 2\langle f - f_{m-1}, \varphi_{i_m} \rangle^2 + \|\varphi_{i_m}\|_2^2 \langle f - f_{m-1}, \varphi_{i_m} \rangle^2.$$
From the design of the RMP, we know that $f_m$ is the orthogonal projection of $f$ onto $\operatorname{span}\{\hat{f}_m\}$. Thus, the following must be true:
$$\|f - f_m\|_2^2 \le \|f - \hat{f}_m\|_2^2 = \|f - f_{m-1}\|_2^2 - 2\langle f - f_{m-1}, \varphi_{i_m} \rangle^2 + \|\varphi_{i_m}\|_2^2 \langle f - f_{m-1}, \varphi_{i_m} \rangle^2.$$
We continue to estimate the approximation error of $f_m$ by using the RIP condition of the matrix $\Phi$ with parameters $(k, \delta)$. Hence, the above inequality can be estimated as
$$\|f - f_m\|_2^2 \le \|f - f_{m-1}\|_2^2 - 2\langle f - f_{m-1}, \varphi_{i_m} \rangle^2 + (1 + \delta)\langle f - f_{m-1}, \varphi_{i_m} \rangle^2 = \|f - f_{m-1}\|_2^2 - (1 - \delta)\langle f - f_{m-1}, \varphi_{i_m} \rangle^2. \qquad (3)$$
Now, we derive a lower bound for $|\langle f - f_{m-1}, \varphi_{i_m} \rangle|$. Note that $f_m$ is orthogonal to $f - f_m$, that is,
$$\langle f - f_m, f_m \rangle = 0, \quad m \ge 0. \qquad (4)$$
It is observed that
$$f = \Phi x = \sum_{i=1}^{d} x_i \varphi_i = \sum_{i \in \operatorname{supp}(x)} x_i \varphi_i. \qquad (5)$$
From (4), (5), and the Hölder inequality, we have
$$\|f - f_{m-1}\|_2^2 = \langle f - f_{m-1}, f - f_{m-1} \rangle = \langle f - f_{m-1}, f \rangle = \sum_{i \in \operatorname{supp}(x)} x_i \langle f - f_{m-1}, \varphi_i \rangle \le \sum_{i \in \operatorname{supp}(x)} |x_i|\,|\langle f - f_{m-1}, \varphi_i \rangle| \le |\langle f - f_{m-1}, \varphi_{i_m} \rangle| \sum_{i \in \operatorname{supp}(x)} |x_i| \le |\langle f - f_{m-1}, \varphi_{i_m} \rangle|\, k^{1/2} \Big( \sum_{i \in \operatorname{supp}(x)} |x_i|^2 \Big)^{1/2} = k^{1/2}\, |\langle f - f_{m-1}, \varphi_{i_m} \rangle|\, \|x\|_2.$$
From the RIP condition of $\Phi$ with the parameters $(k, \delta)$, we obtain
$$\|f - f_{m-1}\|_2^2 \le k^{1/2}\, |\langle f - f_{m-1}, \varphi_{i_m} \rangle|\, \|x\|_2 \le k^{1/2} (1 - \delta)^{-1/2}\, |\langle f - f_{m-1}, \varphi_{i_m} \rangle|\, \|\Phi x\|_2 = k^{1/2} (1 - \delta)^{-1/2}\, |\langle f - f_{m-1}, \varphi_{i_m} \rangle|\, \|f\|_2,$$
which implies
$$|\langle f - f_{m-1}, \varphi_{i_m} \rangle| \ge \frac{(1 - \delta)^{1/2}}{k^{1/2}\|f\|_2}\, \|f - f_{m-1}\|_2^2.$$
Thus, combining the above inequality with (3), we have
$$\|f - f_m\|_2^2 \le \|f - f_{m-1}\|_2^2 - (1 - \delta)\,\frac{1 - \delta}{k\|f\|_2^2}\, \|f - f_{m-1}\|_2^4 = \|f - f_{m-1}\|_2^2 \Big( 1 - \frac{(1 - \delta)^2}{k\|f\|_2^2}\, \|f - f_{m-1}\|_2^2 \Big) = \|f - f_{m-1}\|_2^2 \Big( 1 - \Big( \frac{k^{1/2}\|f\|_2}{1 - \delta} \Big)^{-2} \|f - f_{m-1}\|_2^2 \Big).$$
Note that
$$\|f - f_0\|_2^2 = \|f\|_2^2 \le \frac{k}{(1 - \delta)^2}\, \|f\|_2^2.$$
Applying Lemma 1 with $a_m = \|f - f_m\|_2^2$, $B = \Big( \frac{k^{1/2}\|f\|_2}{1 - \delta} \Big)^2$, $r_m := 1$, $r = \Big( \frac{k^{1/2}\|f\|_2}{1 - \delta} \Big)^2$, $J = 0$, and $\ell = 1$, we have
$$\|f - f_m\|_2^2 \le \Big( \frac{k^{1/2}\|f\|_2}{1 - \delta} \Big)^2 (m+1)^{-1}, \quad m \ge 1,$$
which completes the proof of Theorem 4. □
Based on the result of Theorem 4, we can prove Theorem 5.
Proof of Theorem 5: 
If the matrix $\Phi$ satisfies the RIP condition with parameters $(2k, \delta)$, then by Theorem 4, we can estimate the upper bound of $\|x - \hat{x}\|_2$ as follows:
$$\|x - \hat{x}\|_2^2 \le (1 - \delta)^{-1} \|\Phi(x - \hat{x})\|_2^2 = (1 - \delta)^{-1} \|\Phi x - \Phi \hat{x}\|_2^2 = (1 - \delta)^{-1} \|f - f_L\|_2^2 \le \frac{k\|f\|_2^2}{(1 - \delta)^3}\,(L+1)^{-1}.$$
Thus, the proof of Theorem 5 is finished. □
It is known from Theorem 5 that the RMP can obtain a good approximation of the sparse signal $x$. In this respect, it is worse than the OMP, since the OMP can recover $x$ exactly if the RIP condition is satisfied. The key point is that one can establish the ideal Lebesgue-type inequality for the OMP but not for the RMP; for example, see [22,33,34]. On the other hand, as far as we know, it is still not possible to construct a deterministic matrix satisfying the RIP condition. In practice, one usually uses a random matrix in signal recovery. We will show that the performance of the RMP is similar to that of the OMP in this case.

4. Signal Recovery with Admissible Measurements

Since one cannot construct a deterministic matrix satisfying the RIP condition, it is natural to consider random matrices. In practice, almost all approaches to recovering sparse signals with greedy-type methods involve some kind of randomness. In this section, we choose a class of random matrices with good properties, called admissible matrices. We first recall the definition of this class of matrices. Then, we prove that the support of a $k$-sparse target signal in $\mathbb{R}^d$ can be recovered with a high probability by applying the RMP algorithm. This shows that the RMP is a powerful method for recovering sparse signals in high-dimensional Euclidean spaces.
Definition 2.
An admissible measurement matrix for a $k$-sparse signal in $\mathbb{R}^d$ is an $N \times d$ random matrix $\Phi$ satisfying the following conditions:
  • (M1) The columns of $\Phi$ are stochastically independent.
  • (M2) $\mathbb{E}(\|\varphi_j\|_2^2) = 1$ for $j = 1, \ldots, d$.
  • (M3) Let $\{u_m\}$ be a sequence of $k$ vectors whose $\ell_2$ norms do not exceed 1. If $\varphi$ is a column of $\Phi$ independent from $\{u_m\}$, then
    $$\mathrm{Prob}\Big\{ \max_m |\langle \varphi, u_m \rangle| \le \varepsilon \Big\} \ge 1 - 2k e^{-C_3 \varepsilon^2 N}$$
    for a positive constant $C_3$.
  • (M4) For a given $N \times k$ submatrix $Z$ of $\Phi$, the $k$-th largest singular value $\sigma_k(Z)$ satisfies
    $$\mathrm{Prob}\Big\{ \sigma_k(Z) \ge \tfrac{1}{2} \Big\} \ge 1 - e^{-C_4 N}$$
    for a positive constant $C_4$.
We illustrate some points about the conditions (M3) and (M4). The condition (M4) provides a bound on the smallest singular value of a submatrix, which is analogous to the RIP condition (see [17]). Furthermore, the condition (M3) controls the inner products, since the index selection in Step 2 of the RMP algorithm is based on inner products. Two typical classes of admissible measurement matrices are the Gaussian matrix and the Bernoulli matrix, which were first introduced by Candès and Tao for signal recovery and are defined as follows (see [2]).
Definition 3.
A measurement matrix $\Phi$ is called a Gaussian measurement if every entry of $\Phi$ is selected from the Gaussian distribution $\mathcal{N}(0, N^{-1})$, whose density function is $p(x) = \frac{1}{\sqrt{2\pi N^{-1}}}\, e^{-x^2 N/2}$ for $x \in \mathbb{R}$.
Definition 4.
A measurement matrix $\Phi$ is called a Bernoulli measurement if every entry of $\Phi$ is selected to be $\pm 1/\sqrt{N}$ with equal probability, i.e., $\mathrm{Prob}\{\Phi_{ij} = 1/\sqrt{N}\} = \mathrm{Prob}\{\Phi_{ij} = -1/\sqrt{N}\} = \frac{1}{2}$, where $i = 1, \ldots, N$ and $j = 1, \ldots, d$.
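Both ensembles can be generated with a few lines of NumPy; the sketch below is a minimal illustration (the function names are ours), with the $1/\sqrt{N}$ scaling chosen so that the columns satisfy property (M2) in expectation.

```python
import numpy as np

def gaussian_measurement(N, d, seed=None):
    """Gaussian measurement: i.i.d. entries drawn from N(0, 1/N)."""
    rng = np.random.default_rng(seed)
    return rng.standard_normal((N, d)) / np.sqrt(N)

def bernoulli_measurement(N, d, seed=None):
    """Bernoulli measurement: i.i.d. entries equal to +1/sqrt(N) or -1/sqrt(N),
    each with probability 1/2."""
    rng = np.random.default_rng(seed)
    return rng.choice([-1.0, 1.0], size=(N, d)) / np.sqrt(N)
```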
Obviously, for Gaussian and Bernoulli measurements, the properties (M1) and (M2) are straightforward to verify. We can check the other two properties using the following known results. See [17] for more details.
Proposition 1.
Suppose $\{u_m\}$ is a sequence of $k$ vectors whose $\ell_2$ norms do not exceed 1. Independently, choose $z$ to be a random vector with i.i.d. $\mathcal{N}(0, N^{-1})$ entries. Then
$$\mathrm{Prob}\Big\{ \max_m |\langle z, u_m \rangle| \le \varepsilon \Big\} \ge 1 - k e^{-\varepsilon^2 N/2}.$$
Proposition 2.
Let $Z$ be an $N \times k$ matrix whose entries are all i.i.d. $\mathcal{N}(0, N^{-1})$ or else i.i.d. uniform on $\pm 1/\sqrt{N}$. Then
$$\frac{1}{2}\|x\|_2 \le \|Z x\|_2 \le \frac{3}{2}\|x\|_2, \quad \text{for all } x \in \mathbb{R}^k,$$
with a probability of at least
$$1 - 2 \cdot 24^k \cdot e^{-cN}.$$
For the RMP algorithm, we derive the following result.
Theorem 6.
Let $x$ be a $k$-sparse signal in $\mathbb{R}^d$ and $\Phi$ be an $N \times d$ admissible matrix independent from the signal. For the given data $f = \Phi x$, if the RMP algorithm selects $k$ elements in the first $L \ge k$ iterations, the correct support of $x$ can be found with a probability exceeding $1 - 4L(d-k)e^{-cN/k}$, where $c$ is a constant depending on the admissible measurement matrix $\Phi$.
Proof of Theorem 6: 
Without loss of generality, we may assume that the first $k$ components of $x$ are nonzero and the remaining components of $x$ are zero. Obviously, $f$ can be expressed as a linear combination of the first $k$ columns of the measurement matrix $\Phi$. We decompose the measurement matrix $\Phi$ into two parts, $\Phi = [\Phi_{opt}\,|\,\Psi]$, where $\Phi_{opt}$ is the submatrix formed by the first $k$ columns of $\Phi$, and $\Psi$ is the submatrix formed by the remaining $d - k$ columns. For $\Phi_{opt}$, we assume that
$$\Phi_{opt} = [\varphi_1, \ldots, \varphi_k], \qquad \varphi_1, \ldots, \varphi_k \in \Phi,$$
and
$$f = \Phi x = \sum_{i=1}^{k} x_i \varphi_i.$$
Furthermore, $f = \Phi x$ is independent from the random matrix $\Psi$.
Denote by $E_{succ}$ the event that the RMP algorithm recovers all $k$ elements of the support of $x$ correctly in the first $L \ge k$ iterations. Define $\Sigma$ as the event that $\sigma_k(\Phi_{opt}) \ge \frac{1}{2}$. Then, we have
$$\mathrm{Prob}\{E_{succ}\} \ge \mathrm{Prob}\{E_{succ} \cap \Sigma\} = \mathrm{Prob}\{E_{succ}\,|\,\Sigma\} \cdot \mathrm{Prob}\{\Sigma\}.$$
For an $N$-dimensional vector $r$, define
$$\rho(r) := \frac{\|\Psi^T r\|_\infty}{\|\Phi_{opt}^T r\|_\infty} = \frac{\max_\psi |\langle \psi, r \rangle|}{\|\Phi_{opt}^T r\|_\infty},$$
where $\psi$ is a column of the matrix $\Psi$. If $r$ is a residual vector produced in Step 2 of the RMP algorithm and $\rho(r) < 1$, then a column from the matrix $\Phi_{opt}$ has been selected.
If we execute the RMP algorithm with the input signal $x$ and the measurement matrix $\Phi_{opt}$ for $L$ iterations, then we obtain a sequence of residual vectors $r_0, r_1, \ldots, r_{L-1}$. Obviously, these residual vectors are independent from the matrix $\Psi$ and can be considered as functions of $x$ and $\Phi_{opt}$. Now suppose instead that we execute the RMP algorithm with the input signal $x$ and the full matrix $\Phi$ for $L$ iterations. If the RMP algorithm recovers all $k$ elements of the support of $x$ correctly, then a column of $\Phi_{opt}$ is selected at each iteration. Thus, the sequence of residual vectors is the same as when we execute the algorithm with $\Phi_{opt}$.
Therefore, the conditional probability satisfies
$$\mathrm{Prob}\{E_{succ}\,|\,\Sigma\} \ge \mathrm{Prob}\Big\{ \max_m \rho(r_m) < 1 \,\Big|\, \Sigma \Big\},$$
where $r_m$ is a random vector depending on $\Phi_{opt}$ and stochastically independent from $\Psi$.
Assume that $\Sigma$ occurs. Since $\Phi_{opt}^T r$ is a $k$-dimensional vector, we have
$$\rho(r_m) = \frac{\max_\psi |\langle \psi, r_m \rangle|}{\|\Phi_{opt}^T r_m\|_\infty} \le \frac{\sqrt{k}\, \max_\psi |\langle \psi, r_m \rangle|}{\|\Phi_{opt}^T r_m\|_2}.$$
By the basic property of singular values, we have
$$\frac{\|\Phi_{opt}^T r\|_2}{\|r\|_2} \ge \sigma_k(\Phi_{opt}) \ge \frac{1}{2}$$
for any vector $r$ in the range of $\Phi_{opt}$. Thus, for the vector $r_m$, we have $\|u_m\|_2 \le 1$, where $u_m := \frac{r_m}{2\|\Phi_{opt}^T r_m\|_2}$. Then, we have the following:
$$\rho(r_m) \le \frac{\sqrt{k}\, \max_\psi |\langle \psi, r_m \rangle|}{\|\Phi_{opt}^T r_m\|_2} = 2\sqrt{k}\, \max_\psi \Big| \Big\langle \psi, \frac{r_m}{2\|\Phi_{opt}^T r_m\|_2} \Big\rangle \Big| = 2\sqrt{k}\, \max_\psi |\langle \psi, u_m \rangle|$$
for each index $m = 0, 1, \ldots, L-1$.
Thus, we have
$$\mathrm{Prob}\Big\{ \max_m \rho(r_m) < 1 \,\Big|\, \Sigma \Big\} \ge \mathrm{Prob}\Big\{ \max_m \max_\psi |\langle \psi, u_m \rangle| < \frac{1}{2\sqrt{k}} \,\Big|\, \Sigma \Big\}.$$
Since the columns of $\Psi$ are stochastically independent from each other, we can exchange the two maxima to obtain
$$\mathrm{Prob}\Big\{ \max_m \rho(r_m) < 1 \,\Big|\, \Sigma \Big\} \ge \prod_\psi \mathrm{Prob}\Big\{ \max_m |\langle \psi, u_m \rangle| < \frac{1}{2\sqrt{k}} \,\Big|\, \Sigma \Big\}.$$
Obviously, each column of $\Psi$ is independent from $\{u_m\}$ and $\Sigma$. By condition (M3), we have
$$\mathrm{Prob}\Big\{ \max_m \rho(r_m) < 1 \,\Big|\, \Sigma \Big\} \ge \big[ 1 - 2L e^{-C_3 N/(4k)} \big]^{d-k}.$$
By the property (M4), we have
$$\mathrm{Prob}\{\Sigma\} = \mathrm{Prob}\Big\{ \sigma_k(\Phi_{opt}) \ge \frac{1}{2} \Big\} \ge 1 - e^{-C_4 N}.$$
Therefore, the following applies:
$$\mathrm{Prob}\{E_{succ}\} \ge \mathrm{Prob}\{E_{succ}\,|\,\Sigma\} \cdot \mathrm{Prob}\{\Sigma\} \ge \mathrm{Prob}\Big\{ \max_m \rho(r_m) < 1 \,\Big|\, \Sigma \Big\} \cdot \mathrm{Prob}\{\Sigma\} \ge \big[ 1 - 2L e^{-C_3 N/(4k)} \big]^{d-k} \cdot \big[ 1 - e^{-C_4 N} \big].$$
By using the known inequality $(1-x)^k \ge 1 - kx$ for $k \ge 1$ and $x \le 1$, we have
$$\mathrm{Prob}\{E_{succ}\} \ge 1 - 2L(d-k)e^{-C_3 N/(4k)} - e^{-C_4 N} \ge 1 - 4L(d-k)e^{-cN/k}.$$
We finish the proof of this theorem. □
We remark that a similar result holds for the OMP (see [17]). So, the performance of the RMP is similar to that of the OMP in the random case. In the next section, we will verify the conclusion of Theorem 6 via numerical experiments.

5. Simulation Results

In the present section, we check the efficiency of the Rescaled Matching Pursuit (RMP) algorithm defined in Section 2. We adopt the following model. Assume that $x \in \mathbb{R}^d$ is the target signal. We want to recover it by using the following information:
$$f = \Phi x, \qquad (6)$$
where $\Phi \in \mathbb{R}^{N \times d}$ is a Bernoulli measurement matrix.
In our experiments, we generate a $k$-sparse signal $x \in \mathbb{R}^d$ by randomly choosing $k$ components and drawing each of them from the standard normal distribution $\mathcal{N}(0, 1)$. We execute the RMP algorithm with a Bernoulli measurement matrix under the model (6). We measure the performance of the RMP algorithm with the mean square error (MSE), which is defined as
$$\mathrm{MSE} = \frac{1}{d} \sum_{j=1}^{d} (x_j - \hat{x}_j)^2,$$
where $\hat{x} \in \mathbb{R}^d$ is the estimate of the target signal $x$.
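A minimal sketch of one such experiment is given below. It assumes the `rmp` and `bernoulli_measurement` sketches shown earlier in this article are in scope, and the parameters $N = 250$, $d = 1024$, $k = 40$ mirror one of the settings reported in Figure 1; it illustrates the setup and is not the authors' original experiment code.

```python
import numpy as np

# Assumes the rmp(...) and bernoulli_measurement(...) sketches given earlier
# in this article are in scope.

def run_trial(N, d, k, seed=None):
    """One recovery trial: draw a k-sparse signal with N(0,1) nonzero entries,
    measure it with a Bernoulli matrix, recover it with the RMP, and return
    the MSE plus a flag indicating exact support recovery."""
    rng = np.random.default_rng(seed)
    x = np.zeros(d)
    support = rng.choice(d, size=k, replace=False)
    x[support] = rng.standard_normal(k)
    Phi = bernoulli_measurement(N, d, seed=rng)
    f = Phi @ x
    Lambda, x_hat = rmp(Phi, f, k)
    mse = np.mean((x - x_hat) ** 2)
    exact = set(Lambda) == set(support.tolist())
    return mse, exact

results = [run_trial(250, 1024, 40, seed=t) for t in range(100)]
mses, hits = zip(*results)
print("mean MSE:", np.mean(mses), "| exact-support rate:", np.mean(hits))
```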
The first plot, Figure 1, displays the performances of the OMP and the RMP for an input signal with different sparsity indexes k, different dimensions d, and different numbers of measurements N. The red line denotes the original signal, the blue squares represent the approximation of the RMP, and the black dots denote the approximation of the OMP.
From Figure 1, we see that the RMP can obtain a good approximation of a sparse signal even in a high dimension or when $N$ is much smaller than $d$. After repeating the test 100 times, we obtain the mean square errors of the above four cases as follows:
(a) MSE(OMP) = $4.1767 \times 10^{-6}$; MSE(RMP) = $2.3491 \times 10^{-4}$
(b) MSE(OMP) = $1.5524 \times 10^{-6}$; MSE(RMP) = $1.1254 \times 10^{-4}$
(c) MSE(OMP) = $1.03 \times 10^{-6}$; MSE(RMP) = $1.28 \times 10^{-4}$
(d) MSE(OMP) = $5.44 \times 10^{-5}$; MSE(RMP) = $9.422 \times 10^{-3}$
Thus, with a suitable number of measurements, the performance of the RMP is similar to that of the OMP within allowable limits.
Figure 2 shows that the RMP can approximate the 40-sparse signal well under noise, even in a high dimension, which implies that the RMP is stable in the presence of noise.
Figure 3 displays the relation between the percentage (of 100 input signals) of supports that are determined correctly and the number $N$ of measurements in different dimensions $d$. It reveals how many measurements are necessary to reconstruct a $k$-sparse signal $x \in \mathbb{R}^d$ with a high probability. A percentage equal to 100% means that the supports of all 100 input signals are found; that is, the support of the input signal can be exactly recovered. Furthermore, Figure 3 indicates that, to guarantee the success of recovering the target signal, the number $N$ of measurements must increase when the sparsity level $k$ increases.
The numerical experiments show that the RMP is efficient for the recovery of sparse signals even in the noisy case. The RMP can determine the support of the target signal in a high-dimensional Euclidean space with a high probability if the number of measurements $N$ is large enough.

6. Conclusions

We design the Rescaled Matching Pursuit (RMP) algorithm for the recovery of a target signal in a high-dimensional Euclidean space. The RMP algorithm has great advantages in terms of computational complexity over other greedy-based compressive sensing algorithms, such as the OMP algorithm, since its cost at the $m$-th iteration is $O(N)$. By using the RIP condition, we derive the convergence rate of the error between the original signal and its approximation for the RMP algorithm. Moreover, we prove that the RMP algorithm can recover the support of the original sparse signal with a high probability by using an admissible matrix. Our numerical simulation results also verify this conclusion. In addition, we show that the RMP can obtain a good approximation of the original sparse signal under noise; namely, the RMP is stable in the presence of noise. Thus, in both theory and practice, we show that the RMP algorithm is powerful in random cases. Although the computational cost of the RMP is smaller than that of the OMP, the performance of the RMP is not better than that of the OMP in the deterministic case. However, in randomized cases, the two algorithms perform similarly. Thus, if the sparsity level $k$ of the target signal is large and computational resources are limited, then the RMP with random measurements may be a good choice.

Author Contributions

W.L. and P.Y. contributed equally to this paper. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially funded by the National Natural Science Foundation of China (NSFC), grant no. 11671213.

Data Availability Statement

The data presented in this study are available on request from the corresponding author.

Acknowledgments

The authors thank Li Haifeng and Shen Yi for their useful comments. The authors also thank the four referees for their suggestions. The suggestions and comments helped the authors to improve the manuscript.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509.
  3. Yuan, X.; Haimi-Cohen, R. Image compression based on compressive sensing: End-to-end comparison with JPEG. IEEE Trans. Multimed. 2020, 22, 2889–2904.
  4. Berger, C.R.; Wang, Z.; Huang, J.; Zhou, S. Application of compressive sensing to sparse channel estimation. IEEE Commun. Mag. 2010, 48, 164–174.
  5. Barranca, V.J. Reconstruction of sparse recurrent connectivity and inputs from the nonlinear dynamics of neuronal networks. J. Comput. Neurosci. 2023, 51, 43–58.
  6. Dai, W.; Sheikh, M.A.; Milenkovic, O.; Baraniuk, R.G. Compressive sensing DNA microarrays. EURASIP J. Bioinform. Syst. Biol. 2009, 2009, 162824.
  7. Gross, D.; Liu, Y.K.; Flammia, S.T.; Becker, S.; Eisert, J. Quantum state tomography via compressed sensing. Phys. Rev. Lett. 2010, 105, 150401.
  8. Candès, E.J.; Wakin, M.B. An introduction to compressive sampling. IEEE Signal Proc. Mag. 2008, 25, 21–30.
  9. Candès, E.J.; Tao, T. Decoding by linear programming. IEEE Trans. Inf. Theory 2005, 51, 4203–4215.
  10. Cai, T.T.; Zhang, A.R. Sparse representation of a polytope and recovery of sparse signals and low-rank matrices. IEEE Trans. Inf. Theory 2014, 60, 122–132.
  11. Kim, S.J.; Koh, K.; Lustig, M.; Boyd, S.; Gorinevsky, D. An interior-point method for large-scale ℓ1-regularized least squares. IEEE J. Sel. Top. Signal Process. 2007, 1, 606–617.
  12. Candès, E.J.; Tao, T. Near-optimal signal recovery from random projections: Universal encoding strategies? IEEE Trans. Inf. Theory 2006, 52, 5406–5425.
  13. Aziz, A.; Salim, A.; Osamy, W. Sparse signals reconstruction via adaptive iterative greedy algorithm. Int. J. Comput. Appl. 2014, 90, 5–11.
  14. Goyal, P.; Singh, B. Greedy algorithms for sparse signal recovery based on temporally correlated experimental data in WSNs. Arab. J. Sci. Eng. 2018, 43, 7253–7264.
  15. Chae, J.; Hong, S. Greedy algorithms for sparse and positive signal recovery based on bit-wise MAP detection. IEEE Trans. Signal Process. 2020, 68, 4017–4029.
  16. Lv, X.L.; Wan, C.R.; Bi, G.A. Block orthogonal greedy algorithm for stable recovery of block-sparse signal representations. Signal Process. 2010, 90, 3265–3277.
  17. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
  18. Huber, P.J. Projection pursuit. Ann. Stat. 1985, 13, 435–475.
  19. DeVore, R.A.; Temlyakov, V.N. Some remarks on greedy algorithms. Adv. Comput. Math. 1996, 5, 173–187.
  20. Barron, A.R.; Cohen, A.; Dahmen, W.; DeVore, R.A. Approximation and learning by greedy algorithms. Ann. Stat. 2008, 36, 64–94.
  21. Pati, Y.C.; Rezaiifar, R.; Krishnaprasad, P.S. Orthogonal matching pursuit: Recursive function approximation with applications to wavelet decomposition. In Proceedings of the 27th Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 1–3 November 1993; Volume 1, pp. 40–44.
  22. Shao, C.F.; Wei, X.J.; Ye, P.X.; Xing, S. Efficiency of orthogonal matching pursuit for group sparse recovery. Axioms 2023, 12, 389.
  23. Davis, G.; Mallat, S.; Avellaneda, M. Adaptive greedy approximations. Constr. Approx. 1997, 13, 57–98.
  24. Zhang, T. Sparse recovery with orthogonal matching pursuit under RIP. IEEE Trans. Inf. Theory 2011, 57, 6215–6221.
  25. Eldar, Y.C.; Kutyniok, G. Compressed Sensing: Theory and Applications; Cambridge University Press: Cambridge, UK, 2012.
  26. Petrova, G. Rescaled pure greedy algorithm for Hilbert and Banach spaces. Appl. Comput. Harmon. Anal. 2016, 41, 852–866.
  27. Gao, Z.M.; Petrova, G. Rescaled pure greedy algorithm for convex optimization. Calcolo 2019, 56, 15.
  28. Guo, Q.; Liu, X.H.; Ye, P.X. The learning performance of the weak rescaled pure greedy algorithms. J. Inequal. Appl. 2024, 2024, 30.
  29. Zhang, W.H.; Ye, P.X.; Xing, S. Optimality of the rescaled pure greedy learning algorithms. Int. J. Wavelets Multiresolut. Inf. Process. 2023, 21, 2250048.
  30. Jiang, B.; Ye, P.X. Efficiency of the weak rescaled pure greedy algorithm. Int. J. Wavelets Multiresolut. Inf. Process. 2021, 19, 2150001.
  31. Zhang, W.H.; Ye, P.X.; Xing, S.; Xu, X. Optimality of the approximation and learning by the rescaled pure super greedy algorithms. Axioms 2022, 11, 437.
  32. Nguyen, H.; Petrova, G. Greedy strategies for convex optimization. Calcolo 2017, 54, 207–224.
  33. Temlyakov, V.N. Greedy Approximation; Cambridge University Press: Cambridge, UK, 2011.
  34. Shao, C.F.; Chang, J.C.; Ye, P.X.; Zhang, W.H.; Xing, S. Almost optimality of the orthogonal super greedy algorithm for μ-coherent dictionaries. Axioms 2022, 11, 186.
Figure 1. The recovery of an input signal for the OMP and the RMP under different sparsity levels k, different dimensions d, and different numbers of measurements N. (a) k = 40, N = 250, d = 512; (b) k = 40, N = 250, d = 1024; (c) k = 40, N = 250, d = 2048; (d) k = 125, N = 500, d = 5000.
Figure 2. The recovery of an input signal with sparsity level k = 40 and number of measurements N = 250 via the RMP in different dimensions d and different noise levels: (a) d = 512, no noise; (b) d = 512, Gaussian noise; (c) d = 1024, no noise; (d) d = 1024, Gaussian noise. The red line denotes the original signal. The blue squares stand for the approximation obtained by applying the RMP.
Figure 3. The percentage of the support of 100 input signals being recovered correctly as a function of the number N of Bernoulli measurements for different sparsity levels k in different dimensions d via the RMP: (a) d = 512; (b) d = 1024.