Article

Novel Iterative Reweighted ℓ1 Minimization for Sparse Recovery

by Qi An, Li Wang and Nana Zhang
1 Department of Computer Science, The Open University of China, 75 Fuxing Road, Beijing 100039, China
2 Engineering Research Center of Integration and Application of Digital Learning Technology, Ministry of Education, 2 Weigongcun Road, Beijing 100081, China
3 College of Economics & Management, Zhejiang University of Water Resources and Electric Power, 583 Xuelin Road, Hangzhou 310018, China
* Author to whom correspondence should be addressed.
Mathematics 2025, 13(8), 1219; https://doi.org/10.3390/math13081219
Submission received: 4 March 2025 / Revised: 2 April 2025 / Accepted: 5 April 2025 / Published: 8 April 2025

Abstract

Data acquisition and high-dimensional signal processing often require the recovery of sparse representations of signals to minimize the resources needed for data collection. ℓp quasi-norm minimization excels at exactly reconstructing sparse signals from fewer measurements, but it is NP-hard and challenging to solve. In this paper, we propose two distinct Iteratively Reweighted ℓ1 Minimization (IRℓ1) formulations for this non-convex sparse recovery problem by introducing two novel reweighting strategies. These strategies ensure that the ε-regularizations adjust dynamically based on the magnitudes of the solution components, leading to more effective approximations of the non-convex sparsity penalty. The resulting IRℓ1 formulations provide first-order approximations of tighter surrogates for the original ℓp quasi-norm objective. We prove that both algorithms converge to the true sparse solution under appropriate conditions on the sensing matrix. Our numerical experiments demonstrate that the proposed IRℓ1 algorithms outperform the conventional approach in recovery success rate and computational efficiency, especially for small values of p.

1. Introduction

Compressed sensing has revolutionized signal processing and data acquisition by enabling signal reconstruction from fewer measurements, making it crucial for applications such as medical imaging, sensor networks, and wireless communications, where reducing data acquisition costs and time is essential. At its core, compressed sensing relies on sparse recovery, which leverages the fact that many real-world signals have sparse representations. For example, in genomic studies, only a few hundred genes show significant changes among thousands measured. Similarly, communication signals and natural images are often compressible in the Fourier, discrete cosine, or wavelet bases [1]. Representing signals sparsely enables more efficient processing [2].
A key aspect of compressed sensing is reconstructing a signal from a small number of random linear measurements, provided that the signal has a sparse representation in some basis. Dictionary learning achieves this sparse representation by learning an overcomplete basis directly from data, capturing essential features of the data with minimal redundancy. However, signal reconstruction remains computationally demanding. Specifically, recovering a signal from reduced measurements requires sophisticated algorithms that are accurate and computationally feasible. This paper focuses on developing an efficient decoding algorithm in the context of compressed sensing.
Recovering sparse representations of a signal using redundant dictionaries can be framed as solving the linear system Ax = b, where b = Au represents m measurements of the n-dimensional signal u, A ∈ ℝ^{m×n} is the sensing matrix, and x is the unknown coefficient vector. When m < n, the system is underdetermined and has infinitely many solutions. Assuming that u is sparse, meaning it has only a few nonzero components, the goal is to find the solution with the smallest sparsity. A vector has sparsity level K if it contains at most K nonzero entries.
The problem of recovering a maximally sparse signal can be canonically cast as a combinatorial optimization problem minimizing the number of nonzero entries (the ℓ0 norm):
$$\min_x \ \|x\|_0, \quad \text{s.t.}\ Ax = b, \tag{1}$$
where $\|x\|_0 = \sum_{i=1}^{n} |x_i|^0$. However, this problem is non-convex and computationally intractable, as solving it requires an exhaustive search over all possible nonzero component combinations, which scales exponentially with m and n.
As another point of departure, sparse recovery based on a non-convex ℓp quasi-norm (0 < p < 1) penalty has been explored:
$$\min_x \ \|x\|_p^p, \quad \text{s.t.}\ Ax = b, \tag{2}$$
where $\|x\|_p^p = \sum_{i=1}^{n} |x_i|^p$. For p ∈ (0, 1), the ℓp functional is a quasi-norm rather than a norm. The ℓp quasi-norm has become an effective proxy for sparsity, as it approximates the ℓ0 norm more closely than the ℓ1 norm and thus better promotes sparsity. Ref. [3] demonstrated through numerical simulations that using the non-convex ℓp quasi-norm (0 < p < 1) rather than the ℓ1 norm requires fewer measurements for exact reconstruction of sparse signals. Additionally, ref. [4] established sufficient conditions, in terms of the restricted isometry constants of A, under which the local minimizer of (2) exactly recovers the original signal.
Despite its advantages, (2) remains NP-hard and challenging to solve, as no closed-form solution exists when 0 < p < 1 [5]. As a result, heuristic methods are required to find local minima. A common approach involves iteratively reweighted schemes, with two notable variants: Iteratively Reweighted Least Squares (IRLS) and Iteratively Reweighted ℓ1 Minimization (IRℓ1) [6,7].
The IRLS method for solving (2) was first studied by [8], reformulating the non-convex ℓp quasi-norm into a weighted ℓ2 norm:
$$\min_x \ \sum_{i=1}^{n} w_i x_i^2, \quad \text{s.t.}\ Ax = b, \tag{3}$$
where the weights are updated based on the current iterate x^k. The closed-form update based on (3) is x^{k+1} = Q_k A^T (A Q_k A^T)^{−1} b, where Q_k is a diagonal matrix with entries 1/w_i. Choosing w_i = |x_i^k|^{p−2} makes the objective in (3) a first-order approximation of the ℓp quasi-norm, but it is undefined whenever x_i^k = 0. To address this, ref. [9] introduced ε-regularization with w_i = ((x_i^k)^2 + ε^2)^{p/2−1}, where ε > 0 is gradually reduced to zero. They numerically validated that this regularization strategy enables exact recovery from fewer measurements for much less sparse signals. Further enhancements include [10], which sorts absolute solution values to refine nonzero indices and has demonstrated improved recovery with a lower normalized root mean square error. Despite its effectiveness, the IRLS method requires a costly matrix inversion at each iteration, making it impractical for large-scale datasets.
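For concreteness, the update can be written in a few lines of NumPy. The sketch below performs one ε-regularized IRLS pass with the weights of [9]; the function name and default parameters are ours, and it assumes A has full row rank so that the inner linear system is solvable.

```python
import numpy as np

def irls_step(A, b, x, p=0.5, eps=0.1):
    """One eps-regularized IRLS update for min ||x||_p^p subject to A x = b.

    Uses the weights w_i = (x_i^2 + eps^2)^(p/2 - 1); the new iterate is the
    weighted least-squares solution x+ = Q A^T (A Q A^T)^{-1} b with Q = diag(1/w).
    """
    w = (x ** 2 + eps ** 2) ** (p / 2 - 1)   # elementwise weights, never undefined
    Q = np.diag(1.0 / w)                     # inverse weights on the diagonal
    return Q @ A.T @ np.linalg.solve(A @ Q @ A.T, b)
```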
The IRℓ1 approach for solving (2) replaces the ℓp quasi-norm objective with a weighted ℓ1 norm:
$$\min_x \ \sum_{i=1}^{n} w_i |x_i|, \quad \text{s.t.}\ Ax = b. \tag{4}$$
In the conventional IRℓ1 approach of [11], the weights are defined from the current iterate x^k as
$$w_i = (|x_i^k| + \epsilon)^{p-1}. \tag{5}$$
To keep the solutions from being trapped in local minima, ε is typically initialized with a large value and gradually reduced to zero. With (5), the objective in (4) serves as a first-order approximation of
$$\sum_{i=1}^{n} (|x_i| + \epsilon)^p. \tag{6}$$
Compared to IRLS, IRℓ1 generally requires fewer iterations despite its higher per-iteration cost [12]. It also allows easy integration of additional constraints (e.g., bounded activation or non-negativity constraints) without significantly increasing the computational burden. Although both standard ℓ1 minimization and its weighted version (4) can be formulated as linear programs, IRℓ1 tends to be slower than conventional ℓ1-based sparse recovery methods due to the iterative reweighting. However, it achieves better sparsity recovery by penalizing nonzero coefficients in a more balanced manner [11].
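Since each IRℓ1 iteration reduces to the weighted ℓ1 subproblem (4), it is convenient to see that subproblem written out as a linear program. The sketch below uses the standard reformulation with auxiliary variables t ≥ |x|; the helper name weighted_l1_min and the choice of SciPy's HiGHS solver are ours, and the same helper is reused in later sketches.

```python
import numpy as np
from scipy.optimize import linprog

def weighted_l1_min(A, b, w):
    """Solve min_x sum_i w_i |x_i| subject to A x = b as a linear program.

    Standard reformulation with auxiliary variables t >= |x|:
        minimize w^T t   s.t.   A x = b,   -t <= x <= t,   t >= 0.
    """
    m, n = A.shape
    c = np.concatenate([np.zeros(n), w])                      # objective acts on t only
    A_eq = np.hstack([A, np.zeros((m, n))])                   # A x = b
    A_ub = np.vstack([np.hstack([np.eye(n), -np.eye(n)]),     #  x - t <= 0
                      np.hstack([-np.eye(n), -np.eye(n)])])   # -x - t <= 0
    res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * n),
                  A_eq=A_eq, b_eq=b,
                  bounds=[(None, None)] * n + [(0, None)] * n,
                  method="highs")
    return res.x[:n]
```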
In related work, ref. [12] examined non-separable weight selection for generalized sparsity penalties, which cannot be expressed as a simple summation of functions of individual components. Ref. [13] proposed a nonuniform sparsity model, where the vector components are divided into two sets with different probabilities of being nonzero, and introduced an IRℓ1 approach based on this sparsity model. Recently, ref. [14] proposed an adaptively iterative reweighted algorithm for solving generalized non-convex and non-smooth sparse optimization problems; the algorithm extends the weighting strategy (5) to construct a weighted-ℓ1-norm-based convex smooth surrogate for the ℓp quasi-norm sparsity-inducing function.
Despite these advances in IRℓ1 methods for sparse recovery, existing approaches still face key limitations. The conventional reweighting strategy (5) does not fully exploit the structure of the ℓp quasi-norm and thus provides a relatively loose approximation of it, which may limit its effectiveness in accurately reconstructing sparse signals. Additionally, existing IRℓ1 methods employ fixed weight update schemes, which fail to adapt dynamically to variations in signal magnitudes during the iterations.
To address these limitations, we propose two novel reweighting strategies based on ε-approximations of the ℓp quasi-norm that offer a more refined approximation under the same perturbation magnitude. Our approaches dynamically adjust the weights based on the magnitudes of the solution components. By improving both the accuracy of the weighting scheme and the adaptability of the reweighting process, the proposed methods improve the performance of sparse recovery.

2. Algorithm Description

The IRℓ1 method iteratively approximates the original ℓp quasi-norm objective in (2) by a weighted ℓ1 norm. The reweighting scheme dynamically adjusts the sparsity penalty at each iteration based on the current iterate.
In this paper, we propose two novel reweighting strategies within the IRℓ1 framework, each defining a new surrogate for the non-convex sparsity-inducing objective. These strategies are motivated by ε-approximations of ‖x‖_p^p that are more refined than (6).

2.1. The First Type of IRℓ1 Algorithm

We consider the following localized regularization approach:
$$w_i = \max(|x_i^k|, \epsilon)^{p-1}. \tag{7}$$
Here, the weight w_i equals |x_i^k|^{p−1} for large values of |x_i^k|, making the objective in (4) a first-order approximation of the original ℓp quasi-norm sparsity penalty. On the other hand, when |x_i^k| is small (i.e., below the threshold ε), the weight is capped at ε^{p−1}, ensuring that the regularization does not become overly aggressive. This reweighting strategy differs from the one used in the conventional IRℓ1 approach (5), which applies uniform regularization across all components of the iterate. Instead, (7) selectively regularizes only small values of |x_i^k| to prevent undefined values, thereby enabling more accurate sparsity recovery. With (7), the objective in (4) amounts to a first-order approximation of the following non-smooth yet locally Lipschitz continuous ε-approximation of the ℓp quasi-norm:
$$\sum_{i=1}^{n} \min_{0 \le s \le \epsilon^{p-1}} p\left(|x_i|\, s - \frac{s^q}{q}\right), \qquad q = \frac{p}{p-1}, \tag{8}$$
which can be shown to yield a more effective approximation than $\sum_{i=1}^{n} (|x_i| + \epsilon)^p$ under the same perturbation [15].
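As a small illustration of this tightness claim, the snippet below compares the surrogate (8), whose inner minimum is attained at s* = max(|x_i|, ε)^{p−1} by convexity of the inner problem in s, with the conventional surrogate (6) and with the target penalty |x_i|^p; the helper names and parameter values are ours.

```python
import numpy as np

p, eps = 0.5, 0.1
q = p / (p - 1)
x = np.linspace(0.0, 1.0, 5)

def surrogate_conventional(x):
    # (|x| + eps)^p, the surrogate (6) behind the conventional weights (5)
    return (np.abs(x) + eps) ** p

def surrogate_localized(x):
    # surrogate (8); the inner minimizer is s* = max(|x|, eps)^(p - 1)
    s = np.maximum(np.abs(x), eps) ** (p - 1)
    return p * (np.abs(x) * s - s ** q / q)

print(np.abs(x) ** p)             # target lp penalty
print(surrogate_localized(x))     # coincides with |x|^p once |x| >= eps
print(surrogate_conventional(x))  # uniformly larger, by roughly eps^p
```

For |x_i| ≥ ε the localized surrogate coincides with |x_i|^p exactly, whereas (6) exceeds the target everywhere by roughly ε^p.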
Next, we introduce a new IRℓ1 minimization process that addresses problem (2) by iteratively solving a sequence of weighted ℓ1-norm minimization problems.
Furthermore, we establish that the sequence produced by Algorithm 1 converges to the true sparse signal u under the unique representation property of the sensing matrix A. Notably, this property holds with probability one for a random Gaussian matrix A, provided that m ≥ 2K.
Theorem 1. 
Let u ∈ ℝ^n be a vector of sparsity ‖u‖_0 = K, and let A ∈ ℝ^{m×n} be a sensing matrix that satisfies the unique representation property: any collection of 2K columns of A is linearly independent. Let the sequence of regularization parameters {ε^k} be chosen such that ε^k → 0 as k → ∞. Then the sequence {x^k} generated by Algorithm 1 converges to u.
Proof. 
The sequence {x^k} is bounded due to the well-known property of IRℓ1 that ensures boundedness independent of the weights. For any 1 ≤ j ≤ n satisfying u_j = 0, we have the following inequality:
$$\sum_{i=1}^{n} w_i |x_i^k| \ \ge\ w_j |x_j^k| = (\epsilon^k)^{p-1} |x_j^k|. \tag{9}$$
By the optimality of x^k, we also have the following inequality:
$$\sum_{i=1}^{n} w_i |x_i^k| \ \le\ \sum_{i=1}^{n} w_i |u_i| \ \le\ \|u\|_p^p. \tag{10}$$
Combining (9) and (10), we obtain
$$|x_j^k| \ \le\ (\epsilon^k)^{1-p} \|u\|_p^p.$$
Therefore, {x^k} converges to zero outside the support of u. Consequently, any limit point x^* of {x^k} must satisfy Ax^* = b and have at most K nonzero components. Now, under the assumption that A satisfies the unique representation property, u is the unique solution to Ax = b with ‖x‖_0 ≤ K. This guarantees that x^* = u, leading to exact recovery. If this property does not hold, then any limit point x^* still satisfies Ax^* = b and has at most K nonzero components. In this case, x^* may not be exactly equal to u, but it remains a sparse solution whose sparsity does not exceed K.    □
Algorithm 1 The first type of IRℓ1 algorithm
1: Input: Sensing matrix A, measurement vector b, quasi-norm parameter p, tolerance τ, and maximum number of iterations k_max;
2: Initialize: Set initial perturbation ε^0, starting point x^0, and iteration counter k = 0;
3: while k < k_max do
4:     for each component i do
5:         if |x_i^k| ≥ ε^k then
6:             w_i ← |x_i^k|^{p−1};   {Compute weights}
7:         else
8:             w_i ← (ε^k)^{p−1};
9:         end if
10:    end for
11:    Solve the weighted ℓ1-minimization problem (4) for the optimal solution x^{k+1};
12:    if ‖x^{k+1} − x^k‖_2 / ‖x^k‖_2 < τ then
13:        break;   {Check convergence}
14:    end if
15:    Reduce ε^k to obtain ε^{k+1};
16:    Increment iteration counter: k ← k + 1;
17: end while
18: Output: Recovered signal x^* = x^{k+1}.
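For reference, the following is a compact Python sketch of Algorithm 1. It reuses the weighted_l1_min helper sketched in the introduction, starts from the minimum ℓ2-norm solution as in Section 3.1, and substitutes a simple geometric decay for the generic "reduce ε^k" step; the function name, defaults, and decay schedule are ours.

```python
import numpy as np

def ir_l1_type1(A, b, p=0.5, eps0=0.1, decay=0.9, tau=1e-3, k_max=1000):
    """Sketch of Algorithm 1: IR-l1 with weights w_i = max(|x_i^k|, eps_k)^(p-1).

    Relies on weighted_l1_min() from the earlier linear-programming sketch.
    The geometric decay of eps is a placeholder for the generic "reduce eps_k"
    step; Section 3.1 describes the change-triggered schedule used in the paper.
    """
    x = A.T @ np.linalg.solve(A @ A.T, b)            # minimum l2-norm starting point
    eps = eps0
    for _ in range(k_max):
        w = np.maximum(np.abs(x), eps) ** (p - 1)    # reweighting strategy (7)
        x_new = weighted_l1_min(A, b, w)             # weighted l1 subproblem (4)
        if np.linalg.norm(x_new - x) / np.linalg.norm(x) < tau:
            x = x_new
            break                                    # relative change below tau
        x, eps = x_new, decay * eps                  # accept iterate, shrink eps
    return x
```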

2.2. The Second Type of IRℓ1 Algorithm

We now focus on a more sophisticated ε-approximation of the ℓp quasi-norm that takes the following form:
$$\sum_{i=1}^{n} (x_i^2 + \epsilon^2)^{\frac{p}{2}}. \tag{11}$$
The function (x_i^2 + ε^2)^{p/2}, unlike (|x_i| + ε)^p, does not retain the concavity exhibited by |x_i|^p over ℝ_+. Instead, it is convex for x_i ∈ [0, x̄_i] and concave for x_i ∈ [x̄_i, +∞), where x̄_i = ε/√(1−p).
Based on (11), we consider the following reweighting strategy:
$$w_i := \bar{x}_i^k \left( (\bar{x}_i^k)^2 + \epsilon^2 \right)^{\frac{p}{2}-1}, \tag{12}$$
with x̄_i^k := max(|x_i^k|, ε/√(1−p)). When |x_i^k| lies in the concave region of the function (x_i^2 + ε^2)^{p/2}, this weight approximates |x_i^k|^{p−1} based on the first derivative of (x_i^2 + ε^2)^{p/2} evaluated at |x_i^k|. When |x_i^k| lies in the convex region (i.e., below the threshold x̄_i^k), we instead use the first derivative of (x_i^2 + ε^2)^{p/2} evaluated at x̄_i^k to approximate |x_i|^{p−1}, where x̄_i^k corresponds to the point at which the function (x_i^2 + ε^2)^{p/2} switches from convexity to concavity.
With (12), the objective in (4) serves as a first-order approximation of a continuously differentiable, piecewise-defined surrogate for $\sum_{i=1}^{n} |x_i|^p$, which has been shown to yield a more refined approximation than $\sum_{i=1}^{n} (|x_i| + \epsilon)^p$ under the same amount of perturbation [16].
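A minimal sketch of this weight computation is given below, assuming the inflection point x̄ = ε/√(1−p) obtained from the second derivative of (x² + ε²)^{p/2}; the function name is ours. Swapping this weight rule into the Algorithm 1 sketch above essentially yields Algorithm 2.

```python
import numpy as np

def weights_type2(x, eps, p):
    """Reweighting strategy (12): clamp |x_i| at the inflection point
    x_bar = eps / sqrt(1 - p) of (x^2 + eps^2)^(p/2), then use its slope there."""
    x_bar = np.maximum(np.abs(x), eps / np.sqrt(1.0 - p))
    return x_bar * (x_bar ** 2 + eps ** 2) ** (p / 2 - 1)
```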
Next, we propose another IRℓ1 minimization process that addresses problem (2) by iteratively solving a sequence of weighted ℓ1-norm minimization problems.
A preliminary result for the convergence proof is presented as follows.
Lemma 1. 
For any ε > 0, $\bar{t}\,(\bar{t}^2 + \epsilon^2)^{\frac{p}{2}-1} |t| \le |t|^p$ holds for all t ∈ ℝ, where $\bar{t} := \max\!\big(|t|, \tfrac{\epsilon}{\sqrt{1-p}}\big)$.
Proof. 
When $|t| \ge \tfrac{\epsilon}{\sqrt{1-p}}$, we have $\bar{t}\,(\bar{t}^2 + \epsilon^2)^{\frac{p}{2}-1} |t| = (t^2 + \epsilon^2)^{\frac{p}{2}-1} t^2 \le |t|^p$. When $0 \le |t| < \tfrac{\epsilon}{\sqrt{1-p}}$, we have $\bar{t}\,(\bar{t}^2 + \epsilon^2)^{\frac{p}{2}-1} |t| = \tfrac{\epsilon}{\sqrt{1-p}}\big(\tfrac{\epsilon^2}{1-p} + \epsilon^2\big)^{\frac{p}{2}-1} |t| \le \big(\tfrac{\epsilon}{\sqrt{1-p}}\big)^{p-1} |t| \le |t|^p$.    □
Similarly, we prove that the sequence of iterates generated by the proposed procedure converges to the true sparse signal u under the same conditions on the sensing matrix A.
Theorem 2. 
Let u ∈ ℝ^n be a vector of sparsity ‖u‖_0 = K, and let A ∈ ℝ^{m×n} be a sensing matrix that satisfies the unique representation property [17]: any collection of 2K columns of A is linearly independent. Let the sequence of regularization parameters {ε^k} be chosen such that ε^k → 0 as k → ∞. Then the sequence {x^k} generated by Algorithm 2 converges to u.
Algorithm 2 The second type of IRℓ1 algorithm
1: Input: Sensing matrix A, measurement vector b, quasi-norm parameter p, tolerance τ, and maximum number of iterations k_max;
2: Initialize: Set initial perturbation ε^0, starting point x^0, and iteration counter k = 0;
3: while k < k_max do
4:     for each component i do
5:         if |x_i^k| ≥ ε^k/√(1−p) then
6:             x̄_i^k ← |x_i^k|;
7:         else
8:             x̄_i^k ← ε^k/√(1−p);
9:         end if
10:        w_i ← x̄_i^k ((x̄_i^k)^2 + (ε^k)^2)^{p/2−1};   {Compute weights}
11:    end for
12:    Solve the weighted ℓ1-minimization problem (4) for the optimal solution x^{k+1};
13:    if ‖x^{k+1} − x^k‖_2 / ‖x^k‖_2 < τ then
14:        break;   {Check convergence}
15:    end if
16:    Reduce ε^k to obtain ε^{k+1};
17:    Increment iteration counter: k ← k + 1;
18: end while
19: Output: Recovered signal x^* = x^{k+1}.
Proof. 
The sequence {x^k} is bounded due to the well-known property of IRℓ1 that ensures boundedness independent of the weights. For any 1 ≤ j ≤ n satisfying u_j = 0, we have the following inequality:
$$\sum_{i=1}^{n} w_i |x_i^k| \ \ge\ w_j |x_j^k| = \frac{\epsilon^k}{\sqrt{1-p}}\left(\frac{(\epsilon^k)^2}{1-p} + (\epsilon^k)^2\right)^{\frac{p}{2}-1} |x_j^k| = \frac{1}{\sqrt{1-p}}\left(\frac{2-p}{1-p}\right)^{\frac{p}{2}-1} (\epsilon^k)^{p-1} |x_j^k|. \tag{13}$$
By the optimality of x^k, we also have the inequality
$$\sum_{i=1}^{n} w_i |x_i^k| \ \le\ \sum_{i=1}^{n} w_i |u_i| \ \le\ \|u\|_p^p, \tag{14}$$
where the last inequality follows from Lemma 1. Combining (13) and (14), we obtain
$$|x_j^k| \ \le\ \sqrt{1-p}\left(\frac{2-p}{1-p}\right)^{1-\frac{p}{2}} (\epsilon^k)^{1-p} \|u\|_p^p. \tag{15}$$
Thus, we conclude that {x^k} tends to zero outside the support of u. The rest of the proof follows a line of reasoning similar to that of Theorem 1. □

3. Numerical Validation

In this section, we present numerical results to evaluate and compare the proposed IRℓ1 algorithms with the conventional IRℓ1 method of [11]. Specifically, we refer to the three IRℓ1 algorithms using the reweighting strategies (5), (7), and (12) as IRℓ1-1, IRℓ1-2, and IRℓ1-3, respectively, with IRℓ1-2 and IRℓ1-3 being the newly proposed algorithms. The experiments were conducted on a MacBook Pro running macOS Monterey 12.0.1 with 8 GB of memory.

3.1. Experimental Setup and Methodology

We randomly generated a sensing matrix A ∈ ℝ^{100×256} with entries independently drawn from a zero-mean Gaussian distribution. The support of the sparse signal u was randomly selected, with the number of nonzero elements given by the sparsity level K. These nonzero entries were sampled from a Gaussian distribution with mean 0 and standard deviation σ. We performed experiments for two values of σ: σ = 1, representing relatively small signal magnitudes, and σ = 10, representing larger signal magnitudes. Within each experiment, the value of σ remained fixed across all trials to ensure consistency when comparing the three IRℓ1 algorithms.
The experiments were performed for varying values of p and K, with 100 independent trials conducted for each instance. To examine the effectiveness of the proposed algorithms under different sparsity levels and quasi-norm formulations, we varied the sparsity level K and the quasi-norm parameter p. Specifically, K was set to values in the range between 32 and 54 in increments of 2, and p was varied from 0.1 to 0.9 in increments of 0.1. This ensured a comprehensive evaluation across different sparsity regimes and non-convex formulations.
In every trial, the same sensing matrix A and sparse signal u were used across the three IRℓ1 algorithms. The regularization parameter ε was initialized to 0.1, and the starting point x^0 was chosen as the minimum ℓ2-norm solution of Ax = b, given in closed form by A^T(AA^T)^{−1}b. As the iterations proceeded, whenever the ℓ2-norm change between successive iterates fell below ε/100, ε was reduced by a factor of 10. This process continued until the relative ℓ2-norm change was smaller than 1 × 10^{−3}.
To assess performance, we used the relative error metric, defined as
$$\text{Relative Error} := \frac{\|x^* - u\|_2}{\|u\|_2}, \tag{16}$$
where x^* denotes the recovered signal produced by an algorithm. A trial was counted as a successful recovery if the relative error was below 1 × 10^{−3}. The maximum number of iterations was set to 1000.
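The setup of a single trial can be sketched as follows, reusing the ir_l1_type1 sketch from Section 2.1; the random seed and the particular parameter values shown are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, K, sigma, p = 100, 256, 36, 1.0, 0.5

# Gaussian sensing matrix and a K-sparse signal with Gaussian nonzero entries
A = rng.standard_normal((m, n))
u = np.zeros(n)
support = rng.choice(n, size=K, replace=False)
u[support] = sigma * rng.standard_normal(K)
b = A @ u

x_star = ir_l1_type1(A, b, p=p)          # any of the three IR-l1 variants
rel_err = np.linalg.norm(x_star - u) / np.linalg.norm(u)
print(f"relative error = {rel_err:.2e}, success = {rel_err < 1e-3}")
```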

3.2. Discussion of Results

Recovery performance is visualized on the 9 × 12 grid in Figure 1 and Figure 2. In these diagrams, the x-axis indicates the sparsity level K, while the y-axis represents the parameter p characterizing the quasi-norm. The color at each grid point indicates the recovery success rate averaged over 100 trials. Darker regions denote higher recovery success, while lighter regions indicate failure cases.
The experimental results indicate that the performance of the proposed IRℓ1 algorithms depends significantly on the signal magnitude and the choice of the parameter p. When the signal components have small magnitudes (σ = 1), IRℓ1-2 achieves a notably higher success rate than IRℓ1-1, especially for smaller values of p. Meanwhile, IRℓ1-3 performs best at low p, successfully recovering the broadest range of signals. As p approaches one, the performance of IRℓ1-1 and IRℓ1-2 converges, whereas that of IRℓ1-3 deteriorates.
When the signal magnitudes are relatively large (σ = 10), the recovery behavior changes. IRℓ1-3 consistently achieves the highest recovery success, outperforming both IRℓ1-1 and IRℓ1-2, except at p = 0.9, where the latter two perform slightly better. The advantage of IRℓ1-3 grows as p decreases. In this scenario, IRℓ1-1 and IRℓ1-2 exhibit comparable performance.
To evaluate the robustness of the algorithms to noise, we added white Gaussian noise with zero mean and variance φ to the measurement vector b. Table 1 presents the average increase in relative recovery error over 100 trials at K = 36. The results indicate that IRℓ1-2 exhibits noise robustness comparable to that of IRℓ1-1, while IRℓ1-3 is slightly more sensitive to noise, especially when p is close to one or the noise level is higher.
We also recorded the CPU time of each IRℓ1 algorithm at K = 36, as shown in Figure 3. Each data point represents the average absolute CPU time over 100 trials. While computational times generally decrease as p increases, in line with expectations since the optimization problem becomes closer to a convex formulation, we do not compare computational efficiency across different values of p. Instead, we focus on the relative performance of the different IRℓ1 algorithms under identical parameter settings.
From an efficiency standpoint, IRℓ1-3 generally outperforms the other algorithms, with the performance gap widening as p decreases. When the signal component magnitudes are small, IRℓ1-2 is consistently faster than IRℓ1-1 but tends to be slightly slower than IRℓ1-3. When the signal magnitudes are relatively large, IRℓ1-1 and IRℓ1-2 exhibit comparable runtimes, while IRℓ1-3 maintains its efficiency advantage. To further assess efficiency, we also examined the average number of iterations required for convergence, which follows a trend similar to Figure 3. Since the IRℓ1 methods involve similar per-iteration operations with roughly the same cost, the primary determinant of computational cost is the number of iterations. The observed efficiency gains of the two newly proposed algorithms stem from more accurate approximations of the original ℓp quasi-norm objective, which accelerate convergence.
To summarize, IRℓ1-2 consistently improves recovery success rate and computational efficiency over IRℓ1-1 when signal magnitudes are small, while maintaining comparable robustness to noise. On the other hand, IRℓ1-3 shows the most significant improvements in recovery success rate and efficiency, particularly for large-magnitude signals. However, it is more sensitive to noise in the measurement process, and its performance deteriorates as the sparsity penalty becomes nearly convex (i.e., as p approaches 1).

4. Conclusions

In this paper, we have focused on iteratively reweighted ℓ1-minimization methods for solving the sparse recovery problem based on optimizing the ℓp quasi-norm. Specifically, we proposed two novel reweighting strategies, with weights derived from ε-approximations of ‖x‖_p^p that provide an enhanced approximation under the same perturbation magnitude. Both reweighting strategies dynamically adjust the weights based on the magnitudes of the current iterate, leading to two distinct IRℓ1 formulations. Each provides a first-order approximation of a tighter surrogate for the original ℓp quasi-norm objective, thereby facilitating more effective recovery of sparse signals.
The two proposed IRℓ1 algorithms have been shown to converge to the true sparse solution under appropriate conditions on the sensing matrix. Our experimental results demonstrate that the proposed algorithms deliver significant improvements in signal recovery success rate and computational efficiency compared to the conventional IRℓ1 algorithm. The degree of improvement varies with the characteristics of the sparse signal. The improvements become most prominent when the sparsity penalty is highly concave (i.e., for smaller p, which favors sparsity). In such cases, the recovered signal is highly likely to match the minimum ℓ0-norm solution of Ax = b given in (1).
While our proposed methods improve recovery success and efficiency, they also have certain limitations. One potential drawback is that the performance of our algorithms may be sensitive to noise in the measurement process, which could impact robustness in practical scenarios. Addressing these limitations through adaptive strategies or alternative formulations is an important direction for future research.
Beyond theoretical contributions, our proposed methods have significant practical applications in signal processing, for example, in compressed sensing for efficient signal reconstruction, in medical imaging (MRI and CT) to reduce measurement acquisition while preserving image quality, and in wireless communications for improving channel estimation and reducing interference. By enhancing the accuracy and efficiency of sparse recovery, our algorithms have the potential to contribute to advancements in these critical fields.
This work contributes to the expanding field of sparse recovery by providing more accurate and computationally efficient algorithms for signal reconstruction. Future research may focus on enhancing robustness to noise, and extending these methods to tackle more complex problems in signal processing and machine learning, particularly in applications such as compressed sensing and image recovery.

Author Contributions

Conceptualization, Methodology: Q.A.; Original draft preparation: Q.A. and L.W. Numerical analysis, Writing—review and editing: N.Z. and Q.A. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the Innovation Fund of the Engineering Research Center of Integration and Application of Digital Learning Technology, Ministry of Education (1421014).

Data Availability Statement

No new data were created or analyzed in this study.

Conflicts of Interest

The authors declare no conflicts of interest.

References

  1. Mallat, S. A Wavelet Tour of Signal Processing: The Sparse Way, 3rd ed.; Academic: Cambridge, MA, USA, 2008. [Google Scholar]
  2. Qin, Z.; Fan, J.; Liu, Y.; Gao, Y.; Li, G.Y. Sparse representation for wireless communications: A compressive sensing approach. IEEE Signal Process. Mag. 2018, 35, 40–58. [Google Scholar] [CrossRef]
  3. Chartrand, R. Exact reconstruction of sparse signals via non-convex minimization. IEEE Signal Process. Lett. 2007, 14, 707–710. [Google Scholar]
  4. Chartrand, R.; Staneva, V. Restricted isometry properties and non-convex compressive sensing. Inverse Probl. 2008, 24, 035020. [Google Scholar] [CrossRef]
  5. Ge, D.; Jiang, X.; Ye, Y. A note on the complexity of Lp minimization. Math. Program. 2011, 129, 285–299. [Google Scholar] [CrossRef]
  6. Sun, S.; Pong, T.K. Doubly iteratively reweighted algorithm for constrained compressed sensing models. Comput. Optim. Appl. 2023, 85, 583–619. [Google Scholar] [CrossRef]
  7. Yu, P.; Pong, T.K. Iteratively reweighted algorithms with extrapolation. Comput. Optim. Appl. 2019, 73, 353–386. [Google Scholar]
  8. Rao, B.D.; Kreutz-Delgado, K. An affine scaling methodology for best basis selection. IEEE Trans. Signal Process. 1999, 47, 187–200. [Google Scholar]
  9. Chartrand, R.; Yin, W. Iteratively reweighted algorithms for compressive sensing. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing, Las Vegas, NV, USA, 31 March–4 April 2008; pp. 3869–3872. [Google Scholar]
  10. Tausiesakul, B. Iteratively reweighted least squares minimization with nonzero index update. In Proceedings of the 2021 Smart Technologies, Communication and Robotics (STCR), Sathyamangalam, India, 9–10 October 2021; pp. 1–6. [Google Scholar]
  11. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing sparsity by reweighted ℓ1 minimization. J. Fourier Anal. Appl. 2008, 14, 877–905. [Google Scholar]
  12. Wipf, D.; Nagarajan, S. Iterative reweighted ℓ1 and ℓ2 methods for finding sparse solutions. IEEE J. Sel. Top. Signal Process. 2010, 4, 317–329. [Google Scholar] [CrossRef]
  13. Khajehnejad, M.A.; Xu, W.; Avestimehr, A.S.; Hassibi, B. Analyzing weighted ℓ1 minimization for sparse recovery with nonuniform sparse models. IEEE Trans. Signal Process. 2011, 59, 1985–2001. [Google Scholar]
  14. Wang, H.; Zhang, F.; Shi, Y.; Hu, Y. Nonconvex and nonsmooth sparse optimization via adaptively iterative reweighted methods. J. Glob. Optim. 2021, 81, 717–748. [Google Scholar]
  15. Lu, Z. Iterative reweighted minimization methods for ℓp regularized unconstrained nonlinear programming. Math. Program. 2014, 147, 277–307. [Google Scholar]
  16. An, Q.; Zhang, N.; Jiang, S. Iterative methods for projection onto the ℓp quasi-norm ball. Optim. Methods Softw. 2024, 1–22. [Google Scholar] [CrossRef]
  17. Gorodnitsky, I.F.; Rao, B.D. Sparse signal reconstruction from limited data using FOCUSS: A re-weighted minimum norm algorithm. IEEE Trans. Signal Process. 1997, 45, 600–616. [Google Scholar] [CrossRef]
Figure 1. Success rates of exact recovery for the algorithms (a) IRℓ1-1, (b) IRℓ1-2, and (c) IRℓ1-3, with σ = 1.
Figure 2. Success rates of exact recovery for the algorithms (a) IRℓ1-1, (b) IRℓ1-2, and (c) IRℓ1-3, with σ = 10.
Figure 3. Average computational time of the three algorithms with (a) σ = 1 and (b) σ = 10.
Table 1. Average increase in relative error of the three algorithms under additive white noise.

                    φ = 0.001                           φ = 0.005
         p     IRℓ1-1    IRℓ1-2    IRℓ1-3        IRℓ1-1    IRℓ1-2    IRℓ1-3
σ = 1    0.1   0.0248%   0.0290%   0.0235%       0.1263%   0.1267%   0.1402%
         0.2   0.0207%   0.0219%   0.0222%       0.1184%   0.1261%   0.1302%
         0.3   0.0237%   0.0251%   0.0265%       0.1234%   0.1216%   0.1268%
         0.4   0.0262%   0.0267%   0.0265%       0.1363%   0.1443%   0.1507%
         0.5   0.0244%   0.0251%   0.0374%       0.1277%   0.1464%   0.1879%
         0.6   0.0298%   0.0332%   0.0410%       0.1337%   0.1266%   0.1964%
         0.7   0.0297%   0.0356%   0.0394%       0.1416%   0.1230%   0.2572%
         0.8   0.0325%   0.0367%   0.0345%       0.2519%   0.1804%   0.9638%
         0.9   0.0416%   0.0381%   0.0926%       0.4835%   0.3896%   1.8213%
σ = 10   0.1   0.0022%   0.0022%   0.0023%       0.0117%   0.0120%   0.0107%
         0.2   0.0020%   0.0024%   0.0023%       0.0133%   0.0119%   0.0129%
         0.3   0.0024%   0.0025%   0.0023%       0.0120%   0.0118%   0.0122%
         0.4   0.0027%   0.0028%   0.0029%       0.0134%   0.0125%   0.0120%
         0.5   0.0026%   0.0025%   0.0027%       0.0119%   0.0118%   0.0124%
         0.6   0.0024%   0.0024%   0.0023%       0.0125%   0.0123%   0.0130%
         0.7   0.0029%   0.0029%   0.0030%       0.0121%   0.0125%   0.0145%
         0.8   0.0029%   0.0030%   0.0033%       0.0138%   0.0150%   0.0186%
         0.9   0.0027%   0.0028%   0.0034%       0.0199%   0.0204%   0.0307%

