Article

Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for $L_p$-Regularization Using the Multiple Sub-Dictionary Representation

1 College of Electronic and Optical Engineering & College of Microelectronics, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
2 College of Telecommunication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210023, China
3 Research Organization of Electrical Communication, Tohoku University, Sendai 980-8577, Japan
* Author to whom correspondence should be addressed.
Sensors 2017, 17(12), 2920; https://doi.org/10.3390/s17122920
Submission received: 19 October 2017 / Revised: 9 December 2017 / Accepted: 13 December 2017 / Published: 15 December 2017
(This article belongs to the Special Issue New Paradigms in Data Sensing and Processing for Edge Computing)

Abstract

$L_{1/2}$ and $L_{2/3}$ are two typical non-convex $L_p$ ($0 < p < 1$) regularizations, which can be employed to obtain a sparser solution than $L_1$ regularization. Recently, a multiple-state sparse transformation strategy, combined with iteratively reweighted algorithms, has been developed to exploit sparsity under $L_1$ regularization for sparse signal recovery. To further exploit the sparse structure of signals and images, this paper adopts a multiple-dictionary sparse transform strategy for the two typical cases $p \in \{1/2, 2/3\}$ based on an iterative $L_p$ thresholding algorithm and then proposes a sparse adaptive iteratively-weighted $L_p$ thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based $L_p$ regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding $L_1$ algorithms but also achieves better recovery performance and faster convergence than the conventional single-dictionary sparse transform-based $L_p$ case. Moreover, we conduct several applications to sparse image recovery and obtain good results by comparison with related work.

1. Introduction

Compressed sensing (CS) [1,2] and sparse representation [3,4] have been widely used in the field of wireless communications [5,6,7] and image processing [8,9,10]. CS implies that it is possible to reconstruct the sparse signal/image from incomplete data if some prior knowledge and reconstruction constraints are satisfied. Mathematically, the unconstrained L 0 minimization is the optimal model to obtain the sparsest solution x ^ l 0 :
$\hat{x}_{\ell_0} = \arg\min_{x} \left\{ \gamma \| y - \Phi x \|_2^2 + \lambda \| x \|_0 \right\}$, (1)
where $\|x\|_0$ denotes the zero-norm, i.e., the number of nonzero elements of $x$; $\Phi \in \mathbb{R}^{M \times N}$ denotes the down-sampling measurement matrix; $y$ and $x$ represent the observed vector and the unknown sparse image, respectively; $\lambda$ is the regularization parameter that balances the fidelity of the image against its sparsity; and $\gamma > 0$ is a small positive constant, e.g., $\gamma = 1/2$. Unfortunately, problem (1) is NP-hard and thus difficult to solve efficiently. When the matrix $\Phi$ satisfies some necessary conditions [11], an alternative convex relaxation can be formulated using the $L_1$ regularization method as:
$\hat{x}_{\ell_1} = \arg\min_{x} \left\{ \gamma \| y - \Phi x \|_2^2 + \lambda \| x \|_1 \right\}$, (2)
where $\|x\|_1 = \sum_{i=1}^{n} |x_i|$ denotes the $L_1$-norm. The NP-hard problem (1) is thus converted into problem (2), a typical convex optimization problem that can be solved efficiently, for example with the alternating direction method of multipliers (ADMM) [12,13], the fast iterative shrinkage-thresholding algorithm (FISTA) [14], Nesterov's algorithm (NESTA) [15], or approximate message passing (AMP) [16]. However, $L_1$ regularization can only obtain a suboptimal solution and usually requires many more measurements. Theoretical analysis of CS implies that better performance can be obtained by taking advantage of sparser information in many systems, especially in the presence of strong noise interference.
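The soft-thresholding step at the heart of FISTA-type solvers for problem (2) can be sketched as follows (a minimal Python illustration rather than the paper's MATLAB code; the function names and the $\tfrac{1}{2}$-scaled data term are illustrative choices, not from the paper):

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t * ||.||_1: shrink each entry toward zero by t."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista_step(x, Phi, y, lam, tau):
    """One iterative soft-thresholding step for min 0.5*||y - Phi x||_2^2 + lam*||x||_1:
    a gradient step on the data term followed by the L1 proximal map."""
    return soft_threshold(x + tau * Phi.T @ (y - Phi @ x), lam * tau)
```

With a step size $\tau \le 1/\|\Phi\|_2^2$, each such step is guaranteed not to increase the objective.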

1.1. The Non-Convex Penalties

Many state-of-the-art algorithms have been proposed to improve the performance of the $L_1$ regularization algorithms. The non-convex penalty regularization algorithms are among the most effective for sparse recovery problems. Research shows that non-convex penalty-based optimization methods can approximate the sparsest solution more closely than the $L_1$-norm penalty in problem (2), while requiring a weaker incoherence condition and fewer measurements [17]. Many non-convex functions have been proposed as relaxations of the $L_0$-norm penalty, such as the smoothly clipped absolute deviation (SCAD) penalty [18], the $L_p$ ($0 < p < 1$)-norm penalty [17] and the minimax concave penalty (MCP) [19]. By replacing the $L_1$-norm with the $L_p$-norm, the non-convex $L_p$-norm regularization optimization method is described as:
$\hat{x}_{\ell_p} = \arg\min_{x} \left\{ \gamma \| y - \Phi x \|_2^2 + \lambda \| x \|_p^p \right\}, \quad 0 < p < 1$, (3)
where $\|x\|_p^p = \sum_{i=1}^{n} |x_i|^p$. Unfortunately, when $0 < p < 1$, problem (3) becomes a non-convex, non-smooth, and non-Lipschitz optimization problem. Thus, the $L_p$-norm optimization is difficult to address efficiently.
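A small numeric illustration (mine, not from the paper) of why an $L_p$ penalty with $p < 1$ promotes sparsity where the $L_1$ penalty is indifferent:

```python
import numpy as np

def lp_penalty(x, p):
    """Evaluate the L_p quasi-norm penalty ||x||_p^p = sum_i |x_i|^p."""
    return float(np.sum(np.abs(x) ** p))

x_sparse = np.array([2.0, 0.0])  # one large coefficient
x_dense = np.array([1.0, 1.0])   # same L1 norm, energy spread out
# L1 cannot tell the two apart: both penalties equal 2.
# L_{1/2} prefers the sparse vector: sqrt(2) ~ 1.414 versus 2.
```

So minimizing an $L_p$ ($p<1$) penalty actively favors concentrating energy on few coefficients.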

1.2. The Iterative Thresholding Algorithm of L p Regularization

There are two main classes of algorithms for solving the non-convex $L_p$-norm optimization problem. One is the iteratively reweighted algorithm [20], and the other is the iterative thresholding algorithm (ITA). As one of the most effective and efficient methods, the ITA has been employed for many sparse recovery optimization problems due to its low computational complexity, including iterative hard thresholding for $L_0$ regularization [21], iterative soft thresholding for $L_1$ regularization [22] and iterative $L_p$ thresholding for $L_p$ regularization [23]. $L_{1/2}$ and $L_{2/3}$ regularizations are two special and important cases: not only can their solutions be expressed in closed form, but they are also important for sparse modeling. Recent studies show that the $L_{1/2}$ regularizer can be taken as the most representative $L_p$ regularizer [24] and that $L_{2/3}$ regularization is more effective in image deconvolution problems [25]. Hence, in this paper, we focus on the $L_{1/2}$ and $L_{2/3}$ regularizations, which are described in Equations (4) and (5):
$\hat{x}_{\ell_{1/2}} = \arg\min_{x} \left\{ \gamma \| y - \Phi x \|_2^2 + \lambda \| x \|_{1/2}^{1/2} \right\}$, (4)
$\hat{x}_{\ell_{2/3}} = \arg\min_{x} \left\{ \gamma \| y - \Phi x \|_2^2 + \lambda \| x \|_{2/3}^{2/3} \right\}$. (5)

1.3. New Multiple-State Sparse Transform-Based $L_1$ Regularization Algorithm

Recently, some new multiple-state sparse transform-based algorithms were proposed to exploit more prior knowledge of the signal/image by employing new sparsifying transform strategies. A shearlet-based multiple-level sparse representation algorithmic framework was proposed in [26,27] for unconstrained $L_1$ regularization by adaptively incorporating an iteratively reweighted shrinkage step. To enhance sparsity, the algorithm of [27] is specifically adapted to the sparse structure of the multiscale coefficients based on the ADMM [12,13]. Similarly, considering the fact that the sparsity of a given signal/image changes under different sparsifying transform dictionaries, a sparsity-induced composite regularization algorithm was proposed for the unconstrained $L_1$ regularization problem (Co-L1) [28]. The Co-L1 method is described as:
$\hat{x}_{\ell_{d,1}} = \arg\min_{x} \left\{ \| \Phi x - y \|_2^2 + \sum_{d=1}^{D} \lambda_{d,1} \| \Psi_d x \|_1 \right\}$, (6)
where $\Psi_d = [\psi_1; \psi_2; \ldots; \psi_{N_d}] \in \mathbb{R}^{N_d \times N}$, $d = 1, \ldots, D$, are different dictionaries, $D$ denotes the number of dictionaries $\Psi_d$, and $N_d$ represents the dictionary size. The regularization parameters $\lambda_{d,1} = N_d / (\varepsilon + \|\Psi_d x\|_1)$ play the role of weighting the $L_1$-norms of the sparsifying transform coefficients $\Psi_d x$; collected into a vector, $\lambda_1^t = [N_1/(\varepsilon + \|\Psi_1 x\|_1), \ldots, N_D/(\varepsilon + \|\Psi_D x\|_1)]^T$. Weighted in this way, the regularizer $\sum_{d=1}^{D} \lambda_{d,1} \|\Psi_d x\|_1$ is indeed a composition of multiple dictionary-based regularizers. The algorithm of [28] can significantly improve the image reconstruction performance over the fast iterative shrinkage-thresholding algorithm (FISTA) [29] by iteratively and adaptively weighting the composite regularizer. We refer to these newly emerged sparsifying transforms as "multiple-state" transforms. The common property of these methods is exploiting the prior information from the multiple-state sparsifying transform to improve the conditioning of sparse recovery problems.
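The Co-L1 reweighting rule above can be sketched as follows (an illustrative Python fragment; the toy dictionaries are my own, not from [28]):

```python
import numpy as np

def co_l1_weights(subdicts, x, eps=1e-6):
    """Co-L1-style reweighting lambda_d = N_d / (eps + ||Psi_d x||_1):
    sub-dictionaries under which x has a smaller analysis-L1 norm get larger weights."""
    return [Psi.shape[0] / (eps + np.linalg.norm(Psi @ x, 1)) for Psi in subdicts]

# Toy example: x is 1-sparse under the identity basis but dense under a
# normalized Hadamard basis, so the identity sub-dictionary is up-weighted.
Psi_1 = np.eye(4)
Psi_2 = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1],
                  [1, 1, -1, -1],
                  [1, -1, -1, 1]], dtype=float) / 2.0
x = np.array([1.0, 0.0, 0.0, 0.0])
w1, w2 = co_l1_weights([Psi_1, Psi_2], x)
```

Here $\|\Psi_1 x\|_1 = 1$ while $\|\Psi_2 x\|_1 = 2$, so the identity sub-dictionary receives roughly twice the weight.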
In this paper, benefiting from the improvements to the $L_p$ regularization algorithm [24,25,30], and motivated by recent advances in iteratively reweighted algorithms, we propose a new iteratively-weighted algorithm framework for $L_p$, $p \in \{1/2, 2/3\}$, norm minimization using the multiple-state sparsifying transform, i.e., the multiple sub-dictionary sparse representation [28]. The contributions of this paper are summarized as follows. (1) Based on the multiple sub-dictionary sparse representation, we develop a new iteratively-weighted $L_p$ ($p \in \{1/2, 2/3\}$) thresholding algorithm, called SAITA-$L_p$ ($p \in \{1/2, 2/3\}$). (2) In comparison with the existing iteratively-reweighted parameter scheme, we propose an updated regularization parameter for weighting each sub-dictionary. (3) $L_{1/2}$ and $L_{2/3}$ regularizations are special and important for sparse modeling, particularly for sparse recovery; however, related studies are rare, so this paper also extends the applications to sparse image recovery and magnetic resonance imaging (MRI) and obtains good results.
The rest of the paper is organized as follows: in Section 2, we propose the multiple sub-dictionary $L_p$ regularization in the SAITA-$L_p$ algorithm, including the multiple sub-dictionary sparsifying transforms and the iterative reweighting scheme for the SAITA-$L_p$ regularizer. Then, in Section 3, we develop SAITA-$L_p$, a new iterative thresholding algorithm for $L_p$-norm minimization. To verify the proposed algorithm, we conduct simulations and applications in image restoration in Section 4. In Section 5, we further validate the proposed algorithm with three applications. Finally, conclusions are given in Section 6.

2. The Proposed Multiple Sub-Dictionary-Based $L_p$ Regularization

The multiple dictionary sparsifying transform method for the L 1 regularization optimization problem was proposed in [28], which extends the well-known Lasso problem into a composite regularization problem. Motivated by the composite regularization method for the L 1 -norm, this paper employs the multiple sub-dictionary method for the L p regularization problem.
Suppose $x \in \mathbb{R}^{N \times 1}$ is the non-sparse raw signal. We can obtain the sparse coefficients $\Psi x$ through an analysis dictionary $\Psi \in \mathbb{R}^{N_1 \times N}$. The shearlet transform [31] and the wavelet transform [32] are two typical sparsifying transforms. We choose the wavelet transform because of its effectiveness at compressing natural images. The main steps to design the multiple sub-dictionary sparsifying-transform-based $L_p$ regularization method are as follows:
First, we construct a $(DN) \times N$ over-complete sparsifying transform matrix $\Psi$ by:
$\Psi = \begin{bmatrix} \Psi_1 \\ \vdots \\ \Psi_d \\ \vdots \\ \Psi_D \end{bmatrix} \in \mathbb{R}^{(DN) \times N}$, (7)
and:
$\Psi_d = \begin{bmatrix} \psi_1 \\ \psi_2 \\ \vdots \\ \psi_{N_d} \end{bmatrix} \in \mathbb{R}^{N_d \times N}$, (8)
where $D$ denotes the number of sub-dictionaries $\Psi_d$ in the over-complete sparsifying transform matrix $\Psi$; the $N_d \times N$ sub-dictionary matrix $\Psi_d$, $d = 1, \ldots, D$, is built from a collection of row vectors $\{\psi_i\}_{i=1}^{N_d}$, such as a "dbN" wavelet basis [33]; and $N_d$ represents the number of rows of $\Psi_d$. From Equation (8) we can see that each $\Psi_d$ is composed of a set of rows from the $(DN) \times N$ over-complete sparsifying transform matrix $\Psi$, and:
$N_1 + N_2 + \cdots + N_D = DN$. (9)
By splitting the matrix $\Psi$ into several sub-dictionaries $\Psi_d$, we convert the sparsifying transform $\Psi x$ into a composition of several $\Psi_d x$, $d = 1, 2, \ldots, D$, with different sparse structures. Intuitively, we can utilize these differences to improve the recovery performance in sparse recovery problems. In this paper, we choose $N_1 = N_2 = \cdots = N_D = N$, so the $\Psi_d \in \mathbb{R}^{N \times N}$ are orthogonal matrices.
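The construction in Equations (7)-(9) with orthogonal, equally sized sub-dictionaries can be illustrated as follows (a toy example with $N = 4$ and $D = 2$; the paper uses wavelet bases, while the identity and Hadamard bases here are illustrative stand-ins):

```python
import numpy as np

N = 4
Psi_1 = np.eye(N)                                    # trivial orthogonal basis
Psi_2 = np.array([[1, 1, 1, 1],
                  [1, -1, 1, -1],
                  [1, 1, -1, -1],
                  [1, -1, -1, 1]], dtype=float) / 2  # normalized 4x4 Hadamard basis
subdicts = [Psi_1, Psi_2]                            # D = 2 sub-dictionaries
Psi = np.vstack(subdicts)                            # over-complete (D*N) x N operator
```

Stacking the two $N \times N$ orthogonal blocks gives a $(DN) \times N$ analysis operator, matching Equation (9) with $N_1 = N_2 = N$.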
Then, a new multiple non-convex L p regularization method is proposed:
$\hat{x}_{\ell_{d,p}} = \arg\min_{x} \left\{ f_d(x) = \gamma \| \Phi x - y \|_2^2 + R_{SAITA} \right\}$, (10)
where R S A I T A is a linearly weighted combination of multiple sub-dictionary based L p regularizers Ψ d x p p :
$R_{SAITA} = \sum_{d=1}^{D} \lambda_{d,p} \| \Psi_d x \|_p^p = \lambda_{1,p} \| \Psi_1 x \|_p^p + \lambda_{2,p} \| \Psi_2 x \|_p^p + \cdots + \lambda_{D,p} \| \Psi_D x \|_p^p$, (11)
where $\lambda_{d,p}$, $d = 1, 2, \ldots, D$, denotes the iteratively weighted regularization parameter. Hence, the contribution of each sub-dictionary is controlled adaptively and iteratively through the weight $\lambda_{d,p}$, and the regularizer $\|\Psi_d x\|_p^p$ varies across the sub-dictionary index $d$. Intuitively, the variation of each sub-dictionary-based regularizer is best weighted by the parameter $\lambda_{d,p}$ to improve sparse recovery.

3. The Proposed SAITA-$L_p$ Algorithm

The major disadvantage of $L_p$ ($0 < p < 1$) minimization is that it is non-convex, making it difficult to solve efficiently. In this section, we first introduce the iterative thresholding representation theory for the conventional $L_p$ ($p \in \{1/2, 2/3\}$) algorithm, following the existing series of algorithms in [25,34]. Then, we derive the SAITA-$L_p$ algorithm combined with the proposed weighting scheme $\lambda_{d,p}$.

3.1. The Relationship of the SAITA-$L_p$ and Conventional $L_p$ Methods

Considering the multiple sub-dictionary $L_p$ ($p \in \{1/2, 2/3\}$) regularization problem in Equation (10), when $\gamma = 1$, we have:
$\hat{x}_{\ell_{d,p}} = \arg\min_{x} \left\{ f_d(x) = \| \Phi x - y \|_2^2 + \sum_{d=1}^{D} \lambda_{d,p} \| \Psi_d x \|_p^p \right\}$, (12)
where $f_d(x)$ denotes the objective function. Correspondingly, the conventional single-dictionary $\Psi \in \mathbb{R}^{N \times N}$ analysis $L_p$-norm minimization problem is:
$\hat{x}_{\ell_p} = \arg\min_{x} \left\{ f(x) = \| \Phi x - y \|_2^2 + \lambda_p \| \Psi x \|_p^p \right\}$. (13)
The proposed SAITA-$L_p$ ($p \in \{1/2, 2/3\}$) method (12) and the conventional method (13) are nearly identical; the major difference lies in how the contribution of the $L_p$-norm is weighted by the regularization parameter [28]. Compared with the conventional method, the SAITA method can exploit more prior knowledge of the sparse signal/image for reconstruction. Figure 1 depicts the relationship between the two methods. In case (A), the number of sub-dictionaries is reduced to 1, the multiple dictionaries $\Psi_d$ collapse into a single $\Psi$, and the proposed SAITA-$L_p$ algorithm reduces to the conventional single-dictionary $L_p$ method [24,25]. In case (B), as the number of sub-dictionaries increases, the conventional single-dictionary $L_p$ method generalizes to the proposed SAITA-$L_p$ method.

3.2. The Thresholding Representation Theory for SAITA-$L_p$

According to the relationship between the proposed SAITA-$L_p$ method and the conventional $L_p$ method shown in Figure 1, we first consider the conventional single-dictionary analysis $L_p$ problem (13). The first-order optimality condition in $x$ is:
$\nabla f(x) = 2 \Psi \Phi^T (\Phi x - y) + \lambda \nabla \left( \| \Psi x \|_p^p \right)$, (14)
in which the operator $\nabla(\cdot)$ denotes the gradient. Letting $\nabla f(x) = 0$, we have:
$\Psi \Phi^T (y - \Phi x) = \frac{1}{2} \lambda \nabla \left( \| \Psi x \|_p^p \right)$, (15)
Multiplying both sides by a parameter $\tau$ to control the step size and adding $\Psi x$ to both sides of Equation (15):
$\Psi x + \tau \Psi \Phi^T (y - \Phi x) = \Psi x + \frac{\lambda \tau}{2} \nabla \left( \| \Psi x \|_p^p \right)$, (16)
Then, we can immediately obtain:
$\left( I + \frac{\lambda \tau}{2} \nabla \left( \| \cdot \|_p^p \right) \right) \Psi x = \Psi x + \tau \Psi \Phi^T (y - \Phi x)$, (17)
That is:
$\Psi x = \left( I + \frac{\lambda \tau}{2} \nabla \left( \| \cdot \|_p^p \right) \right)^{-1} \Psi \left( x + \tau \Phi^T (y - \Phi x) \right)$. (18)
To this end, when p { 1 / 2 ,   2 / 3 } , the resolvent operator [24,25,30] is denoted as:
$H_{\lambda,p}(\cdot) = \left( I + \frac{\lambda \tau}{2} \nabla \left( \| \cdot \|_p^p \right) \right)^{-1}$, (19)
where $\lambda$ and $\tau$ are the regularization parameter and the step-tuning parameter, respectively. Then:
$\Psi x = H_{\lambda,p} \left( \Psi \left( x + \tau \Phi^T (y - \Phi x) \right) \right)$, (20)
where $\tau > 0$ (e.g., $\tau = 0.99 / \|\Phi\|_2^2$) controls the step size in each iteration.
From Equation (20), we immediately obtain:
$x^{n+1} = \Psi^{-1} H_{\lambda,p} \left( \theta(x^n) \right)$, (21)
in which:
$\theta(x^n) = \Psi \left( x^n + \tau \Phi^T (y - \Phi x^n) \right)$, (22)
where $x^n$ represents the $n$-th iterate. The resolvent operator $H_{\lambda,p}(\cdot)$ is defined as:
$H_{\lambda,p}(x) = \left( h_{\lambda,p}(x_1), h_{\lambda,p}(x_2), \ldots, h_{\lambda,p}(x_N) \right)^T$, (23)
where $h_{\lambda,p}(x_i)$ is the nonlinear function defined by:
$h_{\lambda,p}(x_i) = \begin{cases} \varphi_{\lambda_{d,p}}(x_i), & |x_i| > T \\ 0, & \text{otherwise} \end{cases}$ (24)
When $p = \frac{1}{2}$, $T = \frac{\sqrt[3]{54}}{4} (\lambda_{d,1/2} \tau)^{2/3}$ is the threshold value, and [24]:
$\varphi_{\lambda,1/2}(x_i) = \frac{2}{3} x_i \left( 1 + \cos\left( \frac{2\pi}{3} - \frac{2}{3} \arccos\left( \frac{\lambda_{d,1/2} \tau}{8} \left( \frac{|x_i|}{3} \right)^{-3/2} \right) \right) \right)$, (25)
When $p = \frac{2}{3}$, $T = \frac{2\sqrt[4]{3}}{3} (\lambda_{d,2/3} \tau)^{3/4}$ is the related threshold value, and [25]:
$\varphi_{\lambda,2/3}(x_i) = \left( \frac{\vartheta_{\lambda_{d,2/3}}(x_i) + \sqrt{ \frac{2 |x_i|}{\vartheta_{\lambda_{d,2/3}}(x_i)} - \vartheta_{\lambda_{d,2/3}}(x_i)^2 }}{2} \right)^3 \cdot \operatorname{sgn}(x_i)$, (26)
where $\operatorname{sgn}(\cdot)$ denotes the sign function and:
$\vartheta_{\lambda_{d,2/3}}(x_i) = \frac{2}{\sqrt{3}} (\lambda_{d,2/3} \tau)^{1/4} \left( \cosh\left( \frac{1}{3} \operatorname{arccosh}\left( \frac{27}{16} (\lambda_{d,2/3} \tau)^{-3/2} x_i^2 \right) \right) \right)^{1/2}$. (27)
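The two closed-form thresholding operators of Equations (24)-(27) can be sketched as follows, with $\lambda\tau$ merged into a single parameter `lam` (a hedged reconstruction: it assumes each operator is the componentwise minimizer of $(x - t)^2 + \mathrm{lam}\,|x|^p$, consistent with the cited closed forms [24,25]):

```python
import numpy as np

def half_threshold(y, lam):
    """Closed-form L_{1/2} thresholding: componentwise minimizer of
    (x - y_i)^2 + lam * |x|^(1/2), zero below the threshold T."""
    T = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    big = np.abs(y) > T
    yb = y[big]
    phi = np.arccos((lam / 8.0) * (np.abs(yb) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * yb * (1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out

def two_thirds_threshold(y, lam):
    """Closed-form L_{2/3} thresholding: componentwise minimizer of
    (x - y_i)^2 + lam * |x|^(2/3), zero below the threshold T."""
    T = (2.0 * 3.0 ** 0.25 / 3.0) * lam ** 0.75
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    big = np.abs(y) > T
    yb = y[big]
    theta = (2.0 / np.sqrt(3.0)) * lam ** 0.25 * np.sqrt(
        np.cosh(np.arccosh((27.0 / 16.0) * lam ** (-1.5) * yb ** 2) / 3.0))
    out[big] = np.sign(yb) * ((theta + np.sqrt(2.0 * np.abs(yb) / theta - theta ** 2)) / 2.0) ** 3
    return out
```

A useful sanity check is the first-order condition of the scalar problem, $2(x^* - t) + p\,\mathrm{lam}\,|x^*|^{p-1}\operatorname{sgn}(x^*) = 0$, which the surviving (nonzero) outputs should satisfy numerically.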

3.3. The Proposed SAITA-$L_p$ Algorithm

As mentioned above, the iteratively weighted parameter plays a key role during the optimization process for the sparse recovery problem. For the proposed SAITA-$L_p$ method, the iteratively weighted parameter $\lambda_{d,p}$ mainly plays two roles: one is controlling the tradeoff between the fidelity of the quadratic term and the prior knowledge in the regularizer, and the other is controlling the contribution of each regularizer. Unfortunately, setting an ideal parameter is not straightforward. In [28], an iteratively updated parameter was derived by applying a Majorization-Minimization algorithm, shown as:
$\lambda_{d,1} = \frac{N_d}{\epsilon + \| \Psi_d x \|_1}$, (28)
where $N_d$ accounts for the sub-dictionary size and $\epsilon > 0$ is a small constant that keeps the denominator from being zero. From Equation (28) we can extract some useful guidance for setting a proper regularization parameter: first, the denominator in Equation (28) counterweighs each sub-dictionary-based regularizer; second, the numerator $N_d$ accounts for the size of each sub-dictionary. Inspired by these insights, in this paper we design the iteratively weighted parameter $\lambda_{d,p}$ as:
$\lambda_{d,p} = \frac{N_d}{\left( \epsilon + \| \Psi_d x \|_2^2 \right)^{\alpha}}$, (29)
where $N_d$ accounts for the sub-dictionary size as in [28], and $\alpha \in (0, 2)$ is a small tuning constant determined from the experimental results below. The parameter $\lambda_{d,p}$ then adaptively weights each $L_p$-norm regularizer.
The analytical justifications are as follows. (i) When the signal $x$ is sparser under a dictionary $\Psi_d$ than under the others, the value of the regularizer $\|\Psi_d x\|_p^p$ is smaller. Hence, that dictionary $\Psi_d$ is more appropriate; moreover, a smaller regularizer $\|\Psi_d x\|_p^p$ is beneficial to minimizing the objective function, so the weight of this regularizer should be enhanced. (ii) When the signal $x$ is not sparse enough under another dictionary $\Psi_d$, that is, when that dictionary is not ideal, the value of $\|\Psi_d x\|_p^p$ will be larger. A larger $\|\Psi_d x\|_p^p$ is not helpful for minimizing the objective function; thus the weight of $\|\Psi_d x\|_p^p$ should be smaller, to counterweigh the regularizer.
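This behavior can be demonstrated mechanically (an illustrative sketch of Equation (29); the toy analysis operators and the choice $\alpha = 0.25$ are mine):

```python
import numpy as np

def saita_weights(subdicts, x, alpha, eps=1e-6):
    """Sketch of the adaptive rule lambda_d = N_d / (eps + ||Psi_d x||_2^2)^alpha:
    the smaller the analysis-coefficient energy, the larger that sub-dictionary's weight."""
    return [Psi.shape[0] / (eps + np.linalg.norm(Psi @ x) ** 2) ** alpha
            for Psi in subdicts]

# Two toy analysis operators with different coefficient energies.
A = np.eye(2)         # ||A x||_2^2 = ||x||_2^2
B = 2.0 * np.eye(2)   # ||B x||_2^2 = 4 ||x||_2^2  ->  smaller weight
x = np.array([1.0, 0.0])
wA, wB = saita_weights([A, B], x, alpha=0.25)
```

The operator producing smaller coefficient energy receives the larger weight, matching justification (i) above.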
For the main comparison, the conventional single-dictionary-based $L_p$, $p \in \{1/2, 2/3\}$, regularization method in problem (13) will be considered, where the tradeoff parameter $\lambda_p$ is the fixed constant:
$\lambda_p = \frac{1}{\left( \epsilon + \| \Psi x \|_2^2 \right)^{\alpha}}$. (30)
From Equation (30), the conventional single-dictionary-based parameter $\lambda_p$ is a constant and does not vary across iterations.
Moreover, we employ the forward-backward linear strategy to accelerate the convergence of the proposed algorithm as [14]:
$\mu^{n+1} = \frac{1 + \sqrt{1 + 4 (\mu^n)^2}}{2}$, (31)
and:
$x^{n+1} = x^n + \frac{\mu^n - 1}{\mu^{n+1}} \left( x^n - x^{n-1} \right)$. (32)
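The momentum recursion (31)-(32) in isolation reads as follows (a direct transcription; scalar inputs used for clarity):

```python
def fista_momentum(mu):
    """Momentum sequence mu_{n+1} = (1 + sqrt(1 + 4 mu_n^2)) / 2 of Equation (31)."""
    return (1.0 + (1.0 + 4.0 * mu * mu) ** 0.5) / 2.0

def extrapolate(x_curr, x_prev, mu, mu_next):
    """Linear extrapolation of Equation (32): x + ((mu - 1)/mu_next) * (x - x_prev)."""
    return x_curr + ((mu - 1.0) / mu_next) * (x_curr - x_prev)
```

Starting from $\mu^1 = 1$, the first extrapolation coefficient is zero, so the acceleration only kicks in from the second iteration onward.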
The proposed iteratively-weighted SAITA- L p algorithm can be described in Algorithm 1.
Algorithm 1: The proposed SAITA-$L_p$ algorithm.
  Problem: $x^{n+1} = \arg\min_x \; \gamma \|\Phi x - y\|_2^2 + \sum_{d=1}^{D} \lambda_{d,p} \|\Psi_d x\|_p^p$;
  1: Input: $\{\Psi_d\}_{d=1}^{D}$, $y$, $\Phi$; $\gamma = 1$; $\epsilon > 0$;
  2: Initialization: $n = 0$; $\varepsilon = 0.01$; $\tau = (1 - \varepsilon)/\|\Phi\|_2^2$; $\lambda_{d,p}^{(1)} = 1$; $\alpha$.
  3: for $n = 1, 2, 3, \ldots$
  4:  Call the conventional analysis $L_p$ algorithm in (13):
     While not converged do
      Step 1: Compute $\theta(x^n) = \Psi(x^n + \tau \Phi^T(y - \Phi x^n))$ as in Equation (22);
      Step 2: Compute $x^{n+1} = \Psi^{-1} H_{\lambda,p}(\theta(x^n))$ as in Equation (21);
      Step 3: Update $\mu^{n+1} = \frac{1 + \sqrt{1 + 4(\mu^n)^2}}{2}$ as in Equation (31);
      Step 4: Update the solution $x^{n+1} = x^n + \frac{\mu^n - 1}{\mu^{n+1}}(x^n - x^{n-1})$ as in Equation (32);
     End
  5: Update $\lambda_{d,p}^{(n+1)} \leftarrow N_d / (\epsilon + \|\Psi_d x^{(n)}\|_2^2)^{\alpha}$ as in Equation (29);
  6: end
  7: Output: $\hat{x}$
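A stripped-down, single-dictionary ($\Psi = I$, $D = 1$) version of the inner loop of Algorithm 1 on a synthetic compressed-sensing problem can be sketched as follows (all sizes, seeds, and parameter values are illustrative choices of mine; the momentum and reweighting steps are omitted for brevity):

```python
import numpy as np

def half_threshold(y, lam):
    """Componentwise minimizer of (x - y_i)^2 + lam * |x|^(1/2), as in Equation (25),
    with lam playing the role of lambda * tau."""
    T = (54.0 ** (1.0 / 3.0) / 4.0) * lam ** (2.0 / 3.0)
    y = np.asarray(y, dtype=float)
    out = np.zeros_like(y)
    big = np.abs(y) > T
    yb = y[big]
    phi = np.arccos((lam / 8.0) * (np.abs(yb) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * yb * (1.0 + np.cos(2.0 * np.pi / 3.0 - (2.0 / 3.0) * phi))
    return out

# Synthetic compressed-sensing setup (illustrative sizes).
rng = np.random.default_rng(0)
N, M, K = 64, 32, 4
Phi = rng.standard_normal((M, N)) / np.sqrt(M)      # random Gaussian sensing matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = 3.0 * rng.standard_normal(K)
y = Phi @ x_true                                    # noiseless measurements

tau = 0.99 / np.linalg.norm(Phi, 2) ** 2            # step size, as in Section 3.2
lam = 1e-3                                          # fixed regularization weight
x = np.zeros(N)
for _ in range(300):
    # gradient step on the data term, then the closed-form L_{1/2} threshold
    x = half_threshold(x + tau * Phi.T @ (y - Phi @ x), lam * tau)

objective = lambda v: np.linalg.norm(y - Phi @ v) ** 2 + lam * np.sum(np.abs(v) ** 0.5)
```

With this step size, each iteration is guaranteed not to increase the $L_{1/2}$-regularized objective; on settings like these the iterate typically locks onto the true support, although the non-convex iteration carries no global guarantee.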

4. Performance Analysis and Discussion

We first conduct experiments to determine the value of $\alpha$ and verify the performance of the proposed $\lambda_{d,p}$; we then evaluate the superiority of the proposed SAITA algorithm compared with the conventional single-dictionary analysis $L_p$ iterative thresholding algorithms [24,25] and Co-L1 [28]. All the experiments in this paper were conducted on a personal computer (2.21 GHz, 16 GB RAM) using MATLAB (R2014a).
Given the clean image $x$, we construct a measurement matrix $\Phi$ using the "spread spectrum" operator [35], so the measurement is $y = \Phi x + n$, where $n$ denotes additive noise. Since the wavelet transform is known to compress natural images very efficiently, we choose it as the sparsifying transform. We then construct the sparsifying transform matrix $\Psi \in \mathbb{R}^{8N \times N}$ by concatenating the undecimated 'db1' and 'db2' wavelet transforms with 2 levels of decomposition. Thus, we obtain the sub-dictionaries $\Psi_d \in \mathbb{R}^{N \times N}$, $d = 1, 2, \ldots, 8$.
The measurement SNR is adopted to quantify the noise level, defined as $\mathrm{mSNR} = \|y\|_2^2 / (M \sigma^2)$, where $M$ and $\sigma^2$ denote the length of $y$ and the variance of the white Gaussian noise, respectively. The higher the value of mSNR, the weaker the noise level. We evaluate the performance by the popular recovery SNR, $\mathrm{RSNR} = 20 \log_{10} \left( \|x\|_2 / \|x - \hat{x}\|_2 \right)$, where $\hat{x}$ denotes the estimated sparse image. The higher the value of RSNR, the better the performance. We conduct the experiments using the well-known "Cameraman" image with $\mathrm{mSNR} = 40$ dB, shown in Figure 2a. To reduce the computation time, we use only a part of the image, shown in Figure 2b.
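The two quality metrics can be written directly as follows (a hedged sketch; mSNR is returned in dB here, i.e., $10\log_{10}$ of the ratio defined above):

```python
import numpy as np

def msnr_db(y, sigma2):
    """Measurement SNR in dB: 10 log10( ||y||_2^2 / (M * sigma^2) )."""
    return float(10.0 * np.log10(np.dot(y, y) / (y.size * sigma2)))

def rsnr_db(x, x_hat):
    """Recovery SNR in dB: 20 log10( ||x||_2 / ||x - x_hat||_2 ); higher is better."""
    return float(20.0 * np.log10(np.linalg.norm(x) / np.linalg.norm(x - np.asarray(x_hat))))
```

For example, an estimate with half the energy error of the signal norm ($\|x - \hat{x}\|_2 = \|x\|_2 / 2$) gives $\mathrm{RSNR} = 20\log_{10} 2 \approx 6.02$ dB.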

4.1. The Value Range of $\alpha$ in $\lambda_{d,p}$

In Section 4.1, we first determine the value range for $\alpha$ in $\lambda_{d,p}$ by evaluating the performance for different values of $\alpha$. The results are shown in Figure 3 and Figure 4. From the results, we find that when $\alpha \in (0, p)$, the proposed algorithms perform well and enjoy strong robustness; as $\alpha$ increases further, the performance deteriorates rapidly. Therefore, we take the value of $\alpha$ from $(0, p)$ to obtain the adaptive weighting. We specifically set:
$\lambda_{d,p} = \frac{N_d}{\left( \epsilon + \| \Psi_d x \|_2^2 \right)^{\frac{1-p}{2}}}$. (33)

4.2. The Recovery SNR Performances Versus Sampling Ratio

In Section 4.2, we evaluate the robustness of the proposed algorithm by considering the RSNR versus the sampling ratio. Specifically, we set three mSNR levels: 25 dB, 30 dB, and 35 dB. Figure 5 depicts the RSNR versus the sampling ratio. Based on the results, the proposed SAITA-$L_p$ ($p \in \{1/2, 2/3\}$) algorithm performs better than the conventional single-dictionary-based algorithm, and its robustness is good.

4.3. The Recovery SNR Performance Versus Measurement SNR

For our third experiment, we investigate the influence of different noise levels on the proposed algorithm and compare it with the $L_p$ algorithm and Co-L1 [28]. Similarly, we consider three sampling ratios, $M/N \in \{0.15, 0.20, 0.25\}$. We evaluate the performance by the RSNR versus lower mSNR values (20 dB~40 dB) of the image, and the results are presented in Figure 6.
From the results, we can find that the proposed SAITA- L p ,   ( p { 1 / 2 ,   2 / 3 } ) algorithm can obtain a higher recovery SNR and has better robustness and fidelity than the Co-L1. The robustness and fidelity of the corresponding L p ,   ( p { 1 / 2 ,   2 / 3 } ) algorithm will deteriorate with the increase of the signal measurement SNR.

4.4. The Relative Error Performances Versus the Number of Iterations

We study the convergence and the reconstruction error via the relative error versus the number of iterations. Choosing the relative error as the second quality measure, its formula is:
$\mathrm{Relative\ Error} = \frac{\| x - \hat{x} \|_2}{\| x \|_2}$. (34)
Considering the proposed SAITA-$L_p$ ($p \in \{1/2, 2/3\}$) and the corresponding $L_p$ ($p \in \{1/2, 2/3\}$) algorithms, the results are shown in Figure 7. When the sampling ratio is 0.2 (shown in (a)), the relative errors of the proposed SAITA-$L_p$ algorithm are significantly smaller, and it converges faster than the corresponding $L_p$ algorithm. When the sampling ratio is 0.5 (shown in (b)), though the final relative errors are close, the convergence of the proposed algorithm is faster (the numbers of iterations are approximately 15 and 7, respectively). Compared to Co-L1 [28], our proposed SAITA-$L_p$ algorithm obtains a markedly lower relative error for both sampling ratios $M/N \in \{0.2, 0.5\}$. In addition, one can observe that the relative error for $p = 1/2$ is slightly smaller, while $p = 2/3$ converges faster.

5. Practical Experiments

The proposed algorithms can be applied in many practical applications [36,37,38,39,40,41,42]. In this section, we conduct some typical applications in sparse image recovery and medical imaging to extend the applications of $L_{1/2}$ and $L_{2/3}$ regularizations and to illustrate the excellent robustness and adaptability of the proposed SAITA-$L_p$ ($p \in \{1/2, 2/3\}$) algorithm. The conventional single-dictionary analysis $L_p$ iterative thresholding algorithms [24,25] and Co-L1 [28] serve as the baseline algorithms for comparison.

5.1. Application 1: Image Sparse Restoration

In the first application, the proposed SAITA-$L_p$ algorithm is applied to restoring the "Cameraman" image shown in Figure 2 at different sampling ratios $M/N$. We use the reduced $N = 96 \times 104$ Cameraman image as the target image. Figure 8a,c show the recovery results using the conventional single-dictionary $L_1$ and $L_p$ ($p \in \{1/2, 2/3\}$) algorithms, respectively. Figure 8b,d show the recovered images using the corresponding multiple sub-dictionary $L_1$ and $L_p$ ($p \in \{1/2, 2/3\}$) algorithms, respectively. The experiments show that all the algorithms can recover the images, and the proposed multiple sub-dictionary algorithms significantly outperform the conventional single-dictionary algorithms. In Figure 9, we depict the RSNR of the four algorithms versus different sampling ratios. For both $p = 1/2$ and $p = 2/3$, the proposed SAITA algorithm obtains a larger RSNR than the conventional $L_p$ algorithms. There is no obvious difference between the two cases $p = 1/2$ and $p = 2/3$; the SAITA-$L_{1/2}$ algorithm outperforms the SAITA-$L_{2/3}$ algorithm by only a small margin.

5.2. Application 2: Medical Imaging

In Application 2, we extend the applications of the proposed algorithm to typical medical image reconstruction problems. We first consider the well-known Shepp-Logan phantom, and then we reconstruct a high-quality dMRI cardiac cine [8,28].

5.2.1. Test 1 for the Shepp-Logan Model

In Test 1, we first consider the well-known Shepp-Logan phantom of $N = 96 \times 96$ with $\mathrm{mSNR} = 40$ dB. We construct the compressed noisy measurement signal $y$ by utilizing the "spread spectrum" operator as the measurement matrix $\Phi$ [35], and we conduct a sparsifying transform with the constructed operator $\Psi \in \mathbb{R}^{(4N) \times N}$ ('db3', $N = 1$).
From the experimental results shown in Figure 10, we find that the proposed SAITA-$L_p$ algorithm can recover the images well, as shown in Figure 10b,d, while the conventional single-dictionary algorithms fail to recover the image, as shown in Figure 10a,c. The proposed SAITA-$L_p$ algorithm with $p = 2/3$ obtains the best result among all compared algorithms, followed by the SAITA-$L_p$ algorithm with $p = 1/2$. In Figure 11, we depict the RSNR of the respective algorithms versus different sampling ratios $M/N$. For both $p = 1/2$ and $p = 2/3$, the proposed SAITA-$L_p$ algorithms obtain a significantly larger RSNR than the $L_p$ algorithms for $M/N \in (0.1, 0.2)$.

5.2.2. Test 2 for Real-World Data (2D MRI)

MRI is a typical medical inverse problem that can be solved well by CS [8]. In this experiment, we apply the proposed algorithm to real-world medical data. We investigate a simplified "dynamic MRI" problem [8], use a high-quality MRI cardiac cine as the ground truth, and select a spatio-temporal slice of 144 × 48 [28]. We construct the sparse matrix $\Psi \in \mathbb{R}^{3N \times N}$ by vertical concatenation of 'db1' and 'db2' orthogonal discrete wavelet bases with two levels of decomposition ('db1', 'db2', and $N = 2$). Figure 12 presents the reconstructed MRI images using the SAITA-$L_p$ ($p \in \{1/2, 2/3\}$) and $L_p$ ($p \in \{1/2, 2/3\}$) algorithms. From the experimental results, we can see that the proposed SAITA algorithm reconstructs the images well, as shown in Figure 12b,d, while the conventional algorithms fail to restore the image in Figure 12a,c. The proposed multiple sub-dictionary algorithm with $p = 2/3$ obtains the best result, with a recovery SNR of 23.1872 dB, followed by the proposed algorithm with $p = 1/2$, with a recovery SNR of 21.0189 dB. In Figure 13, we depict the RSNR of the four algorithms versus different sampling ratios $M/N$. For both $p = 1/2$ and $p = 2/3$, the proposed SAITA-$L_p$ algorithms obtain a larger RSNR than the conventional single-dictionary $L_p$ algorithms. The results also show that the $p = 2/3$ algorithms outperform the $p = 1/2$ methods; that is, $L_{2/3}$ regularization can exploit more prior knowledge than $L_{1/2}$ regularization in MRI.

6. Conclusions

In this paper, we proposed a novel sparse adaptive iteratively-weighted thresholding algorithm (SAITA-$L_p$) based on the conventional $L_{1/2}$ and $L_{2/3}$ thresholding algorithms by incorporating the multiple sub-dictionary sparse representation strategy. We draw the following conclusions from the above experiments:
(1)
Using the proposed multiple sub-dictionary sparsifying transform strategy, we construct a multiple sub-dictionary-based $L_p$ regularization method that exploits more prior knowledge of images for the sparse image recovery problem. By multiplying by the proposed adaptive weighting parameter $\lambda_{d,p} = N_d / (\epsilon + \|\Psi_d x\|_2^2)^{\frac{1-p}{2}}$, we gain more control in weighting the contribution of each sub-dictionary-based regularizer. Experiments show that the proposed algorithms perform better than the conventional single-dictionary algorithms, especially when the sampling ratio is very low (e.g., 0.1~0.3);
(2)
Compared with $L_1$-norm regularization-based work, the non-convex $L_p$ ($0 < p < 1$)-norm penalty approximates the $L_0$-norm minimization more closely than the $L_1$-norm, which gives a sparser solution and needs fewer measurements.
(3)
In our experiments, we find that the recovery performances of L_p (p = 1/2) and L_p (p = 2/3) are close, although the p = 2/3 algorithm can obtain slightly better performance than p = 1/2. Hence, a proper p needs to be selected in practical applications;
(4)
Moreover, the proposed SAITA-L_p method indicates that it is feasible to improve the recovery performance by exploiting the inner sparse structure of the signal and designing a proper sparse representation dictionary. It would therefore be beneficial to exploit the signal's sparse structure with a dictionary learning method, which will be the subject of future work;
(5)
The proposed SAITA-L_p algorithm can be extended to other non-convex penalties, including the smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP).
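As an illustration of the MCP extension, the MCP proximal operator has a simple closed form (firm thresholding). A minimal sketch, assuming unit step size; the concavity parameter γ and the step-size convention are our assumptions, not from the paper:

```python
import numpy as np

def mcp_prox(x, lam, gamma=2.5):
    """Proximal operator (firm thresholding) of the MCP penalty, unit step size.
    gamma > 1 controls concavity; MCP interpolates between soft thresholding
    (gamma -> infinity) and hard thresholding (gamma -> 1)."""
    x = np.asarray(x, dtype=float)
    ax = np.abs(x)
    # Middle region: shrunken magnitude, rescaled so the bias vanishes at gamma*lam
    mid = np.sign(x) * gamma * np.maximum(ax - lam, 0.0) / (gamma - 1.0)
    # Large entries pass through unchanged (unbiased, unlike soft thresholding)
    return np.where(ax > gamma * lam, x, mid)
```

The same iterative-thresholding skeleton used for L_p then applies, with this operator replacing the L_p thresholding step.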

Acknowledgments

This work was supported by National Natural Science Foundation of China Grants (No. 61401069, No. 61701258); the Jiangsu Specially Appointed Professor Grant (RK002STP16001); the Innovation and Entrepreneurship of Jiangsu High-level Talent Grant (CZ0010617002); the High-level Talent Startup Grant of Nanjing University of Posts and Telecommunications (XK0010915026); the Natural Science Foundation of Jiangsu Province Grant (No. BK20170906); the Natural Science Foundation of Jiangsu Higher Education Institutions Grant (No. 17KJB510044); the “1311 Talent Plan” of Nanjing University of Posts and Telecommunications; and the Postgraduate Research Innovation Program, Jiangsu (KYLX16_0647).

Author Contributions

Yunyi Li, Jie Zhang, Jian Xiong proposed the SAITA algorithm. Jie Yang and Jian Xiong conceived and designed the experiments; Shanggang Fan performed the practical experiments; Yunyi Li and Guan Gui analyzed the data; Yunyi Li, Guan Gui and Jie Yang wrote the paper. Xiefeng Cheng, Hikmet Sari and Fumiyuki Adachi checked this paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  2. Candès, E.J.; Romberg, J.; Tao, T. Robust uncertainty principles: Exact signal reconstruction from highly incomplete frequency information. IEEE Trans. Inf. Theory 2006, 52, 489–509. [Google Scholar] [CrossRef]
  3. Wright, J.; Yang, A.Y.; Ganesh, A.; Sastry, S.S.; Ma, Y. Robust face recognition via sparse representation. IEEE Trans. Pattern Anal. Mach. Intell. 2009, 31, 210–227. [Google Scholar] [CrossRef] [PubMed]
  4. Mairal, J.; Elad, M.; Sapiro, G. Sparse representation for color image restoration. IEEE Trans. Image Process. 2008, 17, 53–69. [Google Scholar] [CrossRef] [PubMed]
  5. Gao, Z.; Dai, L.; Qi, C.; Yuen, C.; Wang, Z. Near-Optimal Signal Detector Based on Structured Compressive Sensing for Massive SM-MIMO. IEEE Trans. Veh. Technol. 2016, 9545, 1–5. [Google Scholar] [CrossRef]
  6. Gao, Z.; Dai, L.; Wang, Z.; Chen, S.; Hanzo, L. Compressive Sensing Based Multi-User Detector for the Large-Scale SM-MIMO Uplink. IEEE Trans. Veh. Technol. 2015, 9545, 1–14. [Google Scholar]
  7. Gui, G.; Xu, L.; Adachi, F. Variable step-size based sparse adaptive filtering algorithm for estimating channels in broadband wireless communication systems. EURASIP J. Wirel. Commun. Netw. 2014, 2014, 1–10. [Google Scholar] [CrossRef]
  8. Herman, M.; Strohmer, T. Compressed sensing radar. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, NV, USA, 30 March–4 April 2008; pp. 1509–1512. [Google Scholar]
  9. Chen, G.; Gui, G.; Li, S. Recent results in compressive sensing based image inpainting algorithms and open problems. In Proceedings of the 2015 8th International Congress on Image and Signal Processing, Liaoning, China, 14–16 October 2015. [Google Scholar]
  10. Oh, P.; Lee, S.; Kang, M.G. Colorization-based RGB-white color interpolation using color filter array with randomly sampled pattern. Sensors 2017, 17, 1523. [Google Scholar] [CrossRef] [PubMed]
  11. Candès, E.J. The restricted isometry property and its implications for compressed sensing. C. R. Math. 2008, 346, 589–592. [Google Scholar] [CrossRef]
  12. Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A.T. Fast image recovery using variable splitting and constrained optimization. IEEE Trans. Image Process. 2010, 19, 2345–2356. [Google Scholar] [CrossRef] [PubMed]
  13. Yang, J.; Zhang, Y. Alternating Direction Algorithms for L1 Problems in Compressive Sensing. SIAM J. Sci. Comput. 2009, 33, 250–278. [Google Scholar] [CrossRef]
  14. Beck, A.; Teboulle, M. A Fast Iterative Shrinkage-Thresholding Algorithm for Linear Inverse Problems. SIAM J. Imaging Sci. 2009, 2, 183–202. [Google Scholar] [CrossRef]
  15. Becker, S.; Bobin, J.; Candès, E. NESTA: A Fast and Accurate First-Order Method for Sparse Recovery. SIAM J. Imaging Sci. 2011, 4, 1–39. [Google Scholar] [CrossRef]
  16. Borgerding, M.; Schniter, P.; Vila, J. Generalized Approximate Message Passing for Cosparse Analysis compressive sensing. In Proceedings of the 2015 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), South Brisbane, Australia, 19–24 April 2015; pp. 3756–3760. [Google Scholar]
  17. Chartrand, R. Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 2007, 14, 707–710. [Google Scholar] [CrossRef]
  18. Fan, J.; Li, R. Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  19. Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942. [Google Scholar] [CrossRef]
  20. Chartrand, R.; Yin, W. Iteratively reweighted algorithms for compressive sensing. In Proceedings of the 2008 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Las Vegas, NV, USA, 30 March–4 April 2008; pp. 3869–3872. [Google Scholar]
  21. Blumensath, T.; Davies, M.E. Iterative hard thresholding for compressed sensing. Appl. Comput. Harmon. Anal. 2009, 27, 265–274. [Google Scholar] [CrossRef]
  22. Fornasier, M.; Rauhut, H. Iterative thresholding algorithms. Appl. Comput. Harmon. Anal. 2008, 25, 187–208. [Google Scholar] [CrossRef]
  23. Marjanovic, G.; Solo, V. On Lq Optimization and Matrix Completion. IEEE Trans. Signal Process. 2012, 60, 5714–5724. [Google Scholar] [CrossRef]
  24. Xu, Z.; Chang, X.; Xu, F.; Zhang, H. L1/2 Regularization: A Thresholding Representation Theory and a Fast Solver. IEEE Trans. Neural Netw. Learn. Syst. 2012, 23, 1013–1027. [Google Scholar] [CrossRef] [PubMed]
  25. Cao, W.; Sun, J.; Xu, Z. Fast image deconvolution using closed-form thresholding formulas of L q (q = 1/2, 2/3) regularization. J. Vis. Commun. Image Represent. 2013, 24, 31–41. [Google Scholar] [CrossRef]
  26. Ma, J.; März, M.; Funk, S.; Schulz-Menger, J.; Kutyniok, G.; Schaeffter, T.; Kolbitsch, C. Shearlet-based compressed sensing for fast 3D cardiac MR imaging using iterative reweighting. arXiv, 2017; arXiv:1705.00463. [Google Scholar]
  27. Ma, J.; März, M. A multilevel based reweighting algorithm with joint regularizers for sparse recovery. arXiv, 2016; arXiv:1604.06941. [Google Scholar]
  28. Ahmad, R.; Schniter, P. Iteratively Reweighted L1 Approaches to Sparse Composite Regularization. IEEE Trans. Comput. Imaging 2015, 1, 220–235. [Google Scholar] [CrossRef]
  29. Tan, Z.; Eldar, Y.C.; Beck, A.; Nehorai, A. Smoothing and decomposition for analysis sparse recovery. IEEE Trans. Signal Process. 2014, 62, 1762–1774. [Google Scholar]
  30. Zhang, Y.; Ye, W. L2/3 regularization: Convergence of iterative thresholding algorithm. J. Vis. Commun. Image Represent. 2015, 33, 350–357. [Google Scholar] [CrossRef]
  31. Lim, W.Q. The discrete shearlet transform: A new directional transform and compactly supported shearlet frames. IEEE Trans. Image Process. 2010, 19, 1166–1180. [Google Scholar] [PubMed]
  32. Zhang, W.; Gao, F.; Yin, Q. Blind Channel Estimation for MIMO-OFDM Systems with Low Order Signal Constellation. IEEE Commun. Lett. 2015, 19, 499–502. [Google Scholar] [CrossRef]
  33. Daubechies, I. Ten Lectures on Wavelets; Society for Industrial and Applied Mathematics: Philadelphia, PA, USA, 1992. [Google Scholar]
  34. Zeng, J.; Lin, S.; Wang, Y.; Xu, Z. L1/2 regularization: Convergence of iterative half thresholding algorithm. IEEE Trans. Signal Process. 2014, 62, 2317–2329. [Google Scholar] [CrossRef]
  35. Puy, G.; Vandergheynst, P.; Gribonval, R.; Wiaux, Y. Universal and efficient compressed sensing by spread spectrum and application to realistic Fourier imaging techniques. EURASIP J. Adv. Signal Process. 2012, 2012. [Google Scholar] [CrossRef] [Green Version]
  36. Liu, A.; Zhang, Q.; Li, Z.; Choi, Y.J.; Li, J.; Komuro, N. A green and reliable communication modeling for industrial internet of things. Comput. Electr. Eng. 2017, 58, 364–381. [Google Scholar] [CrossRef]
  37. Li, H.; Dong, M.; Ota, K.; Guo, M. Pricing and Repurchasing for Big Data Processing in Multi-Clouds. IEEE Trans. Emerg. Top. Comput. 2016, 4, 266–277. [Google Scholar] [CrossRef]
  38. Chen, Z.; Liu, A.; Li, Z.; Choi, Y.J.; Sekiya, H.; Li, J. Energy-efficient broadcasting scheme for smart industrial wireless sensor networks. Mob. Inform. Syst. 2017. [Google Scholar] [CrossRef]
  39. Wu, J.; Ota, K.; Dong, M.; Li, J.; Wang, H. Big data analysis based security situational awareness for smart grid. IEEE Trans. Big Data 2016, 99, 1–11. [Google Scholar] [CrossRef]
  40. Hu, Y.; Dong, M.; Ota, K.; Liu, A.; Guo, M. Mobile target detection in wireless sensor networks with adjustable sensing frequency. IEEE Syst. J. 2016, 10, 1160–1171. [Google Scholar] [CrossRef]
  41. Chen, Z.; Liu, A.; Li, Z.; Choi, Y.J.; Li, J. Distributed Duty cycle control for delay improvement in wireless sensor networks. Peer-to-Peer Netw. Appl. 2017, 10, 559–578. [Google Scholar] [CrossRef]
  42. Kato, N.; Fadlullah, Z.M.; Mao, B.; Tang, F.; Akashi, O.; Inoue, T.; Mizutani, K. The Deep Learning Vision for Heterogeneous Network Traffic Control: Proposal, Challenges, and Future Perspective. IEEE Wirel. Commun. 2017, 24, 146–153. [Google Scholar] [CrossRef]
Figure 1. The relationship between the conventional single dictionary based L p thresholding method and the proposed SAITA L p method.
Figure 2. (a) The cropped Cameraman image of size N = 256 × 256; (b) the selected image portion of size N = 96 × 104.
Figure 3. The RSNR of the proposed SAITA-L_p algorithm for α ∈ (0, 2p) and p = 1/2. (a) The RSNR for α ∈ {1/6, 1/4, 1/3, 5/12, 1/2}; (b) the RSNR for α ∈ {1/2, 7/12, 2/3, 3/4, 5/6}.
Figure 4. The RSNR of the proposed SAITA-L_p algorithm for α ∈ (0, 2p) and p = 2/3. (a) The RSNR versus sampling ratio for α ∈ {1/6, 1/3, 1/2, 2/3, 5/6}; (b) the RSNR versus sampling ratio for α ∈ {1, 13/12, 7/6, 5/4, 4/3}.
Figure 5. The RSNR performance of the proposed SAITA-L_p algorithm and the L_p algorithm with mSNR ∈ {25, 30, 35} dB. (a) p = 1/2; (b) p = 2/3.
Figure 6. The RSNR versus mSNR of the proposed SAITA-L_p (p ∈ {1/2, 2/3}) algorithms and the L_p (p ∈ {1/2, 2/3}) algorithms for three low sampling ratios M/N ∈ {0.15, 0.20, 0.25} of the Cameraman image. (a) The case of p = 1/2; (b) the case of p = 2/3.
Figure 7. The relative error versus the iteration number of the proposed SAITA-L_p algorithm and the L_p algorithm for the Cameraman image. (a) The lower sampling ratio M/N = 0.2; (b) the higher sampling ratio M/N = 0.5.
Figure 8. The recovery effects of the proposed SAITA- L p ( p { 1 / 2 ,   2 / 3 } ) and the conventional L p ( p { 1 / 2 ,   2 / 3 } ) algorithms for the M / N = 0.2 and m S N R = 40   dB cameraman image. (a) L 1 / 2 , RSNR = 15.7316   dB ; (b) SAITA- L p algorithm ( p = 1 / 2 ), RSNR = 20.5714   dB ; (c) L 2 / 3 , RSNR = 16.3098   dB ; and (d) SAITA- L p algorithm ( p = 2 / 3 ), RSNR = 20.1259   dB .
Figure 9. The RSNR of the SAITA- L p ( p { 1 / 2 ,   2 / 3 } ) algorithms vs. the sampling ratio M / N for the mSNR = 40   dB cameraman image.
Figure 10. The recovery effects of the proposed SAITA- L p ( p { 1 / 2 ,   2 / 3 } ) and the corresponding L p ( p { 1 / 2 ,   2 / 3 } ) algorithm for the M / N = 0.140 , mSNR = 40   dB Shepp-Logan image. (a) L 1 / 2 , RSNR = 3.5436   dB ; (b) SAITA- L p algorithm ( p = 1 / 2 ), RSNR = 43.7016   dB ; (c) L 2 / 3 , RSNR = 27.2450   dB ; and (d) SAITA- L p algorithm ( p = 2 / 3 ), RSNR = 44.9549   dB .
Figure 11. The RSNR of the proposed SAITA- L p   ( p { 1 / 2 ,   2 / 3 } ) algorithm and the conventional L p   ( p { 1 / 2 ,   2 / 3 } ) algorithm vs. the sampling ratio M / N for the mSNR = 40   dB Shepp-Logan image.
Figure 12. The recovery effects of the proposed SAITA-L_p (p ∈ {1/2, 2/3}) and the corresponding L_{1/2} and L_{2/3} algorithms for the M/N = 0.2, mSNR = 40 dB 2D MRI image. (a) L_{1/2} algorithm, RSNR = 3.6346 dB; (b) SAITA-L_{1/2} algorithm, RSNR = 21.0189 dB; (c) L_{2/3} algorithm, RSNR = 7.7718 dB; (d) SAITA-L_{2/3} algorithm, RSNR = 23.1872 dB.
Figure 13. The RSNR of the proposed SAITA-L_p (p ∈ {1/2, 2/3}) algorithms and the L_p (p ∈ {1/2, 2/3}) algorithms versus the sampling ratio for the mSNR = 40 dB 2D MRI image.

