Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

L1/2 and L2/3 are two typical non-convex Lp (0<p<1) regularizations, which can be employed to obtain sparser solutions than L1 regularization. Recently, the multiple-state sparse transformation strategy, combined with iteratively reweighted algorithms, has been developed to exploit sparsity in L1 regularization for sparse signal recovery. To further exploit the sparse structure of signals and images, this paper adopts a multiple-dictionary sparse transform strategy for the two typical cases p∈{1/2, 2/3} based on an iterative Lp thresholding algorithm and proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results show that the proposed SAITA not only performs better than the corresponding L1 algorithms but also achieves better recovery performance and faster convergence than the conventional single-dictionary sparse transform-based Lp case. Finally, we apply SAITA to sparse image recovery and obtain good results in comparison with related work.


Introduction
Compressed sensing (CS) [1,2] and sparse representation [3,4] have been widely used in the fields of wireless communications [5][6][7] and image processing [8][9][10]. CS implies that it is possible to reconstruct a sparse signal/image from incomplete data if some prior knowledge and reconstruction constraints are satisfied. Mathematically, the unconstrained L 0 minimization is the optimal model to obtain the sparsest solution x̂ l 0 : where ‖x‖ 0 denotes the zero-norm function counting the number of nonzero elements in x; Φ ∈ R M×N denotes the down-sampling measurement matrix; y and x represent the observed vector and the unknown sparse image, respectively; λ is the regularization parameter balancing the fidelity of the image against its sparsity; and γ > 0 is a small positive constant, e.g., γ = 1/2. Unfortunately, problem (1) is NP-hard, and thus it is difficult to solve efficiently. When the matrix Φ satisfies some necessary conditions [11], an alternative convex relaxation is developed using the L 1 regularization method as: where ‖x‖ 1 = ∑ n i=1 |x i | denotes the L 1 -norm. The NP-hard problem (1) is thereby converted into problem (2), a typical convex optimization problem that can be solved efficiently, for example with the alternating direction method of multipliers (ADMM) [12,13], the fast iterative shrinkage-thresholding algorithm (FISTA) [14], Nesterov's algorithm (NESTA) [15], or approximate message passing (AMP) [16]. However, L 1 regularization can only obtain a suboptimal solution and usually requires many more measurements. Theoretical analysis of CS implies that better performance can be obtained by taking advantage of sparser information in many systems, especially in the presence of strong noise interference.

The Non-Convex Penalties
Many state-of-the-art algorithms have been proposed to improve the performance of the L 1 regularization algorithms. The non-convex penalty regularization algorithms are among the most effective algorithms for sparse recovery problems. Research shows that the non-convex penalty-based optimization methods can more closely approximate the sparsest solution than the L 1 -norm penalty in problem (2), requiring a weaker incoherence condition and fewer measurements [17]. Many non-convex functions have been proposed as relaxations of the L 0 -norm penalty, such as the smoothly clipped absolute deviation penalty (SCAD) [18], the L p (0 < p < 1) norm penalty [17] and the minimax concave penalty (MCP) [19]. By replacing the L 1 -norm with the L p -norm, the non-convex L p -norm regularization optimization method is described as: where ‖x‖ p p = ∑ n i=1 |x i | p . Unfortunately, when 0 < p < 1, problem (3) becomes a non-convex, non-smooth, and non-Lipschitz optimization problem. Thus, the L p -norm optimization is always difficult to address efficiently.
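As a quick numerical illustration (a toy example of ours, not from the paper) of why the L p penalty with p < 1 promotes sparsity more strongly than the L 1 norm, compare the penalty values for a sparse vector and a dense vector with the same L 1 norm:

```python
import numpy as np

def lp_penalty(x, p):
    """Compute ||x||_p^p = sum_i |x_i|^p (not a true norm for p < 1)."""
    return float(np.sum(np.abs(x) ** p))

# Two vectors with identical L1 norm: one sparse, one dense.
x_sparse = np.array([4.0, 0.0, 0.0, 0.0])
x_dense = np.array([1.0, 1.0, 1.0, 1.0])

print(lp_penalty(x_sparse, 1.0), lp_penalty(x_dense, 1.0))  # 4.0 4.0
print(lp_penalty(x_sparse, 0.5), lp_penalty(x_dense, 0.5))  # 2.0 4.0
```

The L 1 penalty cannot distinguish the two vectors, whereas the L 1/2 penalty clearly favors the sparse one, which is the intuition behind the non-convex relaxations above.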

The Iterative Thresholding Algorithm of L p Regularization
There are two main classes of algorithms for the non-convex L p -norm optimization problem. One is the iteratively reweighted algorithm [20], and the other is the iterative thresholding algorithm (ITA). As one of the most effective and efficient methods, the ITA has been employed for many sparse recovery optimization problems due to its low computational complexity, including iterative hard thresholding for L 0 regularization [21], iterative soft thresholding for L 1 regularization [22] and iterative L p thresholding for L p regularization [23]. L 1/2 and L 2/3 regularizations are two special and important cases, not only because their solutions can be expressed in closed form, but also because of their importance in sparse modeling. Recent studies show that the L 1/2 regularizer can be taken as the most representative L p regularizer [24] and that L 2/3 regularization is more effective in image deconvolution problems [25]. Hence, in this paper, we focus on the L 1/2 and L 2/3 regularizations, described in Equation (4) as: x̂ l 2/3 = arg min x { γ‖y − Φx‖ 2 2 + λ‖x‖ 2/3 2/3 }, and analogously for p = 1/2.
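The thresholding behavior behind these closed-form solutions can be checked numerically: for the scalar problem min_z (z − u)² + λ|z|^(1/2), a brute-force grid search (an illustrative stand-in of ours, not the closed-form operator itself) shows that the minimizer is zero for small |u| and jumps to a nonzero value once |u| crosses a threshold:

```python
import numpy as np

def scalar_l_half_prox(u, lam, grid=None):
    """Brute-force minimizer of f(z) = (z - u)^2 + lam * |z|^(1/2) over a grid."""
    if grid is None:
        grid = np.linspace(-5.0, 5.0, 20001)
    f = (grid - u) ** 2 + lam * np.sqrt(np.abs(grid))
    return grid[np.argmin(f)]

lam = 1.0
# Small input: the penalty dominates, so the minimizer is zero (up to grid resolution).
z_small = scalar_l_half_prox(0.3, lam)
# Large input: the data term dominates, so the minimizer jumps to a nonzero value.
z_large = scalar_l_half_prox(3.0, lam)
print(z_small, z_large)
```

This jump from zero to a clearly nonzero value is exactly the (discontinuous) thresholding behavior that the closed-form L 1/2 and L 2/3 operators express analytically.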

New Multiple-State Sparse Transform Based L 1 Regularization Algorithm
Recently, some new multiple-state sparse transform based algorithms were proposed to exploit more a priori knowledge of the signal/image by employing new sparsifying transform strategies. A shearlet-based multiple-level sparse representation algorithmic framework was proposed in [26,27] for unconstrained L 1 regularization by adaptively incorporating an iteratively reweighted shrinkage step. To enhance sparsity, the algorithm in [27] is specifically adapted to the sparse structure of the multiscale coefficients based on the ADMM [12,13]. Similarly, considering the fact that the sparsity of a given signal/image changes under different sparsifying transform dictionaries, a sparsity-induced composite regularization algorithm was proposed for the unconstrained L 1 regularization problem (Co-L1) [28]. The novel Co-L1 method is described as: where the composite regularizer is a linear combination of multiple dictionary-based regularizers. The algorithm in [28] can significantly improve the image reconstruction performance over the fast iterative shrinkage-thresholding algorithm (FISTA) [29] by iteratively and adaptively weighting the composite regularizer. We refer to these newly emerged sparsifying transforms as "multiple-state" transforms. The common property of these methods is that they exploit the prior information from the multiple-state sparsifying transform to improve the conditioning of the sparse recovery problem.
In this paper, benefiting from the improvements of the L p regularization algorithms [24,25,30] and motivated by recent advances in iteratively reweighted algorithms, we propose a new iteratively-weighted algorithmic framework for L p , p ∈ {1/2, 2/3}, norm minimization using the multiple-state sparsifying transform, i.e., the multiple sub-dictionary sparse representation [28]. The contributions of this paper are summarized as follows. (1) Based on the multiple sub-dictionary sparse representation, we develop a new iteratively-weighted L p (p ∈ {1/2, 2/3}) thresholding algorithm, called SAITA-L p (p ∈ {1/2, 2/3}). (2) In comparison with the existing iteratively-reweighted parameter scheme, we propose an updating regularization parameter for weighting each sub-dictionary. (3) L 1/2 and L 2/3 regularizations are special and important in sparse modeling, particularly in sparse recovery, yet related studies are rare; this paper therefore also extends their applications to sparse image recovery and magnetic resonance imaging (MRI) and obtains good results.
The organization of the rest of the paper is as follows: in Section 2, we propose the multiple sub-dictionary L p regularization used in the SAITA-L p algorithm, including the multiple sub-dictionary sparsifying transforms and the iterative reweighting scheme for the SAITA-L p regularizer. Then, in Section 3, we develop the new iteratively-weighted L p -norm minimization thresholding algorithm SAITA-L p . To validate the proposed algorithm, we conduct simulations on image restoration in Section 4. In Section 5, we further validate the proposed algorithm using three applications. Finally, conclusions are given in Section 6.

The Proposed Multiple Sub-Dictionary-Based L p Regularization
The multiple dictionary sparsifying transform method for the L 1 regularization optimization problem was proposed in [28], which extends the well-known Lasso problem into a composite regularization problem. Motivated by the composite regularization method for the L 1 -norm, this paper employs the multiple sub-dictionary method for the L p regularization problem. Suppose x ∈ R N×1 is the non-sparse, raw signal. We can obtain the sparse coefficients Ψx through an analysis dictionary Ψ ∈ R N 1 ×N . The shearlet transform [31] and the wavelet transform [32] are two typical sparsifying transforms. We choose the wavelet transform because of its effectiveness in compressing natural images. The main steps to design the multiple sub-dictionary sparsifying transform based L p regularization method are as follows. First, we construct a (DN) × N over-complete sparsifying transform matrix Ψ by: and: where D denotes the number of sub-dictionaries Ψ d in the over-complete sparsifying transform, such as the "dbN" wavelet basis [33], and N d represents the number of rows in Ψ d . From Equation (8) we can see that each Ψ d is composed of a set of rows from the (DN) × N over-complete sparsifying transform matrix Ψ, and: By splitting the matrix Ψ into several sub-dictionaries Ψ d , we convert the sparsifying transform Ψx into a composition of several Ψ d x, d = 1, 2, · · · , D with different sparse structures. Intuitively, we can utilize these differences to improve the recovery performance in sparse recovery problems. In this paper, we choose N 1 = N 2 = · · · = N d = N, so Ψ d ∈ R N×N , which are orthogonal matrices.
Then, a new multiple non-convex L p regularization method is proposed: where R SAITA is a linearly weighted combination of multiple sub-dictionary based L p regularizers ‖Ψ d x‖ p p , and λ d,p , d = 1, 2, · · · , D denotes the iteratively weighted regularization parameter. Hence, the contribution of each sub-dictionary is controlled adaptively and iteratively through the weighting parameter λ d,p , and the regularizer ‖Ψ d x‖ p p varies across the sub-dictionary index d. Intuitively, the variation of each sub-dictionary based regularizer is best weighted by the parameter λ d,p to improve the sparse recovery.
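As a minimal sketch of this construction, the code below stacks D = 2 orthonormal sub-dictionaries (the identity basis and a normalized Haar basis, chosen here for simplicity in place of the paper's "dbN" wavelet sub-dictionaries) into an over-complete analysis operator and evaluates the weighted composite regularizer R_SAITA(x) = Σ_d λ_d ‖Ψ_d x‖_p^p; the specific bases and weights are illustrative assumptions:

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar transform matrix of size n x n (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, np.array([1.0, 1.0]))                # averaging rows
    bot = np.kron(np.eye(n // 2), np.array([1.0, -1.0]))  # differencing rows
    return np.vstack([top, bot]) / np.sqrt(2.0)

N = 8
sub_dicts = [np.eye(N), haar_matrix(N)]  # D = 2 orthonormal sub-dictionaries
Psi = np.vstack(sub_dicts)               # over-complete (DN) x N analysis operator

def r_saita(x, weights, p):
    """Weighted composite regularizer: sum_d lambda_d * ||Psi_d x||_p^p."""
    return sum(w * np.sum(np.abs(Pd @ x) ** p)
               for w, Pd in zip(weights, sub_dicts))

x = np.array([1.0, 1.0, 1.0, 1.0, 0.0, 0.0, 0.0, 0.0])  # piecewise-constant signal
print(r_saita(x, weights=[1.0, 1.0], p=0.5))
```

For this piecewise-constant x the Haar sub-band Ψ_2 x is sparser than x itself, illustrating why different sub-dictionaries contribute differently and why an adaptive weighting of each regularizer is attractive.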

The Proposed SAITA-L p Algorithm
The major disadvantage of L p (0 < p < 1) minimization is that it is non-convex, making it difficult to solve efficiently. In this section, we first introduce the iterative thresholding representation theory for the conventional L p (p ∈ {1/2, 2/3}) algorithm according to the existing series of algorithms in [25,34]. Then, we derive the SAITA-L p algorithm combined with the proposed weighting scheme λ d,p .

The Relationship of the SAITA-L p and Conventional L p Methods
Considering the multiple sub-dictionary L p (p ∈ {1/2, 2/3}) regularization problem in Equation (10), when γ = 1, we have: where f d (x) denotes the objective function. Correspondingly, the conventional single dictionary Ψ ∈ R N×N based analysis L p -norm minimization problem is as follows: The proposed SAITA-L p (p ∈ {1/2, 2/3}) method of (12) and the conventional method (13) are nearly identical; the major difference is how the contribution of the L p -norm is weighted by the regularization parameter [28]. Compared with the conventional method, the SAITA method can exploit more prior knowledge of the sparse signal/image for reconstruction. Figure 1 depicts the relationship between the two methods. In case (A), the number of sub-dictionaries D is reduced to 1, and the multiple dictionaries Ψ d reduce to a single Ψ; the proposed SAITA-L p algorithm then reverts to the conventional single dictionary L p method [24,25]. In case (B), as the number D increases, the conventional single dictionary L p method converts to the proposed SAITA-L p method.


The Thresholding Representation Theory for SAITA-L p
According to the relationship between the proposed SAITA-L p method and the conventional L p method shown in Figure 1, we first consider the conventional single dictionary analysis L p problem (13). The first-order optimality condition for x is described as: in which the operator ∇(·) denotes the gradient. Letting ∇ f (x) = 0, we have: Multiplying by a parameter τ to control the step size and adding Ψx to both sides of Equation (15): Then, we can immediately obtain: That is: To this end, when p ∈ {1/2, 2/3}, the resolvent operator [24,25,30] is denoted as: where λ and τ are the regularization parameter and the step-tuning parameter, respectively. Then: where τ > 0 (e.g., τ = 0.99 ) controls the step size in each iteration.
From Equation (20), we immediately obtain: in which: where x n represents the n-th iterative solution. The resolvent operator H λ,p (·) is defined as: where h λ,p (x i ) is the nonlinear thresholding function defined as follows. When p = 1/2, T 1/2 = (3·∛2/4)(λ d,1/2 τ) 2/3 is the threshold value [24]; when p = 2/3, the related threshold value T 2/3 is given in [25]. Here sgn(·) denotes the sign function.
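A self-contained sketch of the resulting single-dictionary iterative L 1/2 thresholding iteration is given below (with Ψ = I for simplicity); the half-thresholding formula follows the literature cited as [24], and the problem sizes, λ, and τ are illustrative choices of ours, not values from the paper:

```python
import numpy as np

def half_threshold(u, lam_tau):
    """Closed-form L1/2 thresholding operator (after the work cited as [24])."""
    t = (3.0 * 2.0 ** (1.0 / 3.0) / 4.0) * lam_tau ** (2.0 / 3.0)  # threshold value
    out = np.zeros_like(u)
    big = np.abs(u) > t
    phi = np.arccos((lam_tau / 8.0) * (np.abs(u[big]) / 3.0) ** (-1.5))
    out[big] = (2.0 / 3.0) * u[big] * (1.0 + np.cos(2.0 * np.pi / 3.0 - 2.0 * phi / 3.0))
    return out

rng = np.random.default_rng(0)
M, N, K = 64, 128, 5
Phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
x_true = np.zeros(N)
x_true[rng.choice(N, K, replace=False)] = 3.0 * rng.standard_normal(K)
y = Phi @ x_true                                 # noiseless measurements

tau = 0.99 / np.linalg.norm(Phi, 2) ** 2         # step size below 1/||Phi||^2
lam = 0.05
x = np.zeros(N)
for _ in range(300):
    # forward (gradient) step, then backward (thresholding) step
    x = half_threshold(x + tau * Phi.T @ (y - Phi @ x), lam * tau)

print(np.linalg.norm(y - Phi @ x) / np.linalg.norm(y), np.count_nonzero(x))
```

The loop is exactly the forward-backward structure of Equations (20)-(22): a gradient step toward the data, followed by element-wise application of the resolvent operator.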

The Proposed SAITA-L p Algorithm
As mentioned above, the iteratively weighted parameter plays a key role in the optimization process for the sparse recovery problem. For the proposed SAITA-L p method, the iteratively weighted parameter λ d,p plays two main roles. One is controlling the tradeoff between the fidelity of the quadratic term and the prior knowledge in the regularizer; the other is controlling the contribution of each regularizer. Unfortunately, setting an ideal parameter is not straightforward. In [28], an iteratively updated parameter was derived by applying a majorization-minimization algorithm, as shown in Equation (28), where N d controls the sub-dictionary size and ε > 0 is a small constant that prevents the denominator from being zero. From Equation (28) we can obtain some useful guidance for setting a proper regularization parameter. Firstly, the role of the denominator in Equation (28) is to counterweigh each sub-dictionary based regularizer; secondly, the numerator N d controls the size of each sub-dictionary. Inspired by these insights, in this paper we design the iteratively weighted parameter λ d,p as: where N d controls the sub-dictionary size as shown in [28], and α ∈ (0, 2) is a small tuning constant determined from the following experimental results. The parameter λ t d,p then adaptively weights each L p -norm regularizer.
The following are the analytical justifications. (i) When the signal x is sparser under one dictionary Ψ d than under the others, the value of the regularizer ‖Ψ d x‖ p p is smaller. Hence, that dictionary Ψ d is more appropriate; moreover, a smaller regularizer ‖Ψ d x‖ p p is beneficial to minimizing the objective function. Thus, the weight of that regularizer should be enhanced. (ii) When the signal x is not sparse enough under another dictionary Ψ d , i.e., that dictionary is not ideal, the value of ‖Ψ d x‖ p p will be larger. A larger ‖Ψ d x‖ p p is not helpful to minimizing the objective function; thus the weight of ‖Ψ d x‖ p p should be smaller to counterweigh the regularizer. For the main comparison, the conventional single-dictionary based L p , p ∈ {1/2, 2/3} regularization method in problem (13) will be considered, whose tradeoff parameter λ p is a fixed constant, shown as: From Equation (30), we see that the conventional single-dictionary based parameter λ p is constant and does not vary across iterations.
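A minimal sketch of such an adaptive weighting rule follows, assuming the form λ_d = N_d^α / (‖Ψ_d x‖_p^p + ε) suggested by the structure of Equation (28); the exact form of the paper's update may differ, so this is an illustrative assumption, not the paper's equation:

```python
import numpy as np

def update_weights(x, sub_dicts, p, alpha=0.25, eps=1e-8):
    """Adaptive weights: sparser sub-bands (small ||Psi_d x||_p^p) get larger weights.

    Assumed form lambda_d = N_d**alpha / (||Psi_d x||_p^p + eps); illustrative only.
    """
    return [Pd.shape[0] ** alpha / (np.sum(np.abs(Pd @ x) ** p) + eps)
            for Pd in sub_dicts]

N = 8
x = np.array([2.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])  # sparse in the identity basis
identity_basis = np.eye(N)
dense_operator = np.full((N, N), 1.0 / np.sqrt(N))  # x is dense under this analysis
weights = update_weights(x, [identity_basis, dense_operator], p=0.5)
print(weights)
```

As the justification above predicts, the sub-dictionary under which x is sparse receives the larger weight, while the poorly matched one is down-weighted to counterweigh its large regularizer value.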
Moreover, we employ the forward-backward linear acceleration strategy of [14] to speed up the convergence of the proposed algorithm: and: The proposed iteratively-weighted SAITA-L p algorithm is summarized in Algorithm 1.
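A minimal sketch of this forward-backward linear acceleration, assuming the standard FISTA momentum rule from [14] (the helper names and the toy problem are ours):

```python
import numpy as np

def accelerated_iterations(grad_step, prox, x0, n_iter=50):
    """FISTA-style acceleration: t_{k+1} = (1 + sqrt(1 + 4 t_k^2)) / 2,
    z_{k+1} = x_{k+1} + ((t_k - 1) / t_{k+1}) * (x_{k+1} - x_k)."""
    x_prev = x0
    z = x0
    t = 1.0
    for _ in range(n_iter):
        x = prox(grad_step(z))                       # forward-backward step
        t_next = (1.0 + np.sqrt(1.0 + 4.0 * t * t)) / 2.0
        z = x + ((t - 1.0) / t_next) * (x - x_prev)  # linear extrapolation
        x_prev, t = x, t_next
    return x_prev

# Toy example: minimize 0.5||x - b||^2 + small L1 term via a soft-threshold prox
# (an illustrative stand-in for the L_p thresholding step of Algorithm 1).
b = np.array([3.0, -2.0, 0.1])
grad_step = lambda z: z - 1.0 * (z - b)  # exact gradient step for f = 0.5||x - b||^2
prox = lambda u: np.sign(u) * np.maximum(np.abs(u) - 0.05, 0.0)
x_hat = accelerated_iterations(grad_step, prox, np.zeros(3))
print(x_hat)
```

In SAITA-L p , the `prox` slot would be the sub-dictionary thresholding step, so the momentum update wraps around the same forward-backward core.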

Performance Analysis and Discussion
We first conduct some experiments to determine the value of α and verify the performance of the proposed λ t d,p ; then, we evaluate the superiority of the proposed SAITA algorithm compared with the conventional single dictionary analysis L p iterative thresholding algorithms [24,25] and Co-L1 [28]. All the experiments in this paper were conducted on a personal computer (2.21 GHz, 16 GB RAM) using the MATLAB (R2014a) platform.
Assuming the clean image x, we construct a measurement matrix Φ using the "Spread Spectrum" operator [35], so the measurement is: y = Φx + n, where n denotes the additive noise. Since the wavelet is known to compress natural images very efficiently, we choose the wavelet transform as the sparsifying transform operator. Then, we construct the sparsifying transform matrix Ψ ∈ R 8N×N by concatenating the undecimated 'db1' and 'db2' wavelet transforms with 2 levels of decomposition. Thus, we obtain the sub-dictionaries Ψ d ∈ R N×N , d = 1, 2, · · · , 8. The measurement SNR is adopted to measure the noise level, defined as mSNR = ‖y‖ 2 2 /(Mσ 2 ), where M and σ 2 denote the length of y and the variance of the white Gaussian noise, respectively. The higher the value of mSNR, the weaker the noise level. We evaluate the performance by the popular recovery SNR, RSNR = −20 log(‖x̂ − x‖ 2 /‖x‖ 2 ), where x̂ denotes the estimated sparse image. The higher the value of RSNR, the better the performance. We conduct the experiments using the well-known "Cameraman" image with mSNR = 40 dB, which is shown in Figure 2a.

Algorithm 1. The proposed SAITA-L p algorithm.
While not converged do
Step 1: Compute B(z t ) = Ψ(z t + τΦ'(y − Φz t )) in Equation (22);
Step 2: Compute x t+1 = (Ψ') H λ,p (B(z t )) in Equation (21);
Step 3: Update the value of t k using t k+1 = (1 + √(1 + 4t k 2 ))/2 in Equation (31);
Step 4: Update the solution z t+1 = x t+1 + ((t k − 1)/t k+1 )(x t+1 − x t ) in Equation (32);
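The mSNR and RSNR measures used throughout the experiments can be sketched directly from their definitions (the helper function names are ours):

```python
import numpy as np

def msnr_db(y, sigma2):
    """Measurement SNR in dB: mSNR = ||y||_2^2 / (M * sigma^2)."""
    M = y.size
    return 10.0 * np.log10(np.dot(y, y) / (M * sigma2))

def rsnr_db(x_true, x_hat):
    """Recovery SNR in dB: RSNR = -20 log10(||x_hat - x||_2 / ||x||_2)."""
    return -20.0 * np.log10(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))

x = np.array([1.0, 0.0, 0.0])
x_hat = np.array([1.1, 0.0, 0.0])  # 10% relative error
print(rsnr_db(x, x_hat))           # ≈ 20 dB; each factor-of-10 error reduction adds 20 dB
```

A 10% relative recovery error thus corresponds to roughly 20 dB RSNR, which helps calibrate the RSNR curves reported in Figures 5, 6, 9, and 11.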


The Value Range of α in λ d,p
In Section 4.1, we first determine the value range for α in λ d,p by evaluating the performance with different values of α. The results are shown in Figures 3 and 4. From the results, we find that when α ∈ (0, p), the proposed algorithms perform well and enjoy strong robustness. As α increases beyond this range, the performance deteriorates rapidly. Therefore, we estimate that α should lie in (0, p) to obtain the adaptive weighting. We specially set:


The Recovery SNR Performances Versus Sampling Ratio
In Section 4.2, we evaluate the robustness of the proposed algorithm by considering the RSNR versus the sampling ratio. Specifically, we set three mSNR levels: 25 dB, 30 dB and 35 dB. Figure 5 depicts the RSNR versus the sampling ratio. Based on the results, the proposed SAITA-L p (p ∈ {1/2, 2/3}) algorithm performs better than the conventional single dictionary based algorithm, and its robustness is good.


The Recovery SNR Performance Versus Measurement SNR
For our third experiment, we investigate the influence of different noise levels on the proposed algorithm, comparing it with the single dictionary L p algorithm and Co-L1 [28]. Similarly, we consider three sampling ratio levels, M/N ∈ {0.15, 0.20, 0.25}. We evaluate the performance by the RSNR versus lower mSNR values (20 dB~40 dB) of the image, and the results are presented in Figure 6.

From the results, we find that the proposed SAITA-L p (p ∈ {1/2, 2/3}) algorithm obtains a higher recovery SNR and has better robustness and fidelity than Co-L1. The robustness and fidelity of the corresponding single dictionary L p (p ∈ {1/2, 2/3}) algorithm deteriorate with the increase of the measurement SNR.

The Relative Error Performances Versus the Number of Iterations
We study the convergence and the reconstruction error via the relative error versus the number of iterations. Choosing the relative error as the second quality measure, the formula is ‖x n − x‖ 2 /‖x‖ 2 . Considering the proposed SAITA-L p (p ∈ {1/2, 2/3}) and the corresponding single dictionary L p (p ∈ {1/2, 2/3}) algorithms, from the results shown in Figure 7, when the sampling ratio is 0.2 (shown in (a)), the relative errors of the proposed SAITA-L p algorithm are significantly smaller, and it converges faster than the corresponding L p algorithm. When the sampling ratio is 0.5 (shown in (b)), although the final relative errors are close, the convergence of the proposed algorithm is faster (approximately 15 and 7 iterations, respectively). Compared with Co-L1 [28], our proposed SAITA-L p algorithm obtains a markedly lower relative error for sampling ratios M/N ∈ {0.2, 0.5}. In addition, one can observe that the relative error for p = 2/3 is slightly smaller, while its convergence rate is faster than that for p = 1/2.
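The relative-error bookkeeping behind such convergence curves can be sketched for any iterative scheme; below, a plain Landweber (gradient) iteration on a toy least-squares problem stands in for the recovery algorithms, purely to illustrate recording ‖x^n − x‖₂/‖x‖₂ at each iteration:

```python
import numpy as np

rng = np.random.default_rng(1)
M, N = 40, 20
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
x_true = rng.standard_normal(N)
y = Phi @ x_true

tau = 0.99 / np.linalg.norm(Phi, 2) ** 2  # step size below 1/||Phi||^2
x = np.zeros(N)
rel_err = []
for _ in range(100):
    x = x + tau * Phi.T @ (y - Phi @ x)   # Landweber (gradient) step
    rel_err.append(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))

print(rel_err[0], rel_err[-1])            # the curve decreases toward zero
```

Plotting `rel_err` against the iteration index reproduces exactly the kind of curve shown in Figure 7, and the same bookkeeping applies unchanged to SAITA-L p.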



Practical Experiments
The proposed algorithms can be applied in many practical applications [36][37][38][39][40][41][42]. In this section, we conduct some typical applications in sparse image recovery and medical imaging to extend the applications of L 1/2 and L 2/3 regularizations and illustrate the excellent robustness and adaptability of the proposed SAITA-L p (p ∈ {1/2, 2/3}) algorithm. The conventional single dictionary analysis L p iterative thresholding algorithms [24,25] and Co-L1 [28] serve as the baseline algorithms for comparison.

Application 1: Image Sparse Restoration
In the first application, the proposed SAITA-L p algorithm is applied to restoring the "Cameraman" image shown in Figure 2 versus the sampling ratio M/N. We use the reduced N = 96 × 104 cameraman image as the target image. Figure 8a,c show the recovery results using the conventional single dictionary algorithms of L 1 and L p (p ∈ {1/2, 2/3}), respectively. Figure 8b,d show the recovered images using the corresponding multiple sub-dictionary algorithms of L 1 and L p (p ∈ {1/2, 2/3}), respectively. The experiments show that all the algorithms can recover the images, and the proposed multiple sub-dictionary algorithms significantly outperform the conventional single-dictionary algorithms. In Figure 9, we depict the RSNR of the four algorithms versus different sampling ratios. For both p = 1/2 and p = 2/3, it can be observed that the proposed SAITA algorithm obtains a larger RSNR compared with the conventional L p algorithms. There is no obvious difference between the two cases p = 1/2 and p = 2/3, with the SAITA-L 1/2 algorithm outperforming the SAITA-L 2/3 algorithm by a slight margin.


Application 2: Medical Imaging
In Application 2, we extend our proposed algorithm to typical medical reconstruction problems. We first consider the well-known Shepp-Logan phantom, and then we reconstruct a high-quality dMRI cardiac cine [8,28].

Test 1 for the Shepp-Logan Model
In Test 1, we first consider the well-known Shepp-Logan phantom of N = 96 × 96 with an mSNR = 40 dB. We construct the compressed noisy measurement signal y by utilizing the "Spread Spectrum" operator as the measurement matrix Φ [35], and we conduct a sparsifying transform with the constructed operator Ψ ∈ R (4N)×N ('db3', N = 1).
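The noisy measurement setup can be sketched as follows. For illustration, a random Gaussian matrix stands in for the spread-spectrum operator of [35] (an assumption), and the measurement SNR is taken to mean 10·log10 of the ratio of clean-measurement energy to noise energy:

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_measurement(x, m, msnr_db):
    """Compress x with a random Gaussian Phi (illustrative stand-in for the
    spread-spectrum operator) and add white noise at the requested mSNR."""
    n = x.size
    Phi = rng.standard_normal((m, n)) / np.sqrt(m)
    y_clean = Phi @ x
    noise = rng.standard_normal(m)
    # Scale the noise so that 20*log10(||y_clean|| / ||noise||) == msnr_db.
    noise *= np.linalg.norm(y_clean) / (np.linalg.norm(noise) * 10 ** (msnr_db / 20))
    return Phi, y_clean + noise
```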

From the experimental results shown in Figure 10, we find that the proposed SAITA-Lp algorithm can recover the images perfectly, as shown in Figure 10b,d, while the conventional single-dictionary algorithms fail to recover the image, as shown in Figure 10a,c. The proposed SAITA-Lp algorithm with p = 2/3 obtains the best effect compared with the other algorithms, and the next best is the SAITA-Lp algorithm with p = 1/2. In Figure 11, we depict the RSNR of the respective algorithms versus different sampling ratios M/N. For both p = 1/2 and p = 2/3, it can be observed that the proposed SAITA-Lp algorithms obtain a significantly larger RSNR than the Lp algorithms when M/N ∈ (0.1, 0.2).

Test 2 for Real-World Data (2D MRI)
MRI is a typical medical inverse problem that can be solved well by CS [8]. In this experiment, we apply the proposed algorithm to real-world medical data. We investigate a simplified "dynamic MRI" problem [8], use a high-quality MRI cardiac cine as the ground truth, and select a spatio-temporal slice of 144 × 48 [28]. We construct the sparse matrix Ψ ∈ R 3N×N with a vertical concatenation of the 'db1' and 'db2' orthogonal discrete wavelet bases with two levels of decomposition ('db1', 'db2', and N = 2). Figure 12 presents the reconstructed MRI images using the SAITA-Lp (p ∈ {1/2, 2/3}) and Lp (p ∈ {1/2, 2/3}) algorithms. From the experimental results, we can see that the proposed SAITA algorithm reconstructs the images perfectly, as shown in Figure 12b,d, while the conventional algorithms fail to restore the image in Figure 12a,c. The proposed multiple sub-dictionary algorithm with p = 2/3 obtains the best effect, with a recovery SNR of 23.1872 dB; next is the proposed algorithm with p = 1/2, with a recovery SNR of 21.0189 dB. In Figure 13, we depict the RSNR of the four algorithms versus different sampling ratios M/N. For both p = 1/2 and p = 2/3, it can be observed that the proposed SAITA-Lp algorithms obtain a larger RSNR than the conventional single-dictionary Lp algorithms. The results also show that the algorithms with p = 2/3 outperform those with p = 1/2; that is to say, the L2/3 regularization can exploit more prior knowledge than the L1/2 regularization in MRI.
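The dictionary construction used here has a convenient algebraic property: vertically concatenating K orthonormal sub-dictionaries yields a tight frame with ΨᵀΨ = K·I, so analysis and synthesis stay cheap. A minimal numpy sketch, where a random orthonormal basis stands in for the 'db1'/'db2' wavelet bases (an illustrative assumption, since building explicit wavelet matrices is beyond this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8

# Two orthonormal sub-dictionaries; a random orthonormal basis (via QR)
# stands in for an orthogonal wavelet basis.
B1 = np.eye(n)
B2, _ = np.linalg.qr(rng.standard_normal((n, n)))

# Vertical concatenation gives the overcomplete analysis operator Psi in R^{2n x n};
# stacking K orthonormal bases yields a tight frame: Psi.T @ Psi = K * I.
Psi = np.vstack([B1, B2])
```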


Conclusions
In this paper, we propose a novel sparse adaptive iteratively weighted thresholding algorithm (SAITA-Lp) based on the conventional L1/2 and L2/3 thresholding algorithms by incorporating the multiple sub-dictionary sparse representation strategy. We draw the following conclusions from the above experiments: (1) Using the proposed multiple sub-dictionary sparsifying transform strategy, we construct a multiple sub-dictionary based Lp regularization method that exploits more prior knowledge of images for the sparse image recovery problem. By multiplying by the proposed adaptive weighting parameter λ t d,p, we gain more control in weighting the contribution of each sub-dictionary based regularizer. Experiments show that the proposed algorithms perform better than the conventional single-dictionary algorithms, especially when the sampling rate is very low (e.g., 0.1~0.3). (2) Compared with L1-norm regularization based work, the non-convex Lp (0 < p < 1)-norm penalty approximates the L0-norm minimization more closely than the L1-norm does, which gives a sparser solution and requires less measurement data. (3) In our experiments, we find that the recovery performances of Lp (p = 1/2) and Lp (p = 2/3) are close, even though the p = 2/3 algorithm obtains slightly better performance than p = 1/2; hence, a proper p needs to be selected in practical applications. (4) Moreover, the proposed SAITA-Lp method also indicates that it is feasible to improve the recovery performance by exploiting the inner sparse structure of the signal and designing a proper sparse representation dictionary. Thus, it is beneficial to exploit the signal sparse structure with a dictionary learning method, which will be the subject of future work. (5) The proposed SAITA-Lp algorithm can be extended to other non-convex penalties, including the smoothly clipped absolute deviation (SCAD) and the minimax concave penalty (MCP).
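As a concrete illustration of the building block the conclusions refer to, the L1/2 case admits a closed-form "half" thresholding operator known from the literature (Xu et al.). The sketch below follows that published form; the threshold constant and the trigonometric expression come from that literature rather than from this paper, so treat the details as an assumption:

```python
import numpy as np

def half_threshold(x, lam):
    """Closed-form L_{1/2} ('half') thresholding operator (after Xu et al.).

    Entries with |x_i| <= (54^(1/3)/4) * lam^(2/3) are set to zero; larger
    entries are shrunk toward zero via the trigonometric closed form.
    """
    x = np.asarray(x, dtype=float)
    t = (54 ** (1 / 3) / 4) * lam ** (2 / 3)  # thresholding level
    out = np.zeros_like(x)
    big = np.abs(x) > t
    phi = np.arccos((lam / 8) * (np.abs(x[big]) / 3) ** -1.5)
    out[big] = (2 / 3) * x[big] * (1 + np.cos(2 * np.pi / 3 - 2 * phi / 3))
    return out
```

The operator is odd (sign-preserving) and contractive: large entries are shrunk slightly, small entries vanish, which is what yields the sparser-than-L1 solutions discussed above.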

Conflicts of Interest:
The authors declare no conflict of interest.