Nonlocal Low-Rank Regularization Combined with Bilateral Total Variation for Compressive Sensing Image Reconstruction

Abstract: The use of the nonlocal self-similarity prior between image blocks can improve image reconstruction performance significantly. We propose a compressive sensing image reconstruction algorithm that combines bilateral total variation and nonlocal low-rank regularization to overcome the over-smoothing and edge-information degradation that arise in prior-based reconstruction. The proposed algorithm exploits the edge-preserving property of the bilateral total variation operator to enhance the edge details of the reconstructed image. In addition, we use weighted nuclear norm regularization as a low-rank constraint on groups of similar image blocks. To solve the resulting convex optimization problem, the Alternating Direction Method of Multipliers (ADMM) is employed to optimize and iterate the model effectively. Experimental results show that the proposed algorithm obtains better reconstruction quality than conventional algorithms that use only total variation regularization or only the nonlocal structure of the image. At a 10% sampling rate, the peak signal-to-noise ratio gain is up to 2.39 dB for noiseless measurements compared with Nonlocal Low-Rank Regularization (NLR-CS). Comparison of the reconstructed images shows that the proposed algorithm retains more high-frequency components. For noisy measurements, the proposed algorithm is robust to noise and the reconstructed image retains more detail.


Introduction
Compressive sensing (CS) [1][2][3] is a burgeoning signal acquisition and reconstruction method that breaks through the frequency limit of the Nyquist-Shannon sampling theorem. CS theory points out that perfect reconstruction of an original signal can be achieved from a small number of random measurements if the signal is sparse or can be sparsely expressed in a certain transform domain, such as the Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT). The measurements are generated by a random Gaussian matrix or a partial Fourier matrix, so that sampling and data compression happen at the same time. CS has the advantages of a low sampling rate and high acquisition efficiency, and has been widely applied in various fields including 3D imaging [4], video acquisition [5], image encryption transmission [6], optical microscopy [7], target tracking [8], digital holography [9] and multimode fiber [10,11]. Accurate, high-quality signal reconstruction is the core of CS research. In the process of image reconstruction, the prior information of the image plays an important role: fully exploiting this prior information and constructing effective constraints is the key to image reconstruction. The most commonly used image prior is the sparse prior, that is, building a sparse model based on sparse representation in a transform domain to obtain the optimal solution. The classical reconstruction algorithms mainly include greedy algorithms and convex optimization algorithms. Greedy algorithms based on the ℓ0-norm minimization model include Orthogonal Matching Pursuit (OMP) [12], Subspace Pursuit (SP) [13] and Compressive Sampling Matching Pursuit (CoSaMP) [14]; convex optimization algorithms based on the ℓ1-norm minimization model include Basis Pursuit (BP) [15], Iterative Shrinkage Thresholding (IST) [16], Gradient Projection (GP) [17] and Total Variation (TV) [18].
Among them, the total variation algorithm uses the sparse gradient prior as a constraint to reconstruct the image, which removes noise while better retaining the detail information of the image. However, problems such as the staircase effect still exist in the total variation algorithm. Many variants have been proposed and gradually applied to CS reconstruction, such as fractional-order total variation [19], reweighted total variation [20] and bilateral total variation [21]. These reconstruction algorithms using the image sparse prior have achieved good reconstruction performance. Recently, image restoration models based on the nonlocal self-similarity prior have received extensive attention. With the application of the nonlocal self-similarity prior [22][23][24] in image denoising, many researchers have applied it to CS image reconstruction. Zhang et al. [25] introduced a nonlocal mean filter as a regularization term into the total variation model and used the correlation of noise between image blocks to set filtering weights, achieving excellent constraint and reconstruction results. Egiazarian et al. [26] presented an algorithm for joint filtering in the block-matching and sparse three-dimensional transform domain, based on the nonlocal self-similarity among image blocks; through collaborative Wiener filtering of similar blocks in the wavelet transform domain, excellent denoising and reconstruction performance is achieved. Dong et al. [27] further explored the relationship between the structured sparsity and nonlocal self-similarity of images; they presented a nonlocal low-rank regularized image reconstruction algorithm that takes full advantage of the low-rank structure of similar image blocks, effectively removes redundant information and artifacts, and achieves excellent image restoration results by combining it with sparse encoding.
At present, reconstruction algorithms exploit the nonlocal self-similarity of an image by adopting a block matching strategy to collect similar image blocks. Because images contain repeated structures and are disturbed by noise, an optimization model based on a low-rank constraint instead of a sparsity constraint will inevitably smooth away some of these repeated structures, leading to over-smoothing and degradation of edge information in the reconstructed image [19]. In this paper, we add the bilateral total variation constraint as a global information prior to the reconstruction model based on nonlocal low-rank regularization and propose an optimized scheme. The bilateral total variation operator is used to enhance the texture details of the reconstructed image. The low-rank minimization problem for the original image is non-convex and difficult to solve, so the Weighted Nuclear Norm (WNN) [28] is employed to approximate the rank in our model. In the optimization stage, the Alternating Direction Method of Multipliers (ADMM) [29] is used to iterate the model effectively.
The remainder of this paper is organized as follows: In Section 2, we introduce our reconstruction model combining bilateral total variation and weighted nuclear norm low-rank regularization. In Section 3, the process of using ADMM to recover compressed images is described. In Section 4, we evaluate the performance of the proposed algorithm against other mainstream CS algorithms. In Section 5, we give a conclusion.

Reconstruction Algorithm Combining Bilateral Total Variation and Nonlocal Low-Rank Regularization
According to CS theory, given a one-dimensional discrete signal x ∈ R^(N×1) of length N, the measurement y ∈ R^(M×1) is obtained by M random projections:

y = Φx,  (1)

where Φ ∈ R^(M×N) (M ≪ N) is the sampling matrix. Since M ≪ N, Equation (1) is underdetermined and has no unique solution, so the original signal x cannot be reconstructed directly from the measurement y. However, when the measurement matrix Φ satisfies the restricted isometry property (RIP) [1,2], which requires that the number of measurements collected by Φ exceed the sparsity of the signal x, CS theory guarantees the accurate reconstruction of a sparse (or compressible) signal x.
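As a minimal numerical sketch of the measurement model in Equation (1) (the signal length, sampling ratio and Gaussian sampling matrix below are illustrative choices, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, K = 256, 64, 8            # signal length, number of measurements, sparsity

# A K-sparse signal x: only K of its N entries are nonzero.
x = np.zeros(N)
support = rng.choice(N, K, replace=False)
x[support] = rng.standard_normal(K)

# Random Gaussian sampling matrix Phi with M << N, giving y = Phi @ x
# as in Equation (1); the system is underdetermined (64 equations, 256 unknowns).
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
y = Phi @ x
```

Because M ≪ N, recovering x from y requires the sparsity (or low-rank) priors discussed in the rest of this section.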
Based on the sparse prior of the signal x, the traditional optimization problem can be written in the following unconstrained form:

min_x ‖y − Φx‖²₂ + λ‖x‖_p,  (2)

where ‖y − Φx‖²₂ is the data-fidelity term of the reconstruction, ‖x‖_p (0 ≤ p ≤ 1) is the sparse regularization term of the signal, and λ is a properly selected regularization parameter.

The Bilateral Total Variation Model
For an image x, the total variation model is defined as:

min_x ‖Dx‖₁ = min_x Σ_i (|D_h x(i)| + |D_v x(i)|),  (3)

where D = [D_h, D_v], D_h and D_v are the gradient difference operators in the horizontal and vertical directions respectively, and i corresponds to the pixel position in the image x. The image gradient can serve as a reconstruction constraint that retains the detail information of the image. However, restoring the image with only the traditional total variation constraint produces a staircase effect in smooth regions, and local details are easily lost because the gradient penalty is uniform. Therefore, the reweighted total variation method [20] applies a weight to the gradient to avoid local over-smoothing and the staircase effect. The reweighted total variation model is defined as:

min_x ‖w ⊙ Dx‖₁, with w^{k+1} = 1/(|Dx^k| + ε),  (4)

where ε is a small positive constant and the weight w is updated iteratively from the image x.
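The anisotropic total variation of Equation (3) and the reweighted variant of Equation (4) can be sketched as follows (circular boundary handling via `np.roll` is an assumption made for brevity):

```python
import numpy as np

def grad_ops(img):
    # Forward differences standing in for D_h and D_v of Equation (3).
    dh = np.roll(img, -1, axis=1) - img   # horizontal gradient
    dv = np.roll(img, -1, axis=0) - img   # vertical gradient
    return dh, dv

def tv_aniso(img):
    # ||Dx||_1: sum of absolute horizontal and vertical differences.
    dh, dv = grad_ops(img)
    return np.abs(dh).sum() + np.abs(dv).sum()

def reweighted_tv(img, eps=1e-3):
    # Equation (4): weights w = 1/(|Dx| + eps), so strong gradients (edges)
    # are penalized less, reducing over-smoothing.
    dh, dv = grad_ops(img)
    wh, wv = 1.0 / (np.abs(dh) + eps), 1.0 / (np.abs(dv) + eps)
    return (wh * np.abs(dh)).sum() + (wv * np.abs(dv)).sum()

flat = np.ones((8, 8))   # a constant image has zero total variation
```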
Bilateral filtering [30] is a non-linear filtering method that combines the spatial proximity and pixel-value similarity of the image. By considering both spatial information and gray-level similarity, it achieves edge preservation and noise reduction. We introduce bilateral filtering into the total variation model. The bilateral total variation model is defined as:

BTV(x) = Σ_{l=−p}^{p} Σ_{t=−p}^{p} α^{|l|+|t|} ‖x − S_h^l S_v^t x‖₁,  (5)

where S_h^l shifts the image by l pixels horizontally, S_v^t shifts the image by t pixels vertically, and x − S_h^l S_v^t x is the difference of the image x at each pixel over several scales. The weight α (0 < α ≤ 1) controls the spatial decay of the regularization term, and p (p ≥ 1) is the window size of the filter kernel.
The bilateral total variation model is essentially an extension of the traditional total variation model. When the weight α = 1, setting l = 1, t = 0 or l = 0, t = 1 and defining Q_h = I − S_h, Q_v = I − S_v, where I is the identity matrix, the bilateral total variation model reduces to:

‖Q_h x‖₁ + ‖Q_v x‖₁,  (6)

which is consistent with Equation (3) of the total variation model.
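A small sketch of the bilateral total variation of Equation (5), again with circular shifts as a simplifying assumption; restricting it to α = 1 and the two unit shifts recovers the plain total variation of Equation (6):

```python
import numpy as np

def shift(img, l, t):
    # S_h^l S_v^t: shift the image l pixels horizontally and t pixels vertically.
    return np.roll(np.roll(img, l, axis=1), t, axis=0)

def btv(img, alpha=0.7, p=2):
    # Equation (5): alpha^(|l|+|t|)-weighted L1 norms of all shift differences
    # in a (2p+1) x (2p+1) window, excluding the zero shift.
    total = 0.0
    for l in range(-p, p + 1):
        for t in range(-p, p + 1):
            if l == 0 and t == 0:
                continue
            total += alpha ** (abs(l) + abs(t)) * np.abs(img - shift(img, l, t)).sum()
    return total

rng = np.random.default_rng(1)
img = rng.random((16, 16))

# Keeping only the (l, t) = (1, 0) and (0, 1) terms with alpha = 1 gives
# ||Q_h x||_1 + ||Q_v x||_1, i.e., the total variation of Equation (6).
tv = np.abs(img - shift(img, 1, 0)).sum() + np.abs(img - shift(img, 0, 1)).sum()
```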
The reweighted bilateral total variation model can be expressed in the following form:

Σ_{l=−p}^{p} Σ_{t=−p}^{p} α^{|l|+|t|} ‖w_b ⊙ (x − S_h^l S_v^t x)‖₁,  (7)

where we set the weight w_b^{k+1} = 1/(|x^k − S_h^l S_v^t x^k| + ε), with ε a small positive constant.

The Weighted Nuclear Norm Low-Rank Model
The image restoration model based on the nonlocal self-similarity prior consists of two parts: a block matching strategy that characterizes the self-similarity of the image, and a low-rank approximation that characterizes the sparsity constraint. Block matching groups similar blocks of an image x. For an n × n block x_i at position i in the image, m similar blocks are searched by Euclidean distance in a search window (e.g., 20 × 20), i.e., G_i = { j : ‖x_i − x_j‖²₂ ≤ T }, where T is a predefined threshold and G_i denotes the collection of positions of those similar blocks. After block grouping, we obtain a data matrix X_i = [x_{i,1}, x_{i,2}, . . . , x_{i,m}] whose columns are the matched blocks. Because these image blocks have similar structure, the matrix X_i has the low-rank property. Since the actual image x is also corrupted by noise, the similar-block matrix can be modeled as X_i = L_i + N_i, where L_i and N_i are the low-rank matrix and the Gaussian noise matrix respectively. The low-rank matrix L_i can then be recovered by solving the following optimization problem:

L̂_i = argmin_{L_i} ‖X_i − L_i‖²_F + µ rank(L_i),  (8)

where X_i is the similar-block matrix formed for each image block after block matching. Equation (8) is a rank-minimization problem, which is non-convex and difficult to solve. In this paper, the Weighted Nuclear Norm (WNN) [28] is used to replace the rank of the matrix. For a matrix X, its weighted nuclear norm is defined as:

‖X‖_{w,*} = Σ_j w_j σ_j(X),  (9)

where σ_j(X) is the j-th singular value of X and w = [w_1, w_2, . . . , w_n], w_j ≥ 0, is the weight assigned to σ_j(X). The larger a singular value of X, the more important the corresponding characteristic component of the matrix, so larger singular values should receive smaller shrinkage in the weight assignment. Here we set w_j = 1/(σ_j(X) + ε), where ε is a small positive constant. The optimization problem with the rank replaced by the weighted nuclear norm then becomes:

L̂_i = argmin_{L_i} ‖X_i − L_i‖²_F + µ ‖L_i‖_{w,*}.  (10)
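The weighted nuclear norm of Equation (9), with the weights w_j = 1/(σ_j + ε) used here, can be computed directly from the singular values; a stack of truly similar blocks yields a (near) rank-one matrix, so only its first singular value carries significant energy. The block construction below is illustrative:

```python
import numpy as np

def weighted_nuclear_norm(X, eps=1e-3):
    # Equation (9): ||X||_{w,*} = sum_j w_j * sigma_j(X), with w_j = 1/(sigma_j + eps)
    # so that large (important) singular values receive less shrinkage.
    s = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    w = 1.0 / (s + eps)                      # ascending weights
    return (w * s).sum(), s, w

# Ten identical (up to scale) columns standing in for a group of similar blocks:
# the resulting matrix is exactly rank one.
block = np.outer(np.arange(1.0, 7.0), np.ones(10))
wnn, s, w = weighted_nuclear_norm(block)
```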

The Joint Model
We add the proposed bilateral total variation constraint as a global information prior to the reconstruction based on the weighted nuclear norm low-rank model, and obtain the following joint model:

min_{x,{L_i}} ‖y − Φx‖²₂ + λ₁ Σ_i ‖R_i x − L_i‖²_F + µ Σ_i ‖L_i‖_{w,*} + λ₂ Σ_{l,t} α^{|l|+|t|} ‖w_b ⊙ (x − S_h^l S_v^t x)‖₁,  (11)

where R_i x = X_i extracts and groups the similar blocks of x. The joint reconstruction model is relatively complicated. To facilitate the calculation, we simplify the bilateral total variation term by replacing the set of weighted shift-difference operators with a single stacked operator B, so that Equation (11) can be abbreviated as:

min_{x,{L_i}} ‖y − Φx‖²₂ + λ₁ Σ_i ‖R_i x − L_i‖²_F + µ Σ_i ‖L_i‖_{w,*} + λ₂ ‖w_b ⊙ Bx‖₁.  (12)

Compressed Image Reconstruction Process
The ADMM [29] can be used to solve the optimization problem of Equation (12) and recover the image x. First, auxiliary variables u and z are introduced:

min_{x,u,z,{L_i}} ‖y − Φx‖²₂ + λ₁ Σ_i ‖R_i u − L_i‖²_F + µ Σ_i ‖L_i‖_{w,*} + λ₂ ‖w_b ⊙ z‖₁, s.t. u = x, z = Bx.  (13)

Using the augmented Lagrangian function, Equation (13) is transformed into the unconstrained form:

min ‖y − Φx‖²₂ + λ₁ Σ_i ‖R_i u − L_i‖²_F + µ Σ_i ‖L_i‖_{w,*} + λ₂ ‖w_b ⊙ z‖₁ + (η₁/2)‖x − u + a/η₁‖²₂ + (η₂/2)‖Bx − z + b/η₂‖²₂,  (14)

where η₁ and η₂ are penalty parameters, and a and b are Lagrange multipliers. The multiplier iteration is then adopted as follows:

a^{k+1} = a^k + η₁(x^{k+1} − u^{k+1}), b^{k+1} = b^k + η₂(Bx^{k+1} − z^{k+1}),  (15)

where k is the number of iterations. Following the ADMM, the original problem can be divided into the following four sub-problems.

Solving the Sub-Problem of L i
With x, u, z fixed, the optimization problem for L_i is:

L_i^{k+1} = argmin_{L_i} λ₁ ‖X_i − L_i‖²_F + µ ‖L_i‖_{w,*}.  (16)

This weighted nuclear norm problem is generally solved by the singular value thresholding (SVT) [31] operation:

L_i^{k+1} = U S_{w,τ}(Σ) Vᵀ,  (17)

where UΣVᵀ is the singular value decomposition of X_i, and S_{w,τ}(Σ) applies the threshold operation to each element of the diagonal matrix Σ:

S_{w,τ}(Σ)_{jj} = max(Σ_{jj} − τ w_j, 0),  (18)

where τ = µ/(2λ₁) and w_j is the weight described in Section 2.2, here set to w_j = 1/(σ_j(L_i^k) + ε). Since the singular values of L_i^k are sorted in descending order, the weights are increasing. To reduce the amount of computation, we relocate similar blocks every T iterations rather than searching for similar blocks in every iteration.
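A sketch of the weighted singular value thresholding step described above, applied to a synthetic noisy low-rank matrix (the matrix sizes, noise level and threshold τ below are illustrative, not the paper's settings):

```python
import numpy as np

def weighted_svt(X, tau, eps=1e-3):
    # L = U * S_{w,tau}(Sigma) * V^T with S_{w,tau}(sigma_j) = max(sigma_j - tau*w_j, 0)
    # and w_j = 1/(sigma_j + eps): small (noise) singular values get large weights
    # and are suppressed, while large (structure) singular values survive nearly intact.
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    w = 1.0 / (s + eps)
    s_shrunk = np.maximum(s - tau * w, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
# Rank-2 "similar block" matrix plus small Gaussian noise.
L_true = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 15))
X = L_true + 0.01 * rng.standard_normal((20, 15))
L_hat = weighted_svt(X, tau=0.5)
```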

Solving the Sub-Problem of u
With x, L_i, z fixed, the optimization problem for u is:

u^{k+1} = argmin_u λ₁ Σ_i ‖R_i u − L_i‖²_F + (η₁/2)‖x − u + a/η₁‖²₂.  (19)

Equation (19) has a closed-form solution:

u^{k+1} = (λ₁ Σ_i R_iᵀ R_i + (η₁/2) I)⁻¹ (λ₁ Σ_i R_iᵀ L_i^{k+1} + (η₁/2)(x^k + a^k/η₁)),  (20)

where Σ_i R_iᵀ R_i is a diagonal matrix in which each entry corresponds to an image pixel position and its value is the number of overlapping image blocks covering that pixel. The term Σ_i R_iᵀ L_i puts the blocks back into the image, so each pixel of u is the average of the similar blocks collected for each image block.
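The averaging of overlapping blocks back into the image, with a per-pixel coverage count playing the role of the diagonal matrix Σ_i R_iᵀR_i, can be sketched as follows (block size and positions are illustrative):

```python
import numpy as np

def aggregate_blocks(blocks, positions, img_shape, n=6):
    # Put each n x n block back at its position (R_i^T L_i) and divide by the
    # per-pixel number of covering blocks -- the diagonal of sum_i R_i^T R_i.
    acc = np.zeros(img_shape)
    count = np.zeros(img_shape)
    for blk, (r, c) in zip(blocks, positions):
        acc[r:r + n, c:c + n] += blk
        count[r:r + n, c:c + n] += 1
    return acc / np.maximum(count, 1)   # avoid dividing by zero where uncovered

positions = [(0, 0), (3, 3), (6, 6)]
blocks = [np.ones((6, 6)) for _ in positions]
out = aggregate_blocks(blocks, positions, (12, 12))
```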

Solving the Sub-Problem of z
With x, L_i, u fixed, the optimization problem for z is:

z^{k+1} = argmin_z λ₂ ‖w_b ⊙ z‖₁ + (η₂/2)‖Bx − z + b/η₂‖²₂.  (21)

Equation (21) can be solved by soft-threshold shrinkage [32]:

z^{k+1} = soft(Bx^k + b^k/η₂, λ₂ w_b/η₂),  (22)

where the soft-threshold shrinkage operator is soft(x, t) = sgn(x) · max(|x| − t, 0). The gradient weight w_b is updated according to:

w_b^{k+1} = 1/(|Bx^k| + ε),  (23)

where ε is a small positive constant; the weight w_b is updated from the gradient at each pixel of the image in the k-th iteration.
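The soft-threshold shrinkage operator used above is one line; the threshold t may also be an array, which is how per-element weights such as λ₂w_b/η₂ enter:

```python
import numpy as np

def soft(x, t):
    # soft(x, t) = sgn(x) * max(|x| - t, 0), applied element-wise;
    # t may be a scalar or an array of per-element thresholds.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

v = np.array([-2.0, -0.3, 0.0, 0.4, 1.5])
shrunk = soft(v, 0.5)   # entries with |v| <= 0.5 are set to zero
```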

Solving the Sub-Problem of x
With L_i, u, z fixed, the optimization problem for x is:

x^{k+1} = argmin_x ‖y − Φx‖²₂ + (η₁/2)‖x − u + a/η₁‖²₂ + (η₂/2)‖Bx − z + b/η₂‖²₂.  (24)

Equation (24) has the closed-form solution:

x^{k+1} = (2ΦᵀΦ + η₁ I + η₂ BᵀB)⁻¹ (2Φᵀy + η₁ u^{k+1} − a^k + Bᵀ(η₂ z^{k+1} − b^k)).  (25)

Since the matrix to be inverted in Equation (25) is large, it is not easy to solve directly. Here, we use the conjugate gradient algorithm [33] to handle this problem.
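A minimal conjugate gradient solver of the kind used for Equation (25); only matrix-vector products are needed, so the large system matrix never has to be formed or inverted explicitly. The toy SPD system below merely stands in for the actual operator (2ΦᵀΦ + η₁I + η₂BᵀB):

```python
import numpy as np

def conjugate_gradient(A_op, rhs, iters=200, tol=1e-10):
    # Solve A x = rhs for symmetric positive-definite A, given only the
    # matrix-vector product A_op(v) = A @ v.
    x = np.zeros_like(rhs)
    r = rhs - A_op(x)           # residual
    p = r.copy()                # search direction
    rs = r @ r
    for _ in range(iters):
        Ap = A_op(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

rng = np.random.default_rng(3)
Mtx = rng.standard_normal((30, 30))
A = Mtx.T @ Mtx + 30.0 * np.eye(30)   # well-conditioned SPD stand-in
rhs = rng.standard_normal(30)
x_sol = conjugate_gradient(lambda v: A @ v, rhs)
```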
After solving each sub-problem, the multipliers are updated according to Equation (15). The whole process of image reconstruction is summarized in Algorithm 1. A description of the relevant symbols is given in Appendix A.

Algorithm 1: The proposed CS reconstruction algorithm
Input: the measurements y and sampling matrix Φ
Initialization: estimate the initial image x¹ with a traditional CS algorithm (DCT, DWT, etc.); set the parameters λ₁, λ₂, α, p, η₁, η₂, K, T; set the nuclear norm weight w_j = [1, 1, . . . , 1]ᵀ; set the gradient weight w_b = 1
Outer loop: for k = 1, 2, . . . , K do
  Use the similar-block matching strategy to search for and group the similar blocks of the image into the matrices X_i;
  Inner loop: for each block group i, calculate L_i^{k+1} according to Equation (17); End for
  Calculate u^{k+1} according to Equation (20);
  Calculate z^{k+1} according to Equation (22) and update the gradient weight w_b^{k+1} according to Equation (23);
  Calculate x^{k+1} according to Equation (25);
  Update the Lagrange multipliers a^{k+1}, b^{k+1} according to Equation (15);
  if mod(k, T) = 0, relocate the similar-block positions and update the similar-block grouping;
End for
Output: the final reconstructed image x̂ = x^K.

Experiments
We compare the proposed algorithm with several mainstream CS algorithms, including TVAL3 [32], TVNLR [25], BM3D-CS [26] and NLR-CS [27]. The comparison algorithms are all based on the total variation constraint ([25,32]) or the nonlocal redundant structure of the image ([26,27]), and were obtained from the websites of the corresponding authors. All experimental results are obtained by running Matlab R2016b on a personal computer equipped with an Intel® Core™ i5 processor and 8 GB of memory running the Windows 10 operating system. We choose eight standard images and two resolution charts from the USC-SIPI image database to test the performance of our algorithm. All the test images are 256 × 256 gray-scale images, as shown in Figure 1. To evaluate the reconstruction performance objectively, we choose the peak signal-to-noise ratio (PSNR) and structural similarity index (SSIM) as evaluation indexes. They are calculated as follows:

PSNR = 10 log₁₀(255² / MSE), MSE = (1/(M × N)) Σ_{i,j} (x(i,j) − y(i,j))²,

SSIM(x, y) = ((2µ_x µ_y + C₁)(2σ_xy + C₂)) / ((µ_x² + µ_y² + C₁)(σ_x² + σ_y² + C₂)).

In the above formulas, MSE is the mean square error, x is the reference image, y is the reconstructed image, and their size is M × N. The larger the PSNR value, the closer y is to x, i.e., the better the image quality. µ_x, µ_y are the means of x and y, σ_x, σ_y are the standard deviations of x and y, respectively, and σ_xy is the covariance between the two images. C₁ and C₂ are small positive constants. The value of SSIM lies between 0 and 1, and the larger the value, the better the image quality.
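A short sketch of the PSNR computation for 8-bit images as defined above (the images here are synthetic; an SSIM implementation would additionally need the local means, variances and covariance, and is typically taken from an image-processing library):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    # PSNR = 10 * log10(peak^2 / MSE), with peak = 255 for 8-bit gray-scale images.
    mse = np.mean((ref.astype(float) - rec.astype(float)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

rng = np.random.default_rng(4)
ref = rng.integers(0, 256, (64, 64)).astype(float)   # "reference" image
rec = ref + rng.normal(0.0, 5.0, (64, 64))           # reconstruction with sigma = 5 error
quality = psnr(ref, rec)   # roughly 34 dB for this noise level
```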
To make full use of the nonlocal self-similarity features in images, it is necessary to select an appropriate block matching strategy. First, we report the PSNR results and the reconstruction time of the Monarch image reconstructed by the proposed algorithm. In Figure 2a, we set different image block sizes (n²) and search window sizes (s²). With the increase of image block size, the reconstruction time of the proposed algorithm shortens significantly, while with the increase of the search window size, the reconstruction time increases linearly; in general, the PSNR does not change significantly. In Figure 2b, we set different numbers of similar blocks (m). When the number of similar blocks is moderate, the PSNR improvement of the reconstructed image grows roughly in step with the reconstruction time. However, once the number of selected similar blocks increases beyond a certain amount (m = 45), the two trends move in opposite directions, and when too many similar blocks are selected (m = 54), the reconstruction time increases dramatically.

Parameters Selection
Since our proposed algorithm incorporates bilateral total variation, it is necessary to select an appropriate spatial attenuation term α and filter window size p. Figure 3 shows the PSNR results and reconstruction time of the Monarch image reconstructed by the proposed algorithm with the corresponding bilateral parameters. With the increase of the spatial attenuation term α, the PSNR of the image gradually increases, reaching its peak at α = 0.7, then decreases slightly, and drops sharply at α = 1. For filter window sizes p = 2 and p = 3, the reconstruction results of the algorithm are approximately the same, but the final decline of the latter is faster; when p = 1, the overall PSNR results are worse than in the former two cases. We also compared the corresponding reconstruction times. The main factor affecting the reconstruction time is the filter window size p: changing the spatial attenuation term α makes the reconstruction time fluctuate only slightly. When p = 1 and p = 2, the reconstruction times are close, with the latter slightly longer; when p = 3, the reconstruction time is much longer. A larger filter window size p clearly increases the reconstruction time of the algorithm.

In the experiments, the sampling matrix Φ is a partial Fourier matrix. In the block matching strategy, based on the previous discussion, we select an image block size of n² = 6 × 6 and m = 45 similar blocks as a trade-off between PSNR and reconstruction time. To reduce the computational complexity, we extract an image block every 5 pixels along the horizontal and vertical directions, and the search window size for block matching is s² = 20 × 20. The Lagrange multipliers a and b are initially set to zero matrices, K = 100 and T = 4, i.e., the total number of iterations is 100 and the similar-block positions are relocated every 4 iterations.
The initial image is estimated by the same standard DCT algorithm as in NLR-CS. The regularization parameters λ₁, λ₂ are set separately according to the sampling rate. Based on the analysis in Figure 3, we set the spatial attenuation weight α of the bilateral total variation term to 0.7 and the filter window size to p = 2. The penalty parameters used in the ADMM iteration are η₁ = η₂ = 0.01. We first present the experimental results for noiseless CS measurements and then report the results for noisy CS measurements. Table 1 shows the PSNR and SSIM of the proposed algorithm and the comparison algorithms at different sampling rates (5%, 10%, 15%, 20%, 25%). It can be seen from Table 1 that at lower sampling rates, the total variation models based on the sparse gradient prior, including TVAL3 and TVNLR, perform worse than the other algorithms. In contrast, BM3D-CS and NLR-CS can extract the nonlocal self-similarity prior information of the image, so even at very low sampling rates the PSNR of the images they reconstruct remains high. The proposed algorithm, owing to the joint total variation and nonlocal rank minimization, achieves better reconstruction performance than NLR-CS; for example, at a 10% sampling rate, the PSNR gain on the test images in Table 1 reaches up to 2.39 dB. The PSNR results at each sampling rate are plotted in Figure 4, from which it can be seen that our proposed algorithm achieves better PSNR at every sampling rate. When reconstructing the Testpat1 image, we find that the PSNR of BM3D-CS is better than that of NLR-CS and the proposed algorithm; we speculate that the initial image estimated by the standard DCT algorithm limits the performance of the proposed algorithm in this case. It can also be seen that at lower sampling rates, the SSIM of TVAL3 and TVNLR is relatively low, while that of the nonlocal self-similarity algorithms is relatively high. Among them, the SSIM of the proposed algorithm is closest to 1, which indicates the best reconstructed image quality.
To compare the reconstructed image quality subjectively, Figure 5 shows the reconstructions of five test images (Boats, Cameraman, House, Peppers, Testpat2) by the proposed algorithm and the comparison algorithms at a 10% sampling rate. These images contain many nonlocal similar redundant structures, which is why we chose them for comparison. The results show that at a lower sampling rate, the details of TVAL3 and TVNLR are seriously lost. For the House image, the texture of the house becomes very blurred, and the bricks and tiles on the eaves cannot be distinguished. The image reconstructed by BM3D-CS using the nonlocal similar structure is better, but some artifacts remain and the details are still blurred. NLR-CS and the proposed algorithm look clearer visually. Because the proposed algorithm adds bilateral total variation as a constraint to the nonlocal low-rank sparse model, it preserves edge texture better than NLR-CS. Comparing the brick and tile details in the middle-right area of the image, the texture produced by the proposed algorithm is richer, while that of NLR-CS is too smooth. Similarly, for the Boats image, the local detail images show that the NLR-CS reconstruction is so smooth that the rope details are seriously lost, while the details of the proposed algorithm look better.

To compare the texture details of the reconstructed images more intuitively, we use the DCT to separate the high- and low-frequency components of the test images. Figure 6 shows the comparison of high-frequency details of the reconstructed Cameraman and Peppers images. It can be seen from Figure 6 that the image restored by BM3D-CS contains many artifacts, which interfere with the high-frequency components and cause a serious loss of details, and that the high-frequency details of TVAL3 and TVNLR are obviously fewer.
Compared with NLR-CS, the proposed algorithm reconstructs the high-frequency profile more clearly, and more texture details can be found. Figure 7 shows the relative errors and SSIM results of the high-frequency components of the Cameraman image restored by the different algorithms at different sampling rates (5%, 10%, 15%, 20%, 25%). In Figure 7a, the relative errors of TVAL3, TVNLR and BM3D-CS are relatively large at low sampling rates, but decrease greatly as the sampling rate increases. The relative errors of NLR-CS are low from the beginning and decrease slightly with increasing sampling rate, remaining among the lowest. The downward trend of our proposed algorithm is similar to that of NLR-CS, with even lower relative errors. The SSIM curves in Figure 7b appear consistent, but the result of the proposed algorithm is always the best. It is worth noting that in both Figure 7a,b the recovery results of BM3D-CS are relatively poor; as shown in Figure 6, the high-frequency details of that algorithm are seriously damaged by artifact interference. These comparisons show that the proposed algorithm retains more high-frequency detail in the restored image.

Noisy CS Measurements
We also test the robustness of the proposed algorithm to noisy measurements by adding Gaussian noise to the CS measurements; the standard deviation of the noise is varied to produce a signal-to-noise ratio (SNR) between 10 dB and 30 dB. For a reasonable comparison, we choose a 20% sampling rate to evaluate the algorithms. Table 2 shows the PSNR results of the test images reconstructed by the different comparison algorithms under different SNR conditions. As can be seen from Table 2, the situation differs from the noiseless case: with noisy CS measurements, the anti-noise performance of the algorithms based on the total variation model is obviously inferior to that of the algorithms based on the nonlocal self-similarity prior. For a more intuitive comparison, we selected the Boats and Parrots images and drew the corresponding PSNR curves, shown in Figure 8. The reconstruction performance of every comparison algorithm is relatively poor at low SNR, but the algorithms based on the nonlocal self-similarity prior are much better than those based on the sparse gradient prior. At low SNR, the proposed algorithm shows little difference from BM3D-CS and NLR-CS. As the SNR improves, the PSNR of TVAL3 and TVNLR increases significantly, flattening out at higher SNR. In contrast, the reconstruction performance of the proposed algorithm and NLR-CS improves steadily, and a certain gap opens up between the proposed algorithm and BM3D-CS at higher SNR. Compared with NLR-CS, the proposed algorithm achieves better reconstruction results at every SNR, although the PSNR gain is not large. Figure 9 shows the subjective visual comparison of the reconstructions of the Boats and Parrots images by each algorithm from noisy measurements; here, a noise environment with an SNR of 25 dB is selected.
It can be seen from Figure 9 that, even at the increased sampling rate, the quality of image restoration is much worse than with noiseless measurements. From the local detail images, the reconstruction algorithms based on the nonlocal self-similarity prior perform better, although some artifacts remain in BM3D-CS. Compared with NLR-CS, the texture details of the proposed algorithm are richer, such as the feathers on the parrot's head. The comparison of the PSNR and subjective quality results shows that the proposed algorithm is robust to noisy measurements.

Figure 10 shows the average reconstruction time required by the comparison algorithms to restore the ten test images at different sampling rates (5%, 10%, 15%, 20%, 25%). All results were obtained in the same iterative environment. From Figure 10, the reconstruction time of TVAL3, which is based on the total variation model, is quite short because it uses the steepest descent method; fast reconstruction is the advantage of this algorithm. The reconstruction time of BM3D-CS is also relatively short, because its similar blocks are processed in the wavelet transform domain. Figure 10. The average reconstruction time (s) of ten test images reconstructed by different algorithms at different sampling rates. The results were obtained by running Matlab R2016b on a personal computer with the Windows 10 operating system, an Intel® Core™ i5 processor and 8 GB of memory.

Reconstruction Time
As the nonlocal filtering operation takes a lot of time in each iteration, the reconstruction time of TVNLR is the longest; compared with TVAL3, the computational complexity of the model increases because nonlocal structure information and similar-block searching and matching are taken into account. The reconstruction times of NLR-CS and the proposed algorithm are relatively long, because the low-rank approximation of similar blocks requires singular value decomposition of the matrix, which further increases the computational complexity. Although the reconstruction time of the proposed algorithm is a little longer, the reconstruction quality is improved significantly by adding the bilateral total variation constraint for a joint solution. Compared with NLR-CS, the average time increase is only about 3 s, which is acceptable for the performance gain.

Discussion and Conclusions
In this paper, we proposed a CS image reconstruction algorithm that combines bilateral total variation and weighted nuclear norm low-rank regularization. In this algorithm, the bilateral total variation constraint is added as a global information prior to the reconstruction model based on nonlocal low-rank regularization, and the texture details of the reconstructed image are enhanced by using the bilateral total variation operator to preserve the edges of the image. Experimental results on standard test images demonstrate that the proposed algorithm works well. Compared with traditional algorithms that use only the total variation constraint or only the nonlocal structure of the image, the proposed algorithm obtains better reconstruction results both subjectively and objectively and retains more detail information of the image, at the cost of a small increase in reconstruction time, which shows its effectiveness.

Conflicts of Interest:
The authors declare no conflict of interest.