Article

Nonlocal Low-Rank Regularization Combined with Bilateral Total Variation for Compressive Sensing Image Reconstruction

by Kunhao Zhang, Yali Qin *, Huan Zheng, Hongliang Ren and Yingtian Hu

Institute of Fiber-Optic Communication and Information Engineering, College of Information Engineering, Zhejiang University of Technology, Hangzhou 310023, China
*
Author to whom correspondence should be addressed.
Electronics 2021, 10(4), 385; https://doi.org/10.3390/electronics10040385
Submission received: 13 January 2021 / Revised: 28 January 2021 / Accepted: 1 February 2021 / Published: 4 February 2021
(This article belongs to the Section Computer Science & Engineering)

Abstract

The use of a nonlocal self-similarity prior between image blocks can improve image reconstruction performance significantly. We propose a compressive sensing image reconstruction algorithm that combines bilateral total variation and nonlocal low-rank regularization to overcome the over-smoothing and edge-information degradation that arise in the reconstructed image when only a nonlocal prior is used. The proposed algorithm exploits the edge-preserving property of the bilateral total variation operator to enhance the edge details of the reconstructed image. In addition, we use weighted nuclear norm regularization as a low-rank constraint on similar blocks of the image. To solve this convex optimization problem, the Alternating Direction Method of Multipliers (ADMM) is employed to optimize and iterate the algorithm model effectively. Experimental results show that the proposed algorithm obtains better image reconstruction quality than conventional algorithms that use total variation regularization or consider only the nonlocal structure of the image. At a 10% sampling rate, the peak signal-to-noise ratio gain is up to 2.39 dB for noiseless measurements compared with Nonlocal Low-Rank Regularization (NLR-CS). A comparison of the reconstructed images shows that the proposed algorithm retains more high frequency components. With noisy measurements, the proposed algorithm is robust to noise and the reconstructed image retains more detail information.

1. Introduction

Compressive sensing (CS) [1,2,3] is a burgeoning signal acquisition and reconstruction method that breaks through the frequency limit of the Nyquist–Shannon sampling theorem. CS theory points out that perfect reconstruction of an original signal can be realized from a small number of random measurements if the signal is sparse or can be sparsely expressed in a certain transform domain, such as the Discrete Cosine Transform (DCT) or Discrete Wavelet Transform (DWT). The measurements are generated by a random Gaussian matrix or a partial Fourier matrix, so that sampling and data compression are performed at the same time. CS has the advantages of a low sampling rate and high acquisition efficiency, and it has been widely used in various fields including 3D imaging [4], video acquisition [5], image encryption transmission [6], optical microscopy [7], target tracking [8], digital holography [9] and multimode fiber imaging [10,11]. Accurate, high-quality signal reconstruction is at the core of CS research. In the process of image reconstruction, the prior information of the image plays an important role; fully exploiting this prior information and constructing effective constraints is therefore the key to image reconstruction. The most commonly used image prior is the sparse prior, that is, building a sparse model based on a sparse representation in a transform domain to obtain the optimal solution. The classical reconstruction algorithms mainly include greedy algorithms and convex optimization algorithms. The greedy algorithms based on the l0 norm minimization model include Orthogonal Matching Pursuit (OMP) [12], Subspace Pursuit (SP) [13], Compressive Sampling Matching Pursuit (CoSaMP) [14], etc. The convex optimization algorithms based on the l1 norm minimization model include Basis Pursuit (BP) [15], Iterative Shrinkage Thresholding (IST) [16], Gradient Projection (GP) [17], Total Variation (TV) [18], etc.
Among them, the total variation algorithm uses the sparse gradient prior as a constraint to reconstruct the image, which removes noise while better retaining the detail information of the image. However, problems such as the staircase effect still exist in the total variation algorithm. Many variants have therefore been proposed and gradually applied to CS reconstruction, such as fractional-order total variation [19], reweighted total variation [20] and bilateral total variation [21]. These reconstruction algorithms using the image sparse prior have achieved good image reconstruction performance. Recently, image restoration models based on the nonlocal self-similarity prior have received extensive attention. Following the application of the nonlocal self-similarity prior [22,23,24] in image denoising, many researchers applied it to CS image reconstruction. Zhang et al. [25] introduced a nonlocal mean filter as a regularization term into the total variation model and used the correlation of noise between image blocks to set the filtering weights, which achieves excellent constraint and reconstruction results. Egiazarian et al. [26] presented an algorithm for joint filtering in the block matching and sparse three-dimensional transform domain, which is based on the nonlocal self-similarity among image blocks; through collaborative Wiener filtering of similar blocks in the wavelet transform domain, excellent denoising and reconstruction performance is achieved. Dong et al. [27] further explored the relationship between structured sparsity and nonlocal self-similarity of images and presented a nonlocal low-rank regularized image reconstruction algorithm, which takes full advantage of the low-rank property of similar image blocks, effectively removes redundant information and artifacts in images, and achieves excellent image restoration results by combining it with sparse encoding of images.
At present, reconstruction algorithms exploit the nonlocal self-similarity of the image by adopting a block matching strategy to collect similar image blocks. Because images contain repeated structures and the measurements are disturbed by noise, an optimization model based on a low-rank constraint instead of a sparsity constraint inevitably removes some of these repeated structures, which results in over-smoothing and degradation of edge information in the reconstructed image [19]. In this paper, we add the bilateral total variation constraint as a global information prior to the reconstruction model based on nonlocal low-rank regularization to obtain an optimized scheme. The bilateral total variation operator is used to enhance the texture details of the reconstructed image. The low-rank minimization problem of the original image is usually non-convex and difficult to solve, so the Weighted Nuclear Norm (WNN) [28] is employed to approximate the image low-rank property in our model. In the stage of solving the optimization problem, the Alternating Direction Method of Multipliers (ADMM) [29] is used to iterate the model effectively.
The remainder of this paper is organized as follows. In Section 2, we introduce our reconstruction model combining bilateral total variation and weighted nuclear norm low-rank regularization. In Section 3, the process of using ADMM to recover compressed images is described. In Section 4, we evaluate the performance of the proposed algorithm and other mainstream CS algorithms. Section 5 concludes the paper.

2. Reconstruction Algorithm Combining Bilateral Total Variation and Nonlocal Low-Rank Regularization

According to CS theory, given a one-dimensional discrete signal $x \in \mathbb{R}^{N \times 1}$ of length $N$, the measurement $y \in \mathbb{R}^{M \times 1}$ can be obtained by $M$ random projections:
$$ y = \Phi x , \tag{1} $$
where $\Phi \in \mathbb{R}^{M \times N}$ ($M \ll N$) is the sampling matrix. Since $M \ll N$, Equation (1) is an underdetermined system with no unique solution, so it is impossible to reconstruct the original signal $x$ directly from the measurement $y$. However, when the measurement matrix $\Phi$ satisfies the restricted isometry property (RIP) [1,2], which means that the number of measurements $y$ collected by $\Phi$ must be larger than the sparsity of the signal $x$, CS theory guarantees accurate reconstruction of the sparse (or compressible) signal $x$.
According to the sparse prior of signal x, the traditional optimization algorithm can be written in the following unconstrained form:
$$ x = \arg\min_{x} \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \lambda \left\| x \right\|_p , \tag{2} $$
where $\left\| y - \Phi x \right\|_2^2$ is the data-fidelity term of the reconstruction, $\left\| x \right\|_p$ ($0 \le p \le 1$) is the sparse regularization term of the signal, and $\lambda$ is a properly selected regularization parameter.
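As a rough illustration of the measurement model in Equation (1) and the unconstrained formulation in Equation (2), the following sketch builds a sparse signal, takes Gaussian random measurements, and evaluates the data-fidelity and regularization terms for a candidate estimate; the array sizes, sparsity level and regularization parameter are arbitrary illustrative choices, not values used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, k = 256, 64, 8            # signal length, number of measurements, sparsity

# k-sparse signal x and Gaussian sampling matrix Phi (scaled by 1/sqrt(M))
x = np.zeros(N)
x[rng.choice(N, k, replace=False)] = rng.standard_normal(k)
Phi = rng.standard_normal((M, N)) / np.sqrt(M)

y = Phi @ x                      # measurements, Equation (1)

def objective(x_est, lam=0.01, p=1):
    """Unconstrained objective of Equation (2) with an l_p (here l_1) penalty."""
    fidelity = 0.5 * np.linalg.norm(y - Phi @ x_est) ** 2
    return fidelity + lam * np.sum(np.abs(x_est) ** p)

print(objective(np.zeros(N)), objective(x))   # the true signal scores lower
```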

2.1. The Bilateral Total Variation Model

For an image x, its total variation model is defined as:
$$ \left\| x \right\|_{TV} = \left\| D x \right\|_1 = \left\| D_h x \right\|_1 + \left\| D_v x \right\|_1 , \tag{3} $$
where $D = (D_h, D_v)$; $D_h$ and $D_v$ represent the gradient difference operators in the horizontal and vertical directions, respectively, with $D_h x_{(i,j)} = x_{(i+1,j)} - x_{(i,j)}$ and $D_v x_{(i,j)} = x_{(i,j+1)} - x_{(i,j)}$, where $(i,j)$ corresponds to the pixel position in the image $x$. The gradient of the image can be used as a reconstruction constraint to retain the detail information of the image. However, restoring the image with only the traditional total variation constraint produces a staircase effect in smooth regions, and local details of the image are easily lost because the gradient penalty is uniform. Therefore, the reweighted total variation method [20] was proposed, which applies a weight to the gradient to avoid local over-smoothing and the staircase effect. The reweighted total variation model is defined as:
$$ \left\| x \right\|_{RTV} = \left\| w D x \right\|_1 , \tag{4} $$
where $w^{k+1} = 1 / \left( \left\| D x^{k} \right\|_1 + \varepsilon \right)$, $\varepsilon$ is a small positive constant, and the weight $w$ is iteratively updated from the image $x$.
Bilateral filtering [30] is a non-linear filtering method that combines the spatial proximity and pixel-value similarity of the image. By considering both spatial information and gray-level similarity, it achieves edge preservation and noise reduction. We introduce bilateral filtering into the total variation model. The bilateral total variation model is defined as:
$$ \left\| x \right\|_{BTV} = \sum_{l=-p}^{p} \sum_{t=-p}^{p} \alpha^{|l|+|t|} \left\| x - S_h^l S_v^t x \right\|_1 , \tag{5} $$
where $S_h^l$ denotes shifting the image by $l$ pixels horizontally, $S_v^t$ denotes shifting the image by $t$ pixels vertically, and $\left\| x - S_h^l S_v^t x \right\|_1$ measures the difference of the image $x$ at each pixel over several scales. The weight $\alpha$ ($0 < \alpha \le 1$) is used to control the spatial decay of the regularization term, and $p$ ($p \ge 1$) is the window size of the filter kernel.
The bilateral total variation model is essentially an extension of the traditional total variation model. When the weight $\alpha = 1$, setting $l = 1, t = 0$ or $l = 0, t = 1$ and defining $Q_h = I - S_h$ and $Q_v = I - S_v$, where $I$ is the identity matrix, the bilateral total variation model reduces to:
$$ \left\| x \right\|_{BTV} = \left\| Q_h x \right\|_1 + \left\| Q_v x \right\|_1 , \tag{6} $$
which is consistent with the total variation model of Equation (3).
The reweighted bilateral total variation model can be expressed in the following form:
$$ \left\| x \right\|_{RBTV} = \sum_{l=-p}^{p} \sum_{t=-p}^{p} \alpha^{|l|+|t|} \left\| w_b \left( x - S_h^l S_v^t x \right) \right\|_1 , \tag{7} $$
where we set the weight $w_b^{k+1} = 1 / \left( \left\| x^{k} - S_h^l S_v^t x^{k} \right\|_1 + \varepsilon \right)$.
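As an informal illustration of Equation (5), the short sketch below computes the bilateral total variation value of an image, with `np.roll` standing in for the shift operators $S_h^l$ and $S_v^t$; the circular boundary handling and the parameter values are assumptions made for the example, not the exact implementation used here.

```python
import numpy as np

def btv(x, p=2, alpha=0.7):
    """Bilateral total variation of Equation (5):
    sum over shifts of alpha^(|l|+|t|) * ||x - S_h^l S_v^t x||_1."""
    value = 0.0
    for l in range(-p, p + 1):
        for t in range(-p, p + 1):
            if l == 0 and t == 0:
                continue                        # zero difference, skip
            shifted = np.roll(np.roll(x, l, axis=1), t, axis=0)
            value += alpha ** (abs(l) + abs(t)) * np.abs(x - shifted).sum()
    return value

img = np.random.default_rng(1).random((64, 64))
print(btv(img))
```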

2.2. The Weighted Nuclear Norm Low-Rank Model

The image restoration model based on the image nonlocal self-similarity prior consists of two parts: one is the block matching strategy used to characterize the image self-similarity, and the other is the low-rank approximation used as the sparsity constraint. The block matching strategy groups similar blocks of an image $x$. For an $n \times n$ block $x_i$ at position $i$ in the image, $m$ similar image blocks are searched based on the Euclidean distance within a search window (e.g., $20 \times 20$), i.e., $G_i = \{\, i_j \mid \left\| x_i - x_{i_j} \right\|_2 < T, \ 0 \le j \le m-1 \,\}$, where $T$ is a predefined threshold and $G_i$ denotes the collection of positions corresponding to those similar blocks. After block grouping, we obtain a data matrix $X_i = [\, x_{i_0}, x_{i_1}, \dots, x_{i_{m-1}} \,]$, where each column of $X_i$ denotes a block similar to $x_i$ (including $x_i$ itself).
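A possible sketch of this block matching step is given below: for a reference $n \times n$ patch, candidate patches inside a local search window are ranked by Euclidean distance and the $m$ closest ones (including the reference itself) are stacked column-wise into the data matrix $X_i$. The window handling, the use of a fixed $m$ instead of the distance threshold $T$, and the parameter values are illustrative assumptions.

```python
import numpy as np

def block_match(img, top, left, n=6, m=45, search=20):
    """Group the m patches most similar to the n x n reference patch at
    (top, left), searching inside a roughly (search x search) window;
    returns X_i with one vectorized patch per column."""
    ref = img[top:top + n, left:left + n].ravel()
    H, W = img.shape
    r0, r1 = max(0, top - search // 2), min(H - n, top + search // 2)
    c0, c1 = max(0, left - search // 2), min(W - n, left + search // 2)

    candidates, dists = [], []
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            patch = img[r:r + n, c:c + n].ravel()
            candidates.append(patch)
            dists.append(np.sum((patch - ref) ** 2))   # squared Euclidean distance

    order = np.argsort(dists)[:m]                      # m most similar patches
    return np.stack([candidates[i] for i in order], axis=1)

img = np.random.default_rng(2).random((256, 256))
X_i = block_match(img, top=100, left=100)
print(X_i.shape)     # (36, 45): n^2 rows, m columns
```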
Due to the similar structure of these image blocks, the matrix $X_i$ composed of them has the low-rank property. Since the actual image $x$ is also polluted by noise, the similar block matrix can be modeled as $X_i = L_i + N_i$, where $L_i$ and $N_i$ represent the low-rank matrix and the Gaussian noise matrix, respectively. The low-rank matrix $L_i$ can then be recovered by solving the following optimization problem:
$$ L_i = \arg\min_{L_i} \sum_i \left( \left\| R_i x - L_i \right\|_2^2 + \mu \, \mathrm{rank}(L_i) \right) , \tag{8} $$
where $R_i x = [\, R_{i_0} x, R_{i_1} x, \dots, R_{i_{m-1}} x \,]$ is the similar block matrix composed of the image blocks obtained by block matching, i.e., $X_i$. The solution of Equation (8) is a rank minimization problem, which is non-convex and difficult to solve. In this paper, the Weighted Nuclear Norm (WNN) [28] is used to replace the rank of the matrix. For a matrix $X$, its weighted nuclear norm is defined as:
$$ \left\| X \right\|_{w,*} = \sum_j w_j \, \sigma_j(X) , \tag{9} $$
where $\sigma_j(X)$ is the $j$-th singular value of $X$, and $w = [\, w_1, w_2, \dots, w_n \,]$, $w_j \ge 0$, is the weight assigned to the corresponding $\sigma_j(X)$. The larger a singular value of $X$ is, the more important the corresponding component of the matrix, so larger singular values should be given smaller shrinkage in the weight distribution. Here, we set the weight $w_j = 1 / \left( \sigma_j(X) + \varepsilon \right)$, where $\varepsilon$ is a small positive constant.
In this way, by replacing the rank of the matrix with the weighted nuclear norm, the optimization problem is transformed into:
$$ L_i = \arg\min_{L_i} \sum_i \left( \left\| R_i x - L_i \right\|_2^2 + \mu \left\| L_i \right\|_{w,*} \right) . \tag{10} $$
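The sketch below evaluates the weighted nuclear norm of Equation (9) with the weights $w_j = 1/(\sigma_j(X) + \varepsilon)$ used here; it is a direct transcription of the definition rather than an optimized routine.

```python
import numpy as np

def weighted_nuclear_norm(X, eps=1e-6):
    """||X||_{w,*} = sum_j w_j * sigma_j(X) with w_j = 1 / (sigma_j(X) + eps),
    so large singular values (important components) are penalized less."""
    sigma = np.linalg.svd(X, compute_uv=False)   # singular values, descending
    w = 1.0 / (sigma + eps)
    return np.sum(w * sigma)

X = np.random.default_rng(3).random((36, 45))    # e.g., a similar block matrix
print(weighted_nuclear_norm(X))
```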

2.3. The Joint Model

We add the proposed bilateral total variation constraint as a global information prior to the reconstruction model based on the weighted nuclear norm low-rank regularization and obtain the following joint model:
$$ x = \arg\min_{x} \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \lambda_1 \sum_i \left( \left\| R_i x - L_i \right\|_2^2 + \mu \left\| L_i \right\|_{w,*} \right) + \lambda_2 \sum_{l=-p}^{p} \sum_{t=-p}^{p} \alpha^{|l|+|t|} \left\| w_b \left( x - S_h^l S_v^t x \right) \right\|_1 . \tag{11} $$
The joint reconstruction model appears relatively complicated. To facilitate the calculation, we simplify the bilateral total variation term by replacing $\sum_{l=-p}^{p} \sum_{t=-p}^{p} \alpha^{|l|+|t|}$ with $\beta$ and defining $Q x = x - S_h^l S_v^t x$, so that Equation (11) can be abbreviated as:
$$ x = \arg\min_{x} \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \lambda_1 \sum_i \left( \left\| R_i x - L_i \right\|_2^2 + \mu \left\| L_i \right\|_{w,*} \right) + \lambda_2 \beta \left\| w_b Q x \right\|_1 . \tag{12} $$

3. Compressed Image Reconstruction Process

The ADMM [29] can be used to solve the optimization problem of Equation (12) and recover the image $x$. First, auxiliary variables are introduced:
$$ x = \arg\min_{x} \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \lambda_1 \sum_i \left( \left\| R_i u - L_i \right\|_2^2 + \mu \left\| L_i \right\|_{w,*} \right) + \lambda_2 \beta \left\| w_b z \right\|_1 \quad \text{s.t. } x = u, \ Q x = z . \tag{13} $$
Using the augmented Lagrangian function, Equation (13) is transformed into the unconstrained form:
$$ L(x, L_i, u, z) = \arg\min_{x} \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \lambda_1 \sum_i \left( \left\| R_i u - L_i \right\|_2^2 + \mu \left\| L_i \right\|_{w,*} \right) + \lambda_2 \beta \left\| w_b z \right\|_1 + \frac{\eta_1}{2}\left\| u - x + a \right\|_2^2 + \frac{\eta_2}{2}\left\| z - Q x + b \right\|_2^2 , \tag{14} $$
where $\eta_1$ and $\eta_2$ are penalty parameters, and $a$ and $b$ are Lagrange multipliers. The multipliers are then updated as follows:
$$ a^{k+1} = a^{k} - \left( x^{k+1} - u^{k+1} \right), \qquad b^{k+1} = b^{k} - \left( Q x^{k+1} - z^{k+1} \right), \tag{15} $$
where $k$ is the iteration index. According to the ADMM, the original problem can be divided into the following four sub-problems.

3.1. Solving the Sub-Problem of L i

Having fixed $x$, $u$, and $z$, the optimization problem for $L_i$ is as follows:
$$ L_i^{k+1} = \arg\min_{L_i} \lambda_1 \sum_i \left( \left\| R_i u^{k} - L_i \right\|_2^2 + \mu \left\| L_i \right\|_{w,*} \right) . \tag{16} $$
This weighted nuclear norm optimization problem is generally solved by a singular value thresholding (SVT) [31] operation:
$$ L_i^{k+1} = U S_{w,\tau}(\Sigma) V^{T} , \tag{17} $$
where $U \Sigma V^{T}$ is the singular value decomposition of $X_i$, and $S_{w,\tau}(\Sigma)$ is the thresholding operator applied to each element of the diagonal matrix $\Sigma$:
$$ S_{w,\tau}(\Sigma) = \max\left( \Sigma - \tau \, \mathrm{diag}\left( w_j^{k} \right), \, 0 \right) , \tag{18} $$
where $\tau = \mu / (2\lambda_1)$ and $w_j$ is the weight described in Section 2.2; we set $w_j^{k} = 1 / \left( \sigma_j\left( L_i^{k} \right) + \varepsilon \right)$, and since the singular values of $L_i^{k}$ are sorted in descending order, the weights are increasing. In order to reduce the amount of computation, we relocate the similar blocks every $T$ iterations rather than searching for similar blocks in every iteration.
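A minimal sketch of the weighted singular value thresholding step of Equations (17) and (18); for simplicity the weights are computed from the singular values of the grouped data matrix itself, which corresponds to the first inner iteration, whereas later iterations would reuse $\sigma_j(L_i^k)$ as described above. The value of tau is an arbitrary illustrative choice.

```python
import numpy as np

def weighted_svt(X_i, tau, eps=1e-6):
    """Solve the weighted nuclear norm sub-problem of Equation (16) via the
    weighted singular value thresholding of Equations (17)-(18)."""
    U, sigma, Vt = np.linalg.svd(X_i, full_matrices=False)
    w = 1.0 / (sigma + eps)                  # small weights on large singular values
    sigma_shrunk = np.maximum(sigma - tau * w, 0.0)
    return (U * sigma_shrunk) @ Vt           # U diag(sigma_shrunk) V^T

X_i = np.random.default_rng(4).random((36, 45))
L_i = weighted_svt(X_i, tau=0.5)
print(np.linalg.matrix_rank(L_i) <= np.linalg.matrix_rank(X_i))   # rank never grows
```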

3.2. Solving the Sub-Problem of u

Having fixed $x$, $L_i$, and $z$, the optimization problem for $u$ is as follows:
$$ u^{k+1} = \arg\min_{u} \lambda_1 \sum_i \left\| R_i u - L_i^{k} \right\|_2^2 + \frac{\eta_1}{2}\left\| u - x^{k} + a^{k} \right\|_2^2 . \tag{19} $$
Equation (19) has a closed-form solution:
$$ u^{k+1} = \left( \lambda_1 \sum_i R_i^{T} R_i + \eta_1 I \right)^{-1} \left( \lambda_1 \sum_i R_i^{T} L_i^{k} + \eta_1 \left( x^{k} - a^{k} \right) \right) , \tag{20} $$
where $\sum_i R_i^{T} R_i$ is a diagonal matrix in which each entry corresponds to an image pixel position and its value is the number of overlapping image blocks covering that pixel, and $\sum_i R_i^{T} L_i^{k}$ is the block-averaging result, that is, the average of the similar blocks collected for each image block.
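The closed form of Equation (20) amounts to putting the low-rank patch estimates back into the image, normalizing by the per-pixel patch count, and blending with the current estimate. The sketch below shows one way to accumulate the diagonal of $\sum_i R_i^T R_i$ and the back-projection $\sum_i R_i^T L_i$; the data layout (patch positions plus column-wise patch estimates) and the parameter values are illustrative assumptions.

```python
import numpy as np

def update_u(x, patch_groups, lam1, eta1, a, n=6):
    """Closed-form update of Equation (20).  patch_groups is a list of
    (positions, L_i) pairs: positions gives the (row, col) of each patch in
    the group and L_i stores the corresponding patch estimates column-wise."""
    H, W = x.shape
    back_proj = np.zeros((H, W))    # accumulates sum_i R_i^T L_i
    counts = np.zeros((H, W))       # diagonal of sum_i R_i^T R_i

    for positions, L_i in patch_groups:
        for col, (r, c) in enumerate(positions):
            back_proj[r:r + n, c:c + n] += L_i[:, col].reshape(n, n)
            counts[r:r + n, c:c + n] += 1.0

    # elementwise (lam1*counts + eta1)^-1 * (lam1*back_proj + eta1*(x - a))
    return (lam1 * back_proj + eta1 * (x - a)) / (lam1 * counts + eta1)

rng = np.random.default_rng(5)
x = rng.random((32, 32)); a = np.zeros_like(x)
positions = [(0, 0), (3, 4), (10, 10)]
L_i = rng.random((36, len(positions)))
u = update_u(x, [(positions, L_i)], lam1=0.01, eta1=0.01, a=a)
print(u.shape)
```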

3.3. Solving the Sub-Problem of z

Having fixed $x$, $L_i$, and $u$, the optimization problem for $z$ is as follows:
$$ z^{k+1} = \arg\min_{z} \lambda_2 \beta \left\| w_b z \right\|_1 + \frac{\eta_2}{2}\left\| z - Q x^{k} + b^{k} \right\|_2^2 . \tag{21} $$
Equation (21) can be solved according to soft threshold shrinkage [32]:
$$ z^{k+1} = \mathrm{soft}\left( Q x^{k} - b^{k}, \ \lambda_2 \beta w_b / \eta_2 \right) . \tag{22} $$
The soft threshold shrinkage operator is $\mathrm{soft}(x, t) = \mathrm{sgn}(x)\max\left( |x| - t, 0 \right)$. The gradient weight $w_b$ is updated in the following form:
$$ w_b^{k+1} = 1 / \left( \left\| Q x^{k} \right\|_1 + \varepsilon \right) , \tag{23} $$
where $\varepsilon$ is a small positive constant, and the weight $w_b$ is updated according to the gradient at each pixel of the image in the $k$-th iteration.
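A short sketch of the shrinkage step of Equation (22) together with the weight update of Equation (23). Equation (23) is written with an $\ell_1$ term, while the surrounding text describes a per-pixel update; the sketch below uses the element-wise form, which is an assumption on our part, and the parameter values are illustrative.

```python
import numpy as np

def soft(x, t):
    """Soft threshold shrinkage: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def update_z(Qx, b, w_b, lam2, beta, eta2):
    """Equation (22): z = soft(Qx - b, lam2 * beta * w_b / eta2)."""
    return soft(Qx - b, lam2 * beta * w_b / eta2)

def update_w_b(Qx, eps=1e-6):
    """Gradient weight of Equation (23), applied per pixel here."""
    return 1.0 / (np.abs(Qx) + eps)

rng = np.random.default_rng(6)
Qx = rng.standard_normal((64, 64)); b = np.zeros_like(Qx)
z = update_z(Qx, b, w_b=1.0, lam2=0.01, beta=1.0, eta2=0.01)
print(z.shape, update_w_b(Qx).mean())
```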

3.4. Solving the Sub-Problem of x

Having fixed $L_i$, $u$, and $z$, the optimization problem for $x$ is as follows:
$$ x^{k+1} = \arg\min_{x} \frac{1}{2}\left\| y - \Phi x \right\|_2^2 + \frac{\eta_1}{2}\left\| u^{k} - x + a^{k} \right\|_2^2 + \frac{\eta_2}{2}\left\| z^{k} - Q x + b^{k} \right\|_2^2 . \tag{24} $$
Equation (24) has a closed-form solution:
$$ x^{k+1} = \left( \Phi^{T}\Phi + \eta_1 I + \eta_2 Q^{T} Q \right)^{-1} \left( \Phi^{T} y + \eta_1 \left( u^{k} + a^{k} \right) + \eta_2 Q^{T}\left( z^{k} + b^{k} \right) \right) . \tag{25} $$
Since the matrix to be inverted in Equation (25) is large, it is not easy to solve directly. Here, we use the conjugate gradient algorithm [33] to deal with this problem.
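A sketch of solving the linear system of Equation (25) with a plain conjugate gradient iteration, treating $\Phi^{T}\Phi + \eta_1 I + \eta_2 Q^{T}Q$ as a matrix-free symmetric positive definite operator. The random Gaussian $\Phi$ and the simple one-pixel difference operator used for $Q$ here are stand-ins for the partial Fourier matrix and shift operator of the actual algorithm; the sizes and parameters are illustrative.

```python
import numpy as np

def conjugate_gradient(A, b, x0, iters=200, tol=1e-8):
    """Solve A(x) = b for a symmetric positive definite operator A."""
    x, r = x0.copy(), b - A(x0)
    p, rs_old = r.copy(), np.dot(r, r)
    for _ in range(iters):
        Ap = A(p)
        alpha = rs_old / np.dot(p, Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = np.dot(r, r)
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p
        rs_old = rs_new
    return x

rng = np.random.default_rng(7)
N, M = 256, 64
Phi = rng.standard_normal((M, N)) / np.sqrt(M)
Q = np.eye(N) - np.roll(np.eye(N), 1, axis=1)      # simple difference operator
eta1, eta2 = 0.01, 0.01

def A(v):   # (Phi^T Phi + eta1 I + eta2 Q^T Q) v, the system matrix of Equation (25)
    return Phi.T @ (Phi @ v) + eta1 * v + eta2 * (Q.T @ (Q @ v))

y = rng.standard_normal(M); u = rng.standard_normal(N)
a = np.zeros(N); z = np.zeros(N); b_mult = np.zeros(N)
rhs = Phi.T @ y + eta1 * (u + a) + eta2 * (Q.T @ (z + b_mult))
x_new = conjugate_gradient(A, rhs, np.zeros(N))
print(np.linalg.norm(A(x_new) - rhs))              # residual of the linear system
```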
After solving each sub-problem, the multipliers are updated according to Equation (15). The whole process of image reconstruction is summarized in Algorithm 1. A description of the relevant symbols is given in Appendix A.
Algorithm 1: The proposed CS reconstruction algorithm
Input: The measurements $y$ and the sampling matrix $\Phi$
Initialization:
  A traditional CS algorithm (DCT, DWT, etc.) is used to estimate the initial image $x^{1}$;
  Set parameters $\lambda_1, \lambda_2, \alpha, p, \eta_1, \eta_2, K, T$;
  Set the nuclear norm weight $w_j = [1, 1, \dots, 1]^{T}$;
  Set the gradient weight $w_b = 1$;
Outer loop: for $k = 1, 2, \dots, K$ do
  Use the similar block matching strategy to search for and group the similar blocks in the image, obtaining the similar block matrix $X_i$;
  Set $L_i^{1} = X_i$;
  Inner loop: for each block matrix $L_i^{k}$ do
    Update the weight: $w_j^{k} = 1 / \left( \sigma_j\left( L_i^{k} \right) + \varepsilon \right)$;
    Calculate $L_i^{k+1}$ according to Equation (17);
  End for
  Calculate $u^{k+1}$ according to Equation (20);
  Calculate $z^{k+1}$ according to Equation (22) and update the gradient weight $w_b^{k+1}$ according to Equation (23);
  Calculate $x^{k+1}$ according to Equation (25);
  Update the Lagrange multipliers $a^{k+1}, b^{k+1}$ according to Equation (15);
  if $\mathrm{mod}(k, T) = 0$, relocate the similar block positions and update the similar block grouping;
End for
Output: The final reconstructed image $\hat{x} = x^{K}$.

4. Experiments

We compare the proposed algorithm with several mainstream CS algorithms, including TVAL3 [32], TVNLR [25], BM3D-CS [26] and NLR-CS [27]. The comparison algorithms are all based on either the total variation constraint ([25,32]) or the nonlocal redundant structure of the image ([26,27]), and their implementations were obtained from the websites of the respective authors. All experimental results were obtained by running Matlab R2016b on a personal computer equipped with an Intel® Core™ i5 processor and 8 GB of memory under the Windows 10 operating system. We chose eight standard images and two resolution charts from the USC-SIPI image database to test the performance of our algorithm. All the test images are 256 × 256 gray-scale images, as shown in Figure 1. In order to evaluate the reconstruction performance objectively, we chose the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM) as evaluation indexes. PSNR and SSIM are calculated as follows:
$$ \mathrm{MSE} = \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( x(i,j) - y(i,j) \right)^2 , \tag{26} $$
$$ \mathrm{PSNR} = 10 \lg \frac{255^2}{\mathrm{MSE}} , \tag{27} $$
$$ \mathrm{SSIM}(x, y) = \frac{\left( 2\mu_x \mu_y + C_1 \right)\left( 2\sigma_{xy} + C_2 \right)}{\left( \mu_x^2 + \mu_y^2 + C_1 \right)\left( \sigma_x^2 + \sigma_y^2 + C_2 \right)} . \tag{28} $$
In the above formulas, MSE is the mean square error, $x$ is the reference image, $y$ is the reconstructed image, and their size is $M \times N$. A larger PSNR value means that $y$ is closer to $x$, i.e., the image quality is better. $\mu_x$ and $\mu_y$ are the mean values of $x$ and $y$, $\sigma_x$ and $\sigma_y$ are their standard deviations, and $\sigma_{xy}$ is the covariance between the two images. $C_1$ and $C_2$ are small positive constants. The value of SSIM lies between 0 and 1, and the larger the value, the better the image quality.
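For reference, a direct transcription of Equations (26)–(28) is given below; note that the SSIM here uses the global image statistics exactly as written above, whereas practical SSIM implementations are usually computed over local windows, and the constants $C_1$ and $C_2$ are common illustrative choices rather than values stated in this paper.

```python
import numpy as np

def psnr(x, y):
    """PSNR for 8-bit images, Equations (26)-(27)."""
    mse = np.mean((x.astype(float) - y.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(x, y, C1=(0.01 * 255) ** 2, C2=(0.03 * 255) ** 2):
    """Global-statistics SSIM of Equation (28)."""
    x, y = x.astype(float), y.astype(float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov_xy = ((x - mu_x) * (y - mu_y)).mean()
    return ((2 * mu_x * mu_y + C1) * (2 * cov_xy + C2)) / (
        (mu_x ** 2 + mu_y ** 2 + C1) * (var_x + var_y + C2))

rng = np.random.default_rng(8)
ref = rng.integers(0, 256, (256, 256))
rec = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255)   # noisy "reconstruction"
print(psnr(ref, rec), ssim_global(ref, rec))
```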
In order to make full use of the nonlocal self-similarity features in images, it is necessary to select an appropriate block matching strategy. First, we give the PSNR results and reconstruction time of the Monarch image reconstructed by the proposed algorithm. In Figure 2a, we set different image block sizes ($n^2$) and search window sizes ($s^2$). It can be seen that as the image block size increases, the reconstruction time of the proposed algorithm is significantly shortened, while as the search window size increases, the reconstruction time increases roughly linearly; in general, the PSNR of the image does not change significantly. In Figure 2b, we set different numbers of similar blocks ($m$). When the number of similar blocks is moderate, the PSNR improvement of the image reconstructed by the proposed algorithm grows fairly consistently with the increase in reconstruction time. However, once the number of selected similar blocks grows beyond a certain amount ($m = 45$), the two trends become opposite, and when too many similar blocks are selected ($m = 54$), the reconstruction time increases dramatically.

4.1. Parameter Selection

Since the proposed algorithm incorporates bilateral total variation, it is necessary to select an appropriate spatial attenuation term $\alpha$ and filter window size $p$. Figure 3 shows the PSNR results and reconstruction time of the Monarch image reconstructed by the proposed algorithm with the corresponding bilateral parameters. As the spatial attenuation term $\alpha$ increases, the PSNR of the image gradually increases, reaches its peak at $\alpha = 0.7$, then decreases slightly, and drops sharply at $\alpha = 1$. When the filter window size is $p = 2$ or $p = 3$, the reconstruction results of the algorithm are approximately the same, but the final decline of the latter is faster; when $p = 1$, the overall PSNR results are worse than in the former two cases. In addition, we compared the corresponding reconstruction times. The main factor affecting the reconstruction time of the algorithm is the filter window size $p$; changing the spatial attenuation term $\alpha$ makes the reconstruction time fluctuate only slightly. When $p = 1$ and $p = 2$, the reconstruction times are close to each other, with the latter slightly longer, whereas when $p = 3$ the reconstruction time is much longer. A larger filter window size $p$ obviously increases the reconstruction time of the algorithm.
In the experiments, the sampling matrix $\Phi$ is a partial Fourier matrix (a matrix-free realization of such an operator is sketched at the end of this subsection). In the block matching strategy, based on the previous discussion, we select an image block size of $n^2 = 6 \times 6$ and a number of similar blocks of $m = 45$ as a trade-off between PSNR and reconstruction time. In order to reduce the computational complexity, we extract an image block every 5 pixels along the horizontal and vertical directions, and the search window size for block matching is $s^2 = 20 \times 20$. The Lagrange multipliers $a$ and $b$ are initially set to zero matrices, and $K = 100$, $T = 4$, i.e., the total number of iterations is 100 and the similar block positions are relocated every 4 iterations. The initial image is estimated by the same standard DCT algorithm as in NLR-CS. The regularization parameters $\lambda_1$ and $\lambda_2$ are set separately according to the different sampling rates. According to the analysis in Figure 3, we set the spatial attenuation weight $\alpha$ of the bilateral total variation term to 0.7 and the filter window size to $p = 2$. The penalty parameters used in the ADMM iterations are $\eta_1 = \eta_2 = 0.01$. We first present the experimental results for noiseless CS measurements and then report the results for noisy CS measurements.
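The partial Fourier sampling can be realized without forming $\Phi$ explicitly by keeping a random subset of 2-D FFT coefficients; the sketch below is one such matrix-free construction, and the uniformly random mask is an illustrative assumption that may differ from the exact sampling pattern used in the paper.

```python
import numpy as np

def make_partial_fourier(shape, rate, seed=0):
    """Return forward/adjoint operators that keep a random fraction `rate`
    of the 2-D FFT coefficients (a matrix-free partial Fourier Phi)."""
    rng = np.random.default_rng(seed)
    mask = np.zeros(shape, dtype=bool)
    keep = int(rate * mask.size)
    idx = rng.choice(mask.size, keep, replace=False)
    mask.flat[idx] = True

    def forward(x):                     # y = Phi x
        return np.fft.fft2(x, norm="ortho")[mask]

    def adjoint(y):                     # Phi^T y (zero-filled inverse FFT)
        F = np.zeros(shape, dtype=complex)
        F[mask] = y
        return np.real(np.fft.ifft2(F, norm="ortho"))

    return forward, adjoint

fwd, adj = make_partial_fourier((256, 256), rate=0.10)
img = np.random.default_rng(9).random((256, 256))
y = fwd(img)
print(y.shape, adj(y).shape)    # about 10% of the coefficients are kept
```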

4.2. Noiseless CS Measurements

Table 1 shows the PSNR and SSIM of the proposed algorithm and the comparison algorithms at different sampling rates (5%, 10%, 15%, 20%, 25%). It can be seen from Table 1 that at lower sampling rates the algorithms based on the total variation model with the sparse gradient prior, namely TVAL3 and TVNLR, perform worse than the other algorithms. In contrast, BM3D-CS and NLR-CS can exploit the nonlocal self-similarity prior information of the image, so even at a very low sampling rate the PSNR values achieved by these algorithms remain high. The proposed algorithm, owing to the joint total variation and nonlocal rank minimization, achieves better reconstruction performance than NLR-CS. For example, at a 10% sampling rate, the PSNR gains over NLR-CS for the test images in Table 1 are 1.86 dB, 1.38 dB, 1.44 dB, 2.08 dB, 2.06 dB, 1.96 dB, 1.48 dB, 1.22 dB, 2.17 dB and 2.39 dB, respectively. For a more intuitive comparison, we selected the Cameraman and Peppers images to draw the corresponding PSNR curves; the results are shown in Figure 4. It can be seen that our proposed algorithm achieves a better PSNR at every sampling rate. When reconstructing the Testpat1 image, we find that the PSNR of BM3D-CS is better than that of NLR-CS and the proposed algorithm; we speculate that the initial image estimated by the standard DCT algorithm affects the performance of the proposed algorithm. It can also be seen that at lower sampling rates the SSIM values of TVAL3 and TVNLR are relatively low, while those of the nonlocal self-similarity algorithms are relatively high. Among them, the SSIM of the proposed algorithm is closest to 1, which indicates that the reconstructed image quality is the best.
In order to compare the reconstructed image quality subjectively, Figure 5 shows the reconstruction results of five test images (Boats, Cameraman, House, Peppers, Testpat2) by the proposed algorithm and the comparison algorithms at a 10% sampling rate. These images contain many nonlocally similar redundant structures, which is why we chose them for comparison. The comparison shows that at a low sampling rate the details in the TVAL3 and TVNLR results are seriously lost: for the House image, the texture of the house becomes very blurred and the bricks and tiles on the eaves cannot be distinguished. The image reconstructed by BM3D-CS, which exploits the nonlocal similar structure, is better, but there are still some artifacts and the details are also blurred. NLR-CS and the proposed algorithm look clearer visually. Because the proposed algorithm adds bilateral total variation as a constraint to the nonlocal low-rank sparse model, it plays an important role in maintaining edge texture compared with NLR-CS. Comparing the brick and tile details in the middle area on the right side of the image, we can see that the texture from the proposed algorithm is richer, while the NLR-CS result is too smooth. Similarly, for the Boats image, the local detail image shows that the NLR-CS reconstruction is so over-smoothed that the rope details are seriously lost, while the details from the proposed algorithm look better.
In order to compare the texture details of the reconstructed images more intuitively, we use the DCT to separate the high and low frequency components of the test images. Figure 6 shows the comparison of the high frequency details of the reconstructed Cameraman and Peppers images. It can be seen from Figure 6 that the image restored by BM3D-CS contains many artifacts, which interfere with the high frequency components and cause a serious loss of details, while the high frequency details of TVAL3 and TVNLR are obviously fewer. Compared with NLR-CS, the proposed algorithm reconstructs the high frequency profile more clearly and more texture details can be found. Figure 7 shows the relative errors and SSIM results of the high frequency components of the Cameraman image restored by the different algorithms at different sampling rates (5%, 10%, 15%, 20%, 25%). In Figure 7a, the relative errors of TVAL3, TVNLR and BM3D-CS are relatively large at low sampling rates but decrease greatly as the sampling rate increases. The relative errors of NLR-CS are very low from the beginning; as the sampling rate increases they decrease only slightly, but they remain among the lowest. The downward trend of our proposed algorithm is similar to that of NLR-CS, and its relative errors are lower. The SSIM curves in Figure 7b show a consistent trend, but the result of the proposed algorithm is always the best. It is worth noting that in both Figure 7a,b the recovery results of BM3D-CS are relatively poor; as shown in Figure 6, the high frequency details of this algorithm are seriously damaged by the interference of artifacts. The above comparisons show that the proposed algorithm retains more details of the high frequency components of the restored image.

4.3. Noisy CS Measurements

We also test the robustness of the proposed algorithm to noisy measurements by adding Gaussian noise to the CS measurements; the standard deviation of the noise is varied to produce signal-to-noise ratios (SNRs) between 10 dB and 30 dB. In order to compare the results reasonably, we choose a 20% sampling rate to evaluate the algorithms. Table 2 shows the PSNR results of the test images reconstructed by the different comparison algorithms under different SNR conditions. As can be seen from Table 2, the situation differs from that of noiseless CS measurements: with noisy CS measurements, the anti-noise performance of the algorithms based on the total variation model is obviously inferior to that of the algorithms based on the nonlocal self-similarity prior. For a more intuitive comparison, we selected the Boats and Parrots images to draw the corresponding PSNR curves; the results are shown in Figure 8. It can be seen that the reconstruction performance of every comparison algorithm is relatively poor at low SNR, but the reconstruction algorithms based on the nonlocal self-similarity prior are much better than those based on the sparse gradient prior. At low SNR, the proposed algorithm shows little difference from BM3D-CS and NLR-CS. As the SNR improves, the PSNR of TVAL3 and TVNLR increases significantly, and at higher SNR their PSNR improvement becomes gentle. In contrast, the reconstruction performance of the proposed algorithm and NLR-CS improves steadily, and there is a clear gap between the proposed algorithm and BM3D-CS at higher SNR. Compared with NLR-CS, the proposed algorithm achieves better reconstruction results at all SNR levels, although the PSNR gain is not large. Figure 9 shows the subjective visual comparison of the reconstructions of the Boats and Parrots images by each algorithm from noisy measurements; here, a noise environment with an SNR of 25 dB is selected. It can be seen from Figure 9 that even though the sampling rate is increased, the quality of the image restoration is much worse than that for noiseless measurements. From the local detail images, the reconstruction algorithms based on the nonlocal self-similarity prior are better, although there are still some artifacts in BM3D-CS. Compared with NLR-CS, the texture details of the proposed algorithm are richer, for example the details of the parrot's head feathers. The comparison of the PSNR and subjective quality results shows that the proposed algorithm is robust to noisy measurements.

4.4. Reconstruction Time

Figure 10 shows the average reconstruction time required by the comparison algorithms to restore the ten test images at different sampling rates (5%, 10%, 15%, 20%, 25%). All the results were compared in the same iterative environment. From Figure 10, the reconstruction time of TVAL3, which is based on the total variation model, is quite short because it uses the steepest descent method; fast reconstruction is the main advantage of this algorithm. The reconstruction time of BM3D-CS is also relatively short because the processing of its similar blocks is carried out in the wavelet transform domain.
As the nonlocal filtering operation takes a lot of time in each iteration, the reconstruction time of TVNLR is the longest; compared with TVAL3, the computational complexity of the model increases because of the consideration of nonlocal structure information and the searching and matching of similar blocks. The reconstruction times of NLR-CS and the proposed algorithm are relatively long because the low-rank approximation of the similar blocks requires singular value decomposition of the matrix, which further increases the computational complexity. Although the reconstruction time of the proposed algorithm is a little longer, the reconstruction quality is improved significantly by adding the bilateral total variation constraint for a joint solution. Compared with NLR-CS, the average increase in time is only about 3 s, so the cost of the performance gain is acceptable.

5. Discussion and Conclusions

In this paper, we proposed a CS image reconstruction algorithm that combines bilateral total variation and weighted nuclear norm low-rank regularization. In this algorithm, the bilateral total variation constraint is added to the reconstruction model based on nonlocal low-rank regularization as a global information prior, and the texture details of the reconstructed image are enhanced by using the bilateral total variation operator to maintain the edges of the image. Experimental results on standard test images demonstrate that the proposed algorithm works well. Compared with traditional algorithms that use only the total variation constraint or consider only the nonlocal structure of the image, the proposed algorithm obtains better reconstruction results both subjectively and objectively and retains more detail information of the image, at the cost of a slight increase in reconstruction time, which shows the effectiveness of the proposed algorithm.

Author Contributions

Concept and structure of this paper, K.Z.; resources, Y.Q. and H.Z.; writing—original draft preparation, K.Z.; writing—review and editing, Y.Q., H.R. and Y.H. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (NSFC) (61675184 and 61405178).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

Table A1. Description of relevant symbols in Algorithm 1.
Symbol | Description | Symbol | Description
CS | compressive sensing | Φ | sampling matrix
DCT | discrete cosine transform | K | ADMM iterations
DWT | discrete wavelet transform | T | relocate similar blocks threshold
y | measured value | w_j | nuclear norm weight
x | reconstructed image | w_b | gradient weight
λ1 | regularization parameter | X_i | similar block matrix
λ2 | regularization parameter | L_i | low-rank matrix
α | spatial attenuation weight | u | auxiliary variable
p | filter window size | z | auxiliary variable
η1 | penalty parameter | a | Lagrange multiplier
η2 | penalty parameter | b | Lagrange multiplier

References

  1. Donoho, D.L. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306.
  2. Candès, E.J.; Romberg, J.; Tao, T. Robust Uncertainty Principles: Exact Signal Reconstruction from Highly Incomplete Frequency Information. IEEE Trans. Inf. Theory 2006.
  3. Candès, E.J.; Wakin, M.B. An Introduction to Compressive Sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
  4. Sun, M.J.; Edgar, M.P.; Gibson, G.M.; Sun, B.; Radwell, N.; Lamb, R.; Padgett, M.J. Single-pixel three-dimensional imaging with time-based depth resolution. Nat. Comm. 2016, 7, 12010.
  5. Zhang, Y.; Edgar, M.P.; Sun, B.; Radwell, N.; Gibson, G.M.; Padgett, M.J. 3D single-pixel video. J. Opt. 2016, 18, 035203.
  6. Zhang, Y.; Xu, B.; Zhou, N. A novel image compression–encryption hybrid algorithm based on the analysis sparse representation. Opt. Comm. 2017, 392, 223–233.
  7. Rodriguez, A.D.; Clemente, P.; Tajahuerce, E.; Lancis, J. Dual-mode optical microscope based on single-pixel imaging. Optics Lasers Eng. 2016, 82, 87–94.
  8. Shi, D.; Yin, K.; Huang, J.; Yuan, K.; Zhu, W.; Xie, C.; Liu, D.; Wang, Y. Fast tracking of moving objects using single-pixel imaging. Opt. Comm. 2019, 440, 155–162.
  9. Martínez-León, L.; Clemente, P.; Mori, Y.; Climent, V.; Tajahuerce, E. Single-pixel digital holography with phase-encoded illumination. Opt. Express 2017, 25, 4975–4984.
  10. Amitonova, L.V.; Boer, J.F.D. Compressive imaging through a multimode fiber. Opt. Lett. 2018, 43, 5427.
  11. Lan, M.; Guan, D.; Gao, L.; Li, J.; Yu, S.; Wu, G. Robust compressive multimode fiber imaging against bending with enhanced depth of field. Opt. Express 2019, 27, 12957–12962.
  12. Cohen, A.; Dahmen, W.; Devore, R. Orthogonal Matching Pursuit under the Restricted Isometry Property. Constr. Approx. 2017, 45, 113–127.
  13. Han, X.; Zhao, G.; Li, X.; Shu, T.; Yu, W. Sparse signal reconstruction via expanded subspace pursuit. J. Appl. Remote Sens. 2019, 13, 1.
  14. Tirer, T.; Giryes, R. Generalizing CoSaMP to Signals from a Union of Low Dimensional Linear Subspaces. Appl. Comput. Harmonic Anal. 2017.
  15. Zeng, K.; Erus, G.; Sotiras, A.; Shinohara, R.T.; Davatzikos, C. Abnormality Detection via Iterative Deformable Registration and Basis-Pursuit Decomposition. IEEE Trans. Med. Imaging 2016, 35, 1937–1951.
  16. Bayram, I. On the convergence of the iterative shrinkage/thresholding algorithm with a weakly convex penalty. IEEE Trans. Signal Process. 2016, 64, 1597–1608.
  17. Gong, B.; Liu, W.; Tang, T.; Zhao, W.; Zhou, T. An Efficient Gradient Projection Method for Stochastic Optimal Control Problems. SIAM J. Num. Anal. 2017, 55, 2982–3005.
  18. Vishnevskiy, V.; Gass, T.; Szekely, G.; Tanner, C.; Goksel, O. Isotropic Total Variation Regularization of Displacements in Parametric Image Registration. IEEE Trans. Med. Imaging 2017, 36, 385–395.
  19. Chen, H.; Qin, Y.; Ren, H.; Chang, L.; Zheng, H. Adaptive weighted high frequency iterative algorithm for fractional-order total variation with nonlocal regularization for image reconstruction. Electronics 2020, 9, 1103.
  20. Candès, E.J.; Wakin, M.B.; Boyd, S.P. Enhancing Sparsity by Reweighted l1 Minimization. J. Fourier Anal. Appl. 2008, 14, 877–905.
  21. Farsiu, S.; Robinson, M.D.; Elad, M.; Milanfar, P. Fast and robust multiframe super resolution. IEEE Trans. Image Process. 2004, 13, 1327–1344.
  22. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65.
  23. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K.O. Image restoration by sparse 3D transform-domain collaborative filtering. In Proceedings of the Image Processing: Algorithms and Systems VI, San Jose, CA, USA, 28 January 2008.
  24. Dong, W.; Zhang, L.; Shi, G. Nonlocally Centralized Sparse Representation for Image Restoration. IEEE Trans. Image Process. 2012, 22.
  25. Zhang, J.; Liu, S.; Zhao, D.; Xiong, R.; Ma, S. Improved total variation based image compressive sensing recovery by nonlocal regularization. In Proceedings of the 2013 IEEE International Symposium on Circuits and Systems (ISCAS), Beijing, China, 19–23 May 2013; pp. 2836–2839.
  26. Egiazarian, K.; Foi, A.; Katkovnik, V. Compressed Sensing Image Reconstruction via Recursive Spatially Adaptive Filtering. In Proceedings of the IEEE Conference on Image Processing, San Antonio, TX, USA, 17–19 September 2007.
  27. Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive Sensing via Nonlocal Low-Rank Regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632.
  28. Gu, S.; Zhang, L.; Zuo, W.; Feng, X. Weighted Nuclear Norm Minimization with Application to Image Denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014.
  29. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  30. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the IEEE Conference on Computer Vision, Bombay, India, 7 January 1998.
  31. Cai, J.F.; Candès, E.J.; Shen, Z. A Singular Value Thresholding Algorithm for Matrix Completion. SIAM J. Optim. 2010, 20, 1956–1982.
  32. Li, C.; Yin, W.; Jiang, H.; Zhang, Y. An efficient augmented Lagrangian method with applications to total variation minimization. Computat. Optim. Appl. 2013, 56, 507–530.
  33. Nazareth, J.L. Conjugate gradient method. WIREs Computat. Stat. 2009, 1, 348–353.
Figure 1. Standard test images used in experiments. From left to right, first row: Barbara, Boats, Cameraman, Foreman, House; second row: Peppers, Monarch, Parrots, Testpat1, Testpat2.
Figure 2. The PSNR (dB) and reconstruction time (s) of the Monarch image reconstructed by the proposed algorithm with different (a) block sizes ($n^2$) and search window sizes ($s^2$) and (b) numbers of similar blocks ($m$).
Figure 3. The PSNR (dB) and reconstruction time (s) of the Monarch image reconstructed by the proposed algorithm with different spatial attenuation terms $\alpha$ and filter window sizes $p$.
Figure 4. The PSNR (dB) curves of the (a) Cameraman and (b) Peppers images reconstructed by different algorithms at different sampling rates.
Figure 5. Comparison of reconstruction results of five test images with (a) TVAL3, (b) TVNLR, (c) BM3D-CS, (d) NLR-CS, (e) Proposed algorithm at 10% sampling rate. The lower right or left corner of the image is the selected local detail image.
Figure 6. Comparison of high frequency details of the Cameraman (top) and Peppers (bottom) images with (a) TVAL3, (b) TVNLR, (c) BM3D-CS, (d) NLR-CS, (e) Proposed algorithm at 10% sampling rate.
Figure 7. The (a) relative error (dB) and (b) SSIM of high frequency components of the Cameraman image at different sampling rates reconstructed by different algorithms.
Figure 8. The PSNR (dB) curves of the (a) Boats and (b) Parrots images reconstructed at 20% sampling rate from noisy measurements.
Figure 9. Comparison of reconstruction results of the Boats and Parrots images with (a) TVAL3, (b) TVNLR, (c) BM3D-CS, (d) NLR-CS, (e) Proposed algorithm at 20% sampling rate from noisy measurements (SNR = 25 dB). The lower right corner of the image is the selected local detail image.
Figure 10. The average reconstruction time (s) of ten test images reconstructed by different algorithms at different sampling rates. The results are obtained by running Matlab R2016b on a personal computer with the Windows 10 operating system, an Intel® Core™ i5 processor and 8 GB memory.
Table 1. The PSNR (dB, left) and SSIM (right) of test images at different sampling rates reconstructed by the proposed algorithm and comparison algorithms. The best performance algorithm is shown in bold.
Image      Method    0.05         0.10         0.15         0.20         0.25
Barbara    TVAL3     19.95|0.549  21.97|0.658  23.98|0.743  24.79|0.776  25.36|0.812
           TVNLR     21.88|0.549  22.56|0.663  24.07|0.750  25.36|0.826  27.54|0.883
           BM3D-CS   23.41|0.621  24.31|0.739  27.52|0.821  30.94|0.904  33.64|0.935
           NLR-CS    25.99|0.801  27.34|0.811  30.69|0.905  33.85|0.947  37.87|0.973
           Proposed  28.16|0.836  29.20|0.863  32.93|0.936  36.45|0.966  39.57|0.979
Boats      TVAL3     22.06|0.655  23.36|0.660  25.37|0.762  27.03|0.799  28.91|0.838
           TVNLR     24.31|0.660  25.15|0.673  26.90|0.769  28.07|0.806  29.60|0.838
           BM3D-CS   25.04|0.669  27.52|0.795  29.89|0.826  32.88|0.909  34.93|0.943
           NLR-CS    27.18|0.813  28.64|0.815  31.83|0.903  35.13|0.946  38.63|0.970
           Proposed  29.64|0.845  30.02|0.853  34.35|0.935  37.87|0.965  40.47|0.978
Cameraman  TVAL3     15.73|0.508  19.61|0.584  21.63|0.666  22.68|0.739  24.85|0.813
           TVNLR     17.23|0.537  21.26|0.645  22.40|0.687  24.98|0.771  27.24|0.829
           BM3D-CS   22.67|0.656  24.56|0.742  26.36|0.790  29.12|0.879  32.04|0.908
           NLR-CS    25.35|0.785  26.72|0.792  29.72|0.870  32.92|0.923  36.60|0.954
           Proposed  27.72|0.812  28.16|0.820  31.42|0.895  35.15|0.942  38.71|0.964
Foreman    TVAL3     15.17|0.770  17.64|0.854  20.86|0.874  22.84|0.890  24.03|0.905
           TVNLR     16.75|0.785  18.85|0.867  22.49|0.881  25.75|0.904  27.79|0.918
           BM3D-CS   29.17|0.813  31.41|0.869  34.71|0.896  35.82|0.915  37.71|0.929
           NLR-CS    32.49|0.877  33.61|0.898  37.27|0.927  39.68|0.957  41.34|0.970
           Proposed  34.65|0.909  35.69|0.919  39.35|0.954  41.95|0.972  43.02|0.981
House      TVAL3     16.44|0.599  21.91|0.734  22.51|0.775  26.21|0.808  27.43|0.826
           TVNLR     20.34|0.627  24.54|0.762  25.12|0.776  28.14|0.808  29.67|0.831
           BM3D-CS   27.31|0.726  29.57|0.796  33.13|0.874  35.12|0.903  37.83|0.916
           NLR-CS    31.27|0.840  32.71|0.857  36.30|0.914  39.22|0.952  40.64|0.960
           Proposed  33.81|0.867  34.77|0.877  38.23|0.937  40.58|0.962  42.47|0.974
Peppers    TVAL3     19.41|0.576  20.81|0.684  23.58|0.763  26.54|0.808  27.38|0.833
           TVNLR     19.95|0.585  21.13|0.690  24.54|0.770  27.67|0.805  28.89|0.838
           BM3D-CS   25.17|0.701  26.07|0.771  28.57|0.836  29.55|0.863  30.32|0.882
           NLR-CS    25.63|0.743  26.86|0.776  29.33|0.852  31.57|0.877  32.80|0.895
           Proposed  27.34|0.798  28.82|0.818  31.49|0.871  33.51|0.901  35.24|0.923
Monarch    TVAL3     17.59|0.535  19.36|0.675  24.91|0.787  26.77|0.827  27.65|0.859
           TVNLR     18.59|0.543  22.02|0.695  25.57|0.786  27.71|0.844  29.39|0.882
           BM3D-CS   22.89|0.701  25.39|0.806  27.10|0.855  30.59|0.905  33.96|0.956
           NLR-CS    24.92|0.807  26.48|0.846  28.61|0.896  32.47|0.945  37.10|0.972
           Proposed  26.45|0.843  27.96|0.875  30.82|0.928  35.18|0.964  39.43|0.980
Parrots    TVAL3     21.57|0.713  22.69|0.736  25.39|0.785  26.93|0.849  27.84|0.893
           TVNLR     22.22|0.714  24.74|0.721  26.48|0.785  27.95|0.854  28.12|0.894
           BM3D-CS   25.95|0.769  27.81|0.842  30.94|0.877  32.14|0.899  33.43|0.919
           NLR-CS    29.05|0.852  30.88|0.865  33.60|0.919  37.13|0.951  40.21|0.969
           Proposed  31.29|0.875  32.10|0.887  36.45|0.941  39.50|0.964  41.43|0.974
Testpat1   TVAL3     7.90|0.486   12.07|0.583  14.85|0.667  17.23|0.736  20.55|0.805
           TVNLR     10.23|0.524  14.06|0.617  17.01|0.749  19.31|0.796  22.34|0.836
           BM3D-CS   17.69|0.739  21.33|0.784  23.54|0.854  25.50|0.901  27.01|0.936
           NLR-CS    13.68|0.534  16.45|0.674  18.76|0.770  21.67|0.822  23.83|0.852
           Proposed  15.41|0.647  18.62|0.758  20.70|0.816  24.65|0.875  26.55|0.910
Testpat2   TVAL3     7.17|0.468   12.40|0.664  13.55|0.686  17.96|0.776  19.32|0.795
           TVNLR     10.26|0.504  14.74|0.694  16.52|0.724  20.92|0.805  24.81|0.838
           BM3D-CS   16.62|0.633  18.77|0.772  23.98|0.814  26.11|0.858  28.99|0.892
           NLR-CS    18.78|0.767  21.64|0.803  24.64|0.831  27.78|0.909  30.49|0.916
           Proposed  22.64|0.813  24.03|0.839  26.88|0.868  30.95|0.928  33.03|0.956
Table 2. The PSNR (dB) of test images reconstructed at 20% sampling rate from noisy measurements (SNR = 10, 15, 20, 25, 30 dB). The best performance algorithm is shown in bold.
Image      Method    SNR = 10  SNR = 15  SNR = 20  SNR = 25  SNR = 30
Barbara    TVAL3     4.09      9.40      17.11     21.35     23.80
           TVNLR     5.27      12.54     18.57     22.38     24.79
           BM3D-CS   15.91     19.96     22.82     25.32     26.92
           NLR-CS    16.20     20.35     23.90     26.64     30.06
           Proposed  16.60     21.08     25.24     29.15     32.96
Boats      TVAL3     3.47      12.71     15.62     21.20     25.81
           TVNLR     4.52      13.65     17.76     22.82     26.31
           BM3D-CS   14.98     19.73     24.77     27.60     29.09
           NLR-CS    16.02     20.98     25.30     28.75     31.51
           Proposed  16.40     22.10     26.43     29.32     33.06
Cameraman  TVAL3     2.96      6.69      11.36     17.91     21.55
           TVNLR     3.88      8.01      14.67     18.70     23.67
           BM3D-CS   15.09     19.57     23.15     26.41     27.49
           NLR-CS    16.24     20.57     24.44     27.49     29.52
           Proposed  16.66     21.27     25.48     29.10     32.29
Foreman    TVAL3     2.44      10.16     17.40     19.67     21.48
           TVNLR     3.71      11.93     20.33     22.51     24.21
           BM3D-CS   14.58     19.63     24.33     28.41     31.89
           NLR-CS    14.94     19.77     24.47     28.89     32.74
           Proposed  15.26     20.28     25.10     29.68     33.88
House      TVAL3     3.82      10.40     16.37     22.39     25.75
           TVNLR     5.15      12.61     20.65     25.06     27.78
           BM3D-CS   15.69     19.62     24.19     29.07     30.62
           NLR-CS    15.92     20.62     25.16     29.34     32.89
           Proposed  16.30     21.23     25.94     30.35     34.27
Peppers    TVAL3     2.78      8.36      14.63     16.16     23.48
           TVNLR     2.98      10.05     16.07     18.35     26.96
           BM3D-CS   15.41     19.16     23.35     27.51     28.33
           NLR-CS    16.39     20.75     24.58     28.47     30.26
           Proposed  16.80     21.42     25.58     29.09     31.89
Monarch    TVAL3     3.74      10.22     18.09     23.15     25.94
           TVNLR     4.66      11.49     21.16     24.26     26.42
           BM3D-CS   15.86     20.57     24.03     26.58     28.08
           NLR-CS    16.53     21.37     24.52     28.39     29.90
           Proposed  16.99     21.43     25.51     29.31     32.97
Parrots    TVAL3     2.86      13.87     20.91     24.85     25.93
           TVNLR     4.82      16.47     21.76     25.71     26.52
           BM3D-CS   15.14     19.83     24.80     27.66     29.19
           NLR-CS    16.19     20.21     25.67     29.04     32.14
           Proposed  16.57     21.41     26.95     30.14     33.87
Testpat1   TVAL3     −2.18     2.67      6.48      12.55     16.59
           TVNLR     −1.65     3.97      7.97      13.75     17.89
           BM3D-CS   12.87     17.03     21.12     23.58     24.79
           NLR-CS    11.45     14.85     17.48     19.09     20.78
           Proposed  11.80     15.59     18.92     21.51     23.22
Testpat2   TVAL3     −4.77     2.93      7.10      9.88      16.78
           TVNLR     −2.78     3.12      7.84      13.68     19.10
           BM3D-CS   10.41     14.29     18.92     22.36     25.17
           NLR-CS    10.94     15.07     19.99     23.11     26.64
           Proposed  11.18     15.60     20.24     24.43     28.94
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
