Image Denoising Using a Compressive Sensing Approach Based on Regularization Constraints

In remote sensing applications and medical imaging, one of the key points is the acquisition, real-time preprocessing and storage of information. Due to the large amount of information present in the form of images or videos, compression of these data is necessary. Compressed sensing is an efficient technique to meet this challenge. It consists of acquiring a signal, assuming that it can have a sparse representation, using a minimum number of nonadaptive linear measurements. After this compressed sensing process, a reconstruction of the original signal must be performed at the receiver. Reconstruction techniques are often unable to preserve the texture of the image and tend to smooth out its details. To overcome this problem, we propose, in this work, a compressed sensing reconstruction method that combines total variation regularization and a nonlocal self-similarity constraint. The optimization is performed with an augmented Lagrangian, which avoids the difficulties raised by the nonlinearity and nondifferentiability of the regularization terms. The proposed algorithm, called denoising-compressed sensing by regularization (DCSR), performs not only image reconstruction but also denoising. To evaluate the performance of the proposed algorithm, we compare it with state-of-the-art methods, such as Nesterov's algorithm, group-based sparse representation and wavelet-based methods, in terms of denoising and preservation of edges, texture and image details, as well as computational complexity. Our approach achieves a gain of up to 25% in denoising efficiency and visual quality using two metrics: peak signal-to-noise ratio (PSNR) and structural similarity (SSIM).


Introduction
Compressed sensing (CS) has already attracted great interest in various fields. Examples include medical imaging [1,2], communication systems [3][4][5][6], remote sensing [7], reconstruction algorithm design [8], image storage in databases [9], etc. Compressed sensing provides an alternative to Shannon sampling that reduces the number of samples and/or the transmission/storage costs. Other approaches also address this issue, such as random sampling [10]. Compressed sensing recovery is a linear optimization problem. The most common CS recovery algorithms exploit the prior knowledge that a natural image is sparse in certain domains, such as the wavelet domain, where simple and efficient noise reduction is possible [11][12][13][14][15], or the discrete-gradient domain, which we develop in this work.
In real life, images are usually noisy, and the noise, being random, is unknown. This noise can have many origins: poor weather conditions (wind, haze, fog, mist, . . . ), light fluctuations, the electronic image sensor of a digital camera, the conditions under which the image was acquired or simply the manner in which the image was stored and the techniques used to compress it. Therefore, denoising is an essential preprocessing step to recover improved image quality.
We can consider the problem of image recovery as an inverse problem because the goal is to reconstruct an image close to the original from a degraded image while respecting two constraints: optimized image quality and speed of execution. Our strategy fits into this framework, which consists in recovering a quality image (a debatable notion that is imprecise and depends not only on objective criteria but also on the "eye" of the observer) in a relatively short time.
The regularization of an inverse problem corresponds to the idea that the data alone do not allow obtaining an acceptable solution and that it is, thus, necessary to introduce a priori information on the regularity of the image to be estimated, reconstructed or recovered.
Many inverse problem optimization approaches for image denoising have been proposed in the literature. Some are based on deep learning and, more precisely, on the deep generative network [16], and others are based on models [17].
In particular, the total variation (TV)-based approach has been one of the most popular and successfully applied approaches; for example, Chambolle [18] introduced the dual approach in the unconstrained real case. Subsequently, Beck and Teboulle [19] presented a fast computational method based on a gradient optimization approach to solve the TV-regularized problem. More recently, other denoising approaches have been proposed, such as a remote sensing image denoising method via low-rank tensor approximation [20], which can be formulated as a generative Bayesian model. Another method, for regularizing nonuniformly sampled data based on least-squares spectral analysis [21], can be applied to nonstationary signals by introducing classical sliding windowing, as was the case for processing marine seismic data. Other approaches to denoising hyperspectral images [22,23] combine total variation regularization and low-rank tensor decomposition.
However, since the total variation model favors piecewise constant image structures, it tends to oversmooth the fine details of an image. To overcome this intrinsic drawback, we introduce a nonlocal self-similarity constraint as a complementary regularization, since nonlocal self-similarity can help restore high-quality images. To make our algorithm robust, an augmented Lagrangian method is used to efficiently solve the resulting inverse problem.
The remainder of this paper is organized as follows. In Section 2, the basics of the methods used for image restoration are presented. The image denoising tools considered in the proposed framework are described in Section 3. Section 4 presents our proposed algorithm, called DCSR. The experimental results from processing the Lena, Barbara and Cameraman images, available in the public and royalty-free database https://ccia.ugr.es/cvg/dbimagenes/index.php (accessed on 1 June 2021), and the comparison with competing methods are shown in Section 5. The results are discussed in Section 6. Finally, conclusions are provided in Section 7.

Compressive Sensing
Compressive sensing, or compressed sensing [24,25], allows acquiring and efficiently reconstructing a signal by solving underdetermined linear systems. This technique is based on the principle that, through optimization, the sparsity of a signal can be exploited to recover it from far fewer samples than required by the Nyquist-Shannon condition, which was, until recently, considered unavoidable.
Compressive sensing is based on three conditions: signal sparsity, the measurement matrix's incoherence and the reconstruction algorithm's robustness.
Let us assume a real signal x ∈ R^N possessing a sparse representation in some transform domain, with sparsity K (K ≪ N). Then, the signal can be written as follows:

x = ψu,

where ψ = (ψ_1, ψ_2, . . . , ψ_N) is the basis matrix, and u ∈ R^(N×1) is a weighted N-dimensional vector with at most K nonzero coefficients. Using φ as a sensing matrix of dimension M × N with K < M ≪ N, the measurement vector is acquired according to the following linear model:

f = φx = φψu = Au,

where A = φψ is an M × N matrix that verifies the restricted isometry condition of order K.
The compressed sensing recovery of x from f can be obtained via l_1 norm minimization as the following optimization problem [26]:

min_u ||u||_1 subject to f = Au.

More often, when f is contaminated by noise, the equality constraint is relaxed:

min_u ||u||_1 subject to ||Au − f||_2 ≤ ε, (6)

where ε > 0 is the noise level. For an appropriate scalar weight α, we can obtain the following unconstrained variant of (6):

min_u α||u||_1 + (1/2)||Au − f||_2^2. (7)

Problems (6) and (7) are equivalent in the sense that solving one determines the parameter of the other such that both provide the same solution. The signal x can be reconstructed by solving the l_1 norm minimization problem under the condition that x is sufficiently sparse and the sensing matrix φ is incoherent with the orthogonal basis ψ.
Several iterative reconstruction algorithms solve the denoising problem in CS recovery and obtain high performance on natural images, such as orthogonal matching pursuit [27], approximate message passing (AMP) [28] and denoising-based AMP [29], an extension of AMP.
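As a minimal, self-contained illustration of such l_1-based recovery (a proximal-gradient sketch of problem (7), not the cited OMP/AMP implementations), the following example recovers a sparse vector from underdetermined measurements; the dimensions, sparse signal and weight α are illustrative assumptions:

```python
import numpy as np

def soft_threshold(v, t):
    """Component-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, f, alpha, n_iter=2000):
    """Iterative shrinkage-thresholding for min_u alpha*||u||_1 + 0.5*||A u - f||_2^2.
    Step size 1/L with L = ||A||_2^2, the Lipschitz constant of the gradient."""
    L = np.linalg.norm(A, 2) ** 2
    u = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ u - f)          # gradient of the fidelity term
        u = soft_threshold(u - grad / L, alpha / L)
    return u

# Recover a 3-sparse vector from M = 32 < N = 64 random measurements
rng = np.random.default_rng(0)
N, M = 64, 32
u_true = np.zeros(N)
u_true[[3, 17, 40]] = [1.0, -2.0, 1.5]
A = rng.standard_normal((M, N)) / np.sqrt(M)
f = A @ u_true
u_hat = ista(A, f, alpha=0.01)
```

With noiseless measurements and a small α, the recovered vector is close to the true sparse signal, illustrating why far fewer than N samples suffice.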

Augmented Lagrangian
The augmented Lagrangian method, originally known as the method of multipliers [30], combines the Lagrangian function and a quadratic penalty term. It is applied to iteratively solve a constrained optimization problem of the form

min_u F(u) subject to Hu = g, (8)

where u ∈ R^N, g ∈ R^M, F is the objective function and H is a matrix of dimension M × N.
Applied to Equation (8), the augmented Lagrangian can be expressed as follows:

L_ρ(u, λ) = F(u) − λ^T(Hu − g) + (ρ/2)||Hu − g||_2^2,

where λ ∈ R^M is the vector of Lagrange multipliers, and ρ > 0 is the augmented Lagrangian penalty parameter for the quadratic infeasibility term [31]. We recall in Algorithm 1 the so-called augmented Lagrangian method: it searches for the optimal solution, providing the signal u, by alternately updating u and the Lagrange multiplier λ. The alternating iteration keeps one vector fixed and updates the other successively until the stopping criterion is satisfied.
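The alternating update of u and λ can be sketched as follows, assuming a toy quadratic objective F(u) = ½||u||_2^2 so that the inner minimization has a closed form; the penalty ρ and iteration count are illustrative:

```python
import numpy as np

def augmented_lagrangian(H, g, rho=10.0, n_iter=300):
    """Method-of-multipliers sketch for  min 0.5*||u||_2^2  s.t.  H u = g.
    u-step: minimize L_rho(u, lam) in u (closed form for this quadratic F);
    lam-step: lam <- lam - rho * (H u - g)."""
    M, N = H.shape
    lam = np.zeros(M)
    u = np.zeros(N)
    for _ in range(n_iter):
        # dL/du = u - H^T lam + rho * H^T (H u - g) = 0
        u = np.linalg.solve(np.eye(N) + rho * H.T @ H, H.T @ (lam + rho * g))
        lam = lam - rho * (H @ u - g)     # multiplier update
    return u

rng = np.random.default_rng(1)
H = rng.standard_normal((3, 6))
g = rng.standard_normal(3)
u = augmented_lagrangian(H, g)
```

At convergence the constraint Hu = g is satisfied and, for this objective, u is the minimum-norm solution; the same alternating pattern carries over to the nonsmooth objectives used later in the paper.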

Technical Framework for Image Denoising
Several methods exist for image denoising, including total variation regularization [32], wavelet thresholding [33], nonlocal means [34], basis pursuit denoising [35] and block matching and 3D filtering [36], among others. These methods can, to a certain extent, smooth the image while preserving edges. Therefore, through CS theory, we can reconstruct images that are closer to the original images than the corresponding noisy observations.
In particular, two of these methods can be regarded as special cases of the proposed framework.

Regularization Functions
Image recovery in these application domains can be formulated as a linear inverse problem, which can be modelled as follows:

f = Au + ε,

where f ∈ R^M is the noisy image observation, u ∈ R^N is the unknown clean image, ε is an additive noise and A ∈ R^(M×N) is a linear operator. Given A, image reconstruction extracts û from f, which makes a classical least-squares approximation alone unsuitable. To stabilize the recovery, regularization techniques are frequently used, producing a general reconstruction model of the following form:

û = arg min_u ||Au − f||_2^2 + λ φ_reg(u),

where λ > 0 is a regularization parameter, and || · ||_2 denotes the l_2 norm. The fidelity term, ||Au − f||_2^2, forces the reconstructed image to be close to the observation, and the regularization function, φ_reg(u), performs noise reduction.
The choice of the regularization function is very important for reconstructing an image that reflects, as accurately as possible, the original image of interest. In our work, we have combined total variation and nonlocal self-similarities. The interest of such a choice is linked to the fact that the total variation (TV) model shows high efficiency in preserving the contours and recovering smooth regions. However, this operator is local. It, therefore, does not take into account nonlocal features of the data, such as repetitive structures (such as texture, for example). However, nonlocal self-similarity describes the repetitiveness of textures [37,38] or embodied structures in realistic images, which allows the preservation of sharp edges.
Total variation was first introduced by Rudin, Osher and Fatemi [39] as a regularizing criterion for solving inverse problems. The total variation model is written as follows:

TV(u) = ||Du||_p, (12)

where u ∈ R^N represents an image, and D = [D_h, D_v], with D_h and D_v representing the gradients of the image in the horizontal and vertical directions, respectively. The l_p norm can be the l_1 norm, corresponding to anisotropic TV, or the l_2 norm, corresponding to isotropic TV. By definition, the l_p norm of a vector v is

||v||_p = (Σ_i |v_i|^p)^(1/p).

In this paper, we consider p equal to 1.
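The anisotropic (p = 1) case of Equation (12) can be sketched directly with forward differences standing in for D_h and D_v:

```python
import numpy as np

def tv_anisotropic(u):
    """Anisotropic total variation ||D u||_1: the sum of absolute
    horizontal and vertical forward differences (p = 1 in Equation (12))."""
    dh = np.abs(np.diff(u, axis=1)).sum()   # horizontal gradient D_h u
    dv = np.abs(np.diff(u, axis=0)).sum()   # vertical gradient D_v u
    return dh + dv

# A piecewise-constant image has low TV: a single vertical edge of
# height 32 and jump 1 contributes exactly 32 to the sum.
img = np.zeros((32, 32))
img[:, 16:] = 1.0
```

This illustrates why TV regularization favors piecewise constant structures: any added noise increases the sum of gradient magnitudes, while sharp edges contribute only once.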
Nonlocal self-similarity (NLS) is also a significant property of natural images. It was first proposed for image denoising [40] and is obtained in the following steps. An image u of size N is divided into many overlapping blocks u_i of size √n × √n, at locations i = 1, 2, . . . , N. Then, for each reference block, the (m − 1) most similar blocks are found in a training window of size L × L, forming the set S_{u_i}. Finally, all blocks of S_{u_i} are stacked into a 3D array Z_{u_i} of size √n × √n × m. Nonlocal self-similarity can be formulated as follows:

θ_u = T_3D(Z_{u_i}), (13)

where T_3D is a transform operator, and T_3D(Z_{u_i}) includes the transform coefficients of Z_{u_i}. θ_u is the column vector of the lexicographically stacked representation of all 3D transform coefficients.
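The block-grouping step can be sketched as follows; the block size √n = 4, group size m = 8 and window size L = 16 are illustrative assumptions, and similarity is measured by the Euclidean distance between blocks:

```python
import numpy as np

def group_similar_blocks(u, i0, j0, n_sqrt=4, m=8, L=16):
    """Stack the reference block at (i0, j0) with its (m - 1) most similar
    blocks from an L x L search window into a 3D array Z_{u_i} of shape
    (n_sqrt, n_sqrt, m), as in the NLS construction."""
    ref = u[i0:i0 + n_sqrt, j0:j0 + n_sqrt]
    candidates = []
    for i in range(max(0, i0 - L // 2), min(u.shape[0] - n_sqrt, i0 + L // 2) + 1):
        for j in range(max(0, j0 - L // 2), min(u.shape[1] - n_sqrt, j0 + L // 2) + 1):
            blk = u[i:i + n_sqrt, j:j + n_sqrt]
            candidates.append((np.sum((blk - ref) ** 2), blk))
    candidates.sort(key=lambda t: t[0])           # most similar first
    return np.stack([blk for _, blk in candidates[:m]], axis=2)

rng = np.random.default_rng(2)
u = rng.standard_normal((64, 64))
Z = group_similar_blocks(u, 20, 20)
```

A 3D transform T_3D (e.g., a separable transform over the stacked blocks) would then be applied to Z to obtain the coefficients θ_u of Equation (13).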

Wavelet Denoising
Wavelet shrinkage denoising removes whatever noise is present and retains whatever signal is present, regardless of the signal's frequency content. In the wavelet domain, the energy of a natural signal is concentrated in a small number of coefficients, whereas noise is spread over the entire domain. The basic wavelet shrinkage denoising algorithm comprises three steps: transform the noisy observation into a wavelet basis, threshold the coefficients and apply the inverse transform. The measurement model is the following:

f = u + ε,

where u is the original image of size M × N corrupted by additive noise ε. The goal is to estimate the denoised image û from the noisy observation f. Eliminating this additive noise relies on the assumption that an appropriate decomposition basis allows the discrimination of the useful signal (image) from the noise. This hypothesis justifies, in part, the traditional use of denoising by thresholding. Two thresholding methods are frequently used. The soft-thresholding function (also called the shrinkage function) is as follows:

dû_j(k) = sgn(df_j(k)) · max(|df_j(k)| − S, 0).

The hard-thresholding function, another popular alternative, is as follows:

dû_j(k) = df_j(k) if |df_j(k)| > S; 0 otherwise.

Here, df_j(k) denotes the wavelet coefficients of the measured signal at level j, and dû_j(k) is the estimate of the wavelet coefficients of the useful signal, with the threshold S = σ √(2 log N), where N and σ represent the number of pixels of the test image and the noise standard deviation, respectively.
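The two thresholding rules and the universal threshold can be sketched as follows; this is an illustration on raw coefficients, whereas actual wavelet denoising would apply the rules to the detail coefficients of a wavelet decomposition:

```python
import numpy as np

def soft_threshold(d, S):
    """Shrinkage function: shrink coefficients toward zero by S."""
    return np.sign(d) * np.maximum(np.abs(d) - S, 0.0)

def hard_threshold(d, S):
    """Keep coefficients whose magnitude exceeds S; zero the rest."""
    return d * (np.abs(d) > S)

# Universal threshold S = sigma * sqrt(2 log N) for N coefficients
d = np.array([-3.0, -0.5, 0.2, 1.0, 4.0])   # illustrative coefficients
sigma, N = 1.0, d.size
S = sigma * np.sqrt(2 * np.log(N))          # about 1.79 here
```

Soft thresholding biases the surviving coefficients toward zero (each is shrunk by S), while hard thresholding keeps them unchanged; this trade-off between bias and variance is what distinguishes the two rules in practice.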

Our Efficient Image Denoising Scheme
In what follows, we substitute the aforementioned results (12) and (13) into (7), and we obtain the following problem.
arg min_u τ||Du||_1 + µ||θ_u||_1 + (1/2)||Au − f||_2^2. (17)

Let us recall that TV(u) = ||Du||_1 and NLS(u) = ||θ_u||_1. Recovering the image u with high quality requires solving the optimization problem described by Equation (17). This problem is converted into an equality-constrained problem by introducing the auxiliary variables w and x:

arg min_{u,w,x} τ||w||_1 + µ||θ_x||_1 + (1/2)||Au − f||_2^2 subject to Du = w, u = x, (18)

where τ and µ are control parameters. The augmented Lagrangian function of (18) is as follows:

L_A(u, w, x, γ, ϕ) = τ||w||_1 + µ||θ_x||_1 + (1/2)||Au − f||_2^2 − γ^T(Du − w) + (µ/2)||Du − w||_2^2 − ϕ^T(u − x) + (β/2)||u − x||_2^2, (19)

where µ and β are the penalty parameters corresponding to ||Du − w||_2^2 and ||u − x||_2^2, respectively. To solve (18), we use the augmented Lagrangian method iteratively as follows:

(u_{k+1}, w_{k+1}, x_{k+1}) = arg min_{u,w,x} L_A(u, w, x, γ_k, ϕ_k),
γ_{k+1} = γ_k − µ(Du_{k+1} − w_{k+1}),
ϕ_{k+1} = ϕ_k − β(u_{k+1} − x_{k+1}). (20)
Here, subscript k denotes the iteration index, and γ and ϕ are the Lagrangian multipliers associated with the constraints Du = w, u = x, respectively.
Due to the non-differentiability of Equation (19), the alternating direction method is introduced to solve the problem efficiently: it minimizes one variable while fixing the others, splitting Equation (19) into the following three subproblems. For the sake of simplicity, the subscript k is omitted without confusion.

• Update w: Given u and x, we obtain the w subproblem:

arg min_w τ||w||_1 − γ^T(Du − w) + (µ/2)||Du − w||_2^2. (21)

The optimization problem described by Equation (21) can be solved using the shrinkage formula [43]. Then, w can be obtained as follows:

w = max{|Du − γ/µ| − τ/µ, 0} · sgn(Du − γ/µ), (22)

where max{·, ·} returns the larger of its two arguments, and sgn(·) is the piecewise sign function:

sgn(t) = 1 if t > 0; 0 if t = 0; −1 if t < 0.
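This shrinkage step is a one-line soft-thresholding; the sketch below assumes the multiplier term enters the Lagrangian as −γ^T(Du − w), and the input values are illustrative:

```python
import numpy as np

def w_update(Du, gamma, mu, tau):
    """Closed-form w subproblem via the shrinkage formula:
    w = max(|Du - gamma/mu| - tau/mu, 0) * sgn(Du - gamma/mu)."""
    v = Du - gamma / mu
    return np.maximum(np.abs(v) - tau / mu, 0.0) * np.sign(v)

# Small gradient values are set to zero; large ones are shrunk by tau/mu
Du = np.array([2.0, -0.1, 0.4])
w = w_update(Du, gamma=np.zeros(3), mu=1.0, tau=0.5)
```

Because the update is component-wise, its cost is linear in the number of pixels, which keeps the inner loop of the algorithm cheap.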
• Update u: By fixing w and x, the optimization associated with u is as follows:

arg min_u (1/2)||Au − f||_2^2 − γ^T(Du − w) + (µ/2)||Du − w||_2^2 − ϕ^T(u − x) + (β/2)||u − x||_2^2. (23)

Equation (23) is quadratic in u; thus, the minimization of this subproblem reduces to the linear system

(A^T A + µ D^T D + β I) u = A^T f + D^T(γ + µw) + ϕ + βx. (24)

The matrix on the left-hand side of this system is positive definite, since D^T D is a positive semidefinite tridiagonal matrix and µ and β are both positive scalars.
• Update x: Given u, we obtain the x subproblem as follows:

arg min_x µ||θ_x||_1 − ϕ^T(u − x) + (β/2)||u − x||_2^2. (25)

By completing the square and omitting all constants independent of x, the subproblem defined in Equation (25) can be simplified as follows:

arg min_x µ||θ_x||_1 + (β/2)||x − r||_2^2, (26)

where r = u − c, with c = ϕ/β. Considering r as a noisy observation of x, the error (or noise) e = x − r follows a probability law that is not necessarily Gaussian, with zero mean and variance σ^2. According to the central limit theorem or the law of large numbers, the following relation holds:

(1/N)||x − r||_2^2 ≈ (1/K)||θ_x − θ_r||_2^2,

with e, x, r ∈ R^N and θ_x, θ_r ∈ R^K, where K is the number of 3D transform coefficients. The subproblem can, thus, be rewritten in the transform domain:

arg min_{θ_x} µ||θ_x||_1 + (βN/2K)||θ_x − θ_r||_2^2. (28)

Since this problem is component-wise separable in the unknown θ_x, according to [44], the closed form of the minimization problem of Equation (28) can be written as follows:

θ_x = max{|θ_r| − µK/(βN), 0} · sgn(θ_r).
Thus, the solution of the x subproblem of Equation (25) is as follows:

x = Ω(θ_x), (31)

where Ω is the reconstruction operator. Based on the discussions above, we obtain Algorithm 2 for solving (17). Algorithm 2 is applied to recover images corrupted by white Gaussian noise and by salt and pepper noise.
The comparative performance of competing methods for the recovery of noisy images is discussed in the next sections.

Algorithm 2: Our algorithm (DCSR).
Input: the measurement f and the linear measurement matrix A
Initialization
while Outer stopping criterion unsatisfied do
    while Inner stopping criterion unsatisfied do
        use Equation (22) to solve the w subproblem
        use Equation (24) to solve the u subproblem
        use Equation (31) to solve the x subproblem
    end while
    Update the multipliers by using Equation (20)
end while
Output: the restored final image u

Experimental Results
In this section, we verify the efficiency and practicability of the proposed method by describing experiments with simulated and real data sets. The performance of the proposed DCSR algorithm is evaluated by comparing it with three other popular CS recovery algorithms. The first is wavelet thresholding, which denoises natural images by assuming that they are sparse in the wavelet domain: it transforms the signal into a wavelet basis, thresholds the coefficients and then inverts the transform. The second, called the NESTA algorithm, extends Nesterov's smoothing technique to TV minimization by modifying the smooth approximation of the objective function. The third, group-based sparse representation (GSR), simultaneously enforces image sparsity and self-similarity under a unified framework in an adaptive group domain.
We used three images in our experiments: the Barbara and Cameraman grayscale images, each sized 256 × 256, and a color Lena image sized 512 × 512 (see Figure 1). The simulations used two types of noise: additive white Gaussian noise (AWGN), characterized by its standard deviation σ and its mean m, and salt and pepper noise.
Salt and pepper noise in an image has the following form:

f(p) = v_max with probability q_1; v_min with probability q_2; u(p) otherwise,

where p = (p_1, p_2, . . . , p_n) ∈ R^n indexes the pixels, v_max is the pixel intensity of salt pixels and v_min is the pixel intensity of pepper pixels. The sum q = q_1 + q_2 is the level of the salt and pepper noise.
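This corruption model can be simulated as follows, assuming independent per-pixel corruption with probabilities q_1 (salt) and q_2 (pepper):

```python
import numpy as np

def add_salt_pepper(u, q1, q2, v_max=255, v_min=0, seed=0):
    """Corrupt a fraction q1 of pixels with salt (v_max) and a fraction q2
    with pepper (v_min); q = q1 + q2 is the total noise level."""
    rng = np.random.default_rng(seed)
    out = u.copy()
    r = rng.random(u.shape)
    out[r < q1] = v_max                      # salt
    out[(r >= q1) & (r < q1 + q2)] = v_min   # pepper
    return out

# A flat gray image at a 20% total noise level (10% salt, 10% pepper)
img = np.full((128, 128), 100, dtype=np.uint8)
noisy = add_salt_pepper(img, q1=0.1, q2=0.1)
```

Because the corrupted pixels take extreme values, this noise is impulsive rather than additive, which is why median-type and sparsity-based methods handle it better than plain least squares.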

Visual Quality Comparison
First, we evaluate the performance of our DCSR algorithm by performing experiments on the Barbara image corrupted by white Gaussian noise with a standard deviation σ ranging from 20 to 80. Then, to verify the superiority of the proposed method, we compared it with the GSR algorithm, wavelet denoising and the NESTA algorithm; the results are shown in Figure 2. The Barbara image is relatively complex given its rich texture and geometric structure: it is clear that our algorithm (see the first row of Figure 2) is able to denoise effectively while preserving the details and texture of this particular image. Secondly, we applied the proposed approach to impulsive salt and pepper noise at various levels, from 20% to 70%, and compared its denoising performance with several algorithms: the GSR algorithm, the NESTA algorithm and wavelet denoising. Figure 3 provides visual results for the different algorithms and several noise levels; it can be observed that the Cameraman image is reconstructed very well by our DCSR algorithm. Analyzing the images in the fifth column of Figure 3, obtained by the NESTA algorithm, we can observe some artifacts above the camera head and in the upper right corner, due to the nature of the NESTA algorithm itself, which is characterized by a loss of precision in high-frequency components. We can conclude that the DCSR method presents the best image quality compared to the other methods.

Image Quality Metrics
Peak signal-to-noise ratio and mean square error (MSE) have long been used as fidelity metrics in the image processing community. The formulas are simple to understand and implement; they are easy and fast to compute, and minimizing MSE is also very well understood from a mathematical point of view.
PSNR (peak signal-to-noise ratio, unit: dB) [45] is the ratio between the maximum possible power of a signal and the power of the noise. A higher PSNR value means better visual quality. PSNR is defined as follows:

PSNR = 10 log_10((2^k − 1)^2 / MSE).

MSE = (1/(MN)) Σ_{i=1}^{M} Σ_{j=1}^{N} (u(i, j) − û(i, j))^2 is the mean square error between the initial image u and the estimated image û of size M × N; i and j represent the row and column pixel positions, respectively; and k is the number of bits of each sample value.
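A minimal implementation of these two metrics for k-bit images:

```python
import numpy as np

def psnr(u, u_hat, k=8):
    """PSNR in dB for k-bit images: 10 * log10((2^k - 1)^2 / MSE)."""
    mse = np.mean((u.astype(float) - u_hat.astype(float)) ** 2)
    return 10 * np.log10((2 ** k - 1) ** 2 / mse)

# A uniform error of 5 gray levels gives MSE = 25,
# so PSNR = 10 * log10(255^2 / 25) ≈ 34.15 dB
u = np.full((64, 64), 128.0)
u_hat = u + 5.0
```

Note that PSNR depends only on the pixel-wise error power, which is why it is complemented below by SSIM, a structure-aware metric.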
The structural similarity (SSIM) index [46] has proven to be a better error metric for comparing image quality, with better structure preservation. It lies in the range [0, 1], with a value closer to one indicating better structure preservation:

SSIM(i, j) = l(i, j) · c(i, j) · s(i, j),

where l(i, j) is the luminance comparison, defined as follows:

l(i, j) = (2 µ_i µ_j + c_1) / (µ_i^2 + µ_j^2 + c_1),

where µ_i and µ_j are functions of the mean intensities of signals i and j, respectively. The contrast comparison c(i, j) is a function of the standard deviations σ_i and σ_j, defined in the following form:

c(i, j) = (2 σ_i σ_j + c_2) / (σ_i^2 + σ_j^2 + c_2).

The structure comparison s(i, j) is defined as follows:

s(i, j) = (σ_ij + c_3) / (σ_i σ_j + c_3),

where µ_i and µ_j are the mean values of the images u and û, respectively; σ_i^2 and σ_j^2 represent the variances of u and û, respectively; σ_ij is the covariance of u and û; and c_1, c_2 and c_3 are constants. Specifically, it is possible to choose c_i = (K_i D)^2, with i = 1, 2, K_i ≪ 1 and D the dynamic range of the pixel values.
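A sketch of a single-window (global) SSIM computed from these statistics; practical implementations average SSIM over local sliding windows, and the constants K_1 = 0.01 and K_2 = 0.03 are the commonly used defaults (with c_3 = c_2 / 2 folded into the product):

```python
import numpy as np

def ssim_global(u, u_hat, dynamic_range=255.0, K1=0.01, K2=0.03):
    """Global SSIM from means, variances and covariance, with
    c1 = (K1 * D)^2 and c2 = (K2 * D)^2 and c3 = c2 / 2 folded in."""
    mu_i, mu_j = u.mean(), u_hat.mean()
    var_i, var_j = u.var(), u_hat.var()
    cov = ((u - mu_i) * (u_hat - mu_j)).mean()
    c1 = (K1 * dynamic_range) ** 2
    c2 = (K2 * dynamic_range) ** 2
    return ((2 * mu_i * mu_j + c1) * (2 * cov + c2)) / \
           ((mu_i ** 2 + mu_j ** 2 + c1) * (var_i + var_j + c2))

u = np.arange(64, dtype=float).reshape(8, 8)
```

An image compared with itself scores exactly 1; any luminance shift or structural change lowers the index.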

Quantitative Assessment
In this subsection, we evaluate the quality of the image reconstruction. We compare the methods quantitatively: the peak signal-to-noise ratio and structural similarity indices are calculated for images processed by the different algorithms.
Recall that, at first, Barbara was contaminated by white Gaussian noise with different values of the standard deviation σ and denoised with the different algorithms. The resulting PSNR values for the various algorithms are shown in Figure 4. A higher PSNR indicates superior image quality and good performance of the algorithm; the PSNR values illustrate that DCSR yields a higher PSNR than the other methods. Figure 5 presents the performance analysis of the four denoising methods for grayscale images corrupted by additive white Gaussian noise. We can see that our algorithm, DCSR, exhibits the best performance both at high PSNR (low noise level) and at low PSNR (high noise level).
Furthermore, to evaluate the denoising effect of our DCSR algorithm, the grayscale image is corrupted by salt and pepper noise at different levels. Table 1 shows the quantitative assessment results, in terms of PSNR and SSIM, for our DCSR algorithm and for GSR, wavelet denoising and NESTA at several noise levels.
The best results are highlighted in bold type font. Table 1 validates that our method has superiority in image reconstruction compared with the three other methods.
The performance of our method is confirmed in Figure 6, which illustrates the output PSNR variations with respect to the input PSNR. The next challenge is to apply the DCSR algorithm to color or multidimensional images. This is essential since most digital images used in the modern world are not grayscale but usually operate in either the RGB or YCbCr color space, both of which are three-dimensional.
We evaluate the proposed DCSR algorithm on the Lena color image because this test image is interesting from the point of view of its mixture of details, flat regions, shadow areas and texture. We mainly compared our proposed method with wavelet denoising and the GSR algorithm.
As mentioned previously, two types of noise were used in these experiments: AWGN with a standard deviation σ and several salt and pepper noise levels.
Our first experiment is to add white Gaussian noise with different σ values, σ = 20, 50, 60 and 80, to the test image (here Lena), thus generating noisy observations, and the challenge is to determine the most efficient denoising method in terms of PSNR.
The denoising results for this image with the different algorithms are shown in Figure 7. From Figure 7, the proposed method achieves the highest PSNR scores in all cases, which fully demonstrates that its denoising results are the best both objectively and visually. Figure 8 shows the output results (after reconstruction) evaluated at different noise levels; the DCSR algorithm provides the best denoising performance.
In the second experiment, we added salt and pepper noise at different levels (10%, 15%, 20% and 30%) to the Lena test image. Then, we applied our denoising algorithm to restore the noisy images and compared it with two other algorithms: wavelet denoising and the GSR algorithm. Figure 9 shows that the DCSR algorithm provides better visual quality. This performance is confirmed by Figure 10, which represents the variations of the output PSNR vs. the input PSNR. The robustness of our algorithm with respect to the noise level is thus verified. From these PSNR and SSIM results, the proposed method achieves the highest scores in all cases, which fully demonstrates that its restoration results are the best both objectively and visually.

Algorithm Robustness
The robustness of the proposed algorithm will be confirmed in this subsection. The test images are corrupted, on the one hand, by additive white Gaussian noise and, on the other hand, by salt and pepper noise at different noise levels. Figures 11-14 show PSNR values (after reconstruction) as a function of the number of iterations for the greyscale images of Barbara and Cameraman and for the color image of Lena using different algorithms. Note that we did not use the NESTA algorithm in the case of the color images because NESTA is not suitable for color images.


Computational Complexity
In the following, we estimate the computational complexity of the proposed DCSR algorithm. The main complexity of the proposed algorithm clearly comes from the total variation (TV) term and the high cost of the nonlocal self-similarities (NLS).
Knowing that the computational complexity of TV is O(N) [47], let us compute that of NLS.
For an image u of N pixels, let T_s be the average time needed to find the similar blocks for each reference block. If √n × √n is the size of the overlapped blocks u_i, i = 1, 2, . . . , N, and (m − 1) is the number of similar blocks in the set S_{u_i}, all blocks of S_{u_i} are stacked in a matrix of size n × m, with complexity O(n × m^2); hence, the resulting complexity is O(N(nm^2 + T_s)), similar to the computational complexity of group-based sparse representation (GSR) [48].

The total computational complexity of our algorithm is, therefore, O(N(nm^2 + T_s) + N). (39)
It is interesting to compare the computational complexity of DCSR with that of the competing methods. The computational complexity of the NESTA algorithm is O(N + N log_2 N) [49], and that of wavelet denoising is O(N log_2 N) [50]. Table 2 summarizes the computational complexity of the four algorithms.
Relation (39) clearly shows that our proposed algorithm is more expensive in terms of computational complexity by an order N. Such an increase is not excessive in view of the good performance of this algorithm and the current computational means, which permit real-time processing.

Discussion
In the real world, acquired, measured or recorded images have generally suffered degradation of various origins. This degradation can be canceled, or at least reduced, by a preprocessing procedure called denoising. Such an operation allows, on the one hand, the perception of a quality image and, on the other hand, an improvement in the performance of subsequent image processing (extraction of the desired information, prediction, classification, texture analysis, segmentation, etc.).
With this in mind, the authors of reference [51] propose a new image denoising method called dehazing because it tends to eliminate haze due to bad weather conditions. This original method is based on the application of artificial multiexposure image fusion [52] involving local and global image details. Such an approach allows the recovery of quality images but, unfortunately, halos or artifacts often appear near the edges when the inputs are sparse, which makes postprocessing (linear saturation adjustment) introduced by the authors ineffective.
Wavelet image denoising [53] is powerful for edge detection in three preferred directions: diagonal, vertical and horizontal. Several types of wavelets exist for this purpose. The Haar wavelet, for example, preserves edge information, but it is neither continuous nor differentiable; it can be associated with an optimization problem using the l1 norm. In contrast, the Morlet wavelet [13], which is continuous, can be associated with the l2 norm.
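To make the Haar discussion concrete, the following numpy sketch performs one level of the 2-D Haar transform, soft-thresholds the three detail subbands (the l1-style shrinkage mentioned above) that capture vertical, horizontal and diagonal edges, and inverts the transform. It is a minimal single-level illustration, not the wavelet denoiser benchmarked in this paper; the threshold value is an illustrative assumption.

```python
import numpy as np

def haar2d_denoise(img, thr):
    """One-level 2-D Haar transform, soft-thresholding of the three
    detail subbands, then exact inversion. Assumes even dimensions.
    A minimal sketch; real wavelet denoising uses several levels and
    a noise-adapted threshold."""
    img = np.asarray(img, dtype=float)
    a = (img[0::2, :] + img[1::2, :]) / 2.0    # row averages
    d = (img[0::2, :] - img[1::2, :]) / 2.0    # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0       # coarse approximation
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0       # vertical edges
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0       # horizontal edges
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0       # diagonal edges
    soft = lambda x: np.sign(x) * np.maximum(np.abs(x) - thr, 0.0)
    LH, HL, HH = soft(LH), soft(HL), soft(HH)  # keep LL untouched
    # inverse transform (columns, then rows)
    a = np.empty_like(LL.repeat(2, axis=1)); dlt = np.empty_like(a)
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    dlt[:, 0::2] = HL + HH; dlt[:, 1::2] = HL - HH
    out = np.empty_like(img)
    out[0::2, :] = a + dlt; out[1::2, :] = a - dlt
    return out

rng = np.random.default_rng(1)
noisy = rng.standard_normal((16, 16))
out = haar2d_denoise(noisy, 0.5)
```

With thr = 0 the round trip is exact, which is a convenient sanity check that forward and inverse transforms match.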
Our method combines total variation with nonlocal self-similarities to recover the image without artifacts while preserving its details and textures. On a psychoemotional level, the visual quality of an image matters to the observer. In this respect, it is interesting to note that the images reconstructed by the NESTA algorithm in Figure 2 are visually unpleasant and lose some important details. Wavelet denoising also shows limitations: the noise contained in the image cannot be removed if the standard deviation is set too large (above 50), i.e., if the noise level is high. Concerning GSR (second row of Figure 2), we can observe that this algorithm is quite efficient at removing noise; however, it suffers a slight loss of contrast. Our algorithm, DCSR, on the other hand, is efficient at removing even noise with a standard deviation equal to 80. It is, therefore, confirmed that the proposed method provides the most visually satisfying results for both edges and textures.
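As an illustration of the total-variation half of our model, the sketch below denoises an image by gradient descent on a smoothed TV functional, 0.5·||u − y||² + λ·TV(u). It is a simplified stand-in, not the DCSR algorithm: the paper's method adds the compressed sensing data term and the nonlocal self-similarity constraint, and solves the problem with an augmented Lagrangian rather than plain descent. The parameters lam, step, iters and eps are illustrative assumptions.

```python
import numpy as np

def tv_denoise(y, lam=0.1, step=0.2, iters=200, eps=1e-3):
    """Gradient descent on 0.5*||u - y||^2 + lam*TV(u), where TV is
    smoothed by eps to avoid the non-differentiability at zero.
    A minimal sketch of the TV regularizer only."""
    u = np.asarray(y, dtype=float).copy()
    for _ in range(iters):
        dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences
        dy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        px, py = dx / mag, dy / mag                 # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - y) - lam * div)           # descent step
    return u

rng = np.random.default_rng(0)
clean = np.zeros((32, 32)); clean[:, 16:] = 1.0     # piecewise-constant edge
noisy = clean + 0.2 * rng.standard_normal((32, 32))
out = tv_denoise(noisy)
```

On this piecewise-constant test image, the TV term flattens the noise in homogeneous regions while largely preserving the vertical edge, which is exactly the behavior the regularizer is chosen for.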
On an objective and therefore measurable level, Table 1 shows that the PSNR and SSIM values obtained by NESTA are the lowest, which clearly indicates a limitation in the performance of this algorithm. GSR and wavelets obtain intermediate PSNR and SSIM values. From the convergence analysis, Figures 11-14 allow us to conclude that, as the number of iterations increases, all PSNR curves increase monotonically and stabilize from the 10th iteration onward. These figures show that all these algorithms converge very quickly. Nevertheless, the most robust algorithm should reach the highest PSNR. According to these figures, regardless of the nature of the noise or the test image used, DCSR has the highest PSNR. It is, therefore, the most robust of the competing methods.
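For reference, the PSNR values reported in Table 1 and Figures 11-14 follow the standard definition, 10·log10(peak²/MSE); a minimal implementation is given below (SSIM involves local means, variances and covariances and is omitted for brevity).

```python
import numpy as np

def psnr(ref, test, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means the test image
    is closer to the reference. peak is the maximum pixel value
    (255 for 8-bit images)."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(test, dtype=float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

a = np.full((8, 8), 100.0)
b = a + 10.0                   # constant error of 10 gray levels -> MSE = 100
print(round(psnr(a, b), 2))    # 10*log10(65025/100) ≈ 28.13
```

Identical images give an infinite PSNR, since the MSE in the denominator vanishes.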
In terms of computational complexity, however, DCSR is not the most advantageous approach, although real-time processing remains largely feasible. Complexity-reduction techniques such as block-compressed sensing [54] or deep learning [55] could also be applied.
We performed an ablation study to clarify the effect of total variation (TV) and nonlocal self-similarity (NLS) on the compressed sensing (CS) recovery model. First, we canceled all regularization constraints and left only CS alone. Next, we analyzed CS coupled with TV, with NLS canceled. Then, we removed TV, replaced it with NLS and kept CS. Table 3 summarizes this ablation analysis using quantitative values: PSNR (dB) and SSIM. From Table 3, we can conclude that the presence of TV or NLS improves the quality of the reconstruction with CS, while noting that the CS+NLS coupling performs better than CS+TV. Comparing the four scenarios with each other, we can conclude that the reconstruction with both regularization functions simultaneously (CS+TV+NLS) is significantly better than the other three scenarios: it effectively removes noise and improves the robustness of our approach.

Conclusions
In this paper, we proposed an original image denoising method based on compressed sensing, called denoising-compressed sensing by regularization terms (DCSR), which incorporates two regularization constraints in the model: total variation and nonlocal self-similarity. The optimization of this method is performed using an augmented Lagrangian, which avoids the difficult problem of nonlinearity and non-differentiability of the regularization terms.
The effectiveness of our approach was validated using images corrupted by white Gaussian noise and impulsive salt-and-pepper noise. Comparing DCSR in terms of PSNR and SSIM to state-of-the-art methods such as Nesterov's algorithm, group-based sparse representation and wavelet-based methods, it turns out that, depending on the image texture and the type of noise corrupting the image, our method performs much better: we gain at least 25% in PSNR and at least 11% in SSIM. The price to pay is a slight increase in computational complexity, of the order of the image size, but this does not call real-time processing into question.
Due to the robustness and convergence speed of the DCSR algorithm, it can be applied efficiently in vital and sensitive domains such as medical imaging and remote sensing. Our future work includes introducing a layer of intelligence at the acquisition level, aimed at automatically determining the image texture and its quality in terms of noise level, blur and shooting conditions (lighting, inpainting, registration, occlusion, low resolution, etc.) [56][57][58][59][60][61], in order to automatically adjust the parameters necessary for optimal use of the proposed DCSR algorithm.
The definitions of the acronyms used in this work are given in Table 4.