Abstract
Image denoising is a classic yet challenging problem in low-level image processing. Traditional image denoising approaches built on convex regularized priors (e.g., the ℓ1-norm) often suffer from estimation bias. To address this issue, a novel prior model based on a family of non-convex functions and a group sparsity residual constraint (GSC) is studied for image denoising, and we propose a generalized non-convex GSC prior model for the denoising problem. We first utilize the group sparse representation (GSR) prior to exploit image prior information. To further improve the denoising performance of the GSC prior model, we employ several typical non-convex surrogate functions as the sparsity constraint. We then propose a fast and efficient thresholding algorithm to minimize the resulting optimization problem. Experimental results demonstrate that our proposed method achieves the best reconstruction results among the compared image denoising approaches.
1. Introduction
Image denoising is a classic yet popular low-level image processing problem, whose purpose is to reconstruct the latent clean image x from the noisy observation y = x + n, where n is usually modeled as additive white Gaussian noise [1,2,3,4]. Mathematically, image denoising can be addressed via the following optimization problem:

x̂ = arg min_x (1/2)‖y − x‖₂² + λΨ(x), (1)
where (1/2)‖y − x‖₂² denotes the data term, Ψ(x) denotes the regularizer, and the non-negative parameter λ is used to balance the data term and the regularizer.
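As a toy illustration of problem (1) — a sketch in our own notation, not the paper's method — consider the hypothetical quadratic regularizer Ψ(x) = (1/2)‖x‖₂², for which the minimizer has a closed form:

```python
import numpy as np

def denoise_quadratic(y, lam):
    # Minimizer of 0.5*||y - x||_2^2 + lam * 0.5*||x||_2^2 (a toy convex prior):
    # setting the gradient (x - y) + lam*x to zero gives x = y / (1 + lam).
    return y / (1.0 + lam)

y = np.array([4.0, -2.0, 0.0])          # noisy observation
x_hat = denoise_quadratic(y, 1.0)       # lam trades fidelity vs. regularization
```

Larger λ pulls the estimate toward the prior (here, toward zero); λ = 0 returns the noisy input unchanged, which is exactly the balance the text describes.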
In the past decade, image denoising has achieved great progress [1,2,3,5,6]. Mathematically, image denoising is an ill-posed problem; thus, the prior model plays a crucial role in improving denoising performance. The most widely used models include the sparsity model [7], the non-local self-similarity model [8,9,10,11], the wavelet transform model, and the Markov random field model [12]. In addition, deep-learning-based denoising approaches are increasingly proposed. Among them, convolutional neural networks (CNNs) and recurrent neural networks (RNNs) [13,14] are the most widely used for image denoising. With the maturity of deep learning theory, various network structures have been applied to image denoising and have achieved excellent results [13,14,15]. Although deep-learning-based approaches for image denoising have made promising progress, their drawbacks are obvious, e.g., requiring clean, large-scale, labeled datasets for training.
In recent years, sparsity has been increasingly used for image denoising [16,17,18]. Elad et al. [16] used K-SVD for dictionary learning to obtain a sparse dictionary and then performed image denoising in units of image patches. In 2009, Mairal et al. [17] proposed the LSSC algorithm, which uses dictionary learning to exploit local sparsity and combines non-local methods to explore non-local sparsity. To take advantage of the sparsity between image patches, Zhang et al. [18] proposed the concept of group sparsity in 2014 by combining several similar image patches into groups and using the group as the basic unit of sparse representation, then using the SBI algorithm to solve the cost function and improve the robustness of the model. In 2017, Lin et al. [19] proposed an image reconstruction algorithm based on the sparse representation of an adaptive structure group. This algorithm can adaptively adjust the criteria for selecting similar image patches according to the image's own structure and regional characteristics, thus improving reconstruction performance.
Non-local self-similarity (NSS) is one of the most popular image denoising priors of recent years [20,21,22]. Under the NSS model, the non-local means (NLM) method exploits the self-similar characteristics of the image for denoising; it improves on traditional neighborhood filtering by taking the self-similar nature of the image into account. It makes full use of the redundant information in the image and can preserve image details to the greatest extent while denoising. Buades et al. [8,23] first proposed the non-local means method, which determines a weight according to the similarity between the neighborhood of the pixel to be processed and the neighborhoods of other similar points, and then computes the gray value of the pixel as the weighted average of the gray values of all pixels in the image. Since then, new and improved algorithms have been proposed continuously. Kervrann et al. [24] improved the selection of similar patches, which improved the denoising performance. Goossens et al. [25] proposed a dual scoring function to calculate the weights, which improves the accuracy of the similarity computation between similar patches and, in turn, the denoising performance. Inspired by NLM, Dabov et al. [5] proposed the BM3D method, which applies collaborative filtering in the three-dimensional transform domain and raised denoising performance to an unprecedented level.
In addition, the low-rank property can also be used as prior knowledge for image denoising [26]. The weighted nuclear norm minimization (WNNM) method proposed by Gu et al. also exploits the low-rank attribute of the image: since noise is generally not low-rank, denoising can be achieved through low-rank approximation of clustered similar patches. Dong et al. [27] proposed applying the low-rank attributes of images as prior knowledge to image compressed sensing, which showed good performance and further improved the denoising ability of the NSS-based model. Feng et al. [28] proposed an image compressed sensing model based on low-rank priors; building on the traditional low-rank image model, the Schatten-p norm was used instead of the weighted nuclear norm to improve the reconstruction ability of the model [29]. Chen et al. [30] used the low-rank property of images to design non-local low-rank regularization terms and, at the same time, selected an m-estimator to achieve robust image compressed sensing denoising; this algorithm not only has good denoising performance but also excellent robustness. Zhang et al. [31] applied the sparsity and low-rankness of images to MRI image denoising and achieved good results. Recently, residual prior-based models have shown promising advantages in various image processing tasks, such as the sparsity residual prior [7,32], the rank residual prior [33], and deep residual features in networks [34].
The so-called convex optimization methods use a convex function as the penalty function, most commonly the ℓ1-norm and the nuclear norm. Non-convex optimization methods instead choose a non-convex penalty function, such as the ℓp-norm (0 < p < 1). Both classes of methods can recover sparse vectors from noisy images, but each has advantages and disadvantages: convex optimization methods can better guarantee the convergence of the algorithm, whereas non-convex optimization methods can obtain sparser solutions.
In practical applications, problem (1) with a sparsity- or rank-based regularizer is typically NP-hard and difficult to solve directly. Usually, the ℓ1-norm and the nuclear norm are used to obtain convex relaxations. However, convex optimization often cannot obtain a perfect solution because a perfect approximation of the rank function cannot be obtained. To better approximate the rank function, many non-convex surrogate functions have been proposed, including the ℓp-norm [35], the smoothly clipped absolute deviation (SCAD) [36], the logarithm penalty [37], the minimax concave penalty (MCP) [38], and the exponential-type penalty (ETP) [39].
In this paper, we propose a non-convex optimization model with a group sparse residual constraint. First, we transform the image denoising problem into a residual minimization problem using group sparse residuals, which simplifies the computation and improves interpretability. To better approximate the rank function, we choose non-convex surrogate functions to replace the traditional convex surrogate, which improves the denoising performance of the model. Finally, to compare the performance of non-convex surrogate functions, we selected the five most commonly used ones for comparative experiments. The experiments show that the model not only has good denoising performance but is also highly efficient.
The rest of the paper is organized as follows: Section 2 presents some basics, including dictionary learning, group-sparse models, group-sparse residuals, and low-rank minimization models. In Section 3, a non-convex optimization group-sparse residual constrained model is proposed, and the model is solved. Section 4 presents the experiments and results and analyzes them. Section 5 summarizes the full text.
2. Related Work
In this section, we will briefly introduce some related work in image denoising, including group sparse representation, adaptive dictionary learning, and low-rank minimization theory.
2.1. Group Sparse Representation
Recently, the group sparsity prior has been popularly used for image denoising tasks and has significantly improved denoising performance [18,40]. For an image x ∈ ℝ^N, there are n overlapping patches of size √d × √d; for each benchmark patch x_i, we can then search for its k most similar patches within an L × L search window using the kNN method [41]. The patch set is defined by X_i = {x_{i,1}, x_{i,2}, …, x_{i,k}}, where x_{i,k} denotes the k-th patch similar to x_i; stacking these patches as columns yields the group matrix X_i. If we define a representation dictionary D_i for each group X_i, then X_i can be expressed as X_i = D_i A_i, where A_i denotes the group sparse coefficient.
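The grouping step can be sketched in NumPy as follows (an illustrative sketch with hypothetical helper names; for brevity it searches all patches globally rather than restricting the search to an L × L window):

```python
import numpy as np

def extract_patches(img, d):
    """All overlapping d x d patches of a 2-D image, flattened row-wise."""
    H, W = img.shape
    return np.array([img[r:r + d, c:c + d].ravel()
                     for r in range(H - d + 1) for c in range(W - d + 1)])

def build_group(patches, i, k):
    """Group matrix X_i: the k patches closest (Euclidean kNN) to benchmark patch i."""
    dists = np.sum((patches - patches[i]) ** 2, axis=1)
    idx = np.argsort(dists)[:k]          # the benchmark itself (distance 0) comes first
    return patches[idx].T                # columns are the similar patches

rng = np.random.default_rng(0)
img = rng.standard_normal((8, 8))
P = extract_patches(img, 2)              # 49 patches, each of dimension d = 4
X0 = build_group(P, 0, 5)                # 4 x 5 group matrix for benchmark patch 0
```

Each column of the resulting matrix is one similar patch, matching the group definition X_i = {x_{i,1}, …, x_{i,k}} above.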
2.2. Low-Rank Minimization
The low-rank approximation problem can be summarized as follows: given an input group Y_i, the purpose of the low-rank minimization method is to find a low-rank matrix X_i by minimizing an objective function composed of a data fidelity term and a low-rank regularization term. In general, low-rank minimization can be expressed as follows:

X̂_i = arg min_{X_i} L(Y_i, X_i) + λ R(X_i), (2)

where L(Y_i, X_i) denotes the loss function and R(X_i) denotes the low-rank constraint term with a non-negative weight λ. The most popular choice of L is the squared loss, L(Y_i, X_i) = (1/2)‖Y_i − X_i‖_F². In practical applications, low-rank minimization is a typical NP-hard problem that is difficult to solve directly. Nuclear norm minimization (NNM) and weighted nuclear norm minimization (WNNM) are two popular approximation algorithms. In fact, however, neither the nuclear norm nor the weighted nuclear norm is a perfect approximation of the rank function: as an ℓ1-norm on the singular values, the nuclear norm tends to yield a biased solution.
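The NNM case has a well-known closed-form solution via singular value thresholding, which can be sketched as follows (illustrative code; the function name is ours):

```python
import numpy as np

def svt(Y, tau):
    """Singular value thresholding: closed-form minimizer of
    0.5 * ||Y - X||_F^2 + tau * ||X||_*  (the NNM relaxation of the rank penalty)."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

Y = np.diag([3.0, 1.0])
X = svt(Y, 1.0)   # singular values 3 and 1 shrink to 2 and 0, so the rank drops to 1
```

The uniform shrinkage of all singular values by the same amount τ is precisely the source of the bias mentioned in the text, which motivates WNNM and the non-convex surrogates studied later.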
2.3. Adaptive Non-Local Dictionary Learning
Dictionary learning is also called sparse dictionary coding. The goal of dictionary learning is to extract the most essential features of objects by reducing the interference of insignificant factors. The sparse model is consistent with this goal: it removes useless information and retains the most essential and important information. Therefore, the quality of the dictionary is closely related to the sparsity of the model. Given a group X_i, we have [42,43]

X_i = D_i A_i, (3)

where D_i denotes the group dictionary and A_i is the corresponding coefficient matrix. The process of dictionary learning can then be expressed as follows:

{D̂_i, Â_i} = arg min_{D_i, A_i} ‖X_i − D_i A_i‖_F² + λ‖A_i‖_0. (4)
Since the ℓ0-norm is difficult to solve, it is usually relaxed by the popular ℓ1 regularization term. First, samples are randomly selected from the original data as the initial dictionary D_i, and the coefficient matrix A_i is initialized to zero. Then, D_i and A_i are updated iteratively in turn, finally yielding the dictionary with the best sparsity.
However, the dictionary learning method via (4) is usually complicated and inefficient. In this paper, we introduce a novel approach that obtains the non-local dictionary via the SVD [18,44]. Considering the group X_i, the SVD of X_i is

X_i = U_i Δ_i V_iᵀ = Σ_{j=1}^{m} δ_{i,j} u_{i,j} v_{i,j}ᵀ, (5)

where Δ_i = diag(δ_{i,1}, …, δ_{i,m}) collects the singular values and u_{i,j}, v_{i,j} are the columns of U_i and V_i. Thus each dictionary atom can be obtained by

d_{i,j} = u_{i,j} v_{i,j}ᵀ, j = 1, …, m. (6)

Therefore, the non-local dictionary for each group X_i is formed by

D_i = [d_{i,1}, d_{i,2}, …, d_{i,m}]. (7)
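This SVD-based dictionary construction can be sketched directly in NumPy (`nonlocal_dictionary` is a hypothetical helper name; the atoms are the rank-one matrices u_j v_jᵀ):

```python
import numpy as np

def nonlocal_dictionary(X):
    """Adaptive dictionary from the SVD of a group: each atom is the rank-one
    matrix u_j v_j^T, so that X = sum_j delta_j * atom_j."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    atoms = [np.outer(U[:, j], Vt[j, :]) for j in range(s.size)]
    return atoms, s

rng = np.random.default_rng(1)
X = rng.standard_normal((6, 4))                    # a toy group matrix
atoms, delta = nonlocal_dictionary(X)
X_rec = sum(d * a for d, a in zip(delta, atoms))   # exact reconstruction of the group
```

Because the singular vectors are orthonormal, the atoms form an orthonormal set under the Frobenius inner product, which is what makes this dictionary cheap to invert compared with learned dictionaries via (4).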
3. Proposed Method
3.1. Group Sparse Coefficient Residual
For the clean image x and its corresponding group X_i, given a dictionary D_i, the group X_i can be sparsely represented by solving the following optimization problem:

A_i = arg min_{A_i} ‖X_i − D_i A_i‖_F² + λ‖A_i‖_1, (8)

where A_i denotes the i-th group sparse coefficient for group X_i. In practical image denoising tasks, for the noisy image y and its corresponding group Y_i, considering the fact that X_i and Y_i can share the same dictionary D_i, the corresponding optimization problem for the i-th group sparse coefficient can be modeled as

B_i = arg min_{B_i} ‖Y_i − D_i B_i‖_F² + λ‖B_i‖_1, (9)

where B_i denotes the i-th group sparse coefficient for group Y_i. Thus, the group sparse coefficient residual [32,33] can be described as

R_i = A_i − B_i. (10)
To better understand the group sparse coefficient residual, Figure 1 presents the flowchart of the group sparse coefficient residuals.
Figure 1.
Flowchart of the group sparse coefficient residuals.
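The residual in Equation (10) can be illustrated with a small NumPy sketch (our own toy example, not the paper's code): under the rank-one SVD atoms of the clean group, the clean codes equal the singular values, and the noisy group coded with the same shared dictionary yields slightly perturbed codes.

```python
import numpy as np

def svd_codes(X, atoms):
    """Codes under orthonormal rank-one atoms: alpha_j = <X, d_j>_F."""
    return np.array([np.sum(X * d) for d in atoms])

rng = np.random.default_rng(0)
X = rng.standard_normal((6, 4))                 # (unknown) clean group
Y = X + 0.1 * rng.standard_normal((6, 4))       # noisy group
U, s, Vt = np.linalg.svd(X, full_matrices=False)
D = [np.outer(U[:, j], Vt[j]) for j in range(s.size)]  # shared dictionary
A = svd_codes(X, D)   # clean codes (equal to the singular values of X)
B = svd_codes(Y, D)   # noisy codes under the same dictionary
R = A - B             # group sparse coefficient residual, Equation (10)
```

Driving R toward zero pulls the noisy codes B toward the clean codes A, which is the intuition behind the residual constraint developed next.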
3.2. Residual Prior-Based Generalized Non-Convex Non-Smooth Low-Rank Minimization
The residual R_i defined in Equation (10) indicates the difference between the group sparse coefficients A_i and B_i. Correspondingly, considering the low-rank minimization problem in Equation (2), the residual prior-based low-rank approximation imposes the low-rank constraint on the residual rather than on the group itself. To address this optimization problem, the popular model relaxes the rank penalty with the (weighted) nuclear norm of the residual, where λ denotes the non-negative regularization weight.
To better approximate the rank of a matrix, many non-convex alternative functions have been proposed; the most commonly used one is the ℓp-norm [35]. In this paper, we employ a family of non-convex non-smooth functions to regularize the group residual, including the ℓp-norm [35], the smoothly clipped absolute deviation (SCAD) [36], the logarithm penalty [37], the minimax concave penalty (MCP) [38], and the exponential-type penalty (ETP) [39]. These non-convex surrogate functions are described in Table 1. Our proposed group sparse residual prior-based non-convex non-smooth low-rank minimization can then be formulated as
According to earlier work [10,11], the groups satisfy X_i = D_i A_i and Y_i = D_i B_i. Moreover, according to the definition of the dictionary, its atoms are obtained by d_{i,j} = u_{i,j} v_{i,j}ᵀ. Therefore, if we replace X_i − Y_i with D_i(A_i − B_i), the optimization problem in Equation (13) becomes equal to the following problem:
Table 1.
Five popular non-convex surrogate functions and their super-gradients.
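For reference, the following sketch implements common parameterizations of these five penalties (the exact forms and default parameters vary across papers, so treat these as illustrative assumptions rather than the paper's definitions in Table 1):

```python
import numpy as np

def lp_pen(x, lam, p=0.5):                     # ell_p penalty, 0 < p < 1
    return lam * np.abs(x) ** p

def log_pen(x, lam, gamma=10.0):               # logarithm penalty
    return lam / np.log(gamma + 1.0) * np.log(gamma * np.abs(x) + 1.0)

def scad_pen(x, lam, gamma=3.7):               # smoothly clipped absolute deviation
    ax = np.abs(x)
    return np.where(ax <= lam, lam * ax,
           np.where(ax <= gamma * lam,
                    (2.0 * gamma * lam * ax - ax**2 - lam**2) / (2.0 * (gamma - 1.0)),
                    lam**2 * (gamma + 1.0) / 2.0))   # constant beyond gamma*lam

def mcp_pen(x, lam, gamma=2.0):                # minimax concave penalty
    ax = np.abs(x)
    return np.where(ax <= gamma * lam,
                    lam * ax - ax**2 / (2.0 * gamma),
                    0.5 * gamma * lam**2)      # constant once |x| > gamma*lam

def etp_pen(x, lam, gamma=1.0):                # exponential-type penalty
    return lam * (1.0 - np.exp(-gamma * np.abs(x))) / (1.0 - np.exp(-gamma))
```

The key shared property is that, unlike the ℓ1-norm, each penalty saturates (or grows sublinearly) for large arguments, so large coefficients are penalized less and the estimation bias of the convex surrogate is reduced.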
3.3. Image Denoising Application
To achieve good denoising performance, the group sparse residual should be as small as possible. In this paper, our generalized non-convex non-smooth residual prior-driven image denoising model combines the data fidelity term with a non-convex penalty on the residual. Let R_i = A_i − B_i; we then have
According to [11,22], the following relationship holds with a large probability (close to 1) at each iteration:

(1/N)‖x − y‖₂² = (1/M) Σ_{i=1}^{n} ‖X_i − Y_i‖_F², (17)

where N and M denote the total numbers of pixels in the image and in all the groups, respectively.
Therefore, our denoising problem can be converted into the following problem:
In this work, we employ the surrogate algorithm [38] to solve (18) and obtain the iterative formula:
where S_λ(·) denotes the soft-thresholding operator and D_i is the principal component analysis (PCA)-based dictionary of the group [18,40]. In this paper, we adopt a non-convex surrogate function as the relaxation in this step. According to the definitions in Table 1, we have
where λ is the regularization parameter. To obtain better denoising performance, we set the parameter λ to change adaptively according to [7,32]:

λ_i = c · 2√2 · σ_n² / (σ_i + ε),

where c is a small constant, ε is a small positive value that avoids division by zero, and σ_i denotes the estimated standard deviation of the residual R_i. In the iterative process, we adopt the iterative regularization strategy to update the estimated noise variance and improve the performance of the model:

σ_n^(t+1) = γ √( σ_n² − (1/N)‖y − x̂^(t)‖₂² ),
where γ is a scaling constant. After obtaining all the restored codes, the recovered groups X̂_i = D_i B̂_i are returned to their original positions and aggregated by averaging the overlapping patches to reconstruct the whole image.
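The shrinkage step in the iterative formula relies on the element-wise soft-thresholding operator, which can be sketched as follows (the adaptive λ above plays the role of the threshold τ):

```python
import numpy as np

def soft_threshold(r, tau):
    """S_tau(r) = sign(r) * max(|r| - tau, 0), applied element-wise."""
    return np.sign(r) * np.maximum(np.abs(r) - tau, 0.0)

R = np.array([3.0, -2.0, 0.5])
S = soft_threshold(R, 1.0)   # entries below the threshold are zeroed out
```

Entries whose magnitude falls below τ are set exactly to zero, which is how the sparsity of the residual is enforced at each iteration.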
The complete description of the proposed image denoising method is presented in Algorithm 1.
| Algorithm 1: Non-convex Optimization Denoising Model Based on Group Sparse Residual Constraint |
![]() |
4. Experimental Results
In this section, extensive experimental results are presented to evaluate the image denoising performance of our proposed method. Similar to the state-of-the-art methods [20,21,45], we chose two widely used metrics to assess the objective and subjective quality of the reconstructed images: the peak signal-to-noise ratio (PSNR) and the structural similarity index metric (SSIM). Note that PSNR is a commonly used indicator of signal/image distortion; although its evaluation can appear inconsistent with subjective human perception, it remains a valuable evaluation index. SSIM evaluates images from three aspects — luminance, contrast, and structure — exploiting the strong correlation among neighboring pixels, that is, the similarity of object structure information. Compared with PSNR, SSIM agrees more closely with the intuitive effect observed by human vision, and it has symmetry, upper and lower bounds, and other desirable properties. These two metrics are the most commonly used objective indicators for evaluating image restoration quality. They are defined as follows:
PSNR = 10 log₁₀( (255² × M × N) / ‖x_i − x̂_i‖₂² ),

where x_i denotes the i-th channel of the original image (the default i is 1 for a grayscale image), x̂_i denotes the i-th channel of the restored image, and the size of the image is M × N. SSIM is defined as

SSIM(x_i, x̂_i) = (2μ_{x_i}μ_{x̂_i} + c₁)(2σ_{x_i x̂_i} + c₂) / ((μ_{x_i}² + μ_{x̂_i}² + c₁)(σ_{x_i}² + σ_{x̂_i}² + c₂)),
where μ_{x̂_i} is the average gray value of pixels in the i-th channel of the restored image, μ_{x_i} is the average gray value of pixels in the i-th channel of the original image, σ_{x_i}² is the variance of x_i, σ_{x̂_i}² is the variance of x̂_i, and σ_{x_i x̂_i} is the covariance of x_i and x̂_i. c₁ and c₂ are constants used to maintain numerical stability.
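These two metrics can be sketched in NumPy (illustrative code; note that the practical SSIM averages the statistic over local windows, whereas this toy version computes it globally over the whole image):

```python
import numpy as np

def psnr(ref, est, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    mse = np.mean((ref.astype(np.float64) - est.astype(np.float64)) ** 2)
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(ref, est, peak=255.0):
    """Single-window (global) SSIM with the usual stabilizing constants."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mx, my = ref.mean(), est.mean()
    vx, vy = ref.var(), est.var()
    cov = ((ref - mx) * (est - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

rng = np.random.default_rng(0)
a = rng.uniform(0, 255, (16, 16))
```

A perfect reconstruction gives SSIM = 1 and an infinite PSNR; a maximally wrong 8-bit image (all pixels off by the full 255 range) gives a PSNR of 0 dB.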
4.1. Effectiveness of Our Denoising Model
In this subsection, we validate the effectiveness of our non-convex image denoising model by denoising classic noisy images and comparing it with some state-of-the-art denoising methods, including BM3D [5], NCSR [46], PGPD [47], A-NLS [45], and GSRC [7]. Note that BM3D, PGPD, and GSRC are three non-local-based image denoising methods, while NCSR and A-NLS are two sparsity residual-based image denoising methods. Without loss of generality, the classic non-convex MCP function is plugged into our denoising model for these experiments. The size of the search window is related to the efficiency of searching for similar blocks: if the window is too large, the search time increases and efficiency suffers; if it is too small, similar blocks may be missed, which degrades the denoising performance. To obtain the best denoising performance, suitable parameters are used for the proposed model: the search threshold value is set to 0.2, and the patch size is chosen adaptively according to the noise standard deviation σ_n. The two remaining model parameters are set to 1 × 10⁷ and 2.1, respectively. We chose 13 widely used test images (see Figure 2) for denoising experiments at four levels of Gaussian white noise, and all the PSNR results are presented in Table 2. From the results, our proposed method outperforms the other five competing methods in all cases in terms of PSNR.
Figure 2.
Test images used in the experiments.
Table 2.
The results of the images denoised using different methods.
4.2. Generalization Validation of Different Penalties
In this subsection, we study the generalization of our proposed non-convex denoising model by comparing the denoising performance of five non-convex functions under different standard deviations of Gaussian noise. We selected 13 gray images with a size of 256 × 256 for the denoising test, as shown in Figure 2. These images are taken from the datasets Set12 and CSet8; the CSet8 images are all RGB and were converted to grayscale for this experiment. We select two noise levels for the experiments, the higher with σ_n = 100. The PSNR (dB)/SSIM values of all experimental results are shown in Table 3. It can be seen from Table 3 that at the lower noise level, the average performance of MCP is the best: 0.11 dB, 0.19 dB, 0.03 dB, and 0.58 dB higher than Lp, Logarithm, SCAD, and ETP, respectively. As the standard deviation of the Gaussian white noise increases, the gap between the five algorithms narrows; at σ_n = 100, the five non-convex optimization algorithms show almost no difference. To obtain better denoising performance, we set the regularization parameter to change adaptively with the noise level.
Table 3.
The denoised results of images corrupted by Gaussian noise with different standard deviations (up to σ_n = 100).
Figure 3 and Figure 4 show the experimental results for visual comparison. Figure 3 shows the results for the image Airplane. In terms of details, none of the five results produces artifacts; relatively speaking, however, the edge details of the MCP result are the most complete, while the edges of the ETP result are more blurred. Figure 4 shows the results for the image Parrots: due to the heavier noise, the performance of the five algorithms is almost the same, and while the purpose of denoising is achieved in general, the details are not perfectly preserved. As the noise level increases, the difference between the algorithms becomes smaller and smaller, to the point of being invisible to the naked eye.
Figure 3.
The results of the test image (Airplane) denoised using different methods. (a) ETP, 25.35 dB; (b) Logarithm, 25.59 dB; (c) Lp, 25.50 dB; (d) MCP, 25.69 dB; (e) SCAD, 25.67 dB.
Figure 4.
The results of the test image (Parrots) denoised using different methods. (a) ETP, 23.58 dB; (b) Logarithm, 23.6 dB; (c) Lp, 23.6 dB; (d) MCP, 23.6 dB; (e) SCAD, 23.6 dB.
4.3. Robustness Study Under Gaussian Noise with Different Standard Deviations
In this experiment, we study the robustness of our proposed non-convex denoising model by comparing the denoising performance of the five non-convex optimization algorithms under different standard deviations of Gaussian noise. We again selected the 13 gray images with a size of 256 × 256 from Figure 2 for the denoising test, adding white Gaussian noise with four different standard deviations, including 40, 50, and 75 (Table 4). Across these settings, MCP performs best. At the first noise level, MCP is 0.1 dB, 0.03 dB, 0.08 dB, and 0.01 dB higher in average PSNR than ETP, Logarithm, Lp, and SCAD, respectively; at the next level, the corresponding differences are 0.29 dB, 0.09 dB, 0.12 dB, and 0.01 dB. With increasing standard deviation of the Gaussian white noise, the gap between the five algorithms narrows.
Table 4.
Robustness study of non-convex penalties via averaged PSNR (dB)/SSIM for 13 images corrupted by Gaussian noise with 4 different standard deviations.
Figure 5, Figure 6, Figure 7 and Figure 8 show the experimental results for visual comparison. Figure 5 shows the results for Boats in a Gaussian noise environment. Figure 6 shows the results for Cameraman: compared with Airplane, the color boundaries of Cameraman are more obvious and the scene is simpler. On this type of image, MCP is still the best performer; whether in the transition between the figure and the background or in the details of the distant buildings, the MCP algorithm preserves the details well. Figure 7 shows the results for Monarch under heavier Gaussian noise: the composition of the Monarch image is more complex, with more color edges, which demands a stronger ability to retain details; here, the performance of MCP and SCAD is similar. Figure 8 shows the results for House: compared with the previous images, the color edges in the House image are mostly straight lines. However, due to the excessive noise, the performance of the five algorithms is almost the same, and while denoising is achieved in general, the details are not perfectly preserved. As the noise level increases, the difference between the algorithms becomes smaller and smaller, as shown in Figure 7 and Figure 8, and is invisible to the naked eye.
Figure 5.
The results of the test image (Boats) denoised using different methods. (a) ETP, 26.21 dB; (b) Logarithm, 26.27 dB; (c) Lp, 26.24 dB; (d) MCP, 26.30 dB; (e) SCAD, 26.30 dB.
Figure 6.
The results of the test image (Cameraman) denoised using different methods. (a) ETP, 23.56 dB; (b) Logarithm, 23.67 dB; (c) Lp, 23.68 dB; (d) MCP, 23.74 dB; (e) SCAD, 23.73 dB.
Figure 7.
The results of the test image (Monarch) denoised using different methods. (a) ETP, 23.76 dB; (b) Logarithm, 23.80 dB; (c) Lp, 23.80 dB; (d) MCP, 23.82 dB; (e) SCAD, 23.82 dB.
Figure 8.
The results of the test image (House) denoised using different methods. (a) ETP, 26.39 dB; (b) Logarithm, 26.43 dB; (c) Lp, 26.41 dB; (d) MCP, 26.43 dB; (e) SCAD, 26.43 dB.
4.4. Computational Time Analysis
Computational time is also a key factor when comparing the performance of algorithms. All experiments were carried out in the same environment. As the noise standard deviation increases, the noise intensity grows and the program takes longer to run: from noise intensity 20 to 100, the running time increases by a factor of two to three. Different algorithms also run at different speeds. The average running times are shown in Table 5; the Logarithm penalty has the shortest running time, while the ℓp penalty function has the longest.
Table 5.
Average run time (seconds) of different algorithms on the test images.
5. Conclusions and Discussion
This paper studied a weighted generalized non-convex low-rank denoising model with a group sparsity residual prior. To improve the rank approximation accuracy, a family of non-convex penalty functions is employed in place of the traditional convex surrogates. Moreover, the group sparsity residual prior is utilized to regularize the ill-posed problem. We also analyzed the denoising performance of various non-convex penalty functions under different noise intensities. Experimental results show that the MCP function performs best both in objective criteria and in subjective visual evaluation. The method based on the group sparse residual achieves a good denoising effect, and the residual model still has considerable potential.
In future research, the performance of the residual prior model can be further developed. The objects of image processing in this paper are two-dimensional gray images. Therefore, the algorithm can be extended to the processing of three-dimensional hyperspectral images, remote sensing images, and brain CT images.
Author Contributions
S.W. contributed to conceptualization, methodology, data analysis, and writing. R.H. contributed to formal analysis, data curation, review, and editing. C.L. and P.Q. contributed to validation and supervision. All authors have read and agreed to the published version of the manuscript.
Funding
This work was supported by the Science and Technology Project of the State Grid Zhejiang Electric Power Co., Ltd. [grant number 5211DS23000T].
Data Availability Statement
The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.
Conflicts of Interest
Author Shaohe Wang, Rui Han and Chen Li were employed by State Grid Zhejiang Electric Power Co., Ltd., and Ping Qian was employed by State Grid Wenzhou Electric Power Supply Co., Ltd. All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The authors declare that this study received funding from Science and Technology Project of the State Grid Zhejiang Electric Power Co., Ltd. The funder was not involved in the study design, collection, analysis, interpretation of data, the writing of this article or the decision to submit it for publication.
References
- Danielyan, A.; Katkovnik, V.; Egiazarian, K. BM3D frames and variational image deblurring. IEEE Trans. Image Process. 2012, 21, 1715–1728. [Google Scholar] [CrossRef] [PubMed]
- Chen, H.; Gu, J.; Zhang, Z. Attention in Attention Network for Image Super-Resolution. arXiv 2021, arXiv:2104.09497. [Google Scholar]
- Donoho, D. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
- Guo, L.; Huang, S.; Liu, H.; Wen, B. Towards Robust Image Denoising via Flow-based Joint Image and Noise Model. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 6105–6115. [Google Scholar] [CrossRef]
- Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
- Luo, Q.; Liu, B.; Zhang, Y.; Han, Z.; Tang, Y. Low-rank decomposition on transformed feature maps domain for image denoising. Vis. Comput. 2021, 37, 1899–1915. [Google Scholar] [CrossRef]
- Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Group sparsity residual constraint with non-local priors for image restoration. IEEE Trans. Image Process. 2020, 29, 8960–8975. [Google Scholar] [CrossRef]
- Li, Y.; Xiao, F.; Liang, W.; Gui, L. Multiply Complementary Priors for Image Compressive Sensing Reconstruction in Impulsive Noise. ACM Trans. Multimed. Comput. Commun. Appl. 2024, 20, 1–22. [Google Scholar] [CrossRef]
- Li, Y.; Gao, L.; Hu, S.; Gui, G.; Chen, C.Y. Nonlocal low-rank plus deep denoising prior for robust image compressed sensing reconstruction. Expert Syst. Appl. 2023, 228, 120456. [Google Scholar] [CrossRef]
- Li, Y.; Jiang, Y.; Zhang, H.; Liu, J.; Ding, X.; Gui, G. Nonconvex L1/2-regularized nonlocal self-similarity denoiser for compressive sensing based CT reconstruction. J. Frankl. Inst. 2023, 360, 4172–4195. [Google Scholar] [CrossRef]
- Li, Y.; Gui, G.; Cheng, X. From group sparse coding to rank minimization: A novel denoising model for low-level image restoration. Signal Process. 2020, 176, 107655. [Google Scholar] [CrossRef]
- Malfait, M.; Roose, D. Wavelet-based image denoising using a Markov random field a priori model. IEEE Trans. Image Process. 1997, 6, 549–565.
- Lefkimmiatis, S. Universal denoising networks: A novel CNN architecture for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3204–3213.
- Tian, C.; Xu, Y.; Zuo, W. Image denoising using deep CNN with batch renormalization. Neural Netw. 2020, 121, 461–473.
- Liu, W.; Li, Y.; Huang, D. RA-UNet: An improved network model for image denoising. Vis. Comput. 2023, 40, 4319–4335.
- Elad, M.; Aharon, M. Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
- Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279.
- Zhang, J.; Zhao, D.; Gao, W. Group-Based Sparse Representation for Image Restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351.
- Lin, J.; Deng, D.; Yan, J.; Lin, X. Self-adaptive group based sparse representation for image inpainting. J. Comput. Appl. 2017, 37, 1169–1173.
- Zha, Z.; Yuan, X.; Zhou, J.; Zhu, C.; Wen, B. Image restoration via simultaneous nonlocal self-similarity priors. IEEE Trans. Image Process. 2020, 29, 8561–8576.
- Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C. Image restoration via reconciliation of group sparsity and low-rank models. IEEE Trans. Image Process. 2021, 30, 5223–5238.
- Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C.; Kot, A.C. Low-rankness guided group sparse representation for image restoration. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 7593–7607.
- Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 60–65.
- Kervrann, C.; Boulanger, J. Local adaptivity to variable smoothness for exemplar-based image regularization and representation. Int. J. Comput. Vis. 2008, 79, 45–69.
- Goossens, B.; Luong, H.; Pizurica, A.; Philips, W. An improved non-local denoising algorithm. In Proceedings of the 2008 International Workshop on Local and Non-Local Approximation in Image Processing (LNLA 2008), Tuusula, Finland, 19–21 August 2008.
- Shi, M.; Fan, L.; Li, X.; Zhang, C. A competent image denoising method based on structural information extraction. Vis. Comput. 2023, 39, 2407–2423.
- Feng, L.; Sun, H.; Sun, Q.; Xia, G. Compressive sensing via nonlocal low-rank tensor regularization. Neurocomputing 2016, 216, 45–60.
- Feng, L.; Sun, H.; Zhu, J. Robust image compressive sensing based on half-quadratic function and weighted Schatten-p norm. Inf. Sci. 2018, 477, 265–280.
- Xie, Y.; Gu, S.; Liu, Y.; Zuo, W.; Zhang, W.; Zhang, L. Weighted Schatten p-Norm Minimization for Image Denoising and Background Subtraction. IEEE Trans. Image Process. 2016, 25, 4842–4857.
- Chen, B.; Sun, H.; Feng, L.; Xia, G.; Zhang, G. Robust image compressive sensing based on M-estimator and nonlocal low-rank regularization. Neurocomputing 2018, 275, 586–597.
- Zhang, Y.; Yang, Z.; Hu, J.; Zou, S.; Fu, Y. MRI Denoising Using Low Rank Prior and Sparse Gradient Prior. IEEE Access 2019, 7, 45858–45865.
- Zha, Z.; Liu, X.; Zhou, Z.; Huang, X.; Shi, J.; Shang, Z.; Tang, L.; Bai, Y.; Wang, Q.; Zhang, X. Image denoising via group sparsity residual constraint. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 1787–1791.
- Li, Y.; Wu, H.; Jiang, X.; Ding, X. NG-RED: Nonconvex group-matrix residual denoising learning for image restoration. Expert Syst. Appl. 2025, 264, 125876.
- Umirzakova, S.; Mardieva, S.; Muksimova, S.; Ahmad, S.; Whangbo, T. Enhancing the Super-Resolution of Medical Images: Introducing the Deep Residual Feature Distillation Channel Attention Network for Optimized Performance and Efficiency. Bioengineering 2023, 10, 1332.
- Frank, I.E.; Friedman, J.H. A statistical view of some chemometrics regression tools (with discussion). Technometrics 1993, 35, 109–135.
- Fan, J.; Li, R. Variable Selection via Nonconcave Penalized Likelihood and Its Oracle Properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360.
- Friedman, J.H. Fast sparse regression and classification. Int. J. Forecast. 2012, 28, 722–738.
- Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010, 38, 894–942.
- Gao, C.; Wang, N.; Yu, Q.; Zhang, Z. A Feasible Nonconvex Relaxation Approach to Feature Selection. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014.
- Zha, Z.; Wen, B.; Yuan, X.; Ravishankar, S.; Zhou, J.; Zhu, C. Learning Nonlocal Sparse and Low-Rank Models for Image Compressive Sensing: Nonlocal sparse and low-rank modeling. IEEE Signal Process. Mag. 2023, 40, 32–44.
- Larose, D.T.; Larose, C.D. k-Nearest Neighbor Algorithm. In Discovering Knowledge in Data: An Introduction to Data Mining; Wiley: Hoboken, NJ, USA, 2014; pp. 149–164.
- Tang, H.; Liu, H.; Xiao, W.; Sebe, N. When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning and Coding Network for Image Recognition With Limited Data. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2129–2141.
- Kreutz-Delgado, K.; Murray, J.F.; Rao, B.D.; Engan, K.; Lee, T.W.; Sejnowski, T.J. Dictionary Learning Algorithms for Sparse Representation. Neural Comput. 2003, 15, 349–396.
- Zha, Z.; Wen, B.; Yuan, X.; Zhang, J.; Zhou, J.; Lu, Y.; Zhu, C. Nonlocal Structured Sparsity Regularization Modeling for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16.
- Xiong, R.; Liu, H.; Zhang, X.; Zhang, J.; Ma, S.; Wu, F.; Gao, W. Image Denoising via Bandwise Adaptive Modeling and Regularization Exploiting Nonlocal Similarity. IEEE Trans. Image Process. 2016, 25, 5793–5805.
- Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally Centralized Sparse Representation for Image Restoration. IEEE Trans. Image Process. 2013, 22, 1620–1630.
- Xu, J.; Zhang, L.; Zuo, W.; Zhang, D.; Feng, X. Patch Group Based Nonlocal Self-Similarity Prior Learning for Image Denoising. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015.