Article

Generalized Non-Convex Non-Smooth Group-Sparse Residual Prior for Image Denoising

1 Research Institute, State Grid Zhejiang Electric Power Co., Ltd., Hangzhou 310007, China
2 State Grid Wenzhou Electric Power Supply Co., Ltd., Wenzhou 325000, China
* Author to whom correspondence should be addressed.
Electronics 2025, 14(2), 353; https://doi.org/10.3390/electronics14020353
Submission received: 31 October 2024 / Revised: 4 January 2025 / Accepted: 9 January 2025 / Published: 17 January 2025
(This article belongs to the Section Electronic Multimedia)

Abstract

Image denoising is a classic yet challenging problem in low-level image processing. Traditional image denoising approaches using convex regularized priors (e.g., the $L_1$-norm) often introduce bias problems. To address this issue, a novel prior model based on a family of non-convex functions and a group sparsity residual (GSC) prior constraint is studied, and we propose a generalized non-convex GSC prior model for the image denoising problem. We first utilize the group-sparse representation (GSR) to exploit image prior information. Specifically, to further improve the denoising performance of the GSC prior model, we employ several typical non-convex surrogate functions for the sparsity constraint. We then propose a fast and efficient thresholding algorithm to minimize the resulting optimization problem. Experimental results demonstrate that our proposed method achieves the best reconstruction results compared with other image denoising approaches.

1. Introduction

Image denoising is a classic yet popular low-level image processing problem, whose purpose is to reconstruct the latent clean image $x$ from the noisy observation $y = x + n$, where $n$ is usually modeled as additive white Gaussian noise [1,2,3,4]. Mathematically, image denoising can be addressed via the following optimization problem:

$$\min_{x} \; f(y, x) + \lambda \Psi(x) \quad (1)$$

where $f(y, x)$ denotes the data term, $\Psi(x)$ denotes the regularizer, and the non-negative parameter $\lambda$ balances the data term and the regularizer.
In the past decade, great progress has been made on image denoising [1,2,3,5,6]. Mathematically, image denoising is an ill-posed problem; thus, the prior model plays a crucial role in improving denoising performance. The most widely used models include the sparsity model [7], the non-local self-similarity model [8,9,10,11], the wavelet transform model, and the Markov random field model [12]. In addition, deep-learning-based denoising approaches are increasingly being proposed; among them, convolutional neural networks (CNNs) [13,14] and recurrent neural networks (RNNs) [13,14] are the most commonly used for image denoising. With the maturation of deep learning theory, various network structures have been applied to image denoising and have achieved excellent results [13,14,15]. Although deep-learning-based denoising approaches have made promising progress, their drawbacks are obvious, e.g., they require clean, large-scale, labeled datasets for training.
In recent years, sparsity has been increasingly used for image denoising [16,17,18]. Elad et al. [16] used K-SVD dictionary learning to obtain a sparse dictionary and then performed denoising patch by patch. Mairal et al. [17] proposed the LSSC algorithm, which uses dictionary learning to mine local sparsity and combines it with non-local methods to exploit non-local sparsity. To take advantage of the sparsity between image patches, Zhang et al. [18] proposed the concept of group sparsity in 2014, combining several similar patches into groups, using the group as the basic unit of sparse representation, and solving the cost function with the SBI algorithm to improve the robustness of the model. In 2017, Lin et al. [19] proposed an image reconstruction algorithm based on the sparse representation of adaptive structure groups, which adaptively adjusts the criteria for selecting similar patches according to the image's own structure and regional characteristics, thereby improving reconstruction performance.
Non-local structural self-similarity (NSS) is one of the most popular image denoising priors of recent years [20,21,22]. In NSS-based models, non-local means (NLM) is employed to exploit the self-similarity of the image for denoising; it improves on traditional neighborhood filtering by taking the self-similar nature of the image into account, making full use of the redundant information in the image and preserving image details to the greatest extent while denoising. Buades et al. [8,23] first proposed the non-local means method, which determines a weight according to the similarity between the neighborhood of the pixel to be processed and other similar neighborhoods, then estimates the pixel's gray value as the weighted average of the gray values of all pixels in the image. Since then, improved algorithms have been proposed continuously. Kervrann et al. [24] improved the selection of similar blocks, which improved denoising performance. Goossens et al. [25] proposed a dual scoring function to calculate the weights, which improves the accuracy of the similarity computed between similar blocks and, with it, the denoising performance. Inspired by NLM, Dabov et al. [5] proposed the BM3D method, which applies collaborative filtering in a three-dimensional transform domain and raised denoising performance to an unprecedented height.
In addition, the low-rank property can also be used as prior knowledge for image denoising [26]. The WNNM method proposed by Gu et al. also exploits the low-rank attribute of the image: since random noise does not share the low-rank structure of grouped similar patches, denoising can be achieved through low-rank approximation of the clustered groups. Dong et al. [27] applied the low-rank attributes of images as prior knowledge to image compressed sensing, which showed good performance and further improved the denoising ability of the NSS model. Feng et al. [28] proposed an image compressive sensing model based on low-rank priors; building on the traditional low-rank image model, the weighted Schatten-p norm was used instead of the weighted nuclear norm to improve the reconstruction ability of the model [29]. Chen et al. [30] used the low-rank property of images to design non-local low-rank regularization terms and, at the same time, selected an m-estimator to achieve robust compressive sensing denoising; this algorithm not only has good denoising performance but also excellent robustness. Zhang et al. [31] applied image sparsity and low-rankness to MRI denoising and achieved good results. Recently, residual prior-based models have shown promising advantages in various image processing tasks, such as the sparsity residual prior [7,32], the rank residual prior [33], and deep residual features in networks [34].
Convex optimization methods use a convex function as the penalty, most commonly the $L_1$- and $L_2$-norms. Non-convex optimization methods choose a non-convex function as the penalty, most commonly the $L_p$-norm with $p < 1$. Both types of method can recover sparse vectors from noisy images, but each has its advantages and disadvantages: convex optimization methods can better guarantee the convergence of the algorithm, while non-convex optimization methods can obtain sparser solutions.
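To make the bias issue concrete, the following minimal Python/NumPy sketch (our illustration, not code from the paper) contrasts the thresholding operator induced by the convex $L_1$ penalty with the one induced by a non-convex $L_0$-style penalty: soft-thresholding shrinks every surviving coefficient by the full threshold, a systematic bias, whereas hard-thresholding leaves large coefficients untouched.

```python
import numpy as np

def soft_threshold(x, lam):
    # Proximal operator of the convex L1 penalty: every surviving
    # coefficient is shrunk toward zero by lam (a systematic bias).
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def hard_threshold(x, lam):
    # Thresholding induced by a non-convex L0-style penalty: large
    # coefficients are kept exactly, so large entries are unbiased.
    return np.where(np.abs(x) > lam, x, 0.0)

x = np.array([0.2, 1.0, 5.0, -8.0])
print(soft_threshold(x, 0.5))  # [ 0.   0.5  4.5 -7.5] -- large entries shrunk
print(hard_threshold(x, 0.5))  # [ 0.   1.   5.  -8. ] -- large entries preserved
```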
In practical applications, problem (1) is typically NP-hard and difficult to solve directly. Usually, the $L_1$-norm and $L_2$-norm are used to obtain convex relaxations. However, convex optimization often cannot obtain a perfect solution because a perfect approximation of the rank function cannot be obtained. In order to better approximate the rank function, many non-convex surrogate functions have been proposed, including the $L_p$-norm [35], the smoothly clipped absolute deviation (SCAD) [36], the Logarithm [37], the minimax concave penalty (MCP) [38], and the exponential type penalty (ETP) [39].
In this paper, we propose a non-convex optimization method for the group-sparse residual-constrained model. First, we transform the image denoising problem into a residual minimization problem using group-sparse residuals, which simplifies the computation and improves interpretability. To better approximate the rank function, we choose non-convex surrogate functions to replace the traditional convex surrogates, which improves the denoising performance of the model. Finally, to compare the performance of non-convex surrogate functions, we selected the five most commonly used non-convex surrogate functions for comparative experiments. The experiments show that the model not only has good denoising performance but also high efficiency.
The rest of the paper is organized as follows: Section 2 presents some basics, including dictionary learning, group-sparse models, group-sparse residuals, and low-rank minimization models. In Section 3, a non-convex optimization group-sparse residual constrained model is proposed, and the model is solved. Section 4 presents the experiments and results and analyzes them. Section 5 summarizes the full text.

2. Related Work

In this section, we will briefly introduce some related work in image denoising, including group sparse representation, adaptive dictionary learning, and low-rank minimization theory.

2.1. Group Sparse Representation

Recently, the group sparsity prior has been widely used for image denoising and has significantly improved denoising performance [18,40]. For any image $x$, there are $n$ overlapping patches $x_i$ of size $b \times b$; for each reference patch $x_i$, we can then search for $k$ similar patches within an $L \times L$ search window using the kNN method [41]. The patch set is defined by $X_i = \{x_{i,1}, x_{i,2}, x_{i,3}, \ldots, x_{i,k}\}$, where $x_{i,k}$ denotes the $k$-th patch similar to $x_i$. If we define a representation dictionary $D_i$ for each group $X_i$, then $X_i$ can be expressed as $X_i = D_i C_i$, where $C_i$ denotes the group sparse vector.
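As a concrete illustration of this grouping step, the sketch below (our own Python/NumPy code; the function name is hypothetical, the patch and window sizes follow typical values quoted in Section 4, and the group size $k$ is an assumed value) extracts a $b \times b$ reference patch, scans an $L \times L$ search window, and stacks the $k$ most similar patches as the columns of the group matrix $X_i$.

```python
import numpy as np

def extract_group(img, i, j, b=6, k=16, L=25):
    """Stack the k patches most similar to the b-by-b reference patch at
    (i, j), searched inside an L-by-L window, as columns of a group matrix."""
    H, W = img.shape
    ref = img[i:i + b, j:j + b].ravel()
    # candidate top-left corners inside the search window
    r0, r1 = max(0, i - L // 2), min(H - b, i + L // 2)
    c0, c1 = max(0, j - L // 2), min(W - b, j + L // 2)
    cands, dists = [], []
    for r in range(r0, r1 + 1):
        for c in range(c0, c1 + 1):
            p = img[r:r + b, c:c + b].ravel()
            cands.append(p)
            dists.append(np.sum((p - ref) ** 2))  # Euclidean patch distance
    order = np.argsort(dists)[:k]                 # kNN: keep the k nearest patches
    return np.stack([cands[t] for t in order], axis=1)  # shape (b*b, k)

img = np.random.rand(64, 64)
X_i = extract_group(img, 20, 20)
print(X_i.shape)  # (36, 16)
```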

2.2. Low-Rank Minimization

The low-rank approximation problem can be summarized as follows: given an input group $Y \in \mathbb{R}^{M \times N}$, the purpose of the low-rank minimization method is to find a low-rank matrix $X \in \mathbb{R}^{M \times N}$ by minimizing an objective function composed of a data-fidelity term and a regularization constraint. In general, low-rank minimization can be expressed as:

$$\hat{X} = \arg\min_{X} \; f(X, Y) + \lambda \Psi(X) \quad (2)$$

where $f(X, Y)$ denotes the loss function and $\Psi(X)$ denotes the low-rank constraint term with a non-negative weight $\lambda$. The most popular choice of $f(X, Y)$ is the squared loss $f(X, Y) = \frac{1}{2}\|Y - X\|_2^2$. In practical applications, low-rank minimization is a typical NP-hard problem that is difficult to solve directly. Nuclear norm minimization (NNM) and weighted nuclear norm minimization (WNNM) are two popular approximation algorithms. In fact, however, neither the nuclear norm nor the weighted nuclear norm is a perfect approximation of the rank function; the $L_1$-based nuclear norm usually yields a biased solution.
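For reference, both approximations mentioned above admit closed-form solutions under the squared loss by thresholding the singular values of $Y$. The sketch below illustrates this; the inverse-magnitude weights used for the WNNM-style variant are one common choice and an assumption of ours, not the exact weighting of any cited method.

```python
import numpy as np

def svt(Y, lam, weights=None):
    """Singular value thresholding: closed-form minimizer of
    0.5 * ||Y - X||_F^2 + lam * (weighted) nuclear norm of X."""
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    if weights is None:
        s_hat = np.maximum(s - lam, 0.0)            # NNM: uniform shrinkage
    else:
        s_hat = np.maximum(s - lam * weights, 0.0)  # WNNM-style: larger singular
    return U @ np.diag(s_hat) @ Vt                  # values are shrunk less

Y = np.random.randn(36, 16)
X_nnm = svt(Y, 0.5)
w = 1.0 / (np.linalg.svd(Y, compute_uv=False) + 1e-8)  # inverse-magnitude weights
X_wnnm = svt(Y, 0.5, weights=w)
print(np.linalg.matrix_rank(X_nnm), np.linalg.matrix_rank(X_wnnm))
```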

2.3. Adaptive Non-Local Dictionary Learning

Dictionary learning is also called sparse dictionary coding. The goal of dictionary learning is to extract the most essential features of an object while suppressing the interference of insignificant information. The sparse model is consistent with this goal: remove useless information and retain the most essential and important information. Therefore, the quality of the learned dictionary is closely related to the sparsity of the model. Given a group $Y_i$, we have [42,43]

$$Y_i = D_i A_i \quad (3)$$

where $D_i$ denotes the group dictionary and $A_i$ is the corresponding coefficient matrix. The process of dictionary learning can then be expressed as:

$$(D_i, A_i) = \arg\min_{D_i, A_i} \; \frac{1}{2}\|Y_i - D_i A_i\|_F^2 + \lambda \|A_i\|_0 \quad (4)$$

Since the $L_0$-norm is difficult to optimize, it is usually relaxed by the popular $L_1$ regular term. First, samples are randomly selected from the original group $Y_i$ as the initial value of $D_i$, and $A_i$ is initialized to $0$. Then, $D_i$ and $A_i$ are updated alternately until the dictionary $D_i$ with the best sparsity is obtained.
However, dictionary learning via (4) is usually complicated and inefficient. In this paper, we introduce a novel approach that obtains the non-local dictionary $D_i$ via the SVD [18,44]. Considering the group $Y_i$, the SVD of $Y_i$ is

$$Y_i = U_i \Sigma_i V_i^T = \sum_{k=1}^{j} \delta_{i,k}\, u_{i,k}\, v_{i,k}^T \quad (5)$$

Thus, the dictionary atoms can be obtained by

$$d_{i,k} = u_{i,k}\, v_{i,k}^T \quad (6)$$

Therefore, the non-local dictionary for each group is formed as

$$D_i = \left[ d_{i,1}, d_{i,2}, \ldots, d_{i,j} \right] \quad (7)$$
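In code, this dictionary construction amounts to a single SVD per group. The short sketch below (ours) builds the rank-one atoms of Equations (6) and (7) and verifies the reconstruction property of Equation (5).

```python
import numpy as np

def nonlocal_dictionary(Y_i):
    """Build the adaptive dictionary of Eq. (7): one rank-one atom
    d_{i,k} = u_{i,k} v_{i,k}^T per singular pair of the group Y_i."""
    U, s, Vt = np.linalg.svd(Y_i, full_matrices=False)
    atoms = [np.outer(U[:, k], Vt[k, :]) for k in range(len(s))]
    return atoms, s  # atoms d_{i,k} and singular values delta_{i,k}

Y_i = np.random.randn(36, 16)
atoms, delta = nonlocal_dictionary(Y_i)
# Eq. (5): the group equals the delta-weighted sum of its rank-one atoms
Y_rec = sum(d * a for d, a in zip(delta, atoms))
print(np.allclose(Y_i, Y_rec))  # True
```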

3. Proposed Method

3.1. Group Sparse Coefficient Residual

For an image $x$ and its corresponding group $X_i$, given a dictionary $D_i$, the group $X_i$ can be sparsely represented by solving the following optimization problem:

$$B_i = \arg\min_{B_i} \; \|X_i - D_i B_i\|_F^2 + \lambda \|B_i\|_0 \quad (8)$$

where $B_i$ denotes the $i$-th group sparse coefficient for group $X_i$. In practical image denoising tasks, for the noisy image $y$ and its corresponding group $Y_i$, considering the fact that $Y_i$ and $X_i$ can share the same dictionary $D_i$, the corresponding optimization problem for the $i$-th group sparse coefficient can be modeled as

$$A_i = \arg\min_{A_i} \; \|Y_i - D_i A_i\|_F^2 + \lambda \|A_i\|_0 \quad (9)$$

where $A_i$ denotes the $i$-th group sparse coefficient for group $Y_i$. Thus, the group sparse coefficient residual [32,33] can be defined as

$$R_i = A_i - B_i \quad (10)$$
To better understand the group sparse coefficient residual, Figure 1 presents the flowchart of the group sparse coefficient residuals.
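In code, the residual of Equation (10) is a plain difference between the coefficients of the noisy group and those of an estimated clean group under the shared dictionary. The sketch below (ours) continues the SVD-dictionary example from Section 2.3; the "clean" group here is a hypothetical stand-in, since in practice $B_i$ must itself be estimated, e.g., by non-local averaging [32].

```python
import numpy as np

def group_coefficients(G, atoms):
    # The atoms u_{i,k} v_{i,k}^T are orthonormal under the Frobenius inner
    # product, so the optimal coefficients are projections <G, d_{i,k}>_F.
    return np.array([np.sum(G * d) for d in atoms])

Y_i = np.random.randn(36, 16)              # noisy group
X_i = Y_i - 0.1 * np.random.randn(36, 16)  # hypothetical stand-in for the clean group
U, s, Vt = np.linalg.svd(Y_i, full_matrices=False)  # shared dictionary from Y_i
atoms = [np.outer(U[:, k], Vt[k, :]) for k in range(len(s))]

A_i = group_coefficients(Y_i, atoms)  # coefficients of the noisy group, Eq. (9)
B_i = group_coefficients(X_i, atoms)  # coefficients of the clean group, Eq. (8)
R_i = A_i - B_i                       # group sparse coefficient residual, Eq. (10)
print(np.round(R_i[:4], 3))
```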

3.2. Residual Prior-Based Generalized Non-Convex Non-Smooth Low-Rank Minimization

The residual $R_i$ defined in Equation (10) measures the difference between $A_i$ and $B_i$. Correspondingly, considering the low-rank minimization problem in Equation (2), the residual prior-based low-rank approximation can be formulated as

$$\hat{X}_i = \arg\min_{X_i} \; f(Y_i, X_i) + \lambda \Psi(R_i) \quad (11)$$

To address this optimization problem, a popular model is

$$\hat{A}_i = \arg\min_{A_i} \; \|Y_i - D_i A_i\|_2^2 + \lambda \|R_i\|_p \quad (12)$$

where $0 < p \le 1$.

To better approximate the rank of the matrix $X_i$, many non-convex alternative functions have been proposed, the most common being the $L_p$-norm [35]. In this paper, we employ a family of non-convex non-smooth functions $\Psi(\cdot)$ to regularize the group residual, including the $L_p$-norm [35], the smoothly clipped absolute deviation (SCAD) [36], the Logarithm [37], the minimax concave penalty (MCP) [38], and the exponential type penalty (ETP) [39]. These non-convex surrogate functions are described in Table 1. Then our proposed group sparse residual prior-based non-convex non-smooth low-rank minimization can be formulated as

$$\hat{A}_i = \arg\min_{A_i} \; \frac{1}{2}\|Y_i - D_i A_i\|_2^2 + \lambda \Psi(R_i) \quad (13)$$

According to earlier work [10,11], each group satisfies $X_i = U_i \Sigma_i V_i^T = \sum_{k=1}^{j} \delta_{i,k} u_{i,k} v_{i,k}^T$. Moreover, by the definition of the dictionary, its atoms are given by $d_{i,k} = u_{i,k} v_{i,k}^T$. Therefore, if we replace $D_i A_i$ with $X_i$, the optimization problem in Equation (13) is equivalent to the following problem:

$$\hat{X}_i = \arg\min_{X_i} \; \frac{1}{2}\|Y_i - X_i\|_2^2 + \lambda \Psi(R_i) \quad (14)$$
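For concreteness, the five penalties of Table 1 translate directly into code. The sketch below implements their super-gradients $\partial\rho(x)$ for non-negative inputs; the parameter defaults (e.g., $\gamma = 3.7$ for SCAD) are common choices from the cited literature, not values prescribed by this paper.

```python
import numpy as np

def lp_grad(x, lam, p=0.5):
    out = np.full_like(x, np.inf, dtype=float)  # super-gradient is infinite at 0
    pos = x > 0
    out[pos] = lam * p * x[pos] ** (p - 1.0)
    return out

def scad_grad(x, lam, gamma=3.7):
    return np.where(x <= lam, lam,
                    np.where(x <= gamma * lam,
                             (gamma * lam - x) / (gamma - 1.0), 0.0))

def log_grad(x, lam, gamma=10.0):
    return lam * gamma / ((gamma * x + 1.0) * np.log(gamma + 1.0))

def mcp_grad(x, lam, gamma=2.0):
    return np.where(x < gamma * lam, lam - x / gamma, 0.0)

def etp_grad(x, lam, gamma=2.0):
    return lam * gamma * np.exp(-gamma * x) / (1.0 - np.exp(-gamma))

x = np.linspace(0.0, 3.0, 7)
for name, g in [("Lp", lp_grad), ("SCAD", scad_grad), ("Logarithm", log_grad),
                ("MCP", mcp_grad), ("ETP", etp_grad)]:
    print(name, np.round(g(x, lam=1.0), 3))
```

Note how the MCP and SCAD super-gradients decay to exactly zero beyond $\gamma\lambda$: large residual entries receive no shrinkage, which is the mechanism behind the reduced bias discussed in Section 1.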

3.3. Image Denoising Application

To achieve good denoising performance, the group sparse residual $R_i$ should be as small as possible. In this paper, our generalized non-convex non-smooth residual prior-driven image denoising model is

$$\hat{\alpha} = \arg\min_{\alpha} \; \frac{1}{2}\|y - d\alpha\|_2^2 + \lambda \sum_{i=1}^{n} \Psi(A_i - B_i) \quad (15)$$

Letting $x = d\alpha$, we have

$$\hat{x} = \arg\min_{x} \; \frac{1}{2}\|y - x\|_2^2 + \lambda \sum_{i=1}^{n} \Psi(A_i - B_i) \quad (16)$$

According to [11,22], the following relationship holds with high probability (close to 1) at each iteration:

$$\frac{1}{K}\sum_{i=1}^{n} \|Y_i - D_i A_i\|_F^2 = \frac{1}{N}\|y - d\alpha\|_2^2 \quad (17)$$

Therefore, our denoising problem can be converted into the following problem:

$$\hat{X}_i = \arg\min_{X_i} \; \frac{1}{2}\|Y_i - X_i\|_2^2 + \tilde{\lambda}\, \Psi(A_i - B_i) \quad (18)$$

where $\tilde{\lambda} = \lambda K / N$. In this work, we employ the surrogate algorithm [38] to solve (18), which yields the iterative formula

$$A_i^{t+1} = S_{\tilde{\lambda}}\left( D_i^{-1} \hat{X}_i^{t} - B_i^{t} \right) + B_i^{t} \quad (19)$$

where $S_{\tilde{\lambda}}$ denotes the soft-thresholding operator and $D_i$ is the principal component analysis (PCA)-based dictionary of $A_i$ [18,40]. In this paper, we adopt a non-convex surrogate function to construct $S_{\tilde{\lambda}}$. Taking the popular $L_p$-norm as the relaxation function, according to the definition in Table 1 we have

$$S_{\tilde{\lambda}}(x) = \begin{cases} \infty, & x = 0 \\ \tilde{\lambda}\, p\, x^{p-1}, & x > 0 \end{cases} \quad (20)$$

where $\tilde{\lambda}$ is the regularization parameter. To obtain better denoising performance, we set the parameter $\lambda$ to change adaptively, following [7,32]:

$$\lambda = \frac{2\sqrt{2}\, c\, \sigma_n^2}{\sigma_i} \quad (21)$$

where $c$ is a small constant related to $Y_i$, and $\sigma_i$ denotes the variance of the residual $R_i$. In the iterative process, we choose an iterative regularization strategy to update the estimated noise variance and improve the performance of the model:

$$\sigma^{t+1} = \gamma \sqrt{\sigma^2 - \|Y - \hat{X}^{t+1}\|_2^2} \quad (22)$$

where $\gamma$ is a constant. After obtaining all $\hat{A}_i$, the reconstructed groups can be recovered through

$$\hat{X}_i = D_i \hat{A}_i \quad (23)$$
The complete description of the proposed image denoising method is presented in Algorithm 1.
Algorithm 1: Non-convex Optimization Denoising Model Based on Group Sparse Residual Constraint
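Since Algorithm 1 appears in the paper only as an image, the following condensed Python sketch gives one plausible reading of its main loop on a single group, under stated assumptions: an SVD/PCA dictionary recomputed per iteration, the MCP proximal (firm-thresholding) map standing in for the generic operator $S_{\tilde{\lambda}}$, the adaptive $\lambda$ of Eq. (21), and a per-pixel variant of the noise update of Eq. (22). The pre-estimate $B_i$, the constant $c$, and the iteration count are hypothetical.

```python
import numpy as np

def mcp_threshold(r, lam, gamma=2.0):
    """Firm (MCP) thresholding: zero below lam, linear expansion in between,
    identity beyond gamma*lam (so large entries are unbiased)."""
    out = np.where(np.abs(r) <= lam, 0.0,
                   gamma / (gamma - 1.0) * np.sign(r) * (np.abs(r) - lam))
    return np.where(np.abs(r) >= gamma * lam, r, out)

def denoise_group(Y_i, B_i, sigma_n, c=0.3, T=8, gamma_reg=0.9):
    """Sketch of Algorithm 1 on one group Y_i, given a pre-estimated clean
    coefficient matrix B_i (e.g., from non-local averaging)."""
    X_i, sigma = Y_i.copy(), sigma_n
    for _ in range(T):
        U, s, Vt = np.linalg.svd(X_i, full_matrices=False)  # per-group dictionary
        A_i = np.diag(s)                                    # noisy coefficients
        R_i = A_i - B_i                                     # sparse residual, Eq. (10)
        # adaptive threshold, cf. Eq. (21), with std(R_i) standing in for sigma_i
        lam = 2.0 * np.sqrt(2.0) * c * sigma ** 2 / (np.std(R_i) + 1e-8)
        A_i = mcp_threshold(R_i, lam) + B_i                 # update, cf. Eq. (19)
        X_i = U @ A_i @ Vt                                  # back to pixel domain
        gap = sigma_n ** 2 - np.mean((Y_i - X_i) ** 2)      # per-pixel Eq. (22)
        sigma = gamma_reg * np.sqrt(max(gap, 1e-8))         # updated noise estimate
    return X_i

Y_i = np.random.randn(36, 16)
B_i = np.zeros((16, 16))  # hypothetical pre-estimate of the clean coefficients
X_hat = denoise_group(Y_i, B_i, sigma_n=0.5)
print(X_hat.shape)  # (36, 16)
```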

4. Experimental Results

In this section, extensive experimental results are presented to evaluate the image denoising performance of our proposed method. As in the state-of-the-art methods [20,21,45], we chose two widely used metrics to assess the objective and subjective quality of the reconstructed images: the peak signal-to-noise ratio (PSNR) and the structural similarity index (SSIM), respectively. Note that PSNR is a commonly used indicator of signal/image distortion; although its scores are often inconsistent with subjective human perception, it remains a valuable evaluation index. SSIM evaluates images from three aspects: brightness, contrast, and structure, exploiting the strong correlation among neighboring pixels, that is, the similarity of object structure information. In contrast to PSNR, SSIM agrees more closely with the intuitive effect observed by human vision, and it has symmetry, upper and lower bounds, and other desirable properties. These two metrics are the most commonly used objective indicators for evaluating the quality of image restoration. They are defined as follows:

$$\mathrm{PSNR} = 10 \log_{10} \frac{255^2}{\frac{1}{3mn}\sum_{i=1}^{3}\|\hat{X}_i - M_i\|_F^2} \quad (24)$$

where $M_i$ denotes the $i$-th channel of the original image (for a grayscale image, $i = 1$ by default), $\hat{X}_i$ denotes the $i$-th channel of the restored image, and the image size is $m \times n$.

$$\mathrm{SSIM}(\hat{X}_i, M_i) = \frac{(2\mu_{\hat{X}_i}\mu_{M_i} + c_1)(2\sigma_{\hat{X}_i M_i} + c_2)}{(\mu_{\hat{X}_i}^2 + \mu_{M_i}^2 + c_1)(\sigma_{\hat{X}_i}^2 + \sigma_{M_i}^2 + c_2)} \quad (25)$$

where $\mu_{\hat{X}_i}$ is the mean gray value of the $i$-th channel of the restored image, $\mu_{M_i}$ is the mean gray value of the $i$-th channel of the original image, $\sigma_{\hat{X}_i}^2$ and $\sigma_{M_i}^2$ are the corresponding variances, and $\sigma_{\hat{X}_i M_i}$ is the covariance of $\hat{X}_i$ and $M_i$. $c_1$ and $c_2$ are constants used to maintain numerical stability.
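A minimal sketch of both metrics for a single-channel image follows (Eqs. (24) and (25)). Note that standard SSIM averages the Eq. (25) statistic over local windows, whereas this illustration computes it globally; the constants $c_1$ and $c_2$ use the conventional $K_1 = 0.01$, $K_2 = 0.03$ defaults, which are our assumption rather than values stated in the paper.

```python
import numpy as np

def psnr(x_hat, m):
    # Eq. (24) for a single-channel 8-bit image
    mse = np.mean((x_hat.astype(float) - m.astype(float)) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

def ssim_global(x_hat, m, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Eq. (25) evaluated once over the whole image
    x, y = x_hat.astype(float), m.astype(float)
    mu_x, mu_y = x.mean(), y.mean()
    var_x, var_y = x.var(), y.var()
    cov = np.mean((x - mu_x) * (y - mu_y))
    return ((2 * mu_x * mu_y + c1) * (2 * cov + c2)) / \
           ((mu_x ** 2 + mu_y ** 2 + c1) * (var_x + var_y + c2))

m = np.clip(np.random.rand(64, 64) * 255, 0, 255)
x_hat = np.clip(m + 10.0 * np.random.randn(64, 64), 0, 255)
print(round(psnr(x_hat, m), 2), round(ssim_global(x_hat, m), 4))
```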

4.1. Effectiveness of Our Denoising Model

In this subsection, we validate the effectiveness of our non-convex image denoising model by denoising classic noisy images and comparing the results with several state-of-the-art denoising methods, including BM3D [5], NCSR [46], PGPD [47], A-NLS [45], and GSRC [7]. Note that BM3D, PGPD, and GSRC are three non-local-based image denoising methods, while NCSR and A-NLS are two sparsity residual-based image denoising methods. Without loss of generality, the classic non-convex MCP function is plugged into our denoising model for these experiments. The size of the search window affects the search efficiency for similar blocks: if the window is too large, the search time increases and efficiency suffers; if it is too small, similar blocks may be missed, which degrades the denoising performance. To obtain the best denoising performance, suitable parameters are used for the proposed model. The search window $W \times W$ is set to $25 \times 25$, and the search threshold is set to 0.2. The patch size $b \times b$ is set to $6 \times 6$, $7 \times 7$, $8 \times 8$, and $9 \times 9$ for $\sigma_n \le 20$, $20 < \sigma_n \le 50$, $50 < \sigma_n \le 75$, and $\sigma_n > 75$, respectively. The parameters $\gamma$ and $\lambda$ are set to $1 \times 10^{7}$ and 2.1, respectively. We chose 13 widely used test images (see Figure 2) and conducted denoising experiments at four noise levels, i.e., Gaussian white noise with standard deviations $\sigma_n = 30, 40, 50, 75$; all PSNR results are presented in Table 2. From the results, we find that our proposed method outperforms the other five competing methods in all cases in terms of PSNR.

4.2. Generalization Validation of Different Penalties

In this subsection, we study the generalization ability of our proposed non-convex denoising model by comparing the denoising performance of five non-convex functions under different standard deviations of Gaussian noise. We selected 13 gray images of size $256 \times 256$ for the denoising test, as shown in Figure 2. These images are drawn from the Set12 and CSet8 datasets; the CSet8 images are RGB and were converted to grayscale for this experiment. We selected two noise levels, $\sigma_n = 20$ and $\sigma_n = 100$. The PSNR (dB)/SSIM values of all experimental results are shown in Table 3. It can be seen from Table 3 that in the case of $\sigma_n = 20$, the average performance of MCP is the best: 0.11 dB, 0.19 dB, 0.03 dB, and 0.58 dB higher than $L_p$, Logarithm, SCAD, and ETP, respectively. As the standard deviation of the Gaussian white noise increases, the gap between the five algorithms narrows; at $\sigma_n = 100$, the five non-convex optimization algorithms show almost no difference. To obtain better denoising performance, we set the parameter $\lambda_i$ to change adaptively with each $Y_i$:
$$\lambda_i = \frac{2\sqrt{2}\, c\, \sigma_n^2}{\sigma_i} \quad (26)$$
Figure 3 and Figure 4 show the experimental results for visual comparison. Figure 3 presents the Airplane image denoised under noise with standard deviation $\sigma_n = 20$. In terms of detail, none of the five results produces artifacts; relatively speaking, however, the MCP result preserves edge details most completely, while the edges in the ETP result are more blurred. Figure 4 presents the Parrots image under $\sigma_n = 100$. Due to the heavy noise, the performance of the five algorithms is almost identical: the goal of denoising is achieved in general, but the details are not perfectly preserved. As the noise level increases, the difference between the algorithms becomes smaller and smaller, to the point of being invisible to the naked eye.

4.3. Robustness Study Under Gaussian Noise with Different Standard Deviations

In this experiment, we study the robustness of our proposed non-convex denoising model by comparing the denoising performance of the five non-convex optimization algorithms under different standard deviations of Gaussian noise. We again use the 13 gray images of size $256 \times 256$ from Figure 2 and add white Gaussian noise with $\sigma_n = 30$, 40, 50, and 75 (Table 4). In all four cases, MCP performs best. When $\sigma_n = 30$, MCP is 0.1 dB, 0.03 dB, 0.08 dB, and 0.01 dB higher in average PSNR than ETP, Logarithm, $L_p$, and SCAD, respectively. When $\sigma_n = 40$, the differences in mean PSNR between MCP and the other non-convex surrogate functions are 0.29 dB, 0.09 dB, 0.12 dB, and 0.01 dB. As the standard deviation of the Gaussian white noise increases, the gap between the five algorithms narrows.
Figure 5, Figure 6, Figure 7 and Figure 8 show the experimental results for visual comparison. Figure 5 shows the Boats image denoised under Gaussian noise with $\sigma_n = 30$. Figure 6 shows the Cameraman image under $\sigma_n = 40$; compared with Boats, the color boundaries in Cameraman are more obvious and the scene is simpler. On this type of image, MCP is still the best performer: whether in the transition between the figure and the background or in the details of the distant buildings, the MCP algorithm preserves details well. Figure 7 shows the Monarch image under $\sigma_n = 50$. The composition of Monarch is more complex, with more color edges in the picture, which demands a stronger ability to retain details; here the performance of MCP and SCAD is similar. Figure 8 shows the House image under $\sigma_n = 75$. Compared with the previous images, the color edges in House are mostly straight lines; however, due to the heavy noise, the performance of the five algorithms is almost the same, and while denoising is achieved in general, the details are not perfectly preserved. As the noise level increases, the difference between the algorithms becomes smaller and smaller, as shown in Figure 7 and Figure 8, and is invisible to the naked eye.

4.4. Computational Time Analysis

Computational time is also a key factor when comparing the performance of algorithms. All experiments were carried out in the same environment. As the noise standard deviation increases, the program takes longer to run: from $\sigma_n = 20$ to 100, the running time increases by a factor of 2–3. Different algorithms also run at different speeds. The average running times are shown in Table 5. It can be seen that the running time of Logarithm is the shortest, while that of the $L_p$ penalty function is the longest.

5. Conclusions and Discussion

This paper studies a weighted generalized non-convex low-rank denoising model with a group sparsity residual prior. To improve the accuracy of the rank approximation, a family of non-convex optimization functions is employed to replace traditional convex surrogates. Moreover, the group sparsity residual prior is utilized to address the ill-posed problem. We also analyze the denoising performance of the various non-convex optimization functions under different noise intensities. Experimental results show that the MCP function performs best both in objective criteria and in subjective visual evaluation. The group-sparse-residual-based method achieves a good denoising effect, and the residual model still has great potential.
In future research, the performance of the residual prior model can be further developed. The images processed in this paper are two-dimensional gray images; the algorithm can therefore be extended to three-dimensional hyperspectral images, remote sensing images, and brain CT images.

Author Contributions

S.W. contributed to conceptualization, methodology, data analysis, and writing. R.H. contributed to formal analysis, data curation, review, and editing. C.L. and P.Q. contributed to validation and supervision. All authors have read and agreed to the published version of the manuscript.

Funding

This work was supported by the Science and Technology Project of the State Grid Zhejiang Electric Power Co., Ltd. [grant number 5211DS23000T].

Data Availability Statement

The original contributions presented in this study are included in the article. Further inquiries can be directed to the corresponding author.

Conflicts of Interest

Authors Shaohe Wang, Rui Han, and Chen Li were employed by State Grid Zhejiang Electric Power Co., Ltd., and author Ping Qian was employed by State Grid Wenzhou Electric Power Supply Co., Ltd. All authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest. The authors declare that this study received funding from the Science and Technology Project of the State Grid Zhejiang Electric Power Co., Ltd. The funder was not involved in the study design; the collection, analysis, or interpretation of data; the writing of this article; or the decision to submit it for publication.

References

1. Danielyan, A.; Katkovnik, V.; Egiazarian, K. BM3D frames and variational image deblurring. IEEE Trans. Image Process. 2012, 21, 1715–1728. [Google Scholar] [CrossRef] [PubMed]
  2. Chen, H.; Gu, J.; Zhang, Z. Attention in Attention Network for Image Super-Resolution. arXiv 2021, arXiv:2104.09497. [Google Scholar]
  3. Donoho, D. Compressed sensing. IEEE Trans. Inf. Theory 2006, 52, 1289–1306. [Google Scholar] [CrossRef]
  4. Guo, L.; Huang, S.; Liu, H.; Wen, B. Towards Robust Image Denoising via Flow-based Joint Image and Noise Model. IEEE Trans. Circuits Syst. Video Technol. 2023, 34, 6105–6115. [Google Scholar] [CrossRef]
  5. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image Denoising by Sparse 3-D Transform-Domain Collaborative Filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095. [Google Scholar] [CrossRef] [PubMed]
  6. Luo, Q.; Liu, B.; Zhang, Y.; Han, Z.; Tang, Y. Low-rank decomposition on transformed feature maps domain for image denoising. Vis. Comput. 2021, 37, 1899–1915. [Google Scholar] [CrossRef]
  7. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Group sparsity residual constraint with non-local priors for image restoration. IEEE Trans. Image Process. 2020, 29, 8960–8975. [Google Scholar] [CrossRef]
  8. Li, Y.; Xiao, F.; Liang, W.; Gui, L. Multiply Complementary Priors for Image Compressive Sensing Reconstruction in Impulsive Noise. ACM Trans. Multimed. Comput. Commun. Appl. 2024, 20, 1–22. [Google Scholar] [CrossRef]
  9. Li, Y.; Gao, L.; Hu, S.; Gui, G.; Chen, C.Y. Nonlocal low-rank plus deep denoising prior for robust image compressed sensing reconstruction. Expert Syst. Appl. 2023, 228, 120456. [Google Scholar] [CrossRef]
  10. Li, Y.; Jiang, Y.; Zhang, H.; Liu, J.; Ding, X.; Gui, G. Nonconvex L1/2-regularized nonlocal self-similarity denoiser for compressive sensing based CT reconstruction. J. Frankl. Inst. 2023, 360, 4172–4195. [Google Scholar] [CrossRef]
  11. Li, Y.; Gui, G.; Cheng, X. From group sparse coding to rank minimization: A novel denoising model for low-level image restoration. Signal Process. 2020, 176, 107655. [Google Scholar] [CrossRef]
  12. Malfait, M.; Roose, D. Wavelet-based image denoising using a Markov random field a priori model. IEEE Trans. Image Process. 1997, 6, 549–565. [Google Scholar] [CrossRef] [PubMed]
  13. Lefkimmiatis, S. Universal denoising networks: A novel CNN architecture for image denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3204–3213. [Google Scholar]
  14. Tian, C.; Xu, Y.; Zuo, W. Image denoising using deep CNN with batch renormalization. Neural Netw. 2020, 121, 461–473. [Google Scholar] [CrossRef]
  15. Liu, W.; Li, Y.; Huang, D. RA-UNet: An improved network model for image denoising. Vis. Comput. 2023, 40, 4319–4335. [Google Scholar] [CrossRef]
  16. Elad, M.; Aharon, M. Image Denoising Via Sparse and Redundant Representations Over Learned Dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745. [Google Scholar] [CrossRef]
  17. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279. [Google Scholar] [CrossRef]
  18. Zhang, J.; Zhao, D.; Gao, W. Group-Based Sparse Representation for Image Restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351. [Google Scholar] [CrossRef]
  19. Lin, J.; Deng, D.; Yan, J.; Lin, X. Self-adaptive group based sparse representation for image inpainting. J. Comput. Appl. 2017, 37, 1169–1173. [Google Scholar] [CrossRef]
  20. Zha, Z.; Yuan, X.; Zhou, J.; Zhu, C.; Wen, B. Image restoration via simultaneous nonlocal self-similarity priors. IEEE Trans. Image Process. 2020, 29, 8561–8576. [Google Scholar] [CrossRef]
  21. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C. Image restoration via reconciliation of group sparsity and low-rank models. IEEE Trans. Image Process. 2021, 30, 5223–5238. [Google Scholar] [CrossRef] [PubMed]
  22. Zha, Z.; Wen, B.; Yuan, X.; Zhou, J.; Zhu, C.; Kot, A.C. Low-rankness guided group sparse representation for image restoration. IEEE Trans. Neural Netw. Learn. Syst. 2022, 34, 7593–7607. [Google Scholar] [CrossRef]
  23. Buades, A.; Coll, B.; Morel, J.M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’05), San Diego, CA, USA, 20–26 June 2005; Volume 2, pp. 60–65. [Google Scholar] [CrossRef]
  24. Kervrann, C.; Boulanger, J. Local adaptivity to variable smoothness for exemplar-based image regularization and representation. Int. J. Comput. Vis. 2008, 79, 45–69. [Google Scholar] [CrossRef]
  25. Goossens, B.; Luong, H.; Pizurica, A.; Philips, W. An improved non-local denoising algorithm. In Proceedings of the 2008 International Workshop on Local and Non-Local Approximation in Image Processing (LNLA 2008), Tuusula, Finland, 19–21 August 2008. [Google Scholar]
  26. Shi, M.; Fan, L.; Li, X.; Zhang, C. A competent image denoising method based on structural information extraction. Vis. Comput. 2023, 39, 2407–2423. [Google Scholar] [CrossRef]
  27. Feng, L.; Sun, H.; Sun, Q.; Xia, G. Compressive sensing via nonlocal low-rank tensor regularization. Neurocomputing 2016, 216, 45–60. [Google Scholar] [CrossRef]
  28. Feng, L.; Sun, H.; Zhu, J. Robust image compressive sensing based on half-quadratic function and weighted schatten- p norm. Inf. Sci. 2018, 477, 265–280. [Google Scholar] [CrossRef]
  29. Xie, Y.; Gu, S.; Liu, Y.; Zuo, W.; Zhang, W.; Zhang, L. Weighted Schatten p -Norm Minimization for Image Denoising and Background Subtraction. IEEE Trans. Image Process. 2016, 25, 4842–4857. [Google Scholar] [CrossRef]
  30. Chen, B.; Sun, H.; Feng, L.; Xia, G.; Zhang, G. Robust image compressive sensing based on m-estimator and nonlocal low-rank regularization. Neurocomputing 2018, 275, 586–597. [Google Scholar] [CrossRef]
  31. Zhang, Y.; Yang, Z.; Hu, J.; Zou, S.; Fu, Y. MRI Denoising Using Low Rank Prior and Sparse Gradient Prior. IEEE Access 2019, 7, 45858–45865. [Google Scholar] [CrossRef]
  32. Zha, Z.; Liu, X.; Zhou, Z.; Huang, X.; Shi, J.; Shang, Z.; Tang, L.; Bai, Y.; Wang, Q.; Zhang, X. Image denoising via group sparsity residual constraint. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 1787–1791. [Google Scholar] [CrossRef]
33. Li, Y.; Wu, H.; Jiang, X.; Ding, X. NG-RED: Nonconvex group-matrix residual denoising learning for image restoration. Expert Syst. Appl. 2025, 264, 125876. [Google Scholar] [CrossRef]
  34. Umirzakova, S.; Mardieva, S.; Muksimova, S.; Ahmad, S.; Whangbo, T. Enhancing the Super-Resolution of Medical Images: Introducing the Deep Residual Feature Distillation Channel Attention Network for Optimized Performance and Efficiency. Bioengineering 2023, 10, 1332. [Google Scholar] [CrossRef] [PubMed]
  35. Frank, I.E.; Friedman, J.H. A statistical view of some chemometrics regression tools. (With discussion). Technometrics 1993, 35, 109–135. [Google Scholar] [CrossRef]
36. Fan, J.; Li, R. Variable Selection via Nonconcave Penalized Likelihood and its Oracle Properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar]
  37. Friedman, J.H. Fast sparse regression and classification. Int. J. Forecast. 2012, 28, 722–738. [Google Scholar] [CrossRef]
  38. Zhang, C.H. Nearly unbiased variable selection under minimax concave penalty. Ann. Statist. 2010, 38, 894–942. [Google Scholar] [CrossRef] [PubMed]
39. Gao, C.; Wang, N.; Yu, Q.; Zhang, Z. A Feasible Nonconvex Relaxation Approach to Feature Selection. In Proceedings of the AAAI Conference on Artificial Intelligence, Québec City, QC, Canada, 27–31 July 2014. [Google Scholar]
  40. Zha, Z.; Wen, B.; Yuan, X.; Ravishankar, S.; Zhou, J.; Zhu, C. Learning Nonlocal Sparse and Low-Rank Models for Image Compressive Sensing: Nonlocal sparse and low-rank modeling. IEEE Signal Process. Mag. 2023, 40, 32–44. [Google Scholar] [CrossRef]
  41. Larose, D.T.; Larose, C.D. k-Nearest Neighbor Algorithm. In Discovering Knowledge in Data: An Introduction to Data Mining; Wiley: Hoboken, NJ, USA, 2014; pp. 149–164. [Google Scholar] [CrossRef]
  42. Tang, H.; Liu, H.; Xiao, W.; Sebe, N. When Dictionary Learning Meets Deep Learning: Deep Dictionary Learning and Coding Network for Image Recognition With Limited Data. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 2129–2141. [Google Scholar] [CrossRef] [PubMed]
  43. Kreutz-Delgado, K.; Murray, J.F.; Rao, B.D.; Engan, K.; Lee, T.W.; Sejnowski, T.J. Dictionary Learning Algorithms for Sparse Representation. Neural Comput. 2003, 15, 349–396. [Google Scholar] [CrossRef] [PubMed]
  44. Zha, Z.; Wen, B.; Yuan, X.; Zhang, J.; Zhou, J.; Lu, Y.; Zhu, C. Nonlocal Structured Sparsity Regularization Modeling for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2023, 61, 1–16. [Google Scholar] [CrossRef]
  45. Xiong, R.; Liu, H.; Zhang, X.; Zhang, J.; Ma, S.; Wu, F.; Gao, W. Image Denoising via Bandwise Adaptive Modeling and Regularization Exploiting Nonlocal Similarity. IEEE Trans. Image Process. 2016, 25, 5793–5805. [Google Scholar] [CrossRef] [PubMed]
  46. Dong, W.; Zhang, L.; Shi, G.; Li, X. Nonlocally Centralized Sparse Representation for Image Restoration. IEEE Trans. Image Process. 2013, 22, 1620–1630. [Google Scholar] [CrossRef]
  47. Xu, J.; Zhang, L.; Zuo, W.; Zhang, D.; Feng, X. Patch Group Based Nonlocal Self-Similarity Prior Learning for Image Denoising. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015. [Google Scholar]
Figure 1. Flowchart of the group sparse coefficient residuals.
Figure 2. Test images used in the experiments.
Figure 3. The results of the test image (Airplane) denoised using different methods while $\sigma_n = 20$. (a) ETP, 25.35 dB; (b) Logarithm, 25.59 dB; (c) Lp, 25.50 dB; (d) MCP, 25.69 dB; (e) SCAD, 25.67 dB.
Figure 4. The results of the test image (Parrots) denoised using different methods while $\sigma_n = 100$. (a) ETP, 23.58 dB; (b) Logarithm, 23.6 dB; (c) Lp, 23.6 dB; (d) MCP, 23.6 dB; (e) SCAD, 23.6 dB.
Figure 5. The results of the test image (Boats) denoised using different methods while $\sigma_n = 30$. (a) ETP, 26.21 dB; (b) Logarithm, 26.27 dB; (c) Lp, 26.24 dB; (d) MCP, 26.30 dB; (e) SCAD, 26.30 dB.
Figure 6. The results of the test image (Cameraman) denoised using different methods while $\sigma_n = 40$. (a) ETP, 23.56 dB; (b) Logarithm, 23.67 dB; (c) Lp, 23.68 dB; (d) MCP, 23.74 dB; (e) SCAD, 23.73 dB.
Figure 7. The results of the test image (Monarch) denoised using different methods while $\sigma_n = 50$. (a) ETP, 23.76 dB; (b) Logarithm, 23.80 dB; (c) Lp, 23.80 dB; (d) MCP, 23.82 dB; (e) SCAD, 23.82 dB.
Figure 8. The results of the test image (House) denoised using different methods while $\sigma_n = 75$. (a) ETP, 26.39 dB; (b) Logarithm, 26.43 dB; (c) Lp, 26.41 dB; (d) MCP, 26.43 dB; (e) SCAD, 26.43 dB.
Table 1. Five popular non-convex surrogate functions and their super-gradients.

| Penalty | Formula $\rho(x)$ | Super-Gradient $\partial\rho(x)$ |
|---|---|---|
| $L_p$ | $\lambda x^p$ | $\begin{cases} \infty, & x = 0 \\ \lambda p x^{p-1}, & x > 0 \end{cases}$ |
| SCAD | $\begin{cases} \lambda x, & x \le \lambda \\ \frac{-x^2 + 2\gamma\lambda x - \lambda^2}{2(\gamma - 1)}, & \lambda < x \le \gamma\lambda \\ \frac{\lambda^2(\gamma + 1)}{2}, & x > \gamma\lambda \end{cases}$ | $\begin{cases} \lambda, & x \le \lambda \\ \frac{\gamma\lambda - x}{\gamma - 1}, & \lambda < x \le \gamma\lambda \\ 0, & x > \gamma\lambda \end{cases}$ |
| Logarithm | $\frac{\lambda}{\log(\gamma + 1)} \log(\gamma x + 1)$ | $\frac{\lambda\gamma}{(\gamma x + 1)\log(\gamma + 1)}$ |
| MCP | $\begin{cases} \lambda x - \frac{x^2}{2\gamma}, & x < \gamma\lambda \\ \frac{1}{2}\gamma\lambda^2, & x \ge \gamma\lambda \end{cases}$ | $\begin{cases} \lambda - \frac{x}{\gamma}, & x < \gamma\lambda \\ 0, & x \ge \gamma\lambda \end{cases}$ |
| ETP | $\frac{\lambda(1 - \exp(-\gamma x))}{1 - \exp(-\gamma)}$ | $\frac{\lambda\gamma}{1 - \exp(-\gamma)} \exp(-\gamma x)$ |
Table 2. The PSNR (dB) results of the images denoised using different methods.

$\sigma_n = 30$:

| Images | BM3D | NCSR | PGPD | A-NLS | GSRC | Ours |
|---|---|---|---|---|---|---|
| Airplane | 28.49 | 28.34 | 28.63 | 28.59 | 28.68 | 28.97 |
| Boats | 29.33 | 29.05 | 29.32 | 29.34 | 29.42 | 29.65 |
| Fence | 28.19 | 28.14 | 28.13 | 28.43 | 28.39 | 28.65 |
| Flower | 27.97 | 27.88 | 28.11 | 28.20 | 28.21 | 28.45 |
| Foreman | 32.75 | 32.61 | 32.83 | 32.79 | 33.15 | 33.45 |
| House | 32.09 | 32.01 | 32.24 | 32.26 | 32.44 | 32.78 |
| J. Bean | 31.97 | 31.90 | 31.99 | 32.07 | 32.28 | 32.43 |
| Lake | 26.74 | 26.69 | 26.90 | 26.92 | 26.89 | 27.05 |
| Leaves | 27.81 | 28.04 | 27.99 | 28.37 | 28.56 | 28.78 |
| Lena | 29.46 | 29.32 | 29.60 | 29.50 | 29.66 | 29.89 |
| Lin | 30.95 | 30.65 | 30.96 | 30.83 | 30.92 | 31.21 |
| Monarch | 28.35 | 28.38 | 28.49 | 28.70 | 28.80 | 28.97 |
| Starfish | 27.66 | 27.69 | 27.67 | 27.89 | 28.02 | 28.29 |

$\sigma_n = 40$:

| Images | BM3D | NCSR | PGPD | A-NLS | GSRC | Ours |
|---|---|---|---|---|---|---|
| Airplane | 26.88 | 26.78 | 27.12 | 27.10 | 27.21 | 27.51 |
| Boats | 27.76 | 27.52 | 27.90 | 27.80 | 27.97 | 28.17 |
| Fence | 26.84 | 26.76 | 26.91 | 27.11 | 27.16 | 27.41 |
| Flower | 26.48 | 26.35 | 26.68 | 26.75 | 26.84 | 27.11 |
| Foreman | 31.29 | 31.52 | 31.55 | 31.29 | 31.81 | 31.99 |
| House | 30.65 | 30.79 | 31.02 | 30.91 | 31.16 | 31.39 |
| J. Bean | 30.21 | 30.49 | 30.39 | 30.38 | 30.51 | 30.79 |
| Lake | 25.21 | 25.21 | 25.51 | 25.46 | 25.52 | 25.77 |
| Leaves | 25.69 | 26.20 | 26.29 | 26.69 | 26.82 | 27.02 |
| Lena | 27.82 | 28.00 | 28.22 | 28.00 | 28.16 | 28.43 |
| Lin | 29.52 | 29.27 | 29.73 | 29.39 | 29.47 | 29.61 |
| Monarch | 26.72 | 26.81 | 27.02 | 27.20 | 27.33 | 27.58 |
| Starfish | 26.06 | 26.17 | 26.21 | 26.36 | 26.53 | 26.72 |

$\sigma_n = 50$:

| Images | BM3D | NCSR | PGPD | A-NLS | GSRC | Ours |
|---|---|---|---|---|---|---|
| Airplane | 25.76 | 25.63 | 25.98 | 26.02 | 26.17 | 26.51 |
| Boats | 26.74 | 26.37 | 26.82 | 26.78 | 26.95 | 27.43 |
| Fence | 25.92 | 25.77 | 25.94 | 26.22 | 26.26 | 26.47 |
| Flower | 25.49 | 25.31 | 25.63 | 25.77 | 25.76 | 25.99 |
| Foreman | 30.36 | 30.41 | 30.45 | 30.46 | 30.77 | 30.70 |
| House | 29.69 | 29.61 | 29.93 | 30.13 | 30.45 | 30.74 |
| J. Bean | 29.26 | 29.24 | 29.20 | 29.26 | 29.58 | 29.87 |
| Lake | 24.29 | 24.15 | 24.49 | 24.44 | 24.44 | 24.65 |
| Leaves | 24.68 | 24.94 | 25.03 | 25.32 | 25.66 | 25.97 |
| Lena | 26.90 | 26.94 | 27.15 | 27.08 | 27.06 | 27.49 |
| Lin | 28.71 | 28.23 | 28.79 | 28.50 | 28.60 | 28.98 |
| Monarch | 25.82 | 25.73 | 26.00 | 26.12 | 26.25 | 26.54 |
| Starfish | 25.04 | 25.06 | 25.11 | 25.26 | 25.36 | 25.64 |

$\sigma_n = 75$:

| Images | BM3D | NCSR | PGPD | A-NLS | GSRC | Ours |
|---|---|---|---|---|---|---|
| Airplane | 23.99 | 23.76 | 24.15 | 24.06 | 24.12 | 24.29 |
| Boats | 24.82 | 24.44 | 24.83 | 24.76 | 24.94 | 25.12 |
| Fence | 24.22 | 23.75 | 24.18 | 24.40 | 24.53 | 24.79 |
| Flower | 23.82 | 23.50 | 23.82 | 23.87 | 23.87 | 23.99 |
| Foreman | 28.07 | 28.18 | 28.39 | 28.54 | 28.75 | 28.94 |
| House | 27.51 | 27.16 | 27.81 | 28.06 | 28.59 | 28.83 |
| J. Bean | 27.22 | 27.15 | 27.07 | 27.12 | 27.29 | 27.42 |
| Lake | 22.63 | 22.48 | 22.76 | 22.61 | 22.61 | 22.82 |
| Leaves | 22.49 | 22.60 | 22.61 | 22.95 | 23.34 | 23.51 |
| Lena | 25.17 | 25.02 | 25.30 | 25.32 | 25.32 | 25.53 |
| Lin | 26.96 | 26.22 | 27.05 | 26.72 | 26.84 | 26.97 |
| Monarch | 23.91 | 23.67 | 24.00 | 24.11 | 24.35 | 24.65 |
| Starfish | 23.27 | 23.18 | 23.23 | 23.24 | 23.32 | 23.51 |
Table 3. The PSNR (dB)/SSIM results of images corrupted by Gaussian noise with different standard deviations, i.e., $\sigma_n = 20$ and 100.

$\sigma_n = 20$:

| Images | ETP | Logarithm | Lp | MCP | SCAD |
|---|---|---|---|---|---|
| Airplane | 25.35/0.7688 | 25.59/0.8047 | 25.50/0.7876 | 25.69/0.8202 | 25.67/0.8178 |
| Baboon | 23.41/0.5318 | 23.59/0.5428 | 23.60/0.5393 | 23.69/0.5500 | 23.68/0.5499 |
| Barbara | 26.86/0.8044 | 27.28/0.8277 | 27.28/0.8197 | 27.49/0.8372 | 27.46/0.8360 |
| Boats | 26.79/0.7770 | 27.18/0.8016 | 27.11/0.7913 | 27.34/0.8124 | 27.32/0.8113 |
| Cameraman | 24.25/0.7191 | 24.47/0.7624 | 24.44/0.7452 | 24.57/0.7836 | 24.55/0.7803 |
| Elaine | 28.24/0.8157 | 28.71/0.8412 | 28.72/0.8313 | 28.99/0.8536 | 28.95/0.8518 |
| Foreman | 30.80/0.8208 | 31.95/0.8691 | 31.87/0.8536 | 32.54/0.8907 | 32.45/0.8875 |
| House | 28.87/0.7637 | 29.64/0.8147 | 29.48/0.7956 | 29.95/0.8383 | 29.91/0.8346 |
| Leaves | 23.69/0.8565 | 23.84/0.8711 | 23.83/0.8662 | 23.91/0.8770 | 23.9/0.8762 |
| Monarch | 25.06/0.8217 | 25.30/0.8445 | 25.29/0.8360 | 25.42/0.8552 | 25.39/0.8532 |
| Parrots | 26.57/0.7890 | 27.00/0.8307 | 26.95/0.8136 | 27.21/0.8507 | 27.18/0.8478 |
| Peppers | 25.44/0.7499 | 25.68/0.7727 | 16.23/0.7601 | 25.78/0.7831 | 25.78/0.7822 |
| Starfish | 24.84/0.7454 | 25.08/0.7657 | 22.98/0.7544 | 25.16/0.7739 | 25.15/0.7725 |
| Average | 26.17/0.7664 | 26.56/0.7961 | 25.64/0.7841 | 26.75/0.8097 | 26.72/0.8078 |

$\sigma_n = 100$:

| Images | ETP | Logarithm | Lp | MCP | SCAD |
|---|---|---|---|---|---|
| Airplane | 21.92/0.6578 | 21.94/0.6598 | 21.93/0.6585 | 21.94/0.6602 | 21.94/0.6603 |
| Baboon | 21.42/0.3613 | 21.43/0.3626 | 21.43/0.3625 | 21.43/0.3632 | 21.43/0.3631 |
| Barbara | 23.04/0.6108 | 23.04/0.6112 | 23.04/0.6098 | 23.04/0.6110 | 23.04/0.6111 |
| Boats | 23.06/0.6172 | 23.07/0.6182 | 23.06/0.6174 | 23.07/0.6187 | 23.07/0.6186 |
| Cameraman | 21.60/0.6424 | 21.61/0.6446 | 21.6/0.6420 | 21.61/0.6457 | 21.61/0.6456 |
| Elaine | 24.07/0.6883 | 24.08/0.6897 | 24.07/0.6889 | 24.08/0.6907 | 24.08/0.6906 |
| Foreman | 26.81/0.7785 | 26.86/0.7815 | 26.86/0.7796 | 26.88/0.7830 | 26.88/0.7828 |
| House | 25.13/0.7153 | 25.15/0.7181 | 25.13/0.7164 | 25.14/0.7185 | 25.14/0.7185 |
| Leaves | 19.40/0.6913 | 19.40/0.6925 | 19.4/0.6926 | 19.4/0.6932 | 19.4/0.6930 |
| Monarch | 21.34/0.6769 | 21.36/0.6785 | 21.36/0.6783 | 21.37/0.6793 | 21.36/0.6791 |
| Parrots | 23.58/0.7153 | 23.60/0.7180 | 23.6/0.7175 | 23.6/0.7191 | 23.6/0.7188 |
| Peppers | 22.25/0.6432 | 22.26/0.6445 | 22.26/0.6443 | 22.26/0.6453 | 22.26/0.6451 |
| Starfish | 21.32/0.5643 | 21.32/0.5654 | 21.32/0.5655 | 21.32/0.5661 | 21.32/0.5659 |
| Average | 22.69/0.6433 | 22.70/0.6450 | 22.70/0.6441 | 22.70/0.6457 | 22.70/0.6456 |
Table 4. Robustness study of non-convex penalties via averaged PSNR (dB)/SSIM for 13 images corrupted by Gaussian noise with 4 different standard deviations.

| $\sigma_n$ | ETP | Logarithm | Lp | MCP | SCAD |
|---|---|---|---|---|---|
| 30 | 25.67/0.7645 | 25.74/0.7703 | 25.69/0.7637 | 25.77/0.7730 | 25.76/0.7727 |
| 40 | 25.20/0.7265 | 25.40/0.7468 | 25.37/0.7422 | 25.49/0.7561 | 25.48/0.7547 |
| 50 | 24.92/0.7276 | 24.97/0.7331 | 24.96/0.7305 | 25.00/0.7357 | 24.99/0.7353 |
| 75 | 23.62/0.6807 | 23.64/0.6833 | 23.64/0.6822 | 23.65/0.6845 | 23.65/0.6843 |
Table 5. Average run time (seconds) of different algorithms on the test images.

| $\sigma_n$ | ETP | Logarithm | Lp | MCP | SCAD |
|---|---|---|---|---|---|
| 20 | 6.49 | 5.74 | 8.01 | 6.99 | 6.48 |
| 30 | 9.08 | 8.34 | 10.91 | 8.83 | 8.82 |
| 40 | 9.28 | 8.79 | 10.95 | 9.13 | 8.91 |
| 50 | 10.55 | 8.94 | 11.69 | 9.03 | 9.37 |
| 75 | 13.15 | 13.07 | 16.48 | 14.01 | 14.36 |
| 100 | 21.14 | 21.09 | 26.39 | 22.31 | 23.01 |
