Article

A Hybrid Sparse Representation Model for Image Restoration

1 School of Cyber Science and Engineering, Qufu Normal University, Qufu 273165, China
2 Department of Information Engineering, Weihai Ocean Vocational College, Rongcheng 264300, China
* Author to whom correspondence should be addressed.
Sensors 2022, 22(2), 537; https://doi.org/10.3390/s22020537
Submission received: 26 November 2021 / Revised: 3 January 2022 / Accepted: 5 January 2022 / Published: 11 January 2022
(This article belongs to the Section Sensing and Imaging)

Abstract

Group-based sparse representation (GSR) uses the image nonlocal self-similarity (NSS) prior to group similar image patches and then performs sparse representation on each group. However, the traditional GSR model restores an image by training only on the degraded image itself, which inevitably over-fits the data and degrades the restoration results. In this paper, we propose a new hybrid sparse representation (HSR) model for image restoration. The proposed HSR model improves on GSR in two respects. On the one hand, it exploits the NSS priors of both the degraded image and an external image dataset, making the two sources complementary in the feature space and the image plane. On the other hand, we introduce a joint sparse representation model to make better use of the local sparsity and NSS characteristics of images. This joint model integrates the patch-based sparse representation (PSR) model and the GSR model while retaining the advantages of both, so that the sparse representation model is unified. Extensive experimental results show that the proposed hybrid model outperforms several existing image restoration algorithms in both objective and subjective evaluations.

1. Introduction

The purpose of image restoration is to reconstruct a high-quality image $x$ from a degraded observation $y$. This is a typical inverse problem, whose mathematical expression is
$$y = Hx + n \qquad (1)$$
where $H$ denotes the degradation operator and $n$ is usually assumed to be zero-mean Gaussian white noise. Under different settings, Equation (1) can represent different image processing tasks: when $H$ is the identity matrix, Equation (1) represents image denoising [1,2]; when $H$ is a diagonal matrix with diagonal entries of 1 or 0, Equation (1) represents image inpainting [3,4]; and when $H$ is a blurring operator, Equation (1) represents image deblurring [5,6]. In this paper, we focus on the image restoration task.
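To make the degradation model concrete, the following minimal sketch (Python/NumPy; the function name and parameters are illustrative, not from the original paper) builds the diagonal 0/1 mask used for inpainting and applies Equation (1):

```python
import numpy as np

rng = np.random.default_rng(0)

def degrade(x, missing_rate=0.8, sigma=0.0):
    """Apply Equation (1) with H a diagonal 0/1 mask: y = Hx + n."""
    mask = (rng.random(x.shape) >= missing_rate).astype(x.dtype)  # 1 = observed pixel
    noise = sigma * rng.standard_normal(x.shape)                  # zero-mean Gaussian n
    return mask * x + noise, mask

x = rng.random((64, 64))    # stand-in for a clean image
y, H = degrade(x, 0.8)      # 80% random pixel loss, as in the experiments below
```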
In order to obtain high-quality reconstructed images, image prior knowledge is usually used to regularize the solution space. In general, image restoration can be expressed as the following minimization problems:
$$\hat{x} = \arg\min_{x} \frac{1}{2}\|y - Hx\|_2^2 + \lambda R(x) \qquad (2)$$
where the first term $\frac{1}{2}\|y - Hx\|_2^2$ represents data fidelity, the second term $R(x)$ depends on the image prior, and $\lambda$ is a regularization parameter that balances the two terms. Due to the ill-posed nature of image restoration, prior knowledge of the image plays an important role in improving the performance of a restoration algorithm. Over the past decades, various image prior models have been proposed, such as total variation [7], sparse representation [3,8,9,10,11], and deep convolutional neural networks (CNNs) [2,12,13].
Sparse representation is a commonly used technique in image processing. Sparse representation models are usually divided into two categories: analytical sparse representation models [14,15] and synthetic sparse representation models [3]. An analytic sparse representation model produces a sparse effect by multiplying the signal with an analytic over-complete dictionary. In this paper, we mainly study the synthetic sparse representation model. Generally speaking, synthetic sparse representation models in image processing can be further divided into two categories: patch-based sparse representation (PSR) [16,17] and group-based sparse representation (GSR) [3,9,10,11]. The PSR model assumes that each patch of an image can be modeled perfectly by a sparse linear combination of atoms from a learnable dictionary, which is usually learned from images or image datasets. Compared with traditional analytic dictionaries, such as the discrete cosine transform and wavelets, dictionaries learned directly from images can improve sparsity and are superior in adapting to the local structure of images. For example, K-SVD-based dictionary learning [17] not only shows good image denoising performance but has also been extended to many image processing and computer vision tasks [18,19]. However, the PSR model uses an over-complete dictionary, which usually produces undesirable visual artifacts in image restoration [20]. Moreover, the PSR model ignores the correlation between similar patches [3,21], which usually degrades the restored image.
Inspired by the success of the nonlocal self-similarity (NSS) prior [22], the GSR model was proposed. The GSR model uses the patch group instead of the image patch as the basic unit of sparse representation and shows great potential in various image processing tasks [3,8,9,11,23,24,25,26,27]. Dabov et al. [27] proposed the BM3D method, combining transform-domain filtering with the NSS prior, which remains one of the most effective denoising methods. Elad et al. [23] proposed an image denoising algorithm based on an improved K-SVD learned dictionary and non-local self-similarity, which combined a correlation-coefficient matching criterion with a dictionary clipping method. Mairal et al. [28] proposed learned simultaneous sparse coding (LSSC) for image restoration, improving the recovery performance of K-SVD [17] through GSR. Zhang et al. [24] used non-locally similar patches as data samples and estimated statistical parameters by PCA training. Zhang et al. [3] proposed a group-based sparse representation model for image restoration, which is essentially equivalent to a low-rank minimization model. Dong et al. [25] developed structured sparse coding with a Gaussian scale mixture prior for image restoration. Zha et al. [8] proposed a joint model that integrates the PSR model and the GSR model, establishing a unified sparse representation model for image restoration. Wu et al. [11] proposed structured analysis sparsity learning (SASL), which combines structured sparse priors learned from the given degraded image and from reference images in an iterative and trainable manner. Zha et al. [9] introduced the group sparse residual constraint, further defining and simplifying the image restoration problem by reducing the group sparse residual. Zha et al. [26] proposed an image recovery method that develops the GSR model using the NSS priors of both internal and external image data. Despite the great success of GSR models in various image restoration tasks, images restored by the traditional GSR model are prone to an over-smoothing effect [29]. At the same time, the traditional GSR model and its various improvements minimize the approximation error using only the patch groups of the degraded image, which causes over-fitting, especially when the degraded image is severely damaged.
Therefore, we propose a hybrid sparse representation model. The model uses the NSS priors of both the degraded image and an external image dataset to perform image restoration more effectively. On this basis, a joint sparse representation model is introduced. This model integrates the PSR model and the GSR model into one model, which not only retains the advantages of both but also reduces their shortcomings, so that the models in the sparse representation field are unified. For convenience, the proposed hybrid sparse representation model is called the HSR model. The NSS priors of degraded images are called internal NSS priors, and the NSS priors of external image datasets are called external NSS priors. Figure 1 shows how the HSR model restores degraded images. The contributions of this paper are summarized as follows:
(1) We propose a hybrid sparse representation model that combines the NSS priors of the degraded image and an external image dataset to make full use of the specific structure of the degraded image and the common characteristics of natural images;
(2) Introducing the joint model into the HSR not only retains the advantages of the PSR model and the GSR model but also alleviates their respective disadvantages.
The rest of this paper is organized as follows. Section 2 reviews the fundamentals of sparse representation. Section 3 introduces how to learn the NSS prior from an external image corpus. Section 4 presents the proposed hybrid sparse representation model. Section 5 derives an iterative algorithm based on the alternating direction method of multipliers to solve the proposed model. Section 6 presents the experimental results. Section 7 concludes the paper.

2. Fundamentals of Image Analysis Methods

This section introduces the background of the HSR model. The proposed HSR model uses the NSS prior knowledge of both degraded images and external datasets and introduces a joint model that integrates the PSR model and the GSR model. The proposed HSR model is therefore based on GSR and PSR; a brief introduction to these two models is given below.

2.1. Patch-Based Sparse Representation

The basic unit of the patch-based sparse representation (PSR) model is the image patch. Consider an image $x \in \mathbb{R}^N$ and a dictionary $D \in \mathbb{R}^{b \times M}$, $b \ll M$, where $M$ is the number of atoms in $D$; the dictionary $D$ in the PSR model is shared by all patches. Let $x_i = R_i x$, $i = 1, 2, \dots, n$, denote the $\sqrt{b} \times \sqrt{b}$ patch (vectorized to length $b$) extracted at position $i$, where $R_i$ is the extraction operator. Sparsely representing a patch $x_i$ means finding a sparse vector $A_i$ with most coefficients zero such that $x_i \approx D A_i$. The $\ell_0$-norm counts the number of non-zero elements of a vector, so regularizing $A_i$ with the $\ell_0$-norm drives most of its elements to 0, i.e., makes $A_i$ sparse. Therefore, by solving the following $\ell_0$-norm minimization problem, each patch $x_i$ can be sparsely represented as
$$\hat{A}_i = \arg\min_{A_i} \frac{1}{2}\|x_i - D A_i\|_2^2 + \lambda\|A_i\|_0, \quad \forall i \qquad (3)$$
where $\|\cdot\|_2$ denotes the $\ell_2$-norm and $\lambda$ is a regularization parameter. In the image restoration task, the input degraded image $y \in \mathbb{R}^N$ is used because the original image is not available. Extracting image patches $y_i$ from the degraded image $y$, each patch $y_i$ can be sparsely represented as
$$\hat{A}_i = \arg\min_{A_i} \frac{1}{2}\|y_i - D A_i\|_2^2 + \lambda\|A_i\|_0, \quad \forall i \qquad (4)$$
In this way, the whole image can be sparsely represented by the set of sparse codes $\{\hat{A}_i\}_{i=1}^{n}$.
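As an illustration of the patch operators $R_i$, the sketch below (Python/NumPy; the names are ours, not the paper's) extracts overlapping vectorized patches and puts them back by averaging, the two operations a PSR-based restorer repeats around the sparse coding step:

```python
import numpy as np

def extract_patches(x, b=8, stride=4):
    """R_i x: collect overlapping b x b patches as columns (length b*b)."""
    h, w = x.shape
    return np.stack([x[i:i + b, j:j + b].ravel()
                     for i in range(0, h - b + 1, stride)
                     for j in range(0, w - b + 1, stride)], axis=1)

def put_back(P, shape, b=8, stride=4):
    """Adjoint of extraction with per-pixel averaging of overlaps."""
    acc, cnt = np.zeros(shape), np.zeros(shape)
    k = 0
    for i in range(0, shape[0] - b + 1, stride):
        for j in range(0, shape[1] - b + 1, stride):
            acc[i:i + b, j:j + b] += P[:, k].reshape(b, b)
            cnt[i:i + b, j:j + b] += 1
            k += 1
    return acc / np.maximum(cnt, 1)
```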

2.2. Group-Based Sparse Representation

Compared with typical PSR models, the GSR model uses the patch group as the basic unit of image processing and produces more promising results in various image processing tasks [3,21,25]. In this subsection, we briefly introduce the GSR model.
Firstly, the image $x$ is divided into $n$ overlapping patches of size $\sqrt{b} \times \sqrt{b}$, $i = 1, 2, \dots, n$. For each exemplar patch $x_i$, the $m$ most similar patches are selected from a search window of size $W \times W$ by the k-nearest-neighbor (KNN) method to form the set $S_{G_i}$. Then, all patches in $S_{G_i}$ are stacked into a matrix $X_i \in \mathbb{R}^{b \times m}$, with each patch in the collection $S_{G_i}$ as a column of the matrix, i.e., $X_i = \{x_{i,1}, x_{i,2}, \dots, x_{i,m}\}$. Since $X_i$ collects all patches with similar structures, it is called a patch group, where $x_{i,j}$ denotes the j-th similar patch of the i-th group. Finally, given a dictionary $D_i \in \mathbb{R}^{b \times K}$, which is usually learned from each patch group, each group $X_i$ can be sparsely represented as
$$\hat{B}_i = \arg\min_{B_i} \frac{1}{2}\|X_i - D_i B_i\|_2^2 + \rho\|B_i\|_0, \quad \forall i \qquad (5)$$
where $B_i$ denotes the group sparse coefficient of each patch group, and $\|\cdot\|_0$ denotes the $\ell_0$-norm, which counts the non-zero entries of each column of $B_i$.
In image restoration tasks, since the original image is not available, we can only use the input degraded image $y \in \mathbb{R}^N$. Following the steps above, exemplar patches $y_i$ are extracted from the degraded image $y$, and similar patches are searched to generate a patch group $Y_i \in \mathbb{R}^{b \times m}$, i.e., $Y_i = \{y_{i,1}, y_{i,2}, \dots, y_{i,m}\}$, which is sparsely coded as
$$\hat{B}_i = \arg\min_{B_i} \frac{1}{2}\|Y_i - D_i B_i\|_2^2 + \rho\|B_i\|_0, \quad \forall i \qquad (6)$$
The entire image can then be sparsely represented by the set of group sparse codes $\{\hat{B}_i\}_{i=1}^{n}$. Note that, in the above introduction, the $y_i$ of the PSR model and the $Y_i$ of the GSR model are extracted from the same degraded image $y$.
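The group construction can be sketched as follows (Python/NumPy; a brute-force KNN search within the $W \times W$ window, with the window size, group size $m$, and patch size taken from the experimental settings later in the paper):

```python
import numpy as np

def build_patch_group(y, i0, j0, b=8, W=25, m=60):
    """Stack the m patches most similar to the exemplar at (i0, j0),
    searched within a W x W window, into a matrix Y_i of shape (b*b, m)."""
    h, w = y.shape
    ref = y[i0:i0 + b, j0:j0 + b].ravel()
    cands, dists = [], []
    for i in range(max(0, i0 - W // 2), min(h - b, i0 + W // 2) + 1):
        for j in range(max(0, j0 - W // 2), min(w - b, j0 + W // 2) + 1):
            p = y[i:i + b, j:j + b].ravel()
            cands.append(p)
            dists.append(np.sum((p - ref) ** 2))
    order = np.argsort(dists)[:m]          # k-nearest neighbors by l2 distance
    return np.stack([cands[k] for k in order], axis=1)

rng = np.random.default_rng(0)
img = rng.random((64, 64))
Y_i = build_patch_group(img, 20, 20)       # patch group, shape (64, 60)
```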

3. Learning NSS Priors from External Image Datasets

As mentioned earlier, the traditional sparse representation model only uses the NSS prior of the degraded image and ignores the NSS prior of external datasets. In this section, we use the group-based Gaussian mixture model (GMM) [26,30] to learn the external NSS prior from the patch groups of a given training image dataset. The following briefly introduces how this is done.

3.1. Gaussian Component Learning Based on Group GMM

Similar to the construction process of patch groups in Section 2.2, patch groups are extracted from the given external training image dataset, and each patch group is expressed as
$$E_i = \{e_{i,j}\}_{j=1}^{d}, \quad i = 1, 2, \dots, S \qquad (7)$$
where $e_{i,j}$ denotes the j-th non-local similar patch of the i-th patch group $E_i$, $j = 1, 2, \dots, d$. In this paper, the GMM is used to learn $K$ Gaussian components $\{\mathcal{N}(\mu_k, \Sigma_k)\}_{k=1}^{K}$ from the patch groups $\{E_i\}_{i=1}^{S}$ of the external image dataset, and all patches in each patch group are required to belong to the same Gaussian component. The likelihood of a patch group $E_i$ can be expressed as
$$P(E_i) = \sum_{k=1}^{K} \pi_k \prod_{j=1}^{d} \mathcal{N}(e_{i,j} \mid \mu_k, \Sigma_k) \qquad (8)$$
where $K$ is the total number of Gaussian components, $\mu_k$ is the mean, $\Sigma_k$ is the covariance matrix, and $\pi_k$ is the weight of the k-th component, with $\sum_{k=1}^{K} \pi_k = 1$. The GMM is parameterized by the mean vectors $\mu_k$, the covariance matrices $\Sigma_k$, and the component weights $\pi_k$; to simplify notation, we collect them as $\Upsilon = \{\mu_k, \Sigma_k, \pi_k\}_{k=1}^{K}$. Assuming all patch groups are independent, the overall likelihood is $L = \prod_{i=1}^{S} P(E_i)$. Taking its logarithm, the objective of group-based GMM learning is to maximize
$$\ln L = \sum_{i=1}^{S} \ln \sum_{k=1}^{K} \pi_k \prod_{j=1}^{d} \mathcal{N}(e_{i,j} \mid \mu_k, \Sigma_k) \qquad (9)$$
We can optimize Equation (9) with the expectation-maximization (EM) algorithm [30,31,32]. In the E-step, the posterior probability of the k-th component, computed by Bayes' formula, is
$$P(k \mid E_i, \Upsilon) = \frac{\pi_k \prod_{j=1}^{d} \mathcal{N}(e_{i,j} \mid \mu_k, \Sigma_k)}{\sum_{l=1}^{K} \pi_l \prod_{j=1}^{d} \mathcal{N}(e_{i,j} \mid \mu_l, \Sigma_l)} \qquad (10)$$
$$S_k = \sum_{i=1}^{S} P(k \mid E_i, \Upsilon) \qquad (11)$$
In the M-step, for each patch group $E_i$, we update the model parameters as follows
$$\pi_k = S_k / S \qquad (12)$$
$$\mu_k = \frac{\sum_{i=1}^{S} P(k \mid E_i, \Upsilon) \sum_{j=1}^{d} e_{i,j}}{d\, S_k} \qquad (13)$$
$$\Sigma_k = \frac{\sum_{i=1}^{S} P(k \mid E_i, \Upsilon) \sum_{j=1}^{d} (e_{i,j} - \mu_k)(e_{i,j} - \mu_k)^{T}}{d\, S_k} \qquad (14)$$
By alternating between the E-step and the M-step, the model parameters are updated until convergence is achieved.
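A compact sketch of one such EM iteration is given below (Python/NumPy with SciPy). It is a simplification under stated assumptions: responsibilities are computed per group in the log domain, a small ridge stabilizes the covariances, and the container names are ours, not the authors' implementation.

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn
from scipy.special import logsumexp

def em_step(groups, pi, mu, cov):
    """One EM iteration of the group-based GMM of Eqs. (8)-(14).
    groups: list of (d, b) arrays, one patch group each, all patches of a
    group sharing one component; pi: (K,) array; mu, cov: lists of (b,)
    and (b, b) arrays. A minimal sketch, not the paper's implementation."""
    S, K, d = len(groups), len(pi), groups[0].shape[0]
    log_r = np.empty((S, K))
    for i, E in enumerate(groups):                 # E-step: Eq. (10) in log domain
        for k in range(K):
            log_r[i, k] = np.log(pi[k]) + mvn.logpdf(E, mu[k], cov[k]).sum()
    r = np.exp(log_r - logsumexp(log_r, axis=1, keepdims=True))
    for k in range(K):                             # M-step: Eqs. (11)-(14)
        S_k = r[:, k].sum() + 1e-12
        pi[k] = S_k / S
        mu[k] = sum(w * E.sum(axis=0) for w, E in zip(r[:, k], groups)) / (d * S_k)
        cov[k] = sum(w * (E - mu[k]).T @ (E - mu[k])
                     for w, E in zip(r[:, k], groups)) / (d * S_k)
        cov[k] += 1e-6 * np.eye(cov[k].shape[0])   # small ridge for stability
    return pi, mu, cov
```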

3.2. Gaussian Component Selection

For each patch group $Y_i$ of the degraded image $y$, we can select the most appropriate Gaussian component from the trained GMM. According to [31], assuming the image is corrupted by Gaussian white noise with variance $\sigma_e^2$, the covariance matrix of the k-th Gaussian component becomes $\Sigma_k + \sigma_e^2 I$, where $I$ is the identity matrix. The Gaussian component to which the image group $Y_i$ belongs can then be selected by the posterior probability
$$P(k \mid Y_i) = \frac{\prod_{j=1}^{d} \mathcal{N}(y_{i,j} \mid 0, \Sigma_k + \sigma_e^2 I)}{\sum_{l=1}^{K} \prod_{j=1}^{d} \mathcal{N}(y_{i,j} \mid 0, \Sigma_l + \sigma_e^2 I)} \qquad (15)$$
By maximizing Equation (15), the Gaussian component with the highest probability is selected for each group $Y_i$; all patches within a group are assumed to follow the same Gaussian distribution. Denoting the covariance matrix of the selected k-th Gaussian component by $\Sigma_k$ and applying eigenvalue decomposition to $\Sigma_k$, we have:
$$\Sigma_k = U_k \Lambda_k U_k^{T} \qquad (16)$$
where $U_k$ is the orthogonal matrix whose columns are the eigenvectors of $\Sigma_k$ and $\Lambda_k$ is the diagonal matrix of eigenvalues. Based on the above GMM learning, the eigenvectors in $U_k$ capture the statistical structure of the NSS variations of natural images, so $U_k$ can be used to represent the structural variations of the image groups in this component [33]. Finally, we select the best-matched $U_k$ for each patch group $Y_i$. Since $\ell_0$-norm minimization is NP-hard, the $\ell_0$-norm in Equation (6) is replaced by the convex $\ell_1$-norm. The sparse model based on the external NSS prior can then be expressed as follows
$$\hat{C}_i = \arg\min_{C_i} \left( \frac{1}{2}\|Y_i - U_k C_i\|_2^2 + \omega\|C_i\|_1 \right), \quad \forall i \qquad (17)$$
where $C_i$ denotes the sparse coefficient of the i-th image group $Y_i$ and $\omega$ is a non-negative constant. After all the sparse codes $\{\hat{C}_i\}_{i=1}^{n}$ are obtained, a high-quality reconstructed image $\hat{x}$ can be produced.
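The selection step can be sketched as follows (Python/NumPy with SciPy; zero-mean components as in Eq. (15) and a known noise level are assumed, and the function name is ours):

```python
import numpy as np
from scipy.stats import multivariate_normal as mvn

def select_dictionary(Y_i, covs, sigma_e):
    """Pick the component maximizing Eq. (15) for the group Y_i (patches
    as columns) and return its eigenvector dictionary U_k from Eq. (16)."""
    b = Y_i.shape[0]
    scores = [mvn.logpdf(Y_i.T, np.zeros(b), C + sigma_e**2 * np.eye(b)).sum()
              for C in covs]                 # log numerators of Eq. (15)
    k = int(np.argmax(scores))               # shared denominator drops out
    _, U_k = np.linalg.eigh(covs[k])         # Sigma_k = U_k Lambda_k U_k^T
    return U_k[:, ::-1], k                   # largest-eigenvalue atoms first
```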

4. The Proposed Hybrid Sparse Representation Model

As mentioned above, the traditional sparse representation model only uses the internal NSS prior of the degraded image, which leads to over-fitting in the image restoration process. Therefore, this paper uses both the internal NSS prior of the degraded image and the external NSS prior of an external image dataset. At the same time, the PSR model usually produces undesirable visual artifacts, while the GSR model leads to over-smoothing effects in various image processing tasks. To overcome these shortcomings and improve the restoration quality, we introduce a joint model [8] built on both internal and external NSS priors. This model integrates the PSR model and the GSR model instead of using Equations (4) and (6) separately. Combining Equations (4), (6) and (17), the proposed hybrid sparse representation model is expressed as
$$(\hat{A}_i, \hat{B}_i, \hat{C}_i) = \arg\min_{A_i, B_i, C_i} \frac{1}{2\mu}\|Y_i - L_i N_i\|_2^2 + \frac{1}{2\eta}\|Y_i - U_k C_i\|_2^2 + \tau\|A_i\|_0 + \varphi\|B_i\|_0 + \omega\|C_i\|_1,$$
$$L_i = [D \ \ D_i], \quad N_i = \begin{bmatrix} A_i \\ B_i \end{bmatrix} \qquad (18)$$
where $N_i$ denotes the internal sparse coefficient of the joint sparse representation model and $L_i$ denotes its internal joint dictionary. $U_k$ denotes the external dictionary, which is learned from the image groups of the external image dataset using the external NSS prior [26,30]. $\mu$ and $\eta$ are non-zero constants that act as balance factors to make the solution of Equation (18) more tractable. $\tau = \lambda/2$, $\varphi = \rho/2$, and $\omega$ is the regularization parameter balancing the sparse coefficient terms $\|A_i\|_0$, $\|B_i\|_0$, and $\|C_i\|_1$. The sparse coefficient $A_i$ corresponds to patch-level sparsity and maintains the local consistency of the image, which reduces the over-smoothing effect. The sparse coefficient $B_i$ corresponds to group-level sparsity and maintains the non-local consistency of the image, which suppresses undesirable visual artifacts. For specific details of the joint sparse representation model, please refer to [8]. Based on the above analysis, the proposed hybrid sparse representation model not only uses the internal and external NSS priors but also unifies the sparse representation model.
Applying the hybrid sparse representation model to the image restoration task, the combination of Equations (1) and (18) is expressed as
$$(\hat{A}_i, \hat{B}_i, \hat{C}_i) = \arg\min_{A_i, B_i, C_i} \frac{1}{2\mu}\|y - HLN\|_2^2 + \frac{1}{2\eta}\|y - HUC\|_2^2 + \tau\sum_{i=1}^{n}\|A_i\|_0 + \varphi\sum_{i=1}^{n}\|B_i\|_0 + \omega\sum_{i=1}^{n}\|C_i\|_1,$$
$$L = [D \ \ D_G], \quad N = \begin{bmatrix} A \\ B \end{bmatrix} \qquad (19)$$
In Equation (19), $L$ represents the internal dictionary of the joint sparse representation model and $U$ represents the external dictionary. $N$ represents the sparse coefficient of the joint sparse representation model, and $C$ represents the external sparse coefficient. The hybrid sparse representation model proposed in Equation (19) not only comprehensively considers the NSS priors of the internal image and the external image database, which provide mutually complementary information for image reconstruction, but also unifies the sparse representation model by combining the PSR model and the GSR model.

5. The Solution Process of the Proposed Hybrid Sparse Representation Model

In this section, in order to make the proposed model tractable and robust, the alternating direction method of multipliers (ADMM) [34,35] is adopted to solve the large-scale optimization problem in Equation (19). Specifically, the minimization of Equation (19) involves three sub-problems, over $A_i$, $B_i$, and $C_i$. Unlike traditional optimization strategies that fix the parameters $\mu$, $\eta$, $\tau$, $\varphi$, and $\omega$, we adaptively adjust all parameters in Equation (19) at each iteration to ensure the stability and practicability of the algorithm. The implementation details of the hybrid sparse representation model are given below.

5.1. Solution of Hybrid Sparse Representation Model Based on ADMM

Equation (19) is a large-scale non-convex optimization problem. To make it tractable, ADMM is used. The basic principle of ADMM is to split a large unconstrained minimization problem into smaller constrained sub-problems that are solved alternately. The following briefly introduces the ADMM algorithm through a constrained optimization problem,
$$\min_{Z \in \mathbb{R}^{M},\, N \in \mathbb{R}^{N}} f(Z) + g(N), \quad \text{s.t. } Z = LN \qquad (20)$$
where $L \in \mathbb{R}^{M \times N}$, $f: \mathbb{R}^{M} \to \mathbb{R}$, and $g: \mathbb{R}^{N} \to \mathbb{R}$. The basic ADMM is shown in Algorithm 1, where $t$ denotes the number of iterations.
Algorithm 1. ADMM
Require: $Z$ and $N$
1: Initialize $t = 0$, $v > 0$, $N^0 = 0$, $J^0 = 0$
2: for $t = 0$ to Max-Iter do
3:   $Z^{t+1} = \arg\min_{Z} f(Z) + \frac{v}{2}\|Z - LN^{t} - J^{t}\|_2^2$
4:   $N^{t+1} = \arg\min_{N} g(N) + \frac{v}{2}\|Z^{t+1} - LN - J^{t}\|_2^2$
5:   $J^{t+1} = J^{t} - (Z^{t+1} - LN^{t+1})$
6: end for
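In Python, Algorithm 1 can be sketched generically as below; the caller supplies the two sub-problem solvers (solve_Z and solve_N are our hypothetical names), so the snippet mirrors lines 3-5 rather than any particular choice of f and g:

```python
import numpy as np

def admm(solve_Z, solve_N, L, n_dim, v=1.0, iters=100):
    """Algorithm 1: min f(Z) + g(N) s.t. Z = L N (scaled form).
    solve_Z(u, v) returns argmin_Z f(Z) + (v/2)||Z - u||_2^2;
    solve_N(Z, J, v) returns argmin_N g(N) + (v/2)||Z - L N - J||_2^2."""
    N = np.zeros(n_dim)
    J = np.zeros(L.shape[0])
    for _ in range(iters):
        Z = solve_Z(L @ N + J, v)    # line 3: Z-update
        N = solve_N(Z, J, v)         # line 4: N-update
        J = J - (Z - L @ N)          # line 5: multiplier update
    return Z, N
```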
Returning to the hybrid sparse representation model, we transform Equation (19) into two constrained problems and invoke ADMM to solve them. We first transform Equation (19) into an equivalent constrained form by introducing the auxiliary variables $Z$ and $Q$,
$$(\hat{A}_i, \hat{B}_i, \hat{C}_i) = \arg\min_{A_i, B_i, C_i} \frac{1}{2\mu}\|y - HZ\|_2^2 + \frac{1}{2\eta}\|y - HQ\|_2^2 + \tau\sum_{i=1}^{n}\|A_i\|_0 + \varphi\sum_{i=1}^{n}\|B_i\|_0 + \omega\sum_{i=1}^{n}\|C_i\|_1, \quad \text{s.t. } Z = LN, \ Q = UC \qquad (21)$$
To facilitate the solution, Equation (21) can be decomposed into two constrained optimization problems,
$$(\hat{A}_i, \hat{B}_i) = \arg\min_{A_i, B_i} \frac{1}{2\mu}\|y - HZ\|_2^2 + \tau\sum_{i=1}^{n}\|A_i\|_0 + \varphi\sum_{i=1}^{n}\|B_i\|_0, \quad \text{s.t. } Z = LN \qquad (22)$$
$$\hat{C}_i = \arg\min_{C_i} \frac{1}{2\eta}\|y - HQ\|_2^2 + \omega\sum_{i=1}^{n}\|C_i\|_1, \quad \text{s.t. } Q = UC \qquad (23)$$
Equation (22) represents the constrained optimization problem of solving the internal joint sparse representation model, and Equation (23) represents the constrained optimization problem of solving the external sparse representation model.

5.2. Solution of Internal Sparse Representation Model

To solve the internal sparse representation model in Equation (22), we define $f(Z) = \frac{1}{2\mu}\|y - HZ\|_2^2$ and $g(N) = \tau\sum_{i=1}^{n}\|A_i\|_0 + \varphi\sum_{i=1}^{n}\|B_i\|_0$, and use line 3 of Algorithm 1,
$$\hat{Z}^{t+1} = \arg\min_{Z} f(Z) + \frac{v}{2}\|Z - LN^{t} - J^{t}\|_2^2 = \arg\min_{Z} \frac{1}{2\mu}\|y - HZ\|_2^2 + \frac{v}{2}\left\|Z - [D \ \ D_G]\begin{bmatrix} A^{t} \\ B_G^{t} \end{bmatrix} - J^{t}\right\|_2^2 = \arg\min_{Z} \frac{1}{2\mu}\|y - HZ\|_2^2 + \frac{v}{2}\|Z - DA^{t} - D_G B_G^{t} - J^{t}\|_2^2 \qquad (24)$$
where $D$ denotes the fixed dictionary of the PSR model and $D_G$ denotes the concatenation of all sub-dictionaries $D_i$ of the GSR model. Using line 4 of Algorithm 1, we obtain
$$\hat{N}^{t+1} = \arg\min_{N} g(N) + \frac{v}{2}\|Z^{t+1} - LN - J^{t}\|_2^2 = \arg\min_{N} \tau\sum_{i=1}^{n}\|A_i\|_0 + \varphi\sum_{i=1}^{n}\|B_i\|_1 + \frac{v}{2}\|Z^{t+1} - DA - D_G B_G - J^{t}\|_2^2 \qquad (25)$$
The minimization over $N$ in Equation (25) is decomposed into sub-problems for $A_i$ and $B_i$, which are solved separately, as
$$\hat{A}_i^{t+1} = \arg\min_{A_i} \tau\sum_{i=1}^{n}\|A_i\|_0 + \frac{v}{2}\|Z^{t+1} - DA - D_G B_G - J^{t}\|_2^2 \qquad (26)$$
$$\hat{B}_i^{t+1} = \arg\min_{B_i} \varphi\sum_{i=1}^{n}\|B_i\|_1 + \frac{v}{2}\|Z^{t+1} - DA - D_G B_G - J^{t}\|_2^2 \qquad (27)$$
Finally, line 5 of Algorithm 1 is used to update $J^{t}$,
$$J^{t+1} = J^{t} - (Z^{t+1} - DA^{t+1} - D_G B_G^{t+1}) \qquad (28)$$
In summary, the minimization of Equation (22) involves three minimization problems, over $Z$, $A_i$, and $B_i$. Equation (26) represents the PSR model, and Equation (27) represents the GSR model. The implementation details of an effective solution to each sub-problem are given below.

5.2.1. Solution of Sub-Problem Z

Given $A$ and $B_G$, the sub-problem $Z$ in Equation (24) is transformed into,
$$\min_{Z_i} L_1(Z_i) = \min_{Z_i} \frac{1}{2\mu}\|Y_i - H_i Z_i\|_2^2 + \frac{v}{2}\|Z_i - DA_i - D_i B_i - J_i\|_2^2, \quad \forall i \qquad (29)$$
Equation (29) is a quadratic form that has a closed-form solution so that
$$\hat{Z}_i = (H_i^{T} H_i + v\mu I)^{-1}\left(H_i^{T} Y_i + v\mu(DA_i + D_i B_i + J_i)\right), \quad \forall i \qquad (30)$$
In Equation (30), $I$ denotes the identity matrix of the appropriate dimension and $J_i$ is the corresponding element of $J$. Equations (26) and (27) are used in combination with Equation (30) to estimate each $Z_i$.
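Because $H$ is a diagonal 0/1 mask in the inpainting setting, the matrix inverse in Equation (30) reduces to an elementwise division, as the minimal sketch below shows (Python/NumPy; here u stands for $DA_i + D_iB_i + J_i$, an assumption of ours for compactness):

```python
import numpy as np

def update_Z(y, h, u, mu, v):
    """Eq. (30) for inpainting: H = diag(h) with h in {0, 1}, so
    (H^T H + v*mu*I)^{-1} acts pixelwise. A minimal sketch."""
    return (h * y + v * mu * u) / (h + v * mu)
```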

5.2.2. Solution of Sub-Problem A i

For each image patch in Equation (26), the sub-problem can be re-expressed as
$$\min_{A_i} L_2(A_i) = \min_{A_i} \frac{1}{2}\|DA_i - r_i\|_2^2 + \frac{\mu\tau}{v}\|A_i\|_0, \quad \forall i \qquad (31)$$
where $r_i = Z_i - D_i B_i - J_i$. Equation (31) is a sparse representation problem, which is solved directly in its constrained form,
$$\min_{A_i} \|A_i\|_0, \quad \text{s.t. } \|DA_i - r_i\|_2^2 \le \theta, \quad \forall i \qquad (32)$$
where θ represents a small constant. Equation (32) can be effectively solved by the orthogonal matching pursuit (OMP) algorithm [36].
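A reference sketch of the greedy OMP solver for Equation (32) is given below (plain Python/NumPy rather than the authors' implementation; unit-norm dictionary columns are assumed):

```python
import numpy as np

def omp(D, r, theta):
    """Greedy OMP: min ||A_i||_0 s.t. ||D A_i - r_i||_2^2 <= theta."""
    residual, support = r.copy(), []
    coef = np.zeros(0)
    while residual @ residual > theta and len(support) < D.shape[0]:
        support.append(int(np.argmax(np.abs(D.T @ residual))))    # best-matching atom
        coef, *_ = np.linalg.lstsq(D[:, support], r, rcond=None)  # refit on support
        residual = r - D[:, support] @ coef
    a = np.zeros(D.shape[1])
    a[support] = coef
    return a
```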

5.2.3. Solution of Sub-Problem B i

Given $Z$ and $A$, the sub-problem in Equation (27) can be transformed into,
$$\min_{B_i} L_3(B_i) = \min_{B_i} \frac{1}{2}\|D_i B_i - R_i\|_2^2 + \frac{\mu\varphi}{v}\|B_i\|_1, \quad \forall i \qquad (33)$$
where $R_i = Z_i - DA_i - J_i$. Solving Equation (33), we find,
$$\hat{B}_i = \arg\min_{B_i} \frac{1}{2}\|R_i - D_i B_i\|_2^2 + \frac{\mu\varphi}{v}\|B_i\|_1, \quad \forall i \qquad (34)$$
An important issue in solving the sub-problem $B_i$ is the choice of dictionary $D_i$. To adapt to the local structure of the image, a dictionary based on principal component analysis (PCA) is learned for each group $R_i$. Due to the orthogonality of the dictionary $D_i$, Equation (34) can be rewritten as
$$\hat{B}_i = \arg\min_{B_i} \frac{1}{2}\|\gamma_i - B_i\|_2^2 + \frac{\mu\varphi}{v}\|B_i\|_1, \quad \forall i \qquad (35)$$
where $R_i = D_i \gamma_i$. The closed-form solution for each $B_i$ is obtained by soft thresholding [31],
$$\hat{B}_i = \mathrm{soft}\left(\gamma_i, \frac{\mu\varphi}{v}\right) \qquad (36)$$
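Equations (34)-(36) amount to a PCA eigendecomposition per group followed by elementwise soft thresholding; a sketch (Python/NumPy; tau stands for the threshold $\mu\varphi/v$, and the function names are ours):

```python
import numpy as np

def soft(x, tau):
    """Soft-thresholding operator of Eq. (36): sign(x) * max(|x| - tau, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def update_B(R_i, tau):
    """Solve Eq. (34) with a PCA dictionary learned from the group R_i."""
    _, D_i = np.linalg.eigh(R_i @ R_i.T)  # orthogonal PCA basis of the group
    gamma = D_i.T @ R_i                   # coefficients, so that R_i = D_i @ gamma
    return D_i, soft(gamma, tau)          # Eq. (36): B_i = soft(gamma, tau)
```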

5.3. Solution of External Sparse Representation Model

To solve the external sparse representation model in Equation (23), we define $f(Q) = \frac{1}{2\eta}\|y - HQ\|_2^2$ and $g(C) = \omega\sum_{i=1}^{n}\|C_i\|_1$, and use line 3 of Algorithm 1,
$$\hat{Q}^{t+1} = \arg\min_{Q} f(Q) + \frac{v}{2}\|Q - UC^{t} - O^{t}\|_2^2 = \arg\min_{Q} \frac{1}{2\eta}\|y - HQ\|_2^2 + \frac{v}{2}\|Q - UC^{t} - O^{t}\|_2^2 \qquad (37)$$
Using line 4 in Algorithm 1, we obtain
$$\hat{C}^{t+1} = \arg\min_{C} g(C) + \frac{v}{2}\|Q^{t+1} - UC - O^{t}\|_2^2 = \arg\min_{C_i} \omega\sum_{i=1}^{n}\|C_i\|_1 + \frac{v}{2}\|Q^{t+1} - UC - O^{t}\|_2^2 \qquad (38)$$
Finally, line 5 of Algorithm 1 is used to update $O^{t}$,
$$O^{t+1} = O^{t} - (Q^{t+1} - UC^{t+1}) \qquad (39)$$
In summary, the minimization of Equation (23) involves two minimization sub-problems, over $Q$ and $C_i$. The solution procedure for $Q$ and $C_i$ is similar to that in Section 5.2, and the implementation details of an efficient solution for each sub-problem are given below.

5.3.1. Solution of Sub-Problem Q

Given the internal sparse representation model, Equation (23) translates into
$$\hat{C}_i = \arg\min_{C_i} \frac{1}{2}\|Y_i - H_i Q_i\|_2^2 + \frac{\eta\varepsilon}{v}\|C_i\|_1, \quad \text{s.t. } Q_i = U_i C_i \qquad (40)$$
Equation (40) is a quadratic form in $Q_i$ that has a closed-form solution, so that
$$\hat{Q}_i = (H_i^{T} H_i + \eta\varepsilon I)^{-1}\left(H_i^{T} Y_i + \eta\varepsilon(UC_i + O_i)\right), \quad \forall i \qquad (41)$$
In Equation (41), $I$ denotes the identity matrix of the appropriate dimension and $O_i$ is the corresponding element of $O$.

5.3.2. Solution of Sub-Problem C i

Given $Q$, the sub-problem $C_i$ in Equation (40) can be transformed into
$$\hat{C}_i = \arg\min_{C_i} \frac{1}{2}\|Y_i - U_i C_i\|_2^2 + \frac{\eta\varepsilon}{v}\|C_i\|_1, \quad \forall i \qquad (42)$$
Evidently, Equation (42) can be viewed as a sparse representation problem for each image group $Y_i$. According to Section 3, we can select the best-matching Gaussian component for each group through Equation (15) and then assign the best-matching PCA-based dictionary to each group according to Equation (16). Due to the orthogonality of the dictionary $U_i$, Equation (42) can be rewritten as
$$\hat{C}_i = \arg\min_{C_i} \frac{1}{2}\|\kappa_i - C_i\|_2^2 + \frac{\eta\varepsilon}{v}\|C_i\|_1, \quad \forall i \qquad (43)$$
where $Y_i = U_i \kappa_i$. The closed-form solution for each $C_i$ is obtained by soft thresholding [37],
$$\hat{C}_i = \mathrm{soft}\left(\kappa_i, \frac{\eta\varepsilon}{v}\right) \qquad (44)$$
A complete description of the hybrid sparse representation model for image restoration is given in Algorithm 2.
Algorithm 2. A hybrid sparse representation model for image restoration
Require: degraded image $y$, mask $H$, and the group-based GMM
1: Initialize $\hat{x}^0 = y$, $A_i^0 = 0$, $B_i^0 = 0$, $C_i^0 = 0$
2: Set parameters $t$, $b$, $W$, $m$, $\mu$, $\eta$, $\tau$, $\varphi$, $\omega$, $v$, $\varsigma$, $\varepsilon$
3: for $t = 0$ to Max-Iter do
4:   Calculate $\sigma_e$ by Equation (45)
5:   Update $O^{t+1}$ by Equation (39)
6:   for each patch group $Y_i$ do
7:     Select the k-th optimal Gaussian component by Equation (15)
8:     Select the dictionary $U_k$ by Equation (16)
9:     Update $C_i^{t+1}$ by Equation (44)
10:  end for
11:  Update $Z^{t+1}$ by Equation (30)
12:  $R^{t+1} = Z^{t+1} - D_G B_G^{t} - J^{t}$
13:  Create the dictionary $D$ from $R^{t+1}$ using K-SVD
14:  for each patch $r_i$ do
15:    Update $A_i^{t+1}$ by Equation (32)
16:  end for
17:  $R_G^{t+1} = Z^{t+1} - DA^{t+1} - J^{t}$
18:  for each patch group $R_i$ do
19:    Create the dictionary $D_i$ from $R_i^{t+1}$ using PCA
20:    Update $B_i^{t+1}$ by Equation (36)
21:  end for
22:  Update $C^{t+1}$ by concatenating all $C_i$
23:  Update $A^{t+1}$ by concatenating all $A_i$
24:  Update $B^{t+1}$ by concatenating all $B_i$
25:  Update $D_G^{t+1}$ by concatenating all $D_i$
26: end for
27: Output: the final restored image $\hat{x} = \hat{x}^{t+1}$

5.4. Adaptive Parameter Adjustment Strategy

There are six parameters in Equation (21), namely $\mu$, $\eta$, $\tau$, $\varphi$, $\omega$, and $v$. A fixed value is usually chosen for each parameter based on experience; however, this makes it difficult to guarantee the stability and effectiveness of the whole algorithm. To address this problem, an adaptive parameter adjustment scheme is proposed to make the algorithm more stable and practical. An iterative regularization strategy [38] is used to update the estimate of the noise standard deviation $\sigma_e$. At the t-th iteration, it is expressed as
$$\sigma_e^{t} = \delta\sqrt{\sigma_e^2 - \|\hat{x}^{t} - y\|_2^2} \qquad (45)$$
where t denotes the number of iterations, δ denotes the scale factor controlling the variance estimation, and the scheme has been widely used for image denoising with Gaussian noise variance estimation [30,38].
Therefore, $\mu^{t}$ and $\eta^{t}$ can be expressed as
$$\mu^{t} = a(\sigma_e^2)^{t} \qquad (46)$$
$$\eta^{t} = b(\sigma_e^2)^{t} \qquad (47)$$
where $\gamma_i$ and $\kappa_i$ denote the estimated standard deviations of $\hat{B}_i$ and $\hat{C}_i$ [39], respectively, and $\varsigma$ denotes a small constant that avoids division by zero. To make the proposed algorithm more accurate and practical, following [37], the ADMM balance factor $v$ at the t-th iteration is set to
$$v^{t} = \frac{1}{c(\sigma_e^2)^{t}} \qquad (48)$$
where c denotes the scale factor.
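One iteration of this schedule might look as follows (Python/NumPy sketch; the per-pixel normalization of the residual and the small guard constant are our assumptions, and a, b, c, delta are illustrative scale factors):

```python
import numpy as np

def update_parameters(x_hat, y, sigma0, a=1.0, b=1.0, c=1.0, delta=1.0, eps=1e-12):
    """Adaptive parameters of Eqs. (45)-(48) at one iteration."""
    gap = max(sigma0**2 - np.mean((x_hat - y) ** 2), 0.0)
    sigma_e2 = (delta * np.sqrt(gap)) ** 2   # Eq. (45), squared noise estimate
    mu = a * sigma_e2                        # Eq. (46)
    eta = b * sigma_e2                       # Eq. (47)
    v = 1.0 / (c * sigma_e2 + eps)           # Eq. (48), guarded against zero
    return sigma_e2, mu, eta, v
```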

6. Experimental Results

In this section, the experimental results of the proposed HSR model and seven comparison methods are given: SALSA [40], BPFA [41], GSR [3], JPG-SR [8], GSRC-NLP [9], IR-CNN [42], and IDBP [43]. All experiments were carried out on a PC with an Intel(R) Core(TM) i7-6700 CPU @ 3.40 GHz under the MATLAB R2018b environment. The source code of all competing methods is publicly available, and we used the default parameter settings. The 13 test images used in the experiments are shown in Figure 2. To evaluate the quality of the restored images, an experimental comparative analysis was performed from both objective and subjective aspects. For objective evaluation, the peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) [44] metrics were used. The PSNR is calculated as shown in Equations (49) and (50),
$$MSE = \frac{1}{H \times W}\sum_{i=0}^{H-1}\sum_{j=0}^{W-1}\left(X(i,j) - Y(i,j)\right)^2 \qquad (49)$$
$$PSNR = 10 \cdot \log_{10}\left(\frac{(2^{n} - 1)^2}{MSE}\right) \qquad (50)$$
where $X$ and $Y$ denote the original image and the restored image, respectively, and $H \times W$ denotes the size of the image. Equation (49) computes the mean squared error (MSE) between the original image $X$ and the restored image $Y$; Equation (50) is the PSNR formula, where $n$ is the number of bits per pixel. A larger PSNR indicates less image distortion. The calculation of SSIM is shown in Equations (51)–(55),
$$l(X,Y) = \frac{2u_X u_Y + C_1}{u_X^2 + u_Y^2 + C_1}, \quad c(X,Y) = \frac{2\sigma_X \sigma_Y + C_2}{\sigma_X^2 + \sigma_Y^2 + C_2}, \quad s(X,Y) = \frac{\sigma_{XY} + C_3}{\sigma_X \sigma_Y + C_3} \qquad (51)$$
$$u_X = \frac{1}{H \times W}\sum_{i=1}^{H}\sum_{j=1}^{W} X(i,j) \qquad (52)$$
$$\sigma_X^2 = \frac{1}{H \times W - 1}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(X(i,j) - u_X\right)^2 \qquad (53)$$
$$\sigma_{XY} = \frac{1}{H \times W - 1}\sum_{i=1}^{H}\sum_{j=1}^{W}\left(X(i,j) - u_X\right)\left(Y(i,j) - u_Y\right) \qquad (54)$$
$$SSIM(X,Y) = l(X,Y)\, c(X,Y)\, s(X,Y) \qquad (55)$$
In Equation (51), SSIM measures similarity in terms of luminance $l$, contrast $c$, and image structure $s$, where $u_X$ and $u_Y$ denote the means of the original image $X$ and the restored image $Y$ of size $H \times W$, respectively; $\sigma_X$ and $\sigma_Y$ denote their standard deviations; and $\sigma_{XY}$ denotes their covariance. $C_1$, $C_2$, and $C_3$ are constants introduced to avoid a zero denominator. The SSIM indicator is closer to human subjective perception, and its value lies in the range $[0, 1]$: the larger the SSIM, the more similar the two images are, and the better the restoration.
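For completeness, the two metrics can be computed as in the sketch below (Python/NumPy; the SSIM shown is the single-window global statistic of Eqs. (51)-(55) with the commonly used constants $C_1 = (0.01L)^2$, $C_2 = (0.03L)^2$, $C_3 = C_2/2$, not the windowed mean SSIM of [44]):

```python
import numpy as np

def psnr(X, Y, n_bits=8):
    """Eqs. (49)-(50) for n-bit images."""
    mse = np.mean((X.astype(np.float64) - Y.astype(np.float64)) ** 2)
    return 10.0 * np.log10((2 ** n_bits - 1) ** 2 / mse)

def ssim_global(X, Y, L=255.0):
    """Global SSIM: with C3 = C2/2, Eq. (55) collapses to the usual form."""
    C1, C2 = (0.01 * L) ** 2, (0.03 * L) ** 2
    ux, uy = X.mean(), Y.mean()
    vx, vy = X.var(ddof=1), Y.var(ddof=1)
    cxy = ((X - ux) * (Y - uy)).sum() / (X.size - 1)   # sample covariance, Eq. (54)
    num = (2 * ux * uy + C1) * (2 * cxy + C2)
    den = (ux ** 2 + uy ** 2 + C1) * (vx + vy + C2)
    return num / den
```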
For color images, this paper only restores the luminance channel in the YCrCb space. In the group-based GMM learning phase, the training patch groups used in the experiments are collected from the Kodak PhotoCD dataset, which includes 24 natural images.

6.1. Objective Evaluation

In the image restoration task, results are given for four masks, i.e., 80%, 70%, 60%, and 50% random pixel loss. The parameters of the HSR model are set as follows: the search window $W \times W$ is set to $25 \times 25$, the image patch size is set to $8 \times 8$, the number of similar patches is set to 60, $\sigma_e = 2$, $\varsigma = e^{-14}$, and $v = 0.2$. We compared the proposed HSR model with seven restoration methods: SALSA [40], BPFA [41], GSR [3], JPG-SR [8], GSRC-NLP [9], IR-CNN [42], and IDBP [43]. Among them, SALSA [40], BPFA [41], GSR [3], JPG-SR [8], and GSRC-NLP [9] are traditional image restoration algorithms. The GSR [3], JPG-SR [8], and GSRC-NLP [9] methods are based on the traditional GSR model and thus belong to the same family as our proposed HSR model, whereas SALSA [40] and BPFA [41] are not GSR-based. To comprehensively evaluate the performance of the proposed model, the HSR model was also compared with deep-learning-based algorithms [42,43].
The SALSA model [40] proposes an algorithm from the augmented Lagrangian family to deal with constrained problems. Its optimal regularization parameters are tuned by manual trial and error, which requires considerable time and effort. The BPFA model [41] utilizes a non-parametric Bayesian dictionary learning method for sparse image representation; it uses image patches as the basic unit of sparse representation and thus ignores the similarity between patches. In terms of the average value, the proposed HSR model is 4.74 dB and 6.19 dB higher than the SALSA and BPFA methods, respectively.
The GSR method [3] is a typical representative of the traditional GSR model, and the JPG-SR [8] and GSRC-NLP [9] methods are both improvements on it. These three methods utilize only the internal NSS prior, whereas the HSR model proposed in this paper combines the internal and external NSS priors. In terms of the average value, the proposed HSR model improves by 1.47 dB, 1.43 dB, and 1.06 dB over the GSR, JPG-SR, and GSRC-NLP methods, respectively. The IR-CNN [42] and IDBP [43] methods are deep-learning-based restoration methods that use the powerful prior knowledge of deep neural networks. In terms of the average value, the proposed HSR model improves by 3.66 dB and 3.01 dB over the IR-CNN and IDBP methods, respectively.
As shown in Tables 1–4, the PSNR of the proposed HSR model on images with pixel loss rates of 80%, 70%, 60%, and 50% is higher than that of SALSA, BPFA, GSR, JPG-SR, GSRC-NLP, IR-CNN, and IDBP. The SSIM values in Tables 5–8 show that the HSR model is better than the other methods in most cases. The experimental results in Tables 1–8 demonstrate that the proposed HSR model is effective and yields good restoration results compared with the competing methods.

6.2. Subjective Assessment

Figure 3 gives the visual comparison between the proposed HSR model and the SALSA [40], BPFA [41], GSR [3], JPG-SR [8], GSRC-NLP [9], IR-CNN [42], and IDBP [43] methods on the image Mickey with a pixel missing rate of 80%. It can be observed from Figure 3 that the SALSA [40] and BPFA [41] methods cannot recover sharp edges and fine details. The GSR [3] method recovers details better but produces an over-smoothing effect. The JPG-SR [8] method can obtain better visual quality than the GSR [3] method. However, the objective results in Tables 1–8 show that although JPG-SR [8] has a higher mean PSNR than GSR [3], the PSNR and SSIM values of some individual restored images are lower than those of GSR [3]; the restoration quality of JPG-SR [8] is unstable, and only some of its results surpass the GSR [3] method. The GSRC-NLP [9] method obtains visual effects similar to our proposed HSR model, which are hard to distinguish with the naked eye; however, according to the results in Tables 1–8, our proposed HSR model achieves better objective scores. The visual results of our proposed method also recover details better than those of IR-CNN [42] and IDBP [43]. The visual results in Figure 3 show that our proposed HSR model retains clear edges and details, especially at higher pixel missing rates, and produces the result with the best visual quality.

6.3. Running Time

In this section, we compare the proposed HSR method with the other methods in terms of running time. Taking the image Butterfly as an example, the running times of all methods are compared at a pixel loss rate of 50%. As can be seen from Table 9, the processing time of the proposed HSR method is 5000.22 s, which is less than the 5027.67 s of the GSRC-NLP method. The proposed HSR method uses NSS to construct internal and external image groups and must learn the corresponding dictionaries, which requires a higher computational workload and therefore consumes more time. To reduce the processing time in future work, the external NSS priors will be learned from the Kodak PhotoCD dataset offline, in advance; through this one-time learning, the priors can then be reused to speed up the proposed HSR method.

7. Conclusions

In order to improve the restoration performance of the traditional GSR model, we propose a new hybrid sparse representation model. The model uses the NSS priors of both the degraded image and an external image dataset, so that the two sources are complementary in the feature space and the image plane. On this basis, we introduced a joint sparse representation model. The joint model integrates the PSR model and the GSR model, retaining their advantages, overcoming their shortcomings, and unifying the sparse representation model. Experimental results show that the proposed model outperforms several state-of-the-art image restoration methods in both objective and subjective evaluations.

Author Contributions

Conceptualization, C.Z. (Caiyue Zhou) and C.Z. (Chongbo Zhou); methodology, C.Z. (Chongbo Zhou); software, Y.K.; validation, C.Z. (Chuanyong Zhang) and C.Z. (Chongbo Zhou); formal analysis, L.S.; investigation, D.W.; resources, C.Z. (Chongbo Zhou); data curation, L.S.; writing—original draft preparation, C.Z. (Caiyue Zhou); writing—review and editing, C.Z. (Chongbo Zhou); visualization, L.S.; supervision, C.Z. (Chongbo Zhou); project administration, C.Z. (Chongbo Zhou); funding acquisition, L.S. All authors have read and agreed to the published version of the manuscript.

Funding

The research of this paper was supported by the Doctoral Scientific Research Fund (No. BSQD20130154) and the Shandong Provincial Natural Science Foundation, China (No. ZR2021MD115). The APC was funded by the Shandong Provincial Natural Science Foundation, China (No. ZR2021MD115).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The datasets used in this research are cited through their corresponding papers and can be found at the following websites. SALSA: http://sedumi.ie.lehigh.edu. BPFA: http://www.ee.duke.edu/~mz1/Results/BPFAImage/. GSR: https://github.com/jianzhangcs/GSR. JPG-SR: https://drive.google.com/open?id=1KMIERcJtZYKdGt2HvUySFtviAC5RprHu. GSRC-NLP: https://drive.google.com/open?id=1jWtRQ9mUxVzBR0pec. IR-CNN: https://github.com/cszn/IRCNN. IDBP: https://github.com/tomtirer/IDBP. The Kodak PhotoCD dataset: http://r0k.us/graphics/kodak/.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Buades, A.; Coll, B.; Morel, J.-M. A non-local algorithm for image denoising. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR'05), San Diego, CA, USA, 20–25 June 2005; Volume 2, pp. 60–65.
2. Dong, W.; Wang, P.; Yin, W.; Shi, G.; Wu, F.; Lu, X. Denoising Prior Driven Deep Neural Network for Image Restoration. IEEE Trans. Pattern Anal. Mach. Intell. 2019, 41, 2305–2318.
3. Zhang, J.; Zhao, D.; Gao, W. Group-based sparse representation for image restoration. IEEE Trans. Image Process. 2014, 23, 3336–3351.
4. Zhang, Q.; Yuan, Q.; Zeng, C.; Li, X.; Wei, Y. Missing data reconstruction in remote sensing image with a unified spatial-temporal-spectral deep convolutional neural network. IEEE Trans. Geosci. Remote Sens. 2018, 56, 4274–4288.
5. Pan, J.; Sun, D.; Pfister, H.; Yang, M.H. Blind image deblurring using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1628–1636.
6. Ren, C.; He, X.; Nguyen, T.Q. Adjusted non-local regression and directional smoothness for image restoration. IEEE Trans. Multimed. 2019, 21, 731–745.
7. Yuan, X. Generalized alternating projection based total variation minimization for compressive sensing. In Proceedings of the 2016 IEEE International Conference on Image Processing (ICIP), Phoenix, AZ, USA, 25–28 September 2016; pp. 2539–2543.
8. Zha, Z.; Yuan, X.; Wen, B.; Zhang, J.; Zhou, J.; Zhu, C. Image Restoration Using Joint Patch-Group-Based Sparse Representation. IEEE Trans. Image Process. 2020, 29, 7735–7750.
9. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhu, C. Group Sparsity Residual Constraint with Non-Local Priors for Image Restoration. IEEE Trans. Image Process. 2020, 29, 8960–8975.
10. Elad, M.; Aharon, M. Image Denoising Via Sparse and Redundant Representations over Learned Dictionaries. IEEE Trans. Image Process. 2006, 15, 3736–3745.
11. Wu, F.; Dong, W.; Huang, T.; Shi, G.; Cheng, S.; Li, X. Hybrid sparsity learning for image restoration: An iterative and trainable approach. Signal Process. 2021, 178, 107751.
12. Yuan, X.; Haimi-Cohen, R. Image Compression Based on Compressive Sensing: End-to-End Comparison with JPEG. IEEE Trans. Multimed. 2020, 22, 2889–2904.
13. Zha, Z.; Yuan, X.; Zhou, J.T.; Zhou, J.; Wen, B.; Zhu, C. The Power of Triply Complementary Priors for Image Compressive Sensing. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; pp. 983–987.
14. Li, X.; Shen, H.; Zhang, L.; Li, H. Sparse-based reconstruction of missing information in remote sensing images from spectral/temporal complementary information. ISPRS J. Photogramm. Remote Sens. 2015, 106, 1–15.
15. Wen, B.; Ravishankar, S.; Bresler, Y. Structured Overcomplete Sparsifying Transform Learning with Convergence Guarantees and Applications. Int. J. Comput. Vis. 2015, 114, 137–167.
16. Li, H.; Liu, F. Image denoising via sparse and redundant representations over learned dictionaries in wavelet domain. In Proceedings of the 2009 Fifth International Conference on Image and Graphics, Xi'an, China, 20–23 September 2009; pp. 754–758.
17. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322.
18. Zhang, Q.; Li, B. Discriminative K-SVD for dictionary learning in face recognition. In Proceedings of the 2010 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, San Francisco, CA, USA, 13–18 June 2010; pp. 2691–2698.
19. Lun, D.P.K. Robust fringe projection profilometry via sparse representation. IEEE Trans. Image Process. 2016, 25, 1726–1739.
20. Dong, W.; Zhang, L.; Shi, G.; Wu, X. Image deblurring and super-resolution by adaptive sparse domain selection and adaptive regularization. IEEE Trans. Image Process. 2011, 20, 1838–1857.
21. Zha, Z.; Yuan, X.; Wen, B.; Zhou, J.; Zhang, J.; Zhu, C. A Benchmark for Sparse Coding: When Group Sparsity Meets Rank Minimization. IEEE Trans. Image Process. 2020, 29, 5094–5109.
22. Dabov, K.; Foi, A.; Egiazarian, K. Video denoising by sparse 3D transform-domain collaborative filtering. Eur. Signal Process. Conf. 2007, 16, 145–149.
23. Zhang, J.; Ghanem, B. ISTA-Net: Interpretable Optimization-Inspired Deep Network for Image Compressive Sensing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–23 June 2018; pp. 1828–1837.
24. Zhang, L.; Dong, W.; Zhang, D.; Shi, G. Two-stage image denoising by principal component analysis with local pixel grouping. Pattern Recognit. 2010, 43, 1531–1549.
25. Dong, W.; Shi, G.; Ma, Y.; Li, X. Image Restoration via Simultaneous Sparse Coding: Where Structured Sparsity Meets Gaussian Scale Mixture. Int. J. Comput. Vis. 2015, 114, 217–232.
26. Zha, Z.; Yuan, X.; Zhou, J.; Zhu, C.; Wen, B. Image Restoration via Simultaneous Nonlocal Self-Similarity Priors. IEEE Trans. Image Process. 2020, 29, 8561–8576.
27. Dabov, K.; Foi, A.; Katkovnik, V.; Egiazarian, K. Image denoising by sparse 3-D transform-domain collaborative filtering. IEEE Trans. Image Process. 2007, 16, 2080–2095.
28. Mairal, J.; Bach, F.; Ponce, J.; Sapiro, G.; Zisserman, A. Non-local sparse models for image restoration. In Proceedings of the 2009 IEEE 12th International Conference on Computer Vision, Kyoto, Japan, 29 September–2 October 2009; pp. 2272–2279.
29. Li, M.; Liu, J.; Xiong, Z.; Sun, X.; Guo, Z. MARLow: A joint multiplanar autoregressive and low-rank approach for image completion. In Computer Vision—ECCV 2016; Lecture Notes in Computer Science; Springer: Cham, Switzerland, 2016; Volume 9911, pp. 819–834.
30. Xu, J.; Zhang, L.; Zuo, W.; Zhang, D.; Feng, X. Patch group based nonlocal self-similarity prior learning for image denoising. In Proceedings of the IEEE International Conference on Computer Vision (ICCV), Santiago, Chile, 7–13 December 2015; pp. 244–252.
31. Zoran, D.; Weiss, Y. From learning models of natural image patches to whole image restoration. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 479–486.
32. Yu, G.; Sapiro, G.; Mallat, S. Solving inverse problems with piecewise linear estimators: From Gaussian mixture models to structured sparsity. IEEE Trans. Image Process. 2012, 21, 2481–2499.
33. Yang, J.; Yuan, X.; Liao, X.; Llull, P.; Brady, D.J.; Sapiro, G.; Carin, L. Video compressive sensing using Gaussian mixture models. IEEE Trans. Image Process. 2014, 23, 4863–4878.
34. He, B.; Liao, L.Z.; Han, D.; Yang, H. A new inexact alternating directions method for monotone variational inequalities. Math. Program. Ser. B 2002, 92, 103–118.
35. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed Optimization and Statistical Learning via the Alternating Direction Method of Multipliers. Found. Trends Mach. Learn. 2010, 3, 1–122.
36. Tropp, J.A.; Gilbert, A.C. Signal recovery from random measurements via orthogonal matching pursuit. IEEE Trans. Inf. Theory 2007, 53, 4655–4666.
37. Cai, J.F.; Osher, S.; Shen, Z. Split Bregman methods and frame based image restoration. Multiscale Model. Simul. 2009, 8, 337–369.
38. Osher, S.; Burger, M.; Goldfarb, D.; Xu, J.; Yin, W. An iterative regularization method for total variation-based image restoration. Multiscale Model. Simul. 2005, 4, 460–489.
39. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711.
40. Afonso, M.V.; Bioucas-Dias, J.M.; Figueiredo, M.A.T. An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Trans. Image Process. 2011, 20, 681–695.
41. Zhou, M.; Chen, H.; Paisley, J.; Ren, L.; Li, L.; Xing, Z.; Dunson, D.; Sapiro, G.; Carin, L. Nonparametric Bayesian dictionary learning for analysis of noisy and incomplete images. IEEE Trans. Image Process. 2012, 21, 130–144.
42. Zhang, K.; Zuo, W.; Gu, S.; Zhang, L. Learning deep CNN denoiser prior for image restoration. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 2808–2817.
43. Tirer, T.; Giryes, R. Image Restoration by Iterative Denoising and Backward Projections. IEEE Trans. Image Process. 2019, 28, 1220–1234.
44. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
Figure 1. HSR-based image restoration.
Figure 2. Test images.
Figure 3. Results of Mickey image restoration with a pixel missing rate of 80%. (a) Mickey; (b) Mickey with pixel loss rate 80%; (c) the result of SALSA [40] (PSNR = 24.26, SSIM = 0.8000); (d) the result of BPFA [41] (PSNR = 21.96, SSIM = 0.6512); (e) the result of GSR [3] (PSNR = 26.50, SSIM = 0.8816); (f) the result of JPG-SR [8] (PSNR = 26.75, SSIM = 0.8696); (g) the result of GSRC-NLP [9] (PSNR = 27.00, SSIM = 0.9197); (h) the result of IR-CNN [42] (PSNR = 24.87, SSIM = 0.8482); (i) the result of IDBP [43] (PSNR = 25.26, SSIM = 0.8422); (j) the result of our proposed HSR (PSNR = 28.25, SSIM = 0.8963).
Table 1. PSNR values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 80%.
Pixels Missing = 80%
Images     | SALSA | BPFA  | GSR   | JPG-SR | GSRC-NLP | IR-CNN | IDBP  | HSR
Bahoon     | 24.41 | 23.25 | 24.57 | 25.40  | 25.55    | 23.77  | 25.08 | 26.17
Barbara    | 22.62 | 22.60 | 31.32 | 30.16  | 30.93    | 24.33  | 22.73 | 31.62
Butterfly  | 22.85 | 21.06 | 26.03 | 26.58  | 26.78    | 24.50  | 25.24 | 28.30
Corn       | 24.28 | 22.37 | 26.91 | 26.40  | 26.76    | 23.89  | 26.05 | 27.39
Cowboy     | 23.72 | 22.16 | 25.37 | 25.61  | 25.95    | 24.36  | 24.43 | 27.54
Fence      | 21.80 | 22.87 | 29.66 | 29.40  | 30.02    | 26.09  | 25.03 | 30.57
Girl       | 23.79 | 22.47 | 25.50 | 25.55  | 26.02    | 24.17  | 25.03 | 27.13
Leaves     | 22.03 | 19.30 | 27.46 | 27.33  | 27.62    | 23.57  | 25.84 | 29.10
Lena       | 28.20 | 27.56 | 31.41 | 31.25  | 31.86    | 29.53  | 29.69 | 32.55
Mickey     | 24.46 | 21.96 | 26.50 | 26.75  | 27.00    | 24.85  | 25.40 | 28.25
Mural      | 23.15 | 21.10 | 26.01 | 26.29  | 26.56    | 24.87  | 25.26 | 27.59
Nanna      | 24.12 | 22.38 | 25.24 | 25.92  | 26.17    | 24.68  | 25.51 | 27.48
Starfish   | 25.70 | 23.95 | 27.84 | 27.80  | 28.04    | 25.64  | 26.88 | 28.99
Average    | 23.93 | 22.54 | 27.22 | 27.26  | 27.64    | 24.94  | 25.55 | 28.66
Table 2. PSNR values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 70%.
Pixels Missing = 70%
Images     | SALSA | BPFA  | GSR   | JPG-SR | GSRC-NLP | IR-CNN | IDBP  | HSR
Bahoon     | 25.71 | 24.57 | 26.17 | 26.80  | 26.98    | 25.17  | 26.41 | 27.47
Barbara    | 23.38 | 25.49 | 34.43 | 33.45  | 33.98    | 27.12  | 26.23 | 34.43
Butterfly  | 25.06 | 23.95 | 28.92 | 29.24  | 29.47    | 27.34  | 28.29 | 30.12
Corn       | 26.11 | 25.31 | 29.35 | 28.82  | 29.10    | 26.58  | 28.09 | 29.63
Cowboy     | 25.70 | 24.55 | 27.63 | 27.78  | 28.04    | 26.49  | 27.15 | 29.33
Fence      | 23.57 | 25.56 | 31.73 | 31.53  | 31.82    | 28.71  | 29.24 | 32.26
Girl       | 25.47 | 24.71 | 27.86 | 27.96  | 28.20    | 26.47  | 27.45 | 28.99
Leaves     | 24.36 | 22.43 | 31.18 | 30.67  | 30.88    | 27.09  | 28.99 | 32.02
Lena       | 28.82 | 30.36 | 33.54 | 33.40  | 33.85    | 31.98  | 32.21 | 34.48
Mickey     | 28.98 | 24.16 | 29.02 | 28.93  | 29.31    | 27.74  | 28.69 | 30.14
Mural      | 25.00 | 23.34 | 28.46 | 28.50  | 28.71    | 27.42  | 27.68 | 29.25
Nanna      | 25.44 | 24.47 | 27.89 | 28.24  | 28.51    | 26.90  | 27.24 | 29.45
Starfish   | 27.55 | 26.83 | 30.31 | 30.11  | 30.46    | 28.16  | 29.28 | 31.09
Average    | 25.78 | 25.06 | 29.73 | 29.65  | 29.95    | 27.47  | 28.27 | 30.67
Table 3. PSNR values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 60%.
Pixels Missing = 60%
Images     | SALSA | BPFA  | GSR   | JPG-SR | GSRC-NLP | IR-CNN | IDBP  | HSR
Bahoon     | 26.78 | 25.82 | 27.74 | 28.14  | 28.31    | 26.53  | 27.83 | 29.37
Barbara    | 24.57 | 28.05 | 36.42 | 35.72  | 36.59    | 29.67  | 28.73 | 37.00
Butterfly  | 26.79 | 26.06 | 31.09 | 31.15  | 31.46    | 29.32  | 30.03 | 32.57
Corn       | 27.75 | 27.54 | 31.39 | 30.84  | 31.18    | 28.89  | 29.94 | 32.34
Cowboy     | 26.99 | 26.36 | 29.49 | 29.59  | 29.99    | 28.70  | 28.97 | 32.50
Fence      | 25.45 | 27.74 | 33.23 | 33.14  | 33.51    | 30.46  | 31.25 | 34.32
Girl       | 27.02 | 26.68 | 29.47 | 29.87  | 30.17    | 28.60  | 29.32 | 31.71
Leaves     | 26.29 | 25.19 | 33.39 | 32.82  | 33.34    | 29.88  | 31.74 | 35.06
Lena       | 31.49 | 32.38 | 33.54 | 35.44  | 35.95    | 33.95  | 33.93 | 36.91
Mickey     | 27.41 | 25.75 | 31.10 | 31.12  | 31.29    | 29.78  | 31.18 | 32.74
Mural      | 26.66 | 25.17 | 29.98 | 30.15  | 30.33    | 29.16  | 29.79 | 31.30
Nanna      | 26.94 | 26.14 | 30.13 | 30.37  | 30.59    | 28.95  | 29.51 | 32.05
Starfish   | 29.09 | 28.76 | 32.89 | 32.37  | 32.71    | 30.41  | 31.79 | 33.68
Average    | 27.17 | 27.05 | 31.53 | 31.59  | 31.96    | 29.56  | 30.31 | 33.19
Table 4. PSNR values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 50%.
Pixels Missing = 50%
Images     | SALSA | BPFA  | GSR   | JPG-SR | GSRC-NLP | IR-CNN | IDBP  | HSR
Bahoon     | 27.98 | 27.13 | 29.42 | 29.61  | 29.75    | 27.99  | 29.23 | 32.19
Barbara    | 25.66 | 31.12 | 39.14 | 37.79  | 38.77    | 31.95  | 31.57 | 39.58
Butterfly  | 28.52 | 28.16 | 32.78 | 32.83  | 33.02    | 31.08  | 32.44 | 35.28
Corn       | 29.39 | 29.78 | 33.77 | 32.94  | 33.78    | 31.26  | 31.61 | 35.40
Cowboy     | 28.59 | 28.18 | 31.69 | 31.94  | 31.90    | 30.79  | 31.40 | 35.65
Fence      | 27.25 | 29.92 | 35.01 | 34.62  | 34.99    | 32.31  | 33.24 | 36.61
Girl       | 28.60 | 28.46 | 31.93 | 31.77  | 31.95    | 30.57  | 31.09 | 34.84
Leaves     | 28.11 | 28.13 | 35.86 | 35.21  | 35.79    | 32.62  | 34.34 | 38.20
Lena       | 33.08 | 34.15 | 37.64 | 37.18  | 37.64    | 35.71  | 36.25 | 39.54
Mickey     | 28.98 | 27.43 | 33.86 | 33.35  | 33.58    | 32.24  | 33.14 | 35.87
Mural      | 28.20 | 27.20 | 31.73 | 31.72  | 31.88    | 30.71  | 31.35 | 33.92
Nanna      | 28.53 | 28.17 | 32.16 | 32.21  | 32.36    | 31.71  | 31.35 | 35.02
Starfish   | 30.90 | 30.87 | 34.94 | 34.31  | 34.61    | 32.46  | 33.81 | 36.53
Average    | 28.75 | 29.13 | 33.50 | 33.85  | 34.19    | 31.65  | 32.37 | 36.04
Table 5. SSIM values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 80%.
Pixels Missing = 80%
Images     | SALSA  | BPFA   | GSR    | JPG-SR | GSRC-NLP | IR-CNN | IDBP   | HSR
Bahoon     | 0.6040 | 0.6328 | 0.6892 | 0.6629 | 0.6925   | 0.6785 | 0.6606 | 0.7154
Barbara    | 0.6782 | 0.6287 | 0.9334 | 0.8989 | 0.9242   | 0.7953 | 0.7522 | 0.9305
Butterfly  | 0.8161 | 0.6688 | 0.9223 | 0.9195 | 0.9269   | 0.8880 | 0.8901 | 0.9386
Corn       | 0.7103 | 0.7106 | 0.8822 | 0.8574 | 0.8717   | 0.8185 | 0.8430 | 0.8843
Cowboy     | 0.7965 | 0.6589 | 0.8807 | 0.8668 | 0.8823   | 0.8507 | 0.8407 | 0.8989
Fence      | 0.6339 | 0.6236 | 0.8862 | 0.8644 | 0.8817   | 0.8179 | 0.8013 | 0.8896
Girl       | 0.7196 | 0.6782 | 0.9014 | 0.8178 | 0.8381   | 0.8022 | 0.7979 | 0.8581
Leaves     | 0.7695 | 0.6576 | 0.9452 | 0.9364 | 0.9412   | 0.9037 | 0.9126 | 0.9510
Lena       | 0.8425 | 0.8108 | 0.9249 | 0.9062 | 0.9227   | 0.8831 | 0.8821 | 0.9282
Mickey     | 0.8000 | 0.6512 | 0.8816 | 0.8696 | 0.9197   | 0.8482 | 0.8422 | 0.8963
Mural      | 0.6785 | 0.6046 | 0.8158 | 0.7915 | 0.8135   | 0.7766 | 0.7615 | 0.8286
Nanna      | 0.7494 | 0.6721 | 0.8533 | 0.8395 | 0.8552   | 0.8204 | 0.8120 | 0.8722
Starfish   | 0.7594 | 0.7024 | 0.8691 | 0.8516 | 0.8653   | 0.8175 | 0.8286 | 0.8787
Average    | 0.7352 | 0.6693 | 0.8758 | 0.8525 | 0.8719   | 0.8231 | 0.8173 | 0.8823
Table 6. SSIM values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 70%.
Pixels Missing = 70%
Images     | SALSA  | BPFA   | GSR    | JPG-SR | GSRC-NLP | IR-CNN | IDBP   | HSR
Bahoon     | 0.7024 | 0.7305 | 0.7796 | 0.7597 | 0.7818   | 0.7714 | 0.7456 | 0.8012
Barbara    | 0.7580 | 0.8032 | 0.9628 | 0.9474 | 0.9579   | 0.8883 | 0.8680 | 0.9606
Butterfly  | 0.8838 | 0.8281 | 0.9506 | 0.9475 | 0.9530   | 0.9326 | 0.9329 | 0.9569
Corn       | 0.8624 | 0.8492 | 0.9295 | 0.9142 | 0.9224   | 0.8978 | 0.8969 | 0.9298
Cowboy     | 0.8742 | 0.8265 | 0.9232 | 0.9127 | 0.9237   | 0.9079 | 0.8952 | 0.9341
Fence      | 0.7512 | 0.7726 | 0.9230 | 0.9069 | 0.9203   | 0.8877 | 0.8840 | 0.9272
Girl       | 0.8250 | 0.8021 | 0.9014 | 0.8861 | 0.8991   | 0.8792 | 0.8685 | 0.9106
Leaves     | 0.8726 | 0.8209 | 0.9743 | 0.9664 | 0.9697   | 0.9531 | 0.9500 | 0.9754
Lena       | 0.8576 | 0.9022 | 0.9507 | 0.9393 | 0.9499   | 0.9277 | 0.9239 | 0.9544
Mickey     | 0.8621 | 0.8097 | 0.9248 | 0.9142 | 0.9240   | 0.9060 | 0.9002 | 0.9297
Mural      | 0.7917 | 0.7532 | 0.8743 | 0.8565 | 0.8718   | 0.8551 | 0.8338 | 0.8819
Nanna      | 0.8369 | 0.8030 | 0.9076 | 0.8973 | 0.9075   | 0.8895 | 0.8697 | 0.9177
Starfish   | 0.8675 | 0.8466 | 0.9184 | 0.9032 | 0.9141   | 0.8876 | 0.8839 | 0.9227
Average    | 0.8266 | 0.8114 | 0.9169 | 0.9040 | 0.9150   | 0.8911 | 0.8810 | 0.9232
Table 7. SSIM values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 60%.
Pixels Missing = 60%
Images     | SALSA  | BPFA   | GSR    | JPG-SR | GSRC-NLP | IR-CNN | IDBP   | HSR
Bahoon     | 0.7756 | 0.7990 | 0.8446 | 0.8264 | 0.8439   | 0.8355 | 0.8140 | 0.8709
Barbara    | 0.8191 | 0.8879 | 0.9765 | 0.9657 | 0.9744   | 0.9305 | 0.9170 | 0.9757
Butterfly  | 0.9191 | 0.8974 | 0.9666 | 0.9620 | 0.9671   | 0.9535 | 0.9513 | 0.9715
Corn       | 0.9022 | 0.9092 | 0.9543 | 0.9442 | 0.9509   | 0.9375 | 0.9290 | 0.9599
Cowboy     | 0.9064 | 0.8946 | 0.9497 | 0.9413 | 0.9499   | 0.9417 | 0.9273 | 0.9613
Fence      | 0.8222 | 0.8523 | 0.9470 | 0.9342 | 0.9445   | 0.9229 | 0.9145 | 0.9525
Girl       | 0.8754 | 0.8764 | 0.9359 | 0.9252 | 0.9354   | 0.9227 | 0.9090 | 0.9474
Leaves     | 0.9173 | 0.9064 | 0.9849 | 0.9800 | 0.9833   | 0.9737 | 0.9696 | 0.9869
Lena       | 0.9283 | 0.9341 | 0.9668 | 0.9584 | 0.9664   | 0.9499 | 0.9448 | 0.9705
Mickey     | 0.8977 | 0.8717 | 0.9480 | 0.9392 | 0.9472   | 0.9356 | 0.9301 | 0.9547
Mural      | 0.8483 | 0.8337 | 0.9086 | 0.8944 | 0.9072   | 0.8972 | 0.8781 | 0.9199
Nanna      | 0.8823 | 0.8707 | 0.9383 | 0.9292 | 0.9382   | 0.9267 | 0.9164 | 0.9490
Starfish   | 0.9036 | 0.8997 | 0.9453 | 0.9336 | 0.9430   | 0.9260 | 0.9209 | 0.9514
Average    | 0.8767 | 0.8795 | 0.9436 | 0.9334 | 0.9421   | 0.9272 | 0.9171 | 0.9517
Table 8. SSIM values of our proposed HSR model and other comparison models after image restoration with pixels missing rate 50%.
Pixels Missing = 50%
Images     | SALSA  | BPFA   | GSR    | JPG-SR | GSRC-NLP | IR-CNN | IDBP   | HSR
Bahoon     | 0.8357 | 0.8508 | 0.8924 | 0.8781 | 0.8898   | 0.8834 | 0.8640 | 0.9287
Barbara    | 0.8651 | 0.9367 | 0.9850 | 0.9765 | 0.9839   | 0.9562 | 0.9471 | 0.9852
Butterfly  | 0.9432 | 0.9340 | 0.9759 | 0.9719 | 0.9762   | 0.9671 | 0.9662 | 0.9820
Corn       | 0.9310 | 0.9447 | 0.9719 | 0.9640 | 0.9692   | 0.9621 | 0.9514 | 0.9783
Cowboy     | 0.9344 | 0.9322 | 0.9668 | 0.9599 | 0.9663   | 0.9618 | 0.9518 | 0.9770
Fence      | 0.8705 | 0.9048 | 0.9627 | 0.9524 | 0.9605   | 0.9467 | 0.9046 | 0.9700
Girl       | 0.9108 | 0.9199 | 0.9581 | 0.9492 | 0.9569   | 0.9497 | 0.9365 | 0.9699
Leaves     | 0.9444 | 0.9534 | 0.9909 | 0.9786 | 0.9901   | 0.9847 | 0.9821 | 0.9928
Lena       | 0.9474 | 0.9525 | 0.9779 | 0.9701 | 0.9771   | 0.9649 | 0.9626 | 0.9816
Mickey     | 0.9243 | 0.8932 | 0.9661 | 0.9670 | 0.9645   | 0.9563 | 0.9506 | 0.9728
Mural      | 0.8876 | 0.8932 | 0.9345 | 0.9242 | 0.9340   | 0.9275 | 0.9118 | 0.9509
Nanna      | 0.9173 | 0.9202 | 0.9589 | 0.9504 | 0.9577   | 0.9505 | 0.9392 | 0.9705
Starfish   | 0.9335 | 0.9363 | 0.9634 | 0.9541 | 0.9615   | 0.9512 | 0.9458 | 0.9714
Average    | 0.9112 | 0.9209 | 0.9619 | 0.9536 | 0.9606   | 0.9509 | 0.9395 | 0.9716
Table 9. Comparison of running time in seconds of different methods.
Methods  | SALSA | BPFA    | GSR    | JPG-SR | GSRC-NLP | IR-CNN | IDBP  | HSR
Time (s) | 1.81  | 1200.23 | 923.24 | 499.48 | 5027.67  | 9.24   | 20.33 | 5000.22
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

MDPI and ACS Style

Zhou, C.; Kong, Y.; Zhang, C.; Sun, L.; Wu, D.; Zhou, C. A Hybrid Sparse Representation Model for Image Restoration. Sensors 2022, 22, 537. https://doi.org/10.3390/s22020537

