Article

Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

1 Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
2 Institute of Space Science and Technology, Nanchang University, Nanchang 330031, China
3 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China
4 School of Computer Science, China University of Geosciences, Wuhan 430074, China
* Author to whom correspondence should be addressed.
Remote Sens. 2016, 8(6), 499; https://doi.org/10.3390/rs8060499
Submission received: 4 March 2016 / Revised: 1 May 2016 / Accepted: 31 May 2016 / Published: 14 June 2016

Abstract

Because of the trade-off between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss incurred during acquisition, reconstructing RSI is of great significance in remote sensing applications. Recent studies have demonstrated that reference image-based reconstruction methods have great potential for higher reconstruction performance, but they still lack accuracy and quality. For this application, a new compressed sensing objective function incorporating a reference image as prior information is developed. We exploit the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. Specifically, the contribution of this paper is threefold: (1) we propose a nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and the loss of texture detail; (3) on this basis, we combine the conjugate gradient algorithm and singular value thresholding (SVT) to solve the proposed model. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves peak signal-to-noise ratio (PSNR) by several dB and preserves image details significantly better than most current approaches that do not use reference images as priors. In addition, the generalized nonconvex low-rank approximation of our approach is naturally robust to noise; therefore, the proposed algorithm can handle low-resolution, noisy inputs in a unified framework.

Graphical Abstract

1. Introduction

The multispectral sensors of a satellite system sometimes fail or degrade after the system is deployed, and degraded sensors provide blurred and noisy images. For example, one band of the MODIS multispectral instrument on board the Aqua satellite degraded after launch, while the rest of the bands remained unaffected. Beyond sensor damage, images can also be degraded by defocusing, atmospheric turbulence and other factors. Owing to sensor failure and poor observation conditions, remote sensing images are easily subjected to information loss, which hinders effective analysis of the Earth. Digital image processing techniques are commonly used to improve image quality and increase application potential.
In a typical multispectral satellite imaging system, multiple images of the same area from different sensors are available. When one image in such a set is degraded, another image in the set can serve as a prior image for reconstruction. Accordingly, several reconstruction approaches use one or more reference input images as priors to improve reconstruction performance in terms of spatial resolution and noise reduction. Li et al. [1] proposed a model utilizing spectral and temporal information as complementary information for reconstructing the missing content of remote sensing images. Lizhe et al. and Hao et al. [2,3] developed a compressed sensing objective function that uses a reference image as a prior: the sparsity constraints in the transform domain come from the target image, and the gradient priors in the spatial domain come from the auxiliary reference image. Peng et al. [4] proposed an algorithm based on the total variation approach using an auxiliary image. These methods have shown the potential for higher reconstruction performance when aided references are present, but the accuracy and quality of reconstruction are still lacking.
Since prior knowledge about images plays a critical role in the performance of image reconstruction algorithms, designing effective regularization terms to reflect image priors is at the core of remote sensing image (RSI) reconstruction. In summary, most reconstruction methods are based on some specific prior knowledge of the images, using the spectral or spatial information of remote sensing images to recover the current pixel or to obtain high-resolution images. Low-rank-inducing regularization terms have recently received considerable attention in sparse representation. The principle of low-rank approximation is that similar patches are grouped so that they share a similar underlying structure and form an approximately low-rank matrix [5,6], which can be recovered efficiently using the singular value decomposition as the optimization tool. Using the low-rank framework, Dong et al. [7] proposed a nonlocal low-rank algorithm, called the spatially-adaptive iterative singular-value thresholding method. Making use of similar information from another related band, it is straightforward to extend nonconvex penalty functions on the singular values of a matrix to improve low-rank matrix recovery performance.
In the proposed algorithm, instead of using the $\ell_0$ norm as the minimization constraint of compressive sensing, the generalized nonconvex low-rank approximation is exploited as the basic unit of sparse representation. The model integrates wavelet texture features as structural and textural reference information to establish a novel compressive sensing algorithm for remote sensing image reconstruction (GNLR-RI). Compared to traditional $\ell_0$ norm-based compressive sensing, the proposed GNLR-RI algorithm makes three contributions: (1) to overcome the over-smoothing and texture detail loss of sparse representation, the wavelet coefficients of the reference image are injected as texture reference constraint information; (2) nonlocal similar patches from a reference image are extracted in the low-rank approximation step; (3) the spectral reference information and the similarity within bands are jointly used. Therefore, the proposed algorithm can achieve accurate reconstruction results and improve calculation speed. The solution of the proposed algorithm is derived with the conjugate gradient algorithm and singular value thresholding (SVT), simultaneously calculating the low-rank matrix of similar image patches and then estimating the reconstructed image.
The remainder of this paper is organized as follows. Section 2 briefly reviews related work on remote sensing image reconstruction with reference images, other generalized reconstruction methods and the use of auxiliary information. Section 3 introduces the generalized nonconvex low-rank approximation model, which covers eight surrogate functions. The proposed model based on the generalized nonconvex low-rank approximation algorithm and the solution of the proposed model are presented in Section 4. Experiments are illustrated in Section 5. Finally, Section 6 summarizes our conclusions.

2. Related Work

2.1. RSI Reconstruction with a Reference

Reference images are used to restore missing information of pixel blocks, possibly caused by thick clouds or contrails, invalid detectors, optical dust, etc. For example, toward solving problems with the Aqua Moderate Resolution Imaging Spectroradiometer (MODIS) Band 6 and the Scan Line Corrector (SLC) failure on Landsat ETM+, Li et al. [1] introduced spectral and temporal pixels for reconstructing the missing information in remote sensing images by introducing information from auxiliary images, modulating the injected gray level and regularizing with a synthesis model and an analysis model on the basis of sparse representation. Their experiments showed that the synthesis model is suitable for using both spectral and temporal complementation, while spectral complementation is the best way for the analysis model.
For remote sensing reconstruction or restoration, some work has emerged that uses reference images to improve reconstruction performance. Lizhe et al. [2] proposed a compressed sensing-based model that used an auxiliary image from another sensor or time to reconstruct remote sensing images. Their method assumed that the directions of edges and texture are similar in both the reference image and the reconstructed image, and they therefore built a gradient similarity-constrained cost term to regularize the reconstructed image. Hao et al. [3] proposed a reconstruction model for remote sensing images using priors of an auxiliary image from another sensor or time. The model employed priors from both the transform domain and the spatial domain: the sparsity priors in the transform domain come from the target image, and the gradient priors in the spatial domain come from the reference image. The algorithm is based on the Bregman split method to optimize the hybrid regularization. Peng et al. [8,9] proposed a total variation approach with an auxiliary image to recover a multispectral image. The reference image provides texture and edge similarities to the degraded image by regulating the strength and direction of smoothness in an anisotropic way, and the amount of prior information is adaptively estimated with normalized local mutual information. Similar techniques are implemented in partial differential equations for denoising remote sensing images. These methods have the potential for higher reconstruction performance when aided references are present.

2.2. Generalized RSI Reconstruction

To ensure image quality, additional prior information from images is needed. There are generalized remote sensing image reconstruction methods to improve image quality, to increase application potential and to implement digital image processing. The compressed sensing (CS) theory has gained much attention as a fundamental and newly-developed methodology in the information field. The CS theory states that an image that has a sparse representation in a certain domain can be recovered from a reduced set of measurements largely below Nyquist sampling rates [10,11,12]. For hyperspectral data, Fowler [13] proposed a reconstruction strategy, compressive-projection principal component analysis (CPPCA), which recovers the hyperspectral image dataset using principal component analysis (PCA). Specifically, CPPCA recovers both the coefficients associated with the PCA transform and an approximation to the PCA transform basis itself. Ly et al. [14] extended the concept of partitioned reconstruction to the spectral dimension of a hyperspectral dataset and incorporated into CPPCA the paradigm of segmented PCA, in which a dataset is segmented into multiple spectral partitions and PCA is applied in each segment independently. Since multiple predictions drawn for a pixel vector of interest were made from spatially-neighboring pixel vectors within an initial non-predicted reconstruction, Chen et al. [15] proposed a two-phase hypothesis-generation procedure based on the partitioning and merging of spectral bands according to the correlation coefficients between bands to fine-tune the hypotheses.
The transforms that allow the image to have a sparse representation are named sparsifying transforms. One of the most commonly-used sparsifying transforms is the finite difference [16], more often known as total variation (TV) regularization for image recovery. The popularity of TV regularization lies in its desirable properties, such as convexity, simplicity and its ability to preserve edges. For example, Xiaohua et al. [17] proposed a novel Inverse Synthetic Aperture Radar (ISAR) imaging framework combining a local sparsity constraint and a nonlocal total variation (NLTV). The sparsity prior takes the form that the number of strong scattering points is smaller than the number of pixels in the image plane; it plays the role of separating the strong scattering points from the clutter background. Xiyan et al. [18] considered the fusion problem as the colorization of each pixel in the panchromatic image and thus introduced a term concerning the gradient of the panchromatic image into the total variation functional to preserve edges. Peng et al. [9] indicated that the pixel intensity distributions of multimodal images from different CCD sensors differ greatly from each other, while the directions of their edges are very similar. The edge information and these similar structures are then used as important priors or constraints in total variation image restoration.
On the other hand, similar patches are grouped such that patches in each group share a similar underlying structure to form a low-rank approximation. The low rank induces penalties related to simultaneous sparse coding (SSC), group sparsity and structured sparsity by employing the low-rank characteristic of nonlocal patches in sparse representation. Hongyan et al. [19] introduced a new hyperspectral imagery restoration method based on low-rank matrix recovery, which suggests that a clean hyperspectral imagery patch can be regarded as a low-rank matrix. Wei et al. [20] proposed a method that integrates the nuclear norm, TV regularization and the $\ell_1$ norm into a unified framework. The nuclear norm is used to exploit the spectral low-rank property, and the TV regularization is adopted to explore the spatial piecewise smooth structure of the hyperspectral imagery. Usually, nonconvex approaches, like $\ell_p$ ($0 < p < 1$) minimization or minimization of the nuclear norm (the $\ell_1$ norm of the singular values), guarantee a better recovery by more directly attacking the $\ell_0$ minimization problem [21,22].
An important research area is the sparse representation method with dictionary learning (DL), which builds an adaptive basis from particular image instances for sparse approximations [23]. Qiegen et al. [16] proposed a novel gradient-based dictionary learning method for image recovery that effectively integrates the popular total variation (TV) and dictionary learning techniques into the same framework. Specifically, they train dictionaries from the horizontal and vertical gradients of the image and then reconstruct the desired image using the sparse representations of both derivatives. The dictionary learning strategy alleviates the drawback of a fixed basis (finite difference, wavelet, etc.), namely that a given basis might not be universally optimal for all images [24], but at the cost of non-convexity and non-linearity [25]. Most existing DL methods adopt a two-step iterative scheme, where the sparse approximations are found with the dictionary fixed, and the dictionary is subsequently optimized based on the current sparse coefficients [26].
Exact reconstruction of sparse signals can be achieved by nonlinear algorithms, such as orthogonal matching pursuit (OMP) [27], the iterative shrinkage algorithm [28], the Bregman split algorithm [29] and the alternating direction method of multipliers [30]. Recent advances have shown that replacing the $\ell_0$ norm with a nonconvex surrogate function can yield better sparse recovery performance. Weisheng et al. [6] proposed a spatially-adaptive iterative singular value thresholding (SAIST) method by showing that, under the assumption of an orthogonal basis, the SSC problem can be rewritten from a low-rank viewpoint in some special cases. Shuyuan et al. [31] proposed a multi-task learning and K-SVD-based super-resolution image restoration method that learns a redundant dictionary from example image patches.

2.3. Reconstruction Model Based on Reference Images

As suggested by psychological research on the visual cortex, the energy distribution of wavelet coefficients can serve as a distinctive feature for texture characterization, so we extract texture features from both the reference image and the target image. Let $\alpha(i,j)$ denote the wavelet coefficient, $F_{\ell_1\text{-}energy}$ denote the $\ell_1$ norm-based energy, $F_{sd}$ denote the standard deviation, and $F_{aad}$ and $F_{entropy}$ denote the average absolute deviation and the entropy, respectively:
$$F_{\ell_1\text{-}energy}(\alpha) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|\alpha(i,j)\right| \quad (1)$$

$$F_{sd}(\alpha) = \left[\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(\alpha(i,j)-\bar{\alpha}\right)^{2}\right]^{1/2} \quad (2)$$

$$F_{aad}(\alpha) = \frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\left|\alpha(i,j)-\bar{\alpha}\right| \quad (3)$$

$$F_{entropy}(\alpha) = -\frac{1}{MN}\sum_{i=1}^{M}\sum_{j=1}^{N}\alpha(i,j)^{2}\log\alpha(i,j)^{2} \quad (4)$$
where $\alpha(i,j)$ is the wavelet coefficient and $\bar{\alpha}$ is the mean value of the wavelet coefficients. Research shows that combining these characteristics yields better performance. Therefore, in this study, the empirical combination of the $\ell_1$ energy $F_{\ell_1\text{-}energy}$, the standard deviation $F_{sd}$, the average absolute deviation $F_{aad}$ and the entropy $F_{entropy}$ of every wavelet coefficient sub-patch is computed to form the texture feature vector $F$. For the reference image, we obtain the texture feature vector $F_{ref}$ in the same way and compute a metric distance to objectively measure the similarity of these two texture feature vectors. The literature shows that not only the texture features, but also the similarity measurement, influence the accuracy of texture feature extraction. The two feature vectors from the target and reference images are as follows:
$$F_{k} = \left[F_{\ell_1\text{-}energy}(\alpha_{k}),\; F_{sd}(\alpha_{k}),\; F_{aad}(\alpha_{k}),\; F_{entropy}(\alpha_{k})\right] \quad (5)$$

$$F_{ref\_k} = \left[F_{\ell_1\text{-}energy}(\alpha_{ref\_k}),\; F_{sd}(\alpha_{ref\_k}),\; F_{aad}(\alpha_{ref\_k}),\; F_{entropy}(\alpha_{ref\_k})\right] \quad (6)$$
where $F_{k}$ denotes the texture feature vector of the $k$-th wavelet coefficient sub-block of the target image and $F_{ref\_k}$ denotes that of the reference image. To relate the $k$-th sub-blocks of the reference and target images, the Canberra distance is adopted to calculate their similarity:
$$w_{k} = Canb\left(F_{k}, F_{ref\_k}\right) = \sum_{h=1}^{u}\frac{\left|F_{k}(h)-F_{ref\_k}(h)\right|}{F_{k}(h)+F_{ref\_k}(h)} \quad (7)$$
In the formulation above, $h$ indexes the components of the two feature vectors; the numerator measures their divergence, and the denominator normalizes it. The similarity measurement therefore avoids scale effects, and the constraint term is formed as follows:
$$\sum_{k=1}^{U} w_{k}\left\|\alpha_{k}-\alpha_{ref\_k}\right\|_{2}^{2} < \xi \quad (8)$$
where ξ is a small constant, U = M × N / m × n is the number of the sub-blocks of the wavelet coefficients and w k denotes the distance between the wavelet coefficient k-th block of the texture features in the target image and the reference image. More similar texture feature vectors correspond to blocks of wavelet coefficients that are more alike.
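For concreteness, the texture features and Canberra weighting above can be computed in a few lines of NumPy. This is a minimal sketch rather than the authors' implementation; the function names and the small constant `eps` guarding the denominator are ours.

```python
import numpy as np

def texture_features(a):
    """Texture features of a wavelet-coefficient sub-block `a` (M x N):
    l1 energy, standard deviation, average absolute deviation and entropy."""
    a = np.asarray(a, dtype=float)
    mn = a.size
    abar = a.mean()
    f_energy = np.abs(a).sum() / mn
    f_sd = np.sqrt(((a - abar) ** 2).sum() / mn)
    f_aad = np.abs(a - abar).sum() / mn
    p = a ** 2
    nz = p > 0  # zero coefficients contribute nothing to the entropy sum
    f_entropy = -(p[nz] * np.log(p[nz])).sum() / mn
    return np.array([f_energy, f_sd, f_aad, f_entropy])

def canberra_weight(f, f_ref, eps=1e-12):
    """Canberra distance between two texture feature vectors; the
    denominator normalizes each component, avoiding scale effects."""
    return float(np.sum(np.abs(f - f_ref) / (np.abs(f) + np.abs(f_ref) + eps)))
```

Identical sub-blocks give a weight of zero, and the weight grows as the texture statistics of the two blocks diverge.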
The reconstruction object formula of the compressive sensing with the reference information model can be presented as:
$$\hat{\alpha} = \arg\min_{\alpha}\left\|\alpha\right\|_{1}, \quad \mathrm{s.t.}\;\left\|y-\phi\psi\alpha\right\|_{2}^{2} < \delta,\;\; \sum_{k}^{U} w_{k}\left\|\alpha_{k}-\alpha_{ref\_k}\right\|_{F}^{2} < \xi \quad (9)$$
where $\|y-\phi\psi\alpha\|_{2}^{2}$ is the $\ell_2$ data fidelity term and $\|\alpha\|_{1}$ represents the regularization term encoding prior knowledge. Many optimization methods can solve the above $\ell_1$ minimization problem, such as the iterative shrinkage algorithm [28], the Bregman split algorithm [29] and the alternating direction method of multipliers [30]. However, recent advances have shown that replacing the $\ell_1$ norm with a nonconvex surrogate function can obtain better CS recovery performance. For example, the methods proposed by Fazel [32], Gasso [33], Chartrand [34] and Trzasko [35] have shown that, in certain situations, a nonconvex surrogate function is able to recover the sparsity coefficients more efficiently.
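For intuition, the iterative shrinkage approach mentioned above is built around the soft-thresholding operator, the proximal mapping of the $\ell_1$ norm. The sketch below is a generic ISTA loop for an unconstrained $\ell_1$ problem, not the reference-constrained model of this paper; the names `A`, `lam` and `step` are illustrative.

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of the l1 norm: sign(x) * max(|x| - t, 0)."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam, step, n_iter=500):
    """Iterative shrinkage sketch for min_x ||y - A x||_2^2 + lam * ||x||_1.
    `step` should stay below 1 / lambda_max(A^T A) for convergence."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the quadratic term, then shrinkage on the l1 term
        x = soft_threshold(x - 2.0 * step * (A.T @ (A @ x - y)), step * lam)
    return x
```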

3. Generalized Nonconvex Low-Rank Approximation Model

A nonconvex low-rank model for CS recovery exploits nonlocal structured sparsity via low-rank approximation for image reconstruction. Under the assumption that each exemplar patch $x_i \in \mathbb{C}^{n}$ (of size $\sqrt{n}\times\sqrt{n}$, at position $i$) can find plenty of similar patches in its neighborhood, a k-nearest-neighbor search is performed for each exemplar patch in a local window, namely:
$$H_{i} = \left\{\, i_{j} \;\middle|\; \left\|x_{i}-x_{i_{j}}\right\| < T \,\right\} \quad (10)$$
where $T$ is a pre-defined threshold and $H_{i}$ denotes the set of positions of the patches similar to $x_{i}$. Under the assumption that these image patches have similar structures, the data matrix formed from them has a low-rank property. After the search, a data matrix $X_{i} = [x_{i_0}, x_{i_1}, \ldots, x_{i_{m-1}}] \in \mathbb{C}^{n\times m}$ is obtained and decomposed as $X_{i} = L_{i} + W_{i}$, where $L_{i}$ denotes the low-rank matrix and $W_{i}$ denotes Gaussian noise. The low-rank problem can then be solved as:
$$L_{i} = \arg\min_{L_{i}} \operatorname{rank}\left(L_{i}\right), \quad \mathrm{s.t.}\;\left\|X_{i}-L_{i}\right\|_{F}^{2} \le \sigma_{\omega}^{2} \quad (11)$$
where $\|\cdot\|_{F}^{2}$ denotes the Frobenius norm and $\sigma_{\omega}^{2}$ denotes the variance of the additive Gaussian noise. Rank minimization is an NP-hard problem; hence, the nuclear norm $\|\cdot\|_{*}$ (the sum of the singular values), the most common convex surrogate of the rank, is used to obtain an approximate solution. With the nuclear norm, the rank minimization problem can be efficiently solved by the technique of singular value thresholding (SVT).
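To make the patch-grouping and SVT steps concrete, here is a hedged NumPy sketch. The window size, patch size and number of grouped patches are illustrative defaults, not the paper's settings.

```python
import numpy as np

def group_similar_patches(img, i0, j0, patch=8, window=10, m=16):
    """Stack the m patches in a local search window closest (in l2 distance)
    to the exemplar patch at (i0, j0) as the columns of a data matrix X_i."""
    ref = img[i0:i0 + patch, j0:j0 + patch].ravel()
    cands = []
    for i in range(max(0, i0 - window), min(img.shape[0] - patch, i0 + window) + 1):
        for j in range(max(0, j0 - window), min(img.shape[1] - patch, j0 + window) + 1):
            p = img[i:i + patch, j:j + patch].ravel()
            cands.append((float(np.sum((p - ref) ** 2)), p))
    cands.sort(key=lambda c: c[0])  # keep the m nearest neighbours
    return np.stack([p for _, p in cands[:m]], axis=1)

def svt(X, tau):
    """Singular value thresholding: the closed-form minimizer of
    0.5 * ||X - L||_F^2 + tau * ||L||_* (soft-threshold the singular values)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt
```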
In this paper, we consider a smooth, but non-convex surrogate of the rank rather than the nuclear norm. Specifically, according to [32], the rank minimization problem with regard to L i can be approximately solved by minimizing the following function:
$$L_{i} = \arg\min_{L_{i}} S\left(L_{i},\varepsilon\right), \quad \mathrm{s.t.}\;\left\|X_{i}-L_{i}\right\|_{F}^{2} \le \sigma_{\omega}^{2} \quad (12)$$
where $\|\cdot\|_{F}^{2}$ denotes the Frobenius norm, $\sigma_{\omega}^{2}$ denotes the variance of the additive Gaussian noise and $S(L_{i},\varepsilon) = \sum_{j} G\left(\sigma_{j}(L_{i})+\varepsilon\right)$ applies the surrogate function $G$ to the singular values of $L_{i}$, with $\varepsilon$ a small constant. Here, the singular values $\sigma_{j}(L_{i})$ are the diagonal elements of $\Sigma^{1/2}$, where $\Sigma$ denotes the eigenvalue matrix of $L_{i}L_{i}^{T}$, i.e., $L_{i}L_{i}^{T} = U\Sigma U^{-1}$. In particular, for $G = \log$, $S(L_{i},\varepsilon) = \log\det\left((L_{i}L_{i}^{T})^{1/2}+\varepsilon I\right) = \log\det\left(\Sigma^{1/2}+\varepsilon I\right)$, the smooth surrogate of [32]. By taking a proper parameter $\lambda$, Equation (12) can be transformed into:
$$L_{i} = \arg\min_{L_{i}} \left\|X_{i}-L_{i}\right\|_{F}^{2} + \lambda S\left(L_{i},\varepsilon\right) \quad (13)$$
For each exemplar image patch, we can approximate the matrix $X_{i}$ with a low-rank matrix $L_{i}$ by solving Equation (13). Many nonconvex surrogate functions have been proposed to achieve a better approximation of the $\ell_0$ norm [36], including the logarithm function (Log) [37], the $\ell_p$ norm (Lp) [38], the Geman penalty (Geman) [39], the Laplace penalty (Lap) [35] and the exponential-type penalty (Etp) [40]. In addition, there are several discontinuous functions, such as the smoothly-clipped absolute deviation (Scad) [41], the capped $\ell_1$ penalty (Cappedl1) [42] and the minimax concave penalty (Mcp) [43]. The definitions and super-gradients of these surrogate functions have similar monotonic trends, as displayed in Table 1 and Figure 1.
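As a hedged illustration of two of these surrogates, the following sketch implements the Log and Lp penalties and their super-gradients; the parameterizations (`gamma`, `p`) are common forms from the literature and not necessarily the exact ones used in Table 1.

```python
import numpy as np

# Two representative surrogates applied to singular values s >= 0,
# together with their super-gradients.
def log_penalty(s, gamma=1.0):
    return np.log(gamma * s + 1.0) / np.log(gamma + 1.0)

def log_supergradient(s, gamma=1.0):
    return gamma / (np.log(gamma + 1.0) * (gamma * s + 1.0))

def lp_penalty(s, p=0.5):
    return s ** p

def lp_supergradient(s, p=0.5, eps=1e-8):
    # eps keeps the gradient finite as s approaches zero
    return p * (s + eps) ** (p - 1.0)
```

Both penalties are concave and increasing on $[0,\infty)$, so their super-gradients are nonnegative and decreasing — the property the weighting scheme of Section 4 relies on.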

4. Problem Definition

In multispectral remote sensing applications, the target images’ and the reference images’ intensity distributions are different, but their edge directions and texture information are often similar. One of the reference images can be used as a prior image in the reconstruction process. This section gives the generalized nonconvex low-rank approximation algorithm for CS recovery exploiting the nonlocal structured sparsity via low-rank approximation for remote sensing image reconstruction. Let x C N denote the target images, where x is sparsely expressed as x = ψ α with a sparse signal α, and y C M denotes the observed data; the measurement matrix ϕ C M × N ( M < N ) maps x to y. The remote sensing image reconstruction problem is to reconstruct x. Figure 2 shows the relationship between high resolution target images and low resolution observed images. The different low and high resolution images contain different time series or different bands. In some cases, we use one or two different time series or different bands as the reference image and the other as the unknown target image.
In this section, a compressed sensing model with reference images integrating low-rank regularization and wavelet textural constraints is proposed and solved with the conjugate gradient algorithm and singular value thresholding (SVT). Figure 3 shows the framework of the proposed GNLR-RI method. The reconstruction objective of the generalized nonconvex low-rank approximation with the reference information model can be presented as:
$$\left(\hat{\alpha},\hat{L}_{i}\right) = \arg\min_{\alpha,L_{i}} \eta\sum_{i}\left\|\tilde{R}_{i}\psi\alpha-L_{i}\right\|_{F}^{2} + \lambda S\left(L_{i},\varepsilon\right), \quad \mathrm{s.t.}\;\left\|y-\phi\psi\alpha\right\|_{2}^{2} < \delta,\;\; \sum_{j}^{U} w_{j}\left\|\alpha_{j}-\alpha_{ref\_j}\right\|_{F}^{2} < \xi \quad (14)$$
where $\tilde{R}_{i}x = [R_{i_0}x, R_{i_1}x, \ldots, R_{i_{m-1}}x] = X_{i}$ denotes the matrix of patches similar to the patch $x_{i}$. The operator $R$ extracts a patch at a given location, and $\sum_{i}\tilde{R}_{i}\psi\alpha$ represents the patch averaging result. The constrained minimization of the objective function can be turned into the following unconstrained optimization:
$$\left(\hat{\alpha},\hat{L}_{i}\right) = \arg\min_{\alpha,L_{i}} \left\|y-\phi\psi\alpha\right\|_{2}^{2} + \sum_{j}^{U} w_{j}\left\|\alpha_{j}-\alpha_{ref\_j}\right\|_{F}^{2} + \eta\sum_{i}\left\|\tilde{R}_{i}\psi\alpha-L_{i}\right\|_{F}^{2} + \lambda S\left(L_{i},\varepsilon\right) \quad (15)$$

4.1. Solving the Proposed Model

As it is difficult to get a closed solution from Equation (15) directly, the optimization process is divided into three steps.
$$\alpha^{k+1} = \arg\min_{\alpha} \left\|y-\phi\psi\alpha\right\|_{2}^{2} + \sum_{j}^{U} w_{j}\left\|\alpha_{j}-\alpha_{ref\_j}\right\|_{F}^{2} \quad (16)$$

$$\hat{L}_{i} = \arg\min_{L_{i}} \sum_{i}\left\|\tilde{R}_{i}\psi\alpha-L_{i}\right\|_{F}^{2} + \lambda S\left(L_{i},\varepsilon\right) \quad (17)$$

$$\alpha^{k+1} = \arg\min_{\alpha} \left\|y-\phi\psi\alpha\right\|_{2}^{2} + \eta\sum_{i}\left\|\tilde{R}_{i}\psi\alpha-L_{i}\right\|_{F}^{2} \quad (18)$$
In the first step, the reference image wavelet term is an $\ell_2$-norm regularization; thus, the conjugate gradient algorithm can be used to solve it. By setting the derivative of the objective in Equation (16) with respect to the sparse coefficient $\alpha$ to zero, we obtain:
$$-2\psi^{T}\phi^{T}\left(y-\phi\psi\alpha^{k+1}\right) + 2\sum_{j}^{U} w_{j}\left(\alpha_{j}-\alpha_{ref\_j}\right) = 0 \quad (19)$$

$$\psi^{T}\phi^{T}\phi\psi\,\alpha^{k+1} = \psi^{T}\phi^{T}y - \sum_{j}^{U} w_{j}\left(\alpha_{j}-\alpha_{ref\_j}\right) \quad (20)$$
where $\alpha = \sum_{j}^{U} w_{j}\alpha_{j}$; we can then employ the conjugate gradient algorithm to solve Equation (20).
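Equation (20) is a symmetric positive (semi-)definite linear system, so it can be solved matrix-free with conjugate gradients. The sketch below is textbook CG; the callable `apply_A` stands in for the operator $\psi^{T}\phi^{T}\phi\psi$ (plus any weighted reference term) and is an assumption of this illustration, not the authors' code.

```python
import numpy as np

def conjugate_gradient(apply_A, b, n_iter=100, tol=1e-12):
    """Textbook CG for A x = b, with A symmetric positive definite and
    available only through the matrix-vector product `apply_A`."""
    x = np.zeros_like(b)
    r = b - apply_A(x)       # initial residual
    p = r.copy()             # initial search direction
    rs = float(r @ r)
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / float(p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rs_new = float(r @ r)
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x
```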
For the second step, the solution of L i can be obtained by solving the following minimization problem:
$$L_{i} = \arg\min_{L_{i}} \eta\left\|\tilde{R}_{i}x-L_{i}\right\|_{F}^{2} + \lambda S\left(L_{i},\varepsilon\right) \quad (21)$$
where R ˜ i x = R i 0 x , R i 1 x , , R i m - 1 x = X i denotes the patch matrix similar to the patch x i . By substituting Equation (11) into Equation (21), L i can be rewritten as:
$$\min_{L_{i}} \left\|X_{i}-L_{i}\right\|_{F}^{2} + \frac{\lambda}{\eta}\sum_{j=1}^{n_{0}} G\left(\sigma_{j}\left(L_{i}\right)+\varepsilon\right) \quad (22)$$
where $\sigma_{j}$ is the $j$-th singular value of $L_{i}$. Let $f(\sigma) = \sum_{j=1}^{n_{0}} G(\sigma_{j}+\varepsilon)$, which can be handled by a local minimization method. With a first-order Taylor expansion, $f(\sigma)$ can be approximated as:
$$f\left(\sigma\right) \approx f\left(\sigma^{k}\right) + \left\langle \nabla f\left(\sigma^{k}\right),\, \sigma-\sigma^{k} \right\rangle \quad (23)$$
where σ k denotes the value of σ in the k-th iteration. This can be worked out by solving the following equation iteratively,
$$L_{i}^{k+1} = \arg\min_{L_{i}} \left\|X_{i}-L_{i}\right\|_{F}^{2} + \frac{\lambda}{\eta}\sum_{l=1}^{n_{0}} G'\left(\sigma_{l}^{k}+\varepsilon\right)\sigma_{l} \quad (24)$$
After the constants in Equation (24) are ignored, Equation (22) is rewritten into:
$$L_{i}^{k+1} = \arg\min_{L_{i}} \frac{1}{2}\left\|X_{i}-L_{i}\right\|_{F}^{2} + \tau\,\Psi\left(L_{i},\omega^{k}\right) \quad (25)$$

where $\tau = \lambda/(2\eta)$ and $\Psi(L_{i},\omega^{k}) = \sum_{l=1}^{n_{0}} \sigma_{l}\,\omega_{l}^{k}$ denotes the weighted nuclear norm, with weights $\omega_{l}^{k} = G'\left(\sigma_{l}^{k}+\varepsilon\right)$. According to the proximal operator of the weighted nuclear norm, the solution in the $(k+1)$-th iteration can be obtained as:
$$L_{i}^{k+1} = U\left(\tilde{\Sigma} - \tau\operatorname{diag}\left(\omega^{k}\right)\right)_{+} V^{T} \quad (26)$$
where $U\tilde{\Sigma}V^{T}$ denotes the singular value decomposition of $X_{i}$ and $(x)_{+} = \max(x, 0)$.
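The weighted singular value thresholding step can be sketched as follows: shrink each singular value by its own weighted threshold and truncate at zero. The plain NumPy SVD stands in for whatever factorization the authors used.

```python
import numpy as np

def weighted_svt(X, weights, tau):
    """Weighted singular value thresholding:
    L = U * max(Sigma - tau * diag(w), 0) * V^T. Because the weights come
    from the super-gradient of a concave surrogate G, large singular values
    receive small weights and are shrunk less."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau * np.asarray(weights)[:len(s)], 0.0)
    return (U * s_shrunk) @ Vt
```

With uniform weights this reduces to the plain SVT operator used for the nuclear norm.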
For the third step, after all of the $L_{i}$ have been obtained, the following minimization problem is solved with the conjugate gradient (CG) algorithm to reconstruct the wavelet coefficient matrix; its first-order optimality condition is:

$$-2\psi^{T}\phi^{T}\left(y-\phi\psi\alpha^{k+1}\right) + 2\eta\sum_{i}\psi^{T}\tilde{R}_{i}^{T}\left(\tilde{R}_{i}\psi\alpha^{k+1}-L_{i}\right) = 0 \quad (27)$$
By setting the derivatives of the objective formula in Equation (27) with respect to the sparse coefficient α to zero, we can obtain:
$$\left(\psi^{T}\phi^{T}\phi\psi + \eta\sum_{i}\psi^{T}\tilde{R}_{i}^{T}\tilde{R}_{i}\psi\right)\alpha^{k+1} = \psi^{T}\phi^{T}y + \eta\sum_{i}\psi^{T}\tilde{R}_{i}^{T}L_{i} \quad (28)$$
Noting that $\sum_{i}\tilde{R}_{i}^{T}\tilde{R}_{i}$ counts the number of overlapping patches covering each pixel and $\sum_{i}\tilde{R}_{i}^{T}L_{i}$ represents the patch averaging result, we can employ the conjugate gradient algorithm to solve Equation (28).
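In implementation terms, these patch sums amount to pasting the denoised patches back into the image and dividing by the per-pixel overlap count. A minimal sketch with our own illustrative signature:

```python
import numpy as np

def aggregate_patches(patches, positions, shape, patch=8):
    """Average overlapping patches back into an image: the numerator
    accumulates the pasted patch values and the denominator accumulates
    the per-pixel overlap count."""
    num = np.zeros(shape)
    den = np.zeros(shape)
    for vec, (i, j) in zip(patches, positions):
        num[i:i + patch, j:j + patch] += np.reshape(vec, (patch, patch))
        den[i:i + patch, j:j + patch] += 1.0
    # pixels covered by no patch keep the value 0
    return num / np.maximum(den, 1.0)
```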
The image reconstruction algorithm, summarized below as Algorithm 1, is called GNLR-RI; $G$ stands for any of the eight nonconvex surrogate functions mentioned above. The difference between the NLR-CS-baseline and GNLR-RI mainly lies in the approximation scheme for the $\ell_1$ norm and the representation of texture features. During the reconstruction process, the low-rank operation therefore strongly encourages similarity between patches of the target image and the reference image.
Algorithm 1: GNLR-RI.
Input: Sparse coefficient of reference image α r e f
Initialization: Set the target wavelet coefficient α to zero; $\omega_{i} = (1, 1, \ldots, 1)^{T}$, $\Lambda^{1} = 0$, $\mu^{1} = 0$; parameters $\lambda$, $\eta$, $p$, $\tau = \lambda/(2\eta)$, $\beta$, $K$ and $J$.
While convergence criterion not met, do
1. Compute the texture feature vectors F, F r e f of target wavelet coefficient α and reference wavelet coefficient α r e f using Equations (5) and (6).
2. Add the constrained term k = 1 U w k α k - α r e f _ k 2 2 to the compressed sensing target objective function.
3. Add the low-rank approximation based on the reference image term.
4. Solve the optimization problem Equation (20) via the conjugate gradient algorithm.
5. Iteration: For k > K 0 , do
 1) Form a matrix X i making up similar patches of x k and set L i 0 = X i
 2) For j = 1 , 2 , , J , do
  a) If k > K 0 , update the weights
$$\omega_{l}^{k} = G'\left(\sigma_{l}^{k}+\varepsilon\right) \quad (29)$$
  b) Compute L i via Equation (26) and output α i = α i j when j = J .
  End for
 End for
6. Solve the optimization problem Equation (28) via the conjugate gradient algorithm.
Output: $\hat{X} = X^{(K)}$

5. Experiment Results

During the experiments, single-channel and multichannel satellite images from MODIS, Landsat 7, Landsat 8 and Google Earth are used as simulated data to test the performance of the proposed reconstruction framework. (1) Landsat 7 and Landsat 8 provide panchromatic images at 15-m and multispectral images at 30-m spatial resolution. (2) MODIS provides images at 500-m resolution. (3) Google Earth provides three-channel color test images. For the Landsat 7 and Landsat 8 data, Band 4 (30-m resolution) is the near-infrared band, which has good spectral characteristics (e.g., bodies of water have clear outlines), and Band 8 is the panchromatic band at 15-m resolution. Therefore, Band 4 is chosen as the target image and Band 8 as the reference image; the latter is down-sampled to 30-m spatial resolution so that similar blocks can be grouped and injected into the reconstructed image to compensate over-smoothed areas nonlinearly. For the MODIS data, we use Band 4 as the target image and Band 3 as the reference image to evaluate the proposed algorithm.
Furthermore, our proposed algorithm is compared to the well-known reconstruction-based models NLR-CS-baseline [7], orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP) [27], multi-hypothesis predictions combined with block-based compressed sensing with smoothed projected Landweber reconstruction (MH-BCS-SPL) [44], recovery via collaborative sparsity (RCoS) [45] and the adaptively-learned sparsifying basis via $\ell_0$ minimization (ALSB) [46] by evaluating the experimental results quantitatively and visually. NLR-CS-baseline uses the standard nuclear norm, i.e., $\ell_1$ norm minimization on the singular values. CoSaMP and OMP reconstruct the signal using a very small number of measurement points. MH-BCS-SPL uses multi-hypothesis prediction to generate a residual in the domain of the compressed-sensing random projections. RCoS enforces local 2D sparsity and nonlocal 3D sparsity simultaneously in an adaptive hybrid space-transform domain to exploit the intrinsic sparsity of natural images and to confine the CS solution space. ALSB enforces the intrinsic sparsity of natural images by sparsely representing overlapped image patches with an adaptively-learned sparsifying basis under the $\ell_0$ norm.

5.1. Evaluation of the Low Rank Penalty Function and Different Nonconvex Surrogate Functions

First, we justify the necessity of the low-rank approximation. For convenience, the objective function of Equation (15) with the L1 norm and without reference information is denoted as L1-WRI. The reconstruction method that introduces the reference information constraint with the L1 norm, Equation (9), is denoted as L1-RI. The objective function of Equation (15) that employs the nonconvex logarithm surrogate is denoted as GNLR-RI-Log. Figure 4 demonstrates the results of these three methods: the proposed algorithm, which combines the reference information constraint with the nonconvex low-rank approximation, provides the best performance.
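The low-rank step in our solver is carried out by singular-value thresholding on each group of similar patches. The sketch below shows the standard SVT operator, i.e., the proximal operator of the convex nuclear norm used by the L1 baselines; the function name is ours, and the nonconvex variants replace the uniform shrinkage with a surrogate-dependent weighting:

```python
import numpy as np

def svt(M, tau):
    """Singular-value thresholding: soft-threshold the singular values of M by tau.
    This is the proximal operator of the nuclear norm (the convex baseline)."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    return (U * np.maximum(s - tau, 0.0)) @ Vt

# Shrinking a diagonal matrix shows the effect directly: small singular
# values are zeroed, so the effective rank of the output drops.
R = svt(np.diag([3.0, 1.0, 0.2]), 0.5)
print(np.round(np.diag(R), 3))  # singular values 3.0, 1.0, 0.2 -> 2.5, 0.5, 0.0
```

Applying this operator to a matrix of grouped similar patches suppresses the weak singular directions, which is why the reconstruction is naturally robust to noise.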
Second, in this section, the reference images of Landsat 7 and Landsat 8 are down-sampled to the same scale as the target images, and the MODIS images share the same size of 256 × 256 pixels. Common indices, the peak signal to noise ratio (PSNR) and the root mean square error (RMSE), are adopted for quantitative assessment. Table 2 reports PSNR (dB)/RMSE for the different generalized nonconvex surrogate functions, and the corresponding reconstructed images are displayed in Figure 5. It can be observed that continuous nonconvex surrogates, such as Log, Lp, Geman, Lap and Etp, perform on par with piecewise surrogates, such as Scad, Cappedl1 and Mcp.
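Both indices are standard; for 8-bit imagery they can be computed as below (a generic sketch, with the peak value assumed to be 255):

```python
import numpy as np

def rmse(ref, rec):
    """Root mean square error between a reference and a reconstructed image."""
    return float(np.sqrt(np.mean((ref.astype(float) - rec.astype(float)) ** 2)))

def psnr(ref, rec, peak=255.0):
    """Peak signal to noise ratio in dB; higher is better, infinite for a perfect match."""
    e = rmse(ref, rec)
    return float("inf") if e == 0.0 else 20.0 * np.log10(peak / e)

a = np.zeros((8, 8))
b = np.full((8, 8), 5.0)
print(rmse(a, b), round(psnr(a, b), 2))  # 5.0 and 20*log10(255/5) = 34.15
```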
In this subsection, we focus on the nonconvex penalties commonly proposed for recovering sparsity, shown in Figure 1. All of the penalty functions share two properties: they are concave and monotonically increasing on [0, ∞), so their super-gradients are nonnegative and monotonically decreasing. Our proposed general solver is based on this key observation. One nonconvex penalty that circumvents the weak points of the Lasso is the SCAD penalty, which has the unbiasedness property. Among the other sparsity-inducing penalties, a popular one is the Lp pseudo-norm with 0 < p < 1. Its main interest is that it is a quasi-smooth approximation of the L0 sparsity measure as p tends toward zero, and it can provide sparser solutions than the Lasso. For the Log penalty, we shift the coefficients by a small quantity to avoid an infinite value when the parameter vanishes.
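The key observation can be checked numerically. Below, two of the surrogates from Table 1 (Log and Geman, with illustrative values of θ and γ, not the paper's settings) are evaluated on a grid, confirming that each penalty is increasing while its super-gradient is nonnegative and decreasing:

```python
import numpy as np

theta, gamma = 1.0, 2.0  # illustrative values only

def log_pen(x):    return theta / np.log(gamma + 1.0) * np.log(gamma * x + 1.0)
def log_grad(x):   return theta * gamma / ((gamma * x + 1.0) * np.log(gamma + 1.0))
def geman_pen(x):  return theta * x / (x + gamma)
def geman_grad(x): return theta * gamma / (x + gamma) ** 2

x = np.linspace(0.0, 5.0, 200)
for pen, grad in ((log_pen, log_grad), (geman_pen, geman_grad)):
    assert np.all(np.diff(pen(x)) > 0)                 # monotonically increasing
    g = grad(x)
    assert np.all(g >= 0) and np.all(np.diff(g) < 0)   # nonnegative, decreasing
print("both surrogates are concave-increasing with decreasing super-gradients")
```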

5.2. Performance Comparison for Single-Channel Compressed Sensing

In this subsection, the nonconvex logarithm surrogate (GNLR-RI-Log) is selected to test the performance in the single-channel experiment. The correlation coefficient (CC) and structural similarity (SSIM) are added for a quantitative assessment of the reconstruction results. Table 3 shows the indices PSNR (dB), RMSE, CC and SSIM of the Log function for different images and patch sizes, and Figure 6 illustrates the recovery quality of the different images. It can be observed that the proposed GNLR-RI-Log reconstructs the images more precisely and obtains good group similarity by extracting structural information from the reference image. The results also illustrate that an appropriate patch size should be selected in the processing procedure to derive the optimal reconstruction quality.
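CC is the Pearson correlation between the reference and reconstructed images (SSIM is more involved; library implementations such as scikit-image's `structural_similarity` exist). A minimal sketch of CC:

```python
import numpy as np

def cc(a, b):
    """Pearson correlation coefficient between two images; 1 means a perfect linear match."""
    a = a.astype(float).ravel() - a.mean()
    b = b.astype(float).ravel() - b.mean()
    return float(a @ b / np.sqrt((a @ a) * (b @ b)))

x = np.arange(16.0).reshape(4, 4)
print(cc(x, 2.0 * x + 3.0), cc(x, -x))  # 1.0 and -1.0 up to rounding
```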

5.3. Multichannel Reconstruction with References

For the multichannel case, Bands 4/5/6 of Landsat 8 data and Bands 1/2/3 of Google Earth data are set as the simulated images to be reconstructed. The reference images are derived from Band 8 of Landsat 8 and from the corresponding gray image of Google Earth. Commonly-used metrics for multichannel images, the spectral angle mapper (SAM), RASE, the relative dimensionless global error (ERGAS) and Q4, are evaluated to check the suitability for quantitative remote sensing, since PSNR alone does not fully reflect the reconstruction quality. Note that the ideal result is one for Q4, while it is zero for SAM, RASE and ERGAS. Table 4 shows that GNLR-RI-Log has a greater advantage on the Google Earth data than on the Landsat 8 data. For the Landsat 8 data, GNLR-RI-Log works better than NLR-CS-baseline, OMP, CoSaMP and RCoS in terms of PSNR, while it performs worse than MH-BCS-SPL and ALSB; except for ERGAS, on the other three indices, SAM, RASE and Q4, it obtains comparable or better results than most of the competing methods. On the other hand, for the Google Earth data, GNLR-RI-Log produces more moderate results and performs best among the competing methods in terms of PSNR, SAM, RASE, ERGAS and Q4.
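SAM and ERGAS can be computed as below (a generic sketch; the cubes are stacked as (bands, rows, cols), and `ratio` in ERGAS is the resolution ratio between the compared images, assumed to be 1 here):

```python
import numpy as np

def sam(ref, rec):
    """Mean spectral angle (radians) between per-pixel spectra; 0 means identical spectra."""
    r = ref.reshape(ref.shape[0], -1).astype(float)
    t = rec.reshape(rec.shape[0], -1).astype(float)
    cos = (r * t).sum(0) / (np.linalg.norm(r, axis=0) * np.linalg.norm(t, axis=0))
    return float(np.mean(np.arccos(np.clip(cos, -1.0, 1.0))))

def ergas(ref, rec, ratio=1.0):
    """Relative dimensionless global error; 0 means a perfect reconstruction."""
    band_terms = [
        (np.sqrt(np.mean((ref[b].astype(float) - rec[b].astype(float)) ** 2))
         / ref[b].mean()) ** 2
        for b in range(ref.shape[0])
    ]
    return float(100.0 * ratio * np.sqrt(np.mean(band_terms)))

cube = np.random.default_rng(0).uniform(1.0, 255.0, size=(3, 16, 16))
print(sam(cube, cube), ergas(cube, cube))  # both ~0 for a perfect reconstruction
```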

5.4. Parameter Evaluation

Similar to the detail-preserving regularity scheme, this subsection evaluates the parameter settings. In the reference information constrained reconstruction algorithm, there are two free parameters, λ and η, in the reconstruction objective of Equation (17). Bands 4/8 of Landsat 8 and Bands 2/3 and 3/4 of MODIS data were tested in this subsection. PSNR, RMSE, CC and SSIM values versus the parameters λ and η are plotted in Figure 7. Finer tuning of the parameters may lead to better results, but the results with the chosen settings are consistently promising. Table 5 lists experiments in which λ and η are changed manually to analyze their influence.
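The sensitivity study amounts to a grid sweep over (λ, η), recording the quality index at each point. A sketch of such a loop, with a toy stand-in for the reconstruct-and-score step (a real run would call the solver and compute PSNR instead):

```python
import itertools

def score(lmbda, eta):
    # Toy stand-in for "reconstruct with (lambda, eta), then compute PSNR":
    # a smooth surface peaked near (0.45, 0.45), for illustration only.
    return -((lmbda - 0.45) ** 2 + (eta - 0.45) ** 2)

lambdas = [0.25, 0.45, 0.67, 0.87]  # grid values taken from Table 5
etas = [0.07, 0.25, 0.33, 0.45]
best = max(itertools.product(lambdas, etas), key=lambda p: score(*p))
print(best)  # (0.45, 0.45)
```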
To make a fair comparison among the competing methods, we have carefully tuned their parameters to achieve the best performance. The parameters of the competing methods are set as follows: for OMP and CoSaMP, the default parameters (if required as input arguments) are used; for the NLR-CS-baseline algorithm, the patch size is 6 × 6, and m = 45 similar patches are selected for each exemplar patch; for the MH-BCS-SPL method, an empirical regularization parameter λ is used, and the initial search window is set to w = 1. We have also carefully tuned the parameters of the RCoS and ALSB algorithms to achieve the best possible performance.

5.5. Performances with Varied Noise Levels

In this subsection, we discuss the impact of noise on the reconstruction performance of the proposed algorithm at a sampling rate of 0.1 in the compressive sensing observations. The nonconvex logarithm surrogate (GNLR-RI-Log) is selected and compared to the NLR-CS-baseline, CoSaMP and OMP algorithms. Bands 4/8 of Landsat 8 and Bands 3/4 of MODIS images were tested with added zero-mean Gaussian noise at levels σ = 0, 4, 8 and 12. As can be seen in Table 6, the proposed GNLR-RI-Log method performs much better than the NLR-CS-baseline method at every noise level. Figure 8 shows the reconstruction results for a sampling rate of 0.1 and a noise level of σ = 8. It can be observed that GNLR-RI-Log reconstructs the images more precisely than NLR-CS-baseline, CoSaMP and OMP, compensates for the over-smoothness and obtains good group similarities by extracting structural information from the reference images.
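For reference, the PSNR of an image corrupted by zero-mean Gaussian noise of standard deviation σ is about 20·log10(255/σ), so the degradation across Table 6's noise levels can be sanity-checked with a small simulation (the identity "reconstruction" below isolates the noise effect only; it is not the paper's pipeline):

```python
import numpy as np

def psnr(ref, rec, peak=255.0):
    return 20.0 * np.log10(peak / np.sqrt(np.mean((ref - rec) ** 2)))

rng = np.random.default_rng(1)
image = rng.uniform(0.0, 255.0, size=(64, 64))
for sigma in (4.0, 8.0, 12.0):
    noisy = image + rng.normal(0.0, sigma, image.shape)
    # Measured PSNR should sit near the theoretical 20*log10(255/sigma).
    print(sigma, round(psnr(image, noisy), 2), round(20.0 * np.log10(255.0 / sigma), 2))
```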

5.6. Computational Complexity

A good remote sensing reconstruction model is expected to be not only effective, but also computationally efficient. The single-channel image is processed on an Intel(R) Core(TM) i7-6700 CPU @ 3.41 GHz with our MATLAB implementation (64-bit MATLAB 2015). The computational costs of the representative methods are shown in Table 7, where a 256 × 256 single-channel image is used as the input and iteration is abbreviated to iter. In terms of execution time, reconstruction with GNLR-RI-Log is, as expected, faster than RCoS and ALSB thanks to the low-rank approximation; RCoS and ALSB are much slower than GNLR-RI-Log and MH-BCS-SPL because they learn an adaptive sparsifying basis from a fraction of all patches. OMP and CoSaMP are the fastest reconstruction methods, with runtimes of 0.106 and 0.031 s per iteration, respectively. In summary, GNLR-RI-Log achieves favorable performance in terms of both reconstruction accuracy and efficiency.
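Per-iteration runtimes like those in Table 7 can be measured with a simple wall-clock harness (a generic sketch; the 256 × 256 SVD here stands in for one dominant step of a low-rank iteration and is not the full solver):

```python
import time
import numpy as np

def seconds_per_iteration(step, n_iter=10):
    """Average wall-clock seconds for one call of `step`."""
    start = time.perf_counter()
    for _ in range(n_iter):
        step()
    return (time.perf_counter() - start) / n_iter

A = np.random.default_rng(0).standard_normal((256, 256))
t = seconds_per_iteration(lambda: np.linalg.svd(A, full_matrices=False))
print(f"{t:.4f} s/iter")  # machine-dependent
```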

6. Conclusions

This paper proposes a novel image reconstruction scheme that introduces the wavelet coefficients of reference images as prior information, based on compressed sensing and a generalized nonconvex low-rank approximation. Nonlocal low-rank regularization enables us to exploit both the similarity of patches and the nonconvexity of rank minimization. In addition, the single-value threshold and conjugate gradient algorithms jointly offer a principled and computationally-efficient solution to image reconstruction. The approach has the following characteristics: (1) the wavelet coefficients act as a texture feature constraint and extract structural information from the reference images to reconstruct remote sensing images; (2) high-resolution images can be achieved by simultaneously using compressed sensing and the generalized nonlocal low-rank approximation. Experiments show that the proposed method can achieve higher reconstruction quality than the state-of-the-art approaches. However, the proposed method has limited performance for regions and bands where the spatial correlation is not high. In the future, we will further investigate this and use change detection approaches to improve the performance of our algorithm.

Acknowledgments

The anonymous reviewers’ comments and suggestions greatly improved our paper. We are grateful for their kind help. This work is supported by the National Natural Science Foundation of China (No. 41471368, No. 41571413 and No. 61362001), the Institute of Remote Sensing and Digital Earth of Chinese Academy of Sciences (RADI) director foundation, Jiangxi Advanced Projects for Post-doctoral Research Funds (2014KY02) and the Graduate Innovation Foundation of Jiangxi province (No. YC2015-S038).

Author Contributions

All co-authors of this manuscript significantly contributed to all phases of the investigation. They contributed equally to the preparation, analysis, review and editing of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, X.; Shen, H.; Zhang, L.; Li, H. Sparse-based reconstruction of missing information in remote sensing images from spectral/temporal complementary information. ISPRS J. Photogramm. Remote Sens. 2015, 106, 1–15. [Google Scholar] [CrossRef]
  2. Wang, L.; Lu, K.; Liu, P. Compressed sensing of a remote sensing image based on the priors of the reference image. IEEE Geosci. Remote Sens. Lett. 2015, 12, 736–740. [Google Scholar] [CrossRef]
  3. Geng, H.; Liu, P.; Wang, L.; Chen, L. Compressed sensing based remote sensing image reconstruction using an auxiliary image as priors. In Proceedings of the 2014 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec City, QC, Canada, 13–18 July 2014; pp. 2499–2502.
  4. Liu, P.; Huang, F.; Li, G.; Liu, Z. Remote-sensing image denoising using partial differential equations and auxiliary images as priors. IEEE Geosci. Remote Sens. Lett. 2012, 9, 358–362. [Google Scholar] [CrossRef]
  5. Hu, T.; Zhang, H.; Shen, H.; Zhang, L. Robust registration by rank minimization for multiangle hyper/multispectral remotely sensed imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2443–2457. [Google Scholar] [CrossRef]
  6. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711. [Google Scholar] [CrossRef] [PubMed]
  7. Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive sensing via nonlocal low-rank regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632. [Google Scholar] [CrossRef] [PubMed]
  8. Peng, L.; Eom, K.B. Restoration of multispectral images by total variation with auxiliary image. Opt. Lasers Eng. 2013, 51, 873–882. [Google Scholar]
  9. Peng, L.; Dingsheng, L.; Zhu, L. Total variation restoration of the defocus image based on spectral priors. Int. Soc. Opt. Photon. Remote Sens. 2010. [Google Scholar] [CrossRef]
  10. Madni, A.M. A systems perspective on compressed sensing and its use in reconstructing sparse networks. IEEE Syst. J. 2014, 8, 23–27. [Google Scholar] [CrossRef]
  11. Wei, L.; Prasad, S.; Fowler, J.E. Spatial information for hyperspectral image reconstruction from compressive random projections. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1379–1383. [Google Scholar]
  12. Xiao, D.; Yunhua, Z. A novel compressive sensing algorithm for SAR imaging. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 708–720. [Google Scholar]
  13. Fowler, J.E. Compressive-projection principal component analysis. IEEE Trans. Image Process. 2009, 18, 2230–2242. [Google Scholar] [CrossRef] [PubMed]
  14. Ly, N.H.; Du, Q.; Fowler, J.E. Reconstruction from random projections of hyperspectral imagery with spectral and spatial partitioning. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 466–472. [Google Scholar] [CrossRef]
  15. Chen, C.; Wei, L.; Eric, W.T.; James, E.F. Reconstruction of hyperspectral imagery from random projections using multi-hypothesis prediction. IEEE Trans. Geosci. Remote Sens. 2014, 52, 365–374. [Google Scholar] [CrossRef]
  16. Qiegen, L.; Shanshan, W.; Ying, L. Adaptive dictionary learning in sparse gradient domain for image recovery. IEEE Trans. Image Process. 2013, 22, 4652–4663. [Google Scholar]
  17. Zhang, X.; Bai, T.; Meng, H.; Chen, J. Compressive sensing-based ISAR imaging via the combination of the sparsity and nonlocal total variation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 990–994. [Google Scholar] [CrossRef]
  18. He, X.; Condat, L.; Chanussot, J.; Xia, J. Pansharpening using total variation regularization. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 166–169.
  19. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  20. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total variation regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  21. Zhang, J.; Zhong, P.; Chen, Y.; Li, S. Regularized deconvolution network for the representation and restoration of optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2617–2627. [Google Scholar] [CrossRef]
  22. Yue, L.; Shen, H.; Yuan, Q.; Zhang, L. A locally adaptive L1/L2 norm for multi-frame super-resolution of images with mixed noise and outliers. Signal Process. 2014, 105, 156–174. [Google Scholar] [CrossRef]
  23. Zheng, Z.; Xu, Y.; Yang, J.; Li, X.; Zhang, D. A survey of sparse representation: algorithms and applications. IEEE Access 2015, 3, 490–530. [Google Scholar] [CrossRef]
  24. Qu, X.; Guo, D.; Ning, B.; Hou, Y.; Lin, Y.; Cai, S.; Chen, Z. Undersampled MRI reconstruction with patch-based directional wavelets. Magn. Resonance Imaging 2012, 30, 964–977. [Google Scholar]
  25. Skretting, K.; Engan, K. Recursive least squares dictionary learning algorithm. IEEE Trans. Image Process 2010, 58, 2121–2130. [Google Scholar] [CrossRef]
  26. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  27. Zobly, S.M.S.; Kadah, Y.M. Orthogonal matching pursuit and compressive sampling matching pursuit for Doppler ultrasound signal reconstruction. In Proceedings of the 2012 Cairo International, Biomedical Engineering Conference (CIBEC), Giza, Egypt, 20–22 December 2012; pp. 52–55.
  28. Ravazzi, C.; Fosson, S.M.; Magli, E. Distributed iterative thresholding for 0/1-regularized linear inverse problems. IEEE Trans. Inf. Theory 2015, 61, 2081–2100. [Google Scholar] [CrossRef]
  29. Zhang, X.; Burger, M.; Bresson, X.; Osher, S. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 2010, 3, 253–276. [Google Scholar]
  30. Liu, Q.; Peng, X.; Liu, J.; Yang, D.; Liang, D. A weighted two-level bregman method with dictionary updating for nonconvex MR image reconstruction. J. Biomed. Imaging 2014. [Google Scholar] [CrossRef] [PubMed]
  31. Yang, S.; Liu, Z.; Wang, M.; Sun, F.; Jiao, L. Multitask learning and sparse representation based super-resolution reconstruction of synthetic aperture radar images. In Proceedings of the 2011 International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM), Xiamen, China, 10–12 January 2011; pp. 1–5.
  32. Fazel, M.; Hindi, H.; Boyd, S.P. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. Proc. Am. Control Conf. 2003, 3, 2156–2162. [Google Scholar]
  33. Gasso, G.; Rakotomamonjy, A.; Canu, S. Recovering sparse signals with a certain family of nonconvex penalties and DC programming. IEEE Trans. Signal Process. 2009, 57, 4686–4698. [Google Scholar] [CrossRef]
  34. Chartrand, R. Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 2007, 14, 707–710. [Google Scholar] [CrossRef]
  35. Trzasko, J.; Manduca, A. Highly undersampled magnetic resonance image reconstruction via homotopic-minimization. IEEE Trans. Med. Imaging 2009, 28, 106–121. [Google Scholar] [CrossRef] [PubMed]
  36. Lu, C.; Tang, J.; Yan, S.; Lin, Z. Generalized nonconvex nonsmooth low-rank minimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 4130–4137.
  37. Friedman, J.H. Fast sparse regression and classification. Int. J. Forecast. 2012, 28, 722–738. [Google Scholar] [CrossRef]
  38. Frank, L.E.; Friedman, J.H. A statistical view of some chemometrics regression tools. Technometrics 1993, 35, 109–135. [Google Scholar] [CrossRef]
  39. Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [PubMed]
  40. Gao, C.; Wang, N.; Yu, Q.; Zhang, Z. A Feasible Nonconvex Relaxation Approach to Feature Selection. Available online: http://bcmi.sjtu.edu.cn/ zhzhang/papers/etp.pdf (accessed on 4 March 2016).
  41. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  42. Zhang, T. Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res. 2010, 11, 1081–1107. [Google Scholar]
  43. Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 38, 894–942. [Google Scholar]
  44. Chen, C.; Tramel, E.W.; Fowler, J.E. Compressed-sensing recovery of images and video using multi-hypothesis predictions. In Proceedings of the 45th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 6–9 November 2011; pp. 1193–1198.
  45. Zhang, J.; Zhao, D.; Zhao, C.; Xiong, R.; Ma, S.; Gao, W. Image compressive sensing recovery via collaborative sparsity. IEEE J. Emerg. Sel. Top. Circuits Syst. 2012, 2, 380–391. [Google Scholar] [CrossRef]
  46. Zhang, J.; Zhao, C.; Zhao, D.; Gao, W. Image compressive sensing recovery using adaptively learned sparsifying basis via L0 minimization. Signal Process. 2014, 103, 114–126. [Google Scholar] [CrossRef]
Figure 1. Some popular nonconvex surrogate functions of the L1 norm (left) and their super-gradients (right). (a) Log penalty; (b) Lp norm penalty; (c) Geman penalty; (d) Laplace (Lap) penalty; (e) Exponential type penalty (Etp); (f) Smoothly-clipped absolute deviation (Scad) penalty; (g) Capped L1 (Cappedl1) penalty; (h) Minimax concave penalty (Mcp).
Figure 2. Relationship between the high resolution image and the low resolution image.
Figure 3. Framework of the proposed GNLR-RI method.
Figure 4. The reconstruction image of three methods. (a) L1-without reference information (WRI); (b) L1-RI; (c) GNLR-RI-Log.
Figure 5. Reconstructed images with different nonconvex surrogate functions. (a) GNLR-Log; (b) GNLR-Lp; (c) GNLR-Geman; (d) GNLR-Lap; (e) GNLR-Etp; (f) GNLR-Scad; (g) GNLR-Cappedl1; (h) GNLR-Mcp.
Figure 6. The reconstruction images of Landsat 7 and Landsat 8. (a) Reconstruction Image 1; (b) Reconstruction Image 2; (c) Reconstruction Image 3; (d) Reconstruction Image 4.
Figure 7. Results of the proposed method with different values of parameters λ and η tested in Bands 4/3 of MODIS. (a) PSNR versus λ and η; (b) RMSE versus λ and η; (c) CC versus λ and η; (d) SSIM versus λ and η.
Figure 8. The reconstruction results for a sample rate of 0.1 and a noise level of σ = 8. (a–d) Reconstruction of Landsat 8 images with GNLR-RI-Log, NLR-CS-baseline, CoSaMP and OMP; (e–h) reconstruction of MODIS images with GNLR-RI-Log, NLR-CS-baseline, CoSaMP and OMP. (a) GNLR-RI-Log; (b) NLR-CS-baseline; (c) CoSaMP; (d) OMP; (e) GNLR-RI-Log; (f) NLR-CS-baseline; (g) CoSaMP; (h) OMP.
Table 1. Generalized nonconvex surrogate functions.
Nonconvex Surrogate Functions | Formulation $G_{\theta}(x)$, $x \geq 0$, $\theta > 0$ | Super-Gradient $\partial G_{\theta}(x)$
Log | $\frac{\theta}{\log(\gamma + 1)} \log(\gamma x + 1)$ | $\frac{\theta \gamma}{(\gamma x + 1) \log(\gamma + 1)}$
Lp | $\theta x^{p}$, $0 < p < 1$ | $\infty$ if $x = 0$; $\theta p x^{p-1}$ if $x > 0$
Geman | $\frac{\theta x}{x + \gamma}$ | $\frac{\theta \gamma}{(x + \gamma)^{2}}$
Lap | $\theta \left(1 - \exp\left(-\frac{x}{\gamma}\right)\right)$ | $\frac{\theta}{\gamma} \exp\left(-\frac{x}{\gamma}\right)$
Etp | $\frac{\theta}{1 - \exp(-\gamma)} \left(1 - \exp(-\gamma x)\right)$ | $\frac{\theta \gamma}{1 - \exp(-\gamma)} \exp(-\gamma x)$
Scad | $\theta x$ if $x \leq \theta$; $\frac{-x^{2} + 2\gamma\theta x - \theta^{2}}{2(\gamma - 1)}$ if $\theta < x \leq \gamma\theta$; $\frac{\theta^{2}(\gamma + 1)}{2}$ if $x > \gamma\theta$ | $\theta$ if $x \leq \theta$; $\frac{\gamma\theta - x}{\gamma - 1}$ if $\theta < x \leq \gamma\theta$; $0$ if $x > \gamma\theta$
Cappedl1 | $\theta x$ if $x < \gamma$; $\theta\gamma$ if $x \geq \gamma$ | $\theta$ if $x < \gamma$; $[0, \theta]$ if $x = \gamma$; $0$ if $x > \gamma$
Mcp | $\theta x - \frac{x^{2}}{2\gamma}$ if $x < \gamma\theta$; $\frac{\gamma\theta^{2}}{2}$ if $x \geq \gamma\theta$ | $\theta - \frac{x}{\gamma}$ if $x < \gamma\theta$; $0$ if $x \geq \gamma\theta$
Table 2. PSNR (dB)/RMSE with different generalized nonconvex surrogate functions.
Functions | Landsat 8 | MODIS | Landsat 7
Log | 32.655/3.263 | 27.843/3.274 | 25.616/11.477
Lp | 32.804/5.463 | 26.068/3.266 | 25.569/11.259
Geman | 32.655/5.434 | 26.194/3.253 | 25.964/10.922
Lap | 32.714/5.387 | 27.193/3.312 | 25.580/10.843
Etp | 32.803/5.466 | 27.310/3.282 | 25.681/11.499
Scad | 32.667/5.435 | 26.223/3.293 | 25.958/11.319
Cappedl1 | 32.695/5.478 | 27.081/3.305 | 25.757/11.271
Mcp | 32.752/5.499 | 26.251/3.268 | 25.863/11.018
Table 3. The performance indexes PSNR (dB), RMSE, correlation coefficient (CC) and structural similarity (SSIM) of the Log function with different images and patch sizes.
Images | Patch Sizes | PSNR | RMSE | CC | SSIM
(1) | 2 × 2 | 24.006 | 17.930 | 0.958 | 0.898
(1) | 4 × 4 | 24.128 | 17.780 | 0.958 | 0.897
(1) | 6 × 6 | 24.212 | 17.777 | 0.958 | 0.896
(1) | 8 × 8 | 24.287 | 17.937 | 0.957 | 0.896
(2) | 2 × 2 | 30.469 | 8.908 | 0.980 | 0.844
(2) | 4 × 4 | 30.615 | 8.709 | 0.980 | 0.832
(2) | 6 × 6 | 30.552 | 8.816 | 0.979 | 0.828
(2) | 8 × 8 | 30.806 | 8.823 | 0.979 | 0.832
(3) | 2 × 2 | 24.404 | 11.264 | 0.925 | 0.802
(3) | 4 × 4 | 24.485 | 11.470 | 0.921 | 0.782
(3) | 6 × 6 | 24.936 | 11.295 | 0.922 | 0.790
(3) | 8 × 8 | 23.919 | 11.822 | 0.914 | 0.773
(4) | 2 × 2 | 24.128 | 17.780 | 0.958 | 0.897
(4) | 4 × 4 | 24.212 | 17.777 | 0.958 | 0.896
(4) | 6 × 6 | 24.287 | 17.937 | 0.957 | 0.896
(4) | 8 × 8 | 30.469 | 8.908 | 0.980 | 0.844
Table 4. Reconstruction evaluation of Landsat 8 Bands 4/5/6 with Band 8 as the reference and Google Earth Bands 1/2/3 with the gray one as the reference. SAM, spectral angle mapper; ERGAS, relative dimensionless global error; OMP, orthogonal matching pursuit; CoSaMP, compressive sampling matching pursuit; CS, compressed sensing; MH-BCS-SPL, multi-hypothesis predictions combined with block-based compressed sensing with smoothed projected Landweber reconstruction; RCoS, recovery via collaborative sparsity; ALSB, adaptively-learned sparsifying basis.
Method | Landsat 8 Bands 4/5/6: PSNR / SAM / RASE / ERGAS / Q4 | Google Earth Bands 1/2/3: PSNR / SAM / RASE / ERGAS / Q4
OMP | 17.644 / 0.092 / 0.162 / 0.041 / 0.986 | 19.741 / 0.018 / 0.059 / 0.015 / 0.944
CoSaMP | 17.327 / 0.074 / 0.146 / 0.037 / 0.989 | 19.462 / 0.017 / 0.054 / 0.014 / 0.954
NLR-CS-baseline | 17.294 / 0.195 / 0.434 / 0.109 / 0.865 | 30.055 / 0.010 / 0.062 / 0.016 / 0.930
MH-BCS-SPL | 27.568 / 0.162 / 0.165 / 0.041 / 0.985 | 31.963 / 0.010 / 0.071 / 0.018 / 0.917
RCoS | 18.412 / 0.140 / 0.511 / 0.128 / 0.820 | 23.495 / 0.025 / 0.219 / 0.054 / 0.607
ALSB | 28.753 / 0.138 / 0.144 / 0.036 / 0.989 | 32.821 / 0.028 / 0.064 / 0.016 / 0.932
reference | Band 8 | gray
GNLR-RI-Log | 22.459 / 0.023 / 0.223 / 0.762 / 0.951 | 41.967 / 0.009 / 0.018 / 0.005 / 0.995
Table 5. PSNR (dB) with different parameters in different images.
λ | η | Landsat 8 Band 4 | Landsat 8 Band 8 | MODIS Band 2 | MODIS Band 3
0.25 | 0.07 | 24.668 | 26.674 | 24.229 | 27.311
0.45 | 0.45 | 25.834 | 27.050 | 23.763 | 26.969
0.67 | 0.33 | 24.843 | 26.702 | 23.798 | 27.427
0.87 | 0.25 | 24.474 | 26.902 | 24.027 | 27.604
Table 6. The performance indexes PSNR (dB) with different reconstruction methods and noise levels. Landsat 8 Band 4 (reference: Band 8) and MODIS Band 6 (reference: Band 7).
Noise Level | Landsat 8: GNLR-RI-Log / NLR-CS-baseline / CoSaMP / OMP | MODIS: GNLR-RI-Log / NLR-CS-baseline / CoSaMP / OMP
σ = 0 | 32.863 / 23.502 / 28.754 / 28.021 | 31.225 / 22.025 / 24.803 / 24.274
σ = 4 | 32.786 / 23.499 / 28.202 / 27.664 | 31.279 / 22.023 / 24.610 / 24.122
σ = 8 | 32.615 / 23.496 / 26.798 / 26.879 | 30.994 / 22.022 / 23.974 / 23.717
σ = 12 | 32.639 / 23.483 / 25.103 / 25.651 | 30.874 / 22.015 / 23.0007 / 23.086
Table 7. Reconstruction time for a 256 × 256 single-channel image.
Methods | OMP | CoSaMP | NLR-CS-baseline | MH-BCS-SPL | RCoS | ALSB | GNLR-RI-Log
Runtime (s) | 0.106/iter. | 0.031/iter. | 3.56/iter. | 1.39/iter. | 21.08/iter. | 17.11/iter. | 0.22/iter.

Share and Cite

MDPI and ACS Style

Lu, H.; Wei, J.; Wang, L.; Liu, P.; Liu, Q.; Wang, Y.; Deng, X. Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation. Remote Sens. 2016, 8, 499. https://doi.org/10.3390/rs8060499
