Remote Sensing
Article | Open Access

14 June 2016

Reference Information Based Remote Sensing Image Reconstruction with Generalized Nonconvex Low-Rank Approximation

1 Department of Electronic Information Engineering, Nanchang University, Nanchang 330031, China
2 Institute of Space Science and Technology, Nanchang University, Nanchang 330031, China
3 Institute of Remote Sensing and Digital Earth, Chinese Academy of Sciences, Beijing 100094, China
4 School of Computer Science, China University of Geosciences, Wuhan 430074, China

Abstract

Because of the trade-off between the spatial and temporal resolution of remote sensing images (RSI) and the quality loss incurred during acquisition, reconstructing RSI is of great significance in remote sensing applications. Recent studies have demonstrated that reference-image-based reconstruction methods have great potential for higher reconstruction performance, but their accuracy and quality are still lacking. To this end, a new compressed sensing objective function incorporating a reference image as prior information is developed. We exploit the reference prior information inherent in interior and exterior data simultaneously to build a new generalized nonconvex low-rank approximation framework for RSI reconstruction. The innovation of this paper consists of three parts: (1) we propose a generalized nonconvex low-rank approximation for reconstructing RSI; (2) we inject reference prior information to overcome over-smoothed edges and the loss of texture detail; (3) on this basis, we combine the conjugate gradient algorithm and singular value thresholding (SVT) to solve the proposed model. The performance of the algorithm is evaluated both qualitatively and quantitatively. Experimental results demonstrate that the proposed algorithm improves the peak signal to noise ratio (PSNR) by several dB and preserves image details significantly better than most current approaches that do not use reference images as priors. In addition, the generalized nonconvex low-rank approximation is naturally robust to noise, so the proposed algorithm can handle low-resolution, noisy inputs in a unified framework.

1. Introduction

The multispectral sensors of a satellite system sometimes fail or degrade after deployment, and degraded sensors produce blurred and noisy images. For example, one band of the MODIS multispectral sensor degraded after launch, while the remaining bands were unaffected. Besides sensor damage, images can also be degraded by defocusing, atmospheric turbulence and other factors. Owing to sensor failure and poor observation conditions, remote sensing images easily suffer information loss, which hinders effective analysis of the Earth. Digital image processing techniques are commonly used to improve image quality and increase application potential.
In a typical multispectral satellite imaging system, multiple images of the same area from different sensors are available. When one image in such a set is degraded, another image in the set can be used as a prior for reconstruction. Several reconstruction approaches therefore use one or more reference input images as priors to improve reconstruction performance in terms of spatial resolution and noise reduction. Li et al. [1] proposed a model that exploits spectral and temporal information as complementary information for reconstructing missing information in remote sensing images. Lizhe et al. and Hao et al. [2,3] developed a compressed sensing objective function that uses a reference image as a prior: the sparsity constraints in the transform domain come from the target image, while the gradient priors in the spatial domain come from the auxiliary reference image. Peng et al. [4] proposed an algorithm based on the total variation approach using an auxiliary image. These methods show the potential for higher reconstruction performance when auxiliary references are present, but their reconstruction accuracy and quality are still lacking.
Since image prior knowledge plays a critical role in the performance of image reconstruction algorithms, designing effective regularization terms that reflect the image priors is at the core of remote sensing image (RSI) reconstruction. In summary, most reconstruction methods are based on some specific prior knowledge of the images, using the spectral or spatial information of remote sensing images to recover the current pixel or to obtain high resolution images. Low-rank-inducing regularization terms have recently received considerable attention in sparse representation. The principle of low-rank approximation is that similar patches are grouped because they share a similar underlying structure and therefore form an approximately low-rank matrix [5,6]; the resulting problem can be solved efficiently using the singular value decomposition as the optimization tool. Using the low-rank framework, Dong et al. [7] proposed a nonlocal low-rank algorithm, the spatially-adaptive iterative singular-value thresholding method. By making use of similar information from a related band, it is natural to extend nonconvex penalty functions on the singular values of a matrix to improve low-rank matrix recovery performance.
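The grouping-and-decomposition principle described above can be sketched in a few lines. The following is a minimal illustration, not the paper's implementation: the patch size, search window, similarity count and the toy periodic image are all illustrative choices. Stacking mutually similar patches as columns yields a matrix whose numerical rank is far below its size:

```python
import numpy as np

def group_similar_patches(img, ref_patch_pos, patch=8, search=20, m=16):
    """Collect the m patches most similar to the exemplar patch (L2 distance)
    inside a local search window, and stack them as matrix columns."""
    i0, j0 = ref_patch_pos
    exemplar = img[i0:i0+patch, j0:j0+patch].ravel()
    candidates = []
    for i in range(max(0, i0-search), min(img.shape[0]-patch, i0+search)):
        for j in range(max(0, j0-search), min(img.shape[1]-patch, j0+search)):
            p = img[i:i+patch, j:j+patch].ravel()
            candidates.append((np.sum((p - exemplar) ** 2), p))
    candidates.sort(key=lambda t: t[0])          # nearest patches first
    return np.stack([p for _, p in candidates[:m]], axis=1)   # n x m matrix X_i

# a toy image with repeating structure -> the grouped patch matrix is near low-rank
rng = np.random.default_rng(0)
base = np.tile(rng.standard_normal((8, 8)), (8, 8))   # 64x64 periodic image
X = group_similar_patches(base, (20, 20), patch=8, search=12, m=10)
s = np.linalg.svd(X, compute_uv=False)
print(s)   # singular values collapse after the first couple of entries
```

Because the toy image repeats every 8 pixels, the grouped matrix contains many identical columns, which is exactly the structure a low-rank penalty exploits.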
In the proposed algorithm, instead of using the ℓ0 norm as the minimization constraint of compressive sensing, a generalized nonconvex low-rank approximation is exploited as the basic unit of sparse representation. The model integrates the wavelet texture feature as structural and textural reference information to establish a novel compressive sensing algorithm for remote sensing image reconstruction (GNLR-RI). Compared to traditional ℓ0 norm-based compressive sensing, the proposed GNLR-RI algorithm makes three contributions: (1) to overcome the over-smoothing and texture detail loss of sparse representation, the wavelet coefficients of the reference image are injected as texture reference constraint information; (2) nonlocal similar patches from a reference image are extracted in the low-rank approximation step; (3) the spectral reference information and the similarity within bands are used jointly. Therefore, the proposed algorithm achieves accurate reconstruction results and improved computation speed. The solution of the proposed model is derived with the conjugate gradient algorithm and singular value thresholding (SVT), simultaneously computing the low-rank matrix of similar image patches and estimating the reconstructed image.
The remainder of this paper is organized as follows. Section 2 briefly reviews related work on remote sensing image reconstruction with reference images and other generalized reconstruction methods that incorporate auxiliary information. Section 3 introduces the generalized nonconvex low-rank approximation model, which covers eight surrogate functions. Section 4 presents the proposed model built on the generalized nonconvex low-rank approximation algorithm and its solution. Experiments are reported in Section 5. Finally, Section 6 summarizes our conclusions.

3. Generalized Nonconvex Low-Rank Approximation Model

A nonconvex low-rank model for CS recovery exploits nonlocal structured sparsity via low-rank approximation for image reconstruction. Under the assumption that each exemplar patch $x_i \in \mathbb{C}^n$ (of size $\sqrt{n} \times \sqrt{n}$ at position $i$) can find plenty of similar patches in its neighborhood, a k-nearest-neighbor search is performed for each exemplar patch in a local window, namely:

$$H_i = \left\{ i_j \;:\; \left\| x_i - x_{i_j} \right\| < T \right\}$$

where $T$ is a pre-defined threshold and $H_i$ denotes the set of positions of the patches similar to $x_i$. Under the assumption that these image patches have similar structures, the data matrix formed from them has a low-rank property. After the search, a data matrix $X_i = \left[ x_{i_0}, x_{i_1}, \dots, x_{i_{m-1}} \right] \in \mathbb{C}^{n \times m}$ is obtained and decomposed as $X_i = L_i + W_i$, where $L_i$ denotes the low-rank matrix and $W_i$ the Gaussian noise. The low-rank problem can then be posed as:
$$L_i = \arg\min_{L_i} \; \operatorname{rank}\left( L_i \right), \quad \text{s.t.} \;\; \left\| X_i - L_i \right\|_F^2 \le \sigma_\omega^2$$

where $\|\cdot\|_F$ denotes the Frobenius norm and $\sigma_\omega^2$ the variance of the additive Gaussian noise. Rank minimization is in general an NP-hard problem; hence, the nuclear norm $\|\cdot\|_*$ (the sum of singular values), a convex surrogate of the rank, is commonly used to obtain an approximate solution. With the nuclear norm, the rank minimization problem can be solved efficiently by the technique of singular value thresholding (SVT).
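The SVT operation referred to above can be sketched as follows; this is a generic illustration of the nuclear-norm proximal step (the test matrix and threshold are illustrative, not values from the paper):

```python
import numpy as np

def svt(X, tau):
    """Singular value thresholding: proximal operator of tau * ||.||_* at X.
    Soft-thresholds the singular values and rebuilds the matrix."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(1)
L = rng.standard_normal((40, 3)) @ rng.standard_normal((3, 20))  # rank-3 matrix
X = L + 0.05 * rng.standard_normal((40, 20))                      # noisy observation
L_hat = svt(X, tau=1.0)
print(np.linalg.matrix_rank(L_hat))  # small rank after thresholding
```

Soft-thresholding the singular values is the exact proximal operator of the nuclear norm, which is why the convex relaxation admits this closed-form solve.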
In this paper, we consider a smooth but nonconvex surrogate of the rank rather than the nuclear norm. Specifically, according to [32], the rank minimization problem with regard to $L_i$ can be approximately solved by minimizing the following function:

$$L_i = \arg\min_{L_i} \; S\left( L_i, \varepsilon \right), \quad \text{s.t.} \;\; \left\| X_i - L_i \right\|_F^2 \le \sigma_\omega^2$$

where $\|\cdot\|_F$ denotes the Frobenius norm, $\sigma_\omega^2$ the variance of the additive Gaussian noise, and

$$S\left( L_i, \varepsilon \right) = G\det\!\left( \left( L_i L_i^T \right)^{1/2} + \varepsilon I \right) = G\det\!\left( U \Sigma^{1/2} U^{-1} + \varepsilon I \right) = G\det\!\left( \Sigma^{1/2} + \varepsilon I \right)$$

in which $\varepsilon$ is a small constant, $\Sigma$ denotes the eigenvalue matrix of $L_i L_i^T$ (i.e., $L_i L_i^T = U \Sigma U^{-1}$), and $\Sigma^{1/2}$ is a diagonal matrix whose diagonal elements are the singular values of $L_i$. By choosing a proper parameter $\lambda$, Equation (12) can be transformed into:

$$L_i = \arg\min_{L_i} \; \left\| X_i - L_i \right\|_F^2 + \lambda S\left( L_i, \varepsilon \right)$$
For each exemplar image patch, we can approximate the matrix $X_i$ with a low-rank matrix $L_i$ by solving Equation (13). Many nonconvex surrogate functions have been proposed to better approximate the ℓ0 norm [36], including the logarithm function (Log) [37], the ℓp norm (Lp) [38], the Geman penalty (Geman) [39], the Laplace penalty (Lap) [35] and the exponential type penalty (Etp) [40]. In addition, there are several piecewise-defined functions, such as the smoothly-clipped absolute deviation (Scad) [41], the capped ℓ1 penalty (Cappedl1) [42] and the minimax concave penalty (Mcp) [43]. The definitions and super-gradients of these surrogate functions exhibit similar monotonic trends, as displayed in Table 1 and Figure 1.
Table 1. Generalized nonconvex surrogate functions.
Figure 1. Some popular nonconvex surrogate functions of the ℓ1 norm (left) and their super-gradients (right). (a) Log; (b) ℓp norm (Lp); (c) Geman; (d) Laplace (Lap); (e) exponential type penalty (Etp); (f) smoothly-clipped absolute deviation (Scad); (g) capped ℓ1 (Cappedl1); (h) minimax concave penalty (Mcp).
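A few of the surrogate penalties from Table 1 can be written out directly. The forms below follow their common definitions in the literature (the parameter values are illustrative), and a quick check confirms the monotonicity property the figure displays:

```python
import numpy as np

# Illustrative forms of three nonconvex surrogate penalties and their
# super-gradients; gamma and p are example parameters, not the paper's values.
def log_penalty(x, gamma=1.0):        # Log: log(gamma*x + 1)
    return np.log(gamma * x + 1.0)

def log_supergrad(x, gamma=1.0):
    return gamma / (gamma * x + 1.0)

def lp_penalty(x, p=0.5):             # Lp: x**p for 0 < p < 1
    return np.power(x, p)

def lp_supergrad(x, p=0.5, eps=1e-8): # eps avoids the singularity at 0
    return p * np.power(x + eps, p - 1.0)

def etp_penalty(x, gamma=1.0):        # exponential type penalty: 1 - exp(-gamma*x)
    return 1.0 - np.exp(-gamma * x)

def etp_supergrad(x, gamma=1.0):
    return gamma * np.exp(-gamma * x)

x = np.linspace(0.0, 5.0, 200)
for g in (log_supergrad(x), lp_supergrad(x), etp_supergrad(x)):
    # all penalties are concave and increasing on [0, inf), so their
    # super-gradients are nonnegative and monotonically decreasing
    assert np.all(g >= 0) and np.all(np.diff(g) <= 0)
print("all super-gradients nonnegative and decreasing")
```

This shared shape of the super-gradients is the observation the reweighting scheme in Section 4 relies on.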

4. Problem Definition

In multispectral remote sensing applications, the intensity distributions of the target and reference images differ, but their edge directions and texture information are often similar, so one of the reference images can be used as a prior in the reconstruction process. This section presents the generalized nonconvex low-rank approximation algorithm for CS recovery, exploiting nonlocal structured sparsity via low-rank approximation for remote sensing image reconstruction. Let $x \in \mathbb{C}^N$ denote the target image, sparsely expressed as $x = \psi \alpha$ with a sparse coefficient vector $\alpha$; let $y \in \mathbb{C}^M$ denote the observed data, and let the measurement matrix $\phi \in \mathbb{C}^{M \times N}$ ($M < N$) map $x$ to $y$. The remote sensing image reconstruction problem is to reconstruct $x$ from $y$. Figure 2 shows the relationship between the high resolution target images and the low resolution observed images; the low and high resolution images come from different time series or different bands. In some cases, one or two images from different time series or bands serve as the reference, and another serves as the unknown target image.
Figure 2. Relationship between the high resolution image and the low resolution image.
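The measurement model just defined can be made concrete with a small sketch. Here the sparsifying basis ψ is taken as the identity for brevity (the paper uses a wavelet basis), and the Gaussian φ, the sampling rate and the sparsity level are illustrative choices:

```python
import numpy as np

# Hypothetical sketch of the measurement model y = phi * psi * alpha with M < N.
rng = np.random.default_rng(5)
N, M = 256, 64                  # sampling rate M/N = 0.25 (illustrative)
phi = rng.standard_normal((M, N)) / np.sqrt(M)   # random measurement matrix
psi = np.eye(N)                 # stand-in for the sparsifying wavelet basis
alpha = np.zeros(N)
alpha[rng.choice(N, 8, replace=False)] = rng.standard_normal(8)  # sparse signal
x = psi @ alpha                 # target image (vectorized)
y = phi @ x                     # compressive observation, much shorter than x
print(y.shape, np.count_nonzero(alpha))
```

Reconstruction then amounts to recovering the sparse α from the short vector y, which is where the low-rank and reference-information priors enter.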
In this section, a compressed sensing model with reference images, integrating low-rank regularization and wavelet textural constraints, is proposed and solved with the conjugate gradient algorithm and singular value thresholding (SVT). Figure 3 shows the framework of the proposed GNLR-RI method. The reconstruction objective of the generalized nonconvex low-rank approximation with reference information model can be written as:
$$\left( \hat{\alpha}, \hat{L}_i \right) = \arg\min_{\alpha, L_i} \; \eta \sum_i \left\| \tilde{R}_i \psi \alpha - L_i \right\|_F^2 + \lambda S\left( L_i, \varepsilon \right), \quad \text{s.t.} \;\; \left\| y - \phi \psi \alpha \right\|_2^2 < \delta, \;\; \sum_{j \in U} w_j \left\| \alpha_j - \alpha_{ref\_j} \right\|_F^2 < \xi$$

where $\tilde{R}_i x = \left[ R_{i_0} x, R_{i_1} x, \dots, R_{i_{m-1}} x \right] = X_i$ denotes the matrix of patches similar to patch $x_i$, and $R$ encodes the locations of the similar patches; $\sum_i R_i \psi \alpha$ represents the patch averaging result. The constrained objective can be turned into the unconstrained optimization:

$$\left( \hat{\alpha}, \hat{L}_i \right) = \arg\min_{\alpha, L_i} \; \left\| y - \phi \psi \alpha \right\|_2^2 + \sum_{j \in U} w_j \left\| \alpha_j - \alpha_{ref\_j} \right\|_F^2 + \eta \sum_i \left\| \tilde{R}_i \psi \alpha - L_i \right\|_F^2 + \lambda S\left( L_i, \varepsilon \right)$$
Figure 3. Framework of the proposed GNLR-RI method.

4.1. Solving the Proposed Model

As it is difficult to obtain a closed-form solution of Equation (15) directly, the optimization is divided into three steps:

$$\alpha^{k+1} = \arg\min_{\alpha} \; \left\| y - \phi \psi \alpha \right\|_2^2 + \sum_{j \in U} w_j \left\| \alpha_j - \alpha_{ref\_j} \right\|_F^2$$

$$\hat{L}_i = \arg\min_{L_i} \; \eta \sum_i \left\| \tilde{R}_i \psi \alpha - L_i \right\|_F^2 + \lambda S\left( L_i, \varepsilon \right)$$

$$\alpha^{k+1} = \arg\min_{\alpha} \; \left\| y - \phi \psi \alpha \right\|_2^2 + \eta \sum_i \left\| \tilde{R}_i \psi \alpha - L_i \right\|_F^2$$
For the first step, the reference-image wavelet term is an ℓ2-norm regularizer, so the conjugate gradient algorithm is used to solve it. Setting the derivative of the objective in Equation (16) with respect to the sparse coefficient $\alpha$ to zero gives:

$$-2 \psi^T \phi^T \left( y - \phi \psi \alpha^{k+1} \right) + 2 \sum_{j \in U} w_j \left( \alpha_j - \alpha_{ref\_j} \right) = 0$$

$$\psi^T \phi^T \phi \psi \, \alpha^{k+1} = \psi^T \phi^T y - \sum_{j \in U} w_j \left( \alpha_j - \alpha_{ref\_j} \right)$$

Noting that $\alpha = \sum_{j \in U} w_j \alpha_j$, we can employ the conjugate gradient algorithm to solve Equation (20).
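A plain conjugate gradient routine for this kind of symmetric positive definite system can be sketched as follows; the operator is passed as a callable so that a product such as ψᵀφᵀφψ never has to be formed explicitly. The small system below is a stand-in for the actual normal equations, not the paper's operators:

```python
import numpy as np

def conjugate_gradient(A_op, b, x0=None, tol=1e-10, max_iter=500):
    """Plain conjugate gradient for A x = b with A symmetric positive definite.
    A_op is a callable computing the matrix-vector product A @ v."""
    x = np.zeros_like(b) if x0 is None else x0.copy()
    r = b - A_op(x)                  # residual
    p = r.copy()                     # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A_op(p)
        step = rs / (p @ Ap)
        x += step * p
        r -= step * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p    # conjugate update of the direction
        rs = rs_new
    return x

# small SPD test system standing in for the normal equations
rng = np.random.default_rng(2)
M = rng.standard_normal((30, 30))
A = M.T @ M + 30 * np.eye(30)        # well-conditioned SPD matrix
x_true = rng.standard_normal(30)
b = A @ x_true
x_hat = conjugate_gradient(lambda v: A @ v, b)
print(np.max(np.abs(x_hat - x_true)))  # close to zero
```

In the actual algorithm, the callable would apply φ, ψ and their adjoints, so no M x N matrix is ever stored.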
For the second step, the solution of $L_i$ is obtained from the following minimization problem:

$$L_i = \arg\min_{L_i} \; \eta \left\| \tilde{R}_i x - L_i \right\|_F^2 + \lambda S\left( L_i, \varepsilon \right)$$

where $\tilde{R}_i x = \left[ R_{i_0} x, R_{i_1} x, \dots, R_{i_{m-1}} x \right] = X_i$ denotes the matrix of patches similar to patch $x_i$. Substituting Equation (11) into Equation (21), the problem for $L_i$ can be rewritten as:

$$\min_{L_i} \; \left\| X_i - L_i \right\|_F^2 + \frac{\lambda}{\eta} \sum_{j=1}^{n_0} G\left( \sigma_j \left( L_i \right) + \varepsilon \right)$$
where $\sigma_j$ is the $j$-th singular value of $L_i$. Let $f(\sigma) = \sum_{j=1}^{n_0} G\left( \sigma_j + \varepsilon \right)$, which can be handled by a local minimization method. Using the first-order Taylor expansion, $f(\sigma)$ can be approximated as:

$$f(\sigma) \approx f\left( \sigma^k \right) + \left\langle \nabla f\left( \sigma^k \right), \sigma - \sigma^k \right\rangle$$

where $\sigma^k$ denotes the value of $\sigma$ in the $k$-th iteration. The problem can then be solved by iterating:

$$L_i^{k+1} = \arg\min_{L_i} \; \left\| X_i - L_i \right\|_F^2 + \frac{\lambda}{\eta} \sum_{l=1}^{n_0} G'\left( \sigma_l^k + \varepsilon \right) \sigma_l$$
After the constants in Equation (24) are ignored, Equation (22) can be rewritten as:

$$L_i^{k+1} = \arg\min_{L_i} \; \frac{1}{2} \left\| X_i - L_i \right\|_F^2 + \tau \Psi\left( L_i, \omega^k \right)$$

where $\tau = \lambda / (2\eta)$ and $\Psi\left( L_i, \omega^k \right) = \sum_{l=1}^{n_0} \sigma_l \, \omega_l^k$, with weights $\omega_l^k = G'\left( \sigma_l^k + \varepsilon \right)$, denotes the weighted nuclear norm. According to the proximal operator of the weighted nuclear norm, the solution at the $(k+1)$-th iteration is:

$$L_i^{k+1} = U \left( \tilde{\Sigma} - \tau \operatorname{diag}\left( \omega^k \right) \right)_+ V^T$$

where $U \tilde{\Sigma} V^T$ is the singular value decomposition of $X_i$ and $(x)_+ = \max(x, 0)$.
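The weighted shrinkage step can be sketched directly from the formula above. The weights below follow the Log surrogate, whose super-gradient 1/(x + ε) assigns small weights to large singular values so that they are shrunk less; ε, τ and the test matrix are illustrative choices, not the paper's settings:

```python
import numpy as np

def weighted_svt(X, tau, weights):
    """Proximal step of the weighted nuclear norm: shrink the l-th singular
    value by tau * weights[l]. With weights nondecreasing in singular-value
    order the shrunk values remain sorted."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    s_shrunk = np.maximum(s - tau * weights[:len(s)], 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(3)
X = rng.standard_normal((20, 4)) @ rng.standard_normal((4, 12))  # low-rank test matrix
s = np.linalg.svd(X, compute_uv=False)
eps = 1e-2
w = 1.0 / (s + eps)          # super-gradient of the Log penalty at s
w /= w.max()                 # normalize for readability
L = weighted_svt(X, tau=0.5, weights=w)
print(np.linalg.svd(L, compute_uv=False)[:4])
```

Compared with plain SVT, the reweighting shrinks small (noise-dominated) singular values aggressively while leaving the dominant ones nearly intact, which is the practical benefit of the nonconvex surrogate.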
For the third step, after all $L_i$ have been obtained, the wavelet coefficient matrix is reconstructed by solving the remaining minimization problem with the conjugate gradient (CG) algorithm. Setting the derivative of the objective in Equation (27) with respect to the sparse coefficient $\alpha$ to zero gives:

$$-2 \psi^T \phi^T \left( y - \phi \psi \alpha^{k+1} \right) + 2 \eta \sum_i \psi^T \tilde{R}_i^T \left( \tilde{R}_i \psi \alpha^{k+1} - L_i \right) = 0$$

which can be rearranged as:

$$\left( \psi^T \phi^T \phi \psi + \eta \sum_i \psi^T \tilde{R}_i^T \tilde{R}_i \psi \right) \alpha^{k+1} = \psi^T \phi^T y + \eta \sum_i \psi^T \tilde{R}_i^T L_i$$

Noting that $\sum_i \tilde{R}_i^T \tilde{R}_i$ counts the number of patches overlapping each pixel and $\sum_i \tilde{R}_i^T L_i$ accumulates the patch averages, we can employ the conjugate gradient algorithm to solve Equation (28).
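The role of the overlap count can be illustrated with a small patch-aggregation sketch (patch size and image size are illustrative); the accumulated count array plays the role of the diagonal of $\sum_i \tilde{R}_i^T \tilde{R}_i$:

```python
import numpy as np

def aggregate_patches(patches, positions, img_shape, patch=8):
    """Average overlapping patches back into an image. 'count' records how
    many patches cover each pixel, i.e. the overlap count discussed above."""
    acc = np.zeros(img_shape)
    count = np.zeros(img_shape)
    for p, (i, j) in zip(patches, positions):
        acc[i:i+patch, j:j+patch] += p
        count[i:i+patch, j:j+patch] += 1
    count[count == 0] = 1          # avoid division by zero off-support
    return acc / count

# overlapping constant patches reconstruct a constant image exactly
patch = 4
positions = [(i, j) for i in range(0, 5) for j in range(0, 5)]
patches = [np.full((patch, patch), 7.0) for _ in positions]
out = aggregate_patches(patches, positions, (8, 8), patch=patch)
print(out[0, 0], out[4, 4])  # → 7.0 7.0
```

In the full algorithm the patches come from the denoised low-rank matrices $L_i$, and the same counting/averaging makes the overlapping estimates consistent.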
The image reconstruction algorithm is summarized here as GNLR-RI, where $G$ stands for any of the eight nonconvex surrogate functions mentioned above. A detailed description of the proposed method is listed in Algorithm 1. The difference between NLR-CS-baseline and GNLR-RI lies mainly in the scheme used to approximate the rank and in the representation of texture features. During the reconstruction process, the low-rank operation strongly encourages similarity between patches of the target image and the reference image.
Algorithm 1: GNLR-RI.
Input: Sparse coefficient of reference image α r e f
Initialization: Set the target wavelet coefficient $\alpha$ to zero; set $\omega_i = (1, 1, \dots, 1)^T$ and the parameters $\lambda$, $\eta$, $p$, $\tau = \lambda / (2\eta)$, $\beta$, $K$ and $J$.
While convergence criterion not met, do
1. Compute the texture feature vectors F, F r e f of target wavelet coefficient α and reference wavelet coefficient α r e f using Equations (5) and (6).
2. Add the constrained term $\sum_{k=1}^{U} w_k \left\| \alpha_k - \alpha_{ref\_k} \right\|_2^2$ to the compressed sensing objective function.
3. Add the low-rank approximation based on the reference image term.
4. Solve the optimization problem of Equation (20) via the conjugate gradient algorithm.
5. Iteration: for each exemplar patch $x_i$, do
 1) Form a matrix $X_i$ from the patches similar to $x_i$ and set $L_i^0 = X_i$.
 2) For $j = 1, 2, \dots, J$, do
  a) If $k > K_0$, update the weights $\omega_l^k = G'\left( \sigma_l^k + \varepsilon \right)$.
  b) Compute $L_i$ via Equation (26) and output $\alpha_i = \alpha_i^j$ when $j = J$.
  End for
 End for
6. Solve the optimization problem of Equation (28) via the conjugate gradient algorithm.
Output: the reconstructed result $\hat{X} = X^{(K)}$ after $K$ outer iterations.

5. Experiment Results

In the experiments, single-channel and multichannel satellite images from MODIS, Landsat 7, Landsat 8 and Google Earth are used as simulated data to test the performance of the proposed reconstruction framework. (1) Landsat 7 and Landsat 8 provide PAN images at 15-m and 30-m spatial resolution. (2) MODIS provides images at 500-m resolution. (3) Google Earth provides three-channel color test images. For the Landsat 7 and Landsat 8 data, Band 4 (30-m resolution) is the near-infrared band, which has a favorable spectral characteristic in which water bodies show clear outlines, and Band 8 is the panchromatic band at 15-m resolution. Band 4 is therefore chosen as the target image, and Band 8, down-sampled to 30-m resolution, is chosen as the reference image, from which similar blocks are grouped and injected into the reconstructed image to compensate the smoothed areas nonlinearly. For the MODIS data, Band 4 is used as the target image and Band 3 as the reference image.
Furthermore, our proposed algorithm is compared to the well-known reconstruction-based models NLR-CS-baseline [7], orthogonal matching pursuit (OMP), compressive sampling matching pursuit (CoSaMP) [27], multi-hypothesis predictions combined with block-based compressed sensing with smoothed projected Landweber reconstruction (MH-BCS-SPL) [44], recovery via collaborative sparsity (RCoS) [45] and the adaptively-learned sparsifying basis via L0 minimization (ALSB) [46], evaluating the experimental results quantitatively and visually. NLR-CS-baseline uses the standard nuclear norm, i.e., L1 norm minimization on the singular values. CoSaMP and OMP reconstruct the signal from a very small number of measurements. MH-BCS-SPL uses multi-hypothesis prediction to generate a residual in the domain of the compressed-sensing random projections. RCoS enforces local 2D sparsity and nonlocal 3D sparsity simultaneously in an adaptive hybrid space-transform domain, exploiting the intrinsic sparsity of natural images and confining the CS solution space. ALSB exploits the intrinsic sparsity of natural images by sparsely representing overlapped image patches with an adaptively-learned sparsifying basis under the L0 norm.

5.1. Evaluation of the Low Rank Penalty Function and Different Nonconvex Surrogate Functions

First, we justify the necessity of the low-rank approximation. For convenience, the objective of Equation (15) solved with the L1 norm and without reference information is denoted L1-WRI; the reconstruction method that introduces the reference-information constraint with the L1 norm (Equation (9)) is denoted L1-RI; and the objective of Equation (15) employing the nonconvex logarithm surrogate function is denoted GNLR-RI-Log. Figure 4 compares the results of these three methods: the proposed combination, which introduces the low-rank approximation, provides the best performance.
Figure 4. The reconstruction image of three methods. (a) L1-without reference information (WRI); (b) L1-RI; (c) GNLR-RI-Log.
Second, the reference images of Landsat 7 and Landsat 8 are down-sampled to the scale of the target images, and the MODIS images are used at the same scale; all test images are 256 × 256 pixels. Common indices, the peak signal to noise ratio (PSNR) and root mean square error (RMSE), are adopted for quantitative assessment. Table 2 reports PSNR (dB)/RMSE for the different generalized nonconvex surrogate functions, and the corresponding reconstructed images are displayed in Figure 5. It can be observed that continuous nonconvex surrogate functions, such as Log, Lp, Geman, Lap and Etp, perform on par with piecewise nonconvex surrogate functions, such as Scad, Cappedl1 and Mcp.
Table 2. PSNR (dB)/RMSE with different generalized nonconvex surrogate functions.
Figure 5. Reconstructed images with different nonconvex surrogate functions. (a) GNLR-Log; (b) GNLR-Lp; (c) GNLR-Geman; (d) GNLR-Lap; (e) GNLR-Etp; (f) GNLR-Scad; (g) GNLR-Cappedl1; (h) GNLR-Mcp.
In this subsection, we focus on the usual nonconvex penalties proposed for recovering sparsity, shown in Figure 1. All of the penalty functions share common properties: they are concave and monotonically increasing on [0, ∞), so their super-gradients are nonnegative and monotonically decreasing. Our proposed general solver is based on this key observation. One nonconvex penalty that circumvents the weaknesses of the Lasso is the SCAD penalty, which enjoys the unbiasedness property. Among the other penalty functions that induce sparsity, a popular choice is the Lp pseudonorm with 0 < p < 1; its main interest resides in its quasi-smooth approximation of the L0 sparsity measure as p tends to zero, and it can provide sparser solutions than the Lasso. For the Log penalty, the coefficients are shifted by a small quantity to avoid an infinite value when the argument vanishes.

5.2. Performance Comparison for Single-Channel Compressed Sensing

In this subsection, the nonconvex logarithm surrogate function (GNLR-RI-Log) is selected to test single-channel performance. The correlation coefficient (CC) and structural similarity (SSIM) are added for quantitative assessment of the reconstruction results. Table 3 reports the PSNR (dB), RMSE, CC and SSIM of the Log function for different images and patch sizes, and Figure 6 illustrates the recovery quality on different images. It can be observed that the proposed GNLR-RI-Log reconstructs the images more precisely and obtains good group similarity by extracting structural information from the reference image. The results also indicate that an appropriate patch size should be selected to obtain optimal reconstruction quality.
Table 3. The performance indexes PSNR (dB), RMSE, correlation coefficient (CC) and structural similarity (SSIM) of the Log function with different images and patch sizes.
Figure 6. The reconstruction image of Landsat 7 and Landsat 8. (ad) Reconstruction Image (1–4). (a) Reconstruction Image 1; (b) Reconstruction Image 2; (c) Reconstruction Image 3; (d) Reconstruction Image 4.
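Three of the quantitative indexes used above (PSNR, RMSE and CC) can be computed in a few lines; the definitions below are the standard ones, and the peak value of 255 is an assumption about the image dynamic range:

```python
import numpy as np

def rmse(x, y):
    """Root mean square error between two images."""
    return np.sqrt(np.mean((x - y) ** 2))

def psnr(x, y, peak=255.0):
    """Peak signal-to-noise ratio in dB for images with the given peak value."""
    return 20.0 * np.log10(peak / rmse(x, y))

def cc(x, y):
    """Correlation coefficient between two images."""
    return np.corrcoef(x.ravel(), y.ravel())[0, 1]

# sanity check on a synthetic pair: noise std 5 gives RMSE near 5
rng = np.random.default_rng(4)
clean = rng.uniform(0, 255, (64, 64))
noisy = clean + rng.normal(0, 5, (64, 64))
print(round(rmse(clean, noisy), 2), round(psnr(clean, noisy), 2), round(cc(clean, noisy), 4))
```

SSIM involves local luminance, contrast and structure comparisons and is usually taken from an image-processing library rather than re-implemented.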

5.3. Multichannel Reconstruction with References

For the multichannel case, Bands 4/5/6 of Landsat 8 data and Bands 1/2/3 of Google Earth data are set as the simulated images to be reconstructed. Reference images are derived from the related Band 8 of Landsat 8 and the corresponding grayscale image of the Google Earth data. Metrics commonly used for multichannel images, such as the spectral angle mapper (SAM), RASE, the relative dimensionless global error (ERGAS) and Q4, are evaluated to check availability for quantitative remote sensing, since PSNR alone does not fully reflect the reconstruction quality. Note that the ideal value is one for Q4 and zero for SAM, RASE and ERGAS. Table 4 shows that GNLR-RI-Log has a larger advantage on the Google Earth data than on the Landsat 8 data. For the Landsat 8 data, GNLR-RI-Log works better than NLR-CS-baseline, OMP, CoSaMP and RCoS in terms of PSNR, while it performs worse than MH-BCS-SPL and ALSB; except for ERGAS, on the other three indexes, SAM, RASE and Q4, GNLR-RI-Log obtains results comparable to or better than most of the competing methods. For the Google Earth data, GNLR-RI-Log performs best among the competing methods in terms of PSNR, SAM, RASE, ERGAS and Q4.
Table 4. Reconstruction evaluation of Landsat 8 Bands 4/5/6 with Band 8 as the reference and Google Earth Bands 1/2/3 with the gray one as the reference. SAM, spectral angle mapper; ERGAS, relative dimensionless global error; OMP, orthogonal matching pursuit; CoSaMP, compressive sampling matching pursuit; CS, compressed sensing; MH-BCS-SPL, multi-hypothesis predictions combined with block-based compressed sensing with smoothed projected Landweber reconstruction; RCoS, recovery via collaborative sparsity; ALSB, adaptively-learned sparsifying basis.

5.4. Parameter Evaluation

Similar to the detail-preserving regularization scheme, this subsection evaluates the sensitivity of the proposed method to parameter settings by varying one parameter at a time while keeping the rest fixed at their nominal values. In the reference-information-constrained reconstruction algorithm, there are two free parameters, λ and η, in the reconstruction objective of Equation (17). Bands 4/8 of Landsat 8 and Bands 2/3 and 3/4 of MODIS data were tested. PSNR, RMSE, CC and SSIM values versus the parameters λ and η are plotted in Figure 7. More fine-tuning of the parameters may lead to better results, but the results under the chosen settings are consistently promising. By manually changing λ and η, the experiments in Table 5 analyze their variation tendency.
Figure 7. Results of the proposed method with different values of parameters λ and η tested in Bands 4/3 of MODIS. (a) PSNR versus λ and η; (b) RMSE versus λ and η; (c) CC versus λ and η; (d) SSIM versus λ and η.
Table 5. PSNR (dB) with different parameters in different images.
To make a fair comparison among the competing methods, we carefully tuned their parameters to achieve the best performance. The parameters of the competing methods are set as follows: for OMP and CoSaMP, default parameters (where required as input arguments) are used; for the NLR-CS-baseline algorithm, the patch size is 6 × 6 and m = 45 similar patches are selected for each exemplar patch; for MH-BCS-SPL, an empirical regularization parameter λ is used and the initial search window is set to w = 1. We also carefully tuned the parameters of the RCoS and ALSB algorithms to achieve the best possible performance.

5.5. Performances with Varied Noise Levels

In this subsection, we discuss the impact of noise on the reconstruction performance of the proposed algorithm at a 0.1 sampling rate, i.e., only 10% of the compressive measurements are retained. The nonconvex logarithm surrogate (GNLR-RI-Log) is compared to the NLR-CS-baseline, CoSaMP and OMP algorithms. Bands 4/8 of Landsat 8 and Bands 3/4 of MODIS images were tested with added zero-mean Gaussian noise of variance 0, 4, 8 and 12. As can be seen in Table 6, the proposed GNLR-RI-Log method performs much better than NLR-CS-baseline as noise is added. Figure 8 shows the reconstruction results for a sample rate of 0.1 and a noise level of σ = 8. It can be observed that GNLR-RI-Log reconstructs the images more precisely than NLR-CS-baseline, CoSaMP and OMP, compensates for over-smoothness and obtains good group similarity by extracting structural information from the reference images.
Table 6. The performance indexes PSNR (dB) with different reconstruction methods and noise levels. Landsat 8 Band 4 (reference: Band 8) and MODIS Band 6 (reference: Band 7).
Figure 8. The reconstruction results for a sample rate of 0.1 and a noise level of σ = 8 . (ad) Reconstructing Landsat 8 images with GNLR-RI-Log, NLR-CS-baseline, CoSaMp and OMP; (eh) Reconstructing MODIS images with GNLR-RI-Log, NLR-CS-baseline, CoSaMp and OMP. (a) GNLR-RI-Log; (b) NLR-CS-baseline; (c) CoSaMP; (d) OMP; (e) GNLR-RI-Log; (f) NLR-CS-baseline; (g) CoSaMP; (h) OMP.

5.6. Computational Complexity

A good remote sensing reconstruction model is expected to be not only effective but also computationally efficient. The single-channel image is processed on an Intel(R) Core(TM) i7-6700 CPU @ 3.41 GHz with our MATLAB implementation (MATLAB 2015, 64 bit). The computational costs of some representative methods are shown in Table 7, where a 256 × 256 single-channel image is used as input and iteration is abbreviated to iter. In terms of execution time, GNLR-RI-Log is, as expected, faster than RCoS and ALSB thanks to the low-rank approximation; RCoS and ALSB are much slower than GNLR-RI-Log and MH-BCS-SPL because they learn an adaptive sparsifying basis from a fraction of all patches. OMP and CoSaMP are the fastest reconstruction methods, with runtimes of 0.106 and 0.031 s per iteration, respectively. In summary, GNLR-RI-Log achieves favorable performance in terms of both reconstruction accuracy and efficiency.
Table 7. Reconstruction time for a 256 × 256 single-channel image.

6. Conclusions

This paper proposes a novel image reconstruction scheme that introduces the wavelet coefficients of reference images as reference information, built on compressed sensing and generalized nonconvex low-rank approximation. Nonlocal low-rank regularization enables us to exploit both the similarity of patches and nonconvex surrogates of the rank. In addition, singular value thresholding and the conjugate gradient algorithm jointly offer a principled and computationally efficient solution to image reconstruction. The approach has the following characteristics: (1) the wavelet coefficients act as a texture feature constraint and extract structural information from reference images to reconstruct remote sensing images; (2) high-resolution images can be achieved by simultaneously using compressed sensing and generalized nonlocal low-rank approximation. Experiments show that the proposed method achieves higher resolution than state-of-the-art approaches. However, its performance is limited in regions and bands where the spatial correlation is low. In future work, we will investigate this further and use change detection approaches to improve the algorithm.

Acknowledgments

The anonymous reviewers’ comments and suggestions greatly improved our paper. We are grateful for their kind help. This work is supported by the National Natural Science Foundation of China (No. 41471368, No. 41571413 and No. 61362001), the Institute of Remote Sensing and Digital Earth of Chinese Academy of Sciences (RADI) director foundation, Jiangxi Advanced Projects for Post-doctoral Research Funds (2014KY02) and the Graduate Innovation Foundation of Jiangxi province (No. YC2015-S038).

Author Contributions

All co-authors of this manuscript significantly contributed to all phases of the investigation. They contributed equally to the preparation, analysis, review and editing of this manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, X.; Shen, H.; Zhang, L.; Li, H. Sparse-based reconstruction of missing information in remote sensing images from spectral/temporal complementary information. ISPRS J. Photogramm. Remote Sens. 2015, 106, 1–15. [Google Scholar] [CrossRef]
  2. Wang, L.; Lu, K.; Liu, P. Compressed sensing of a remote sensing image based on the priors of the reference image. IEEE Geosci. Remote Sens. Lett. 2015, 12, 736–740. [Google Scholar] [CrossRef]
  3. Geng, H.; Liu, P.; Wang, L.; Chen, L. Compressed sensing based remote sensing image reconstruction using an auxiliary image as priors. In Proceedings of the 2014 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Quebec City, QC, Canada, 13–18 July 2014; pp. 2499–2502.
  4. Liu, P.; Huang, F.; Li, G.; Liu, Z. Remote-sensing image denoising using partial differential equations and auxiliary images as priors. IEEE Geosci. Remote Sens. Lett. 2012, 9, 358–362. [Google Scholar] [CrossRef]
  5. Hu, T.; Zhang, H.; Shen, H.; Zhang, L. Robust registration by rank minimization for multiangle hyper/multispectral remotely sensed imagery. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 2443–2457. [Google Scholar] [CrossRef]
  6. Dong, W.; Shi, G.; Li, X. Nonlocal image restoration with bilateral variance estimation: A low-rank approach. IEEE Trans. Image Process. 2013, 22, 700–711. [Google Scholar] [CrossRef] [PubMed]
  7. Dong, W.; Shi, G.; Li, X.; Ma, Y.; Huang, F. Compressive sensing via nonlocal low-rank regularization. IEEE Trans. Image Process. 2014, 23, 3618–3632. [Google Scholar] [CrossRef] [PubMed]
  8. Peng, L.; Eom, K.B. Restoration of multispectral images by total variation with auxiliary image. Opt. Lasers Eng. 2013, 51, 873–882. [Google Scholar]
  9. Peng, L.; Dingsheng, L.; Zhu, L. Total variation restoration of the defocus image based on spectral priors. Proc. SPIE Remote Sens. 2010. [Google Scholar] [CrossRef]
  10. Madni, A.M. A systems perspective on compressed sensing and its use in reconstructing sparse networks. IEEE Syst. J. 2014, 8, 23–27. [Google Scholar] [CrossRef]
  11. Wei, L.; Prasad, S.; Fowler, J.E. Spatial information for hyperspectral image reconstruction from compressive random projections. IEEE Geosci. Remote Sens. Lett. 2013, 10, 1379–1383. [Google Scholar]
  12. Xiao, D.; Yunhua, Z. A novel compressive sensing algorithm for SAR imaging. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2014, 7, 708–720. [Google Scholar]
  13. Fowler, J.E. Compressive-projection principal component analysis. IEEE Trans. Image Process. 2009, 18, 2230–2242. [Google Scholar] [CrossRef] [PubMed]
  14. Ly, N.H.; Du, Q.; Fowler, J.E. Reconstruction from random projections of hyperspectral imagery with spectral and spatial partitioning. IEEE J. Sel. Top. Appl. Earth Observ. Remote Sens. 2013, 6, 466–472. [Google Scholar] [CrossRef]
  15. Chen, C.; Li, W.; Tramel, E.W.; Fowler, J.E. Reconstruction of hyperspectral imagery from random projections using multi-hypothesis prediction. IEEE Trans. Geosci. Remote Sens. 2014, 52, 365–374. [Google Scholar] [CrossRef]
  16. Qiegen, L.; Shanshan, W.; Ying, L. Adaptive dictionary learning in sparse gradient domain for image recovery. IEEE Trans. Image Process. 2013, 22, 4652–4663. [Google Scholar]
  17. Zhang, X.; Bai, T.; Meng, H.; Chen, J. Compressive sensing-based ISAR imaging via the combination of the sparsity and nonlocal total variation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 990–994. [Google Scholar] [CrossRef]
  18. He, X.; Condat, L.; Chanussot, J.; Xia, J. Pansharpening using total variation regularization. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Munich, Germany, 22–27 July 2012; pp. 166–169.
  19. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743. [Google Scholar] [CrossRef]
  20. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total variation regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188. [Google Scholar] [CrossRef]
  21. Zhang, J.; Zhong, P.; Chen, Y.; Li, S. Regularized deconvolution network for the representation and restoration of optical remote sensing images. IEEE Trans. Geosci. Remote Sens. 2014, 52, 2617–2627. [Google Scholar] [CrossRef]
  22. Yue, L.; Shen, H.; Yuan, Q.; Zhang, L. A locally adaptive L1/L2 norm for multi-frame super-resolution of images with mixed noise and outliers. Signal Process. 2014, 105, 156–174. [Google Scholar] [CrossRef]
  23. Zheng, Z.; Xu, Y.; Yang, J.; Li, X.; Zhang, D. A survey of sparse representation: algorithms and applications. IEEE Access 2015, 3, 490–530. [Google Scholar] [CrossRef]
  24. Xiaobo, Q.; Di, G.; Bende, N.; Yingkun, H.; Yulan, L.; Cai, S.; Zhong, C. Undersampled MRI reconstruction with patch-based directional wavelets. Magn. Resonance Imaging 2012, 30, 964–977. [Google Scholar]
  25. Skretting, K.; Engan, K. Recursive least squares dictionary learning algorithm. IEEE Trans. Signal Process. 2010, 58, 2121–2130. [Google Scholar] [CrossRef]
  26. Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An algorithm for designing overcomplete dictionaries for sparse representation. IEEE Trans. Signal Process. 2006, 54, 4311–4322. [Google Scholar] [CrossRef]
  27. Zobly, S.M.S.; Kadah, Y.M. Orthogonal matching pursuit and compressive sampling matching pursuit for Doppler ultrasound signal reconstruction. In Proceedings of the 2012 Cairo International Biomedical Engineering Conference (CIBEC), Giza, Egypt, 20–22 December 2012; pp. 52–55.
  28. Ravazzi, C.; Fosson, S.M.; Magli, E. Distributed iterative thresholding for ℓ0/ℓ1-regularized linear inverse problems. IEEE Trans. Inf. Theory 2015, 61, 2081–2100. [Google Scholar] [CrossRef]
  29. Zhang, X.; Burger, M.; Bresson, X.; Osher, S. Bregmanized nonlocal regularization for deconvolution and sparse reconstruction. SIAM J. Imaging Sci. 2010, 3, 253–276. [Google Scholar]
  30. Liu, Q.; Peng, X.; Liu, J.; Yang, D.; Liang, D. A weighted two-level bregman method with dictionary updating for nonconvex MR image reconstruction. J. Biomed. Imaging 2014. [Google Scholar] [CrossRef] [PubMed]
  31. Yang, S.; Liu, Z.; Wang, M.; Sun, F.; Jiao, L. Multitask learning and sparse representation based super-resolution reconstruction of synthetic aperture radar images. In Proceedings of the 2011 International Workshop on Multi-Platform/Multi-Sensor Remote Sensing and Mapping (M2RSM), Xiamen, China, 10–12 January 2011; pp. 1–5.
  32. Fazel, M.; Hindi, H.; Boyd, S.P. Log-det heuristic for matrix rank minimization with applications to Hankel and Euclidean distance matrices. Proc. Am. Control Conf. 2003, 3, 2156–2162. [Google Scholar]
  33. Gasso, G.; Rakotomamonjy, A.; Canu, S. Recovering sparse signals with a certain family of nonconvex penalties and DC programming. IEEE Trans. Signal Process. 2009, 57, 4686–4698. [Google Scholar] [CrossRef]
  34. Chartrand, R. Exact reconstruction of sparse signals via nonconvex minimization. IEEE Signal Process. Lett. 2007, 14, 707–710. [Google Scholar] [CrossRef]
  35. Trzasko, J.; Manduca, A. Highly undersampled magnetic resonance image reconstruction via homotopic ℓ0-minimization. IEEE Trans. Med. Imaging 2009, 28, 106–121. [Google Scholar] [CrossRef] [PubMed]
  36. Lu, C.; Tang, J.; Yan, S.; Lin, Z. Generalized nonconvex nonsmooth low-rank minimization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Columbus, OH, USA, 23–28 June 2014; pp. 4130–4137.
  37. Friedman, J.H. Fast sparse regression and classification. Int. J. Forecast. 2012, 28, 722–738. [Google Scholar] [CrossRef]
  38. Frank, L.E.; Friedman, J.H. A statistical view of some chemometrics regression tools. Technometrics 1993, 35, 109–135. [Google Scholar] [CrossRef]
  39. Geman, D.; Yang, C. Nonlinear image recovery with half-quadratic regularization. IEEE Trans. Image Process. 1995, 4, 932–946. [Google Scholar] [CrossRef] [PubMed]
  40. Gao, C.; Wang, N.; Yu, Q.; Zhang, Z. A Feasible Nonconvex Relaxation Approach to Feature Selection. Available online: http://bcmi.sjtu.edu.cn/~zhzhang/papers/etp.pdf (accessed on 4 March 2016).
  41. Fan, J.; Li, R. Variable selection via nonconcave penalized likelihood and its oracle properties. J. Am. Stat. Assoc. 2001, 96, 1348–1360. [Google Scholar] [CrossRef]
  42. Zhang, T. Analysis of multi-stage convex relaxation for sparse regularization. J. Mach. Learn. Res. 2010, 11, 1081–1107. [Google Scholar]
  43. Zhang, C.-H. Nearly unbiased variable selection under minimax concave penalty. Ann. Stat. 2010, 894–942. [Google Scholar]
  44. Chen, C.; Tramel, E.W.; Fowler, J.E. Compressed-sensing recovery of images and video using multi-hypothesis predictions. In Proceedings of the 45th Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 6–9 November 2011; pp. 1193–1198.
  45. Zhang, J.; Zhao, D.; Zhao, C.; Xiong, R.; Ma, S.; Gao, W. Image compressive sensing recovery via collaborative sparsity. IEEE J. Emerg. Sel. Top. Circuits Syst. 2012, 2, 380–391. [Google Scholar] [CrossRef]
  46. Zhang, J.; Zhao, C.; Zhao, D.; Gao, W. Image compressive sensing recovery using adaptively learned sparsifying basis via L0 minimization. Signal Process. 2014, 103, 114–126. [Google Scholar] [CrossRef]
