Article

A Constrained Convex Optimization Approach to Hyperspectral Image Restoration with Hybrid Spatio-Spectral Regularization

1
Department of Information and Communications Engineering, School of Engineering, Tokyo Institute of Technology, Kanagawa 226-8503, Japan
2
Department of Computer Science, School of Computing, Tokyo Institute of Technology, Kanagawa 226-8503, Japan
3
Laboratory for Future Interdisciplinary Research of Science and Technology (FIRST), Institute of Innovative Research (IIR), Tokyo Institute of Technology, Kanagawa 226-8503, Japan
*
Author to whom correspondence should be addressed.
Remote Sens. 2020, 12(21), 3541; https://doi.org/10.3390/rs12213541
Submission received: 9 September 2020 / Revised: 23 October 2020 / Accepted: 25 October 2020 / Published: 28 October 2020
(This article belongs to the Special Issue Signal and Image Processing for Remote Sensing)

Abstract:
We propose a new constrained optimization approach to hyperspectral (HS) image restoration. Most existing methods restore a desirable HS image by solving an optimization problem that consists of regularization term(s) and data-fidelity term(s). Because these methods must handle the regularization and data-fidelity terms simultaneously in one objective function, the hyperparameter(s) balancing these terms must be controlled carefully. Setting such hyperparameters is often a troublesome task because their suitable values depend strongly on the adopted regularization terms and on the noise intensity of a given observation. Our proposed method is formulated as a convex optimization problem, utilizing a novel hybrid regularization technique named Hybrid Spatio-Spectral Total Variation (HSSTV) and incorporating data-fidelity as hard constraints. HSSTV has a strong noise and artifact removal ability while avoiding oversmoothing and spectral distortion, without being combined with other regularizations such as low-rank modeling-based ones. In addition, the constraint-type data-fidelity enables us to translate the hyperparameters that balance regularization against data-fidelity into upper bounds on the degree of data-fidelity, which can be set in a much easier manner. We also develop an efficient algorithm based on the alternating direction method of multipliers (ADMM) to solve the optimization problem. We illustrate the advantages of the proposed method over various HS image restoration methods, including state-of-the-art ones, through comprehensive experiments.

Graphical Abstract

1. Introduction

Hyperspectral (HS) imagery provides 1D spectral information, including invisible wavelengths sampled at narrow intervals, in addition to 2D spatial information, and thus can visualize otherwise unseen intrinsic characteristics of scene objects and environmental lighting. This makes HS imaging a key technique in many applications across various fields, e.g., earth observation, agriculture, and medical and biological imaging [1,2,3].
Observed HS images are often affected by noise because of the small amount of light in each narrow wavelength band and/or sensor failure. In addition, in compressive HS imaging scenarios [4,5], a full HS image has to be estimated from a small number of measurements. Thus, methods for restoring desirable HS images from such degraded observations are needed in HS applications.
Most HS image restoration methods are established based on optimization: a desirable HS image is characterized as a solution to an optimization problem, which consists of a regularization term and a data-fidelity term. The regularization term evaluates a priori knowledge about underlying properties of HS images, and the data-fidelity term keeps consistency with a given observation. Thanks to this design, these methods yield reasonable results under the ill-posed or ill-conditioned scenarios typical in HS image restoration.
Regularization techniques for HS image restoration are roughly classified into two groups: the total variation (TV)-based approach and the low-rank modeling (LRM)-based approach. TV models the total absolute magnitude of local differences to exploit the piecewise-smooth structure of an image, and many TV-based approaches [6,7,8,9] have been proposed for HS image restoration. Meanwhile, LRM-based approaches [10,11] exploit the underlying low-rank structure in the spectral direction of an HS image; a popular example is low-rank matrix recovery (LRMR) [10].
Many recent methods [9,12,13,14,15,16,17,18,19,20,21,22] combine TV-based and LRM-based approaches, and in general they perform better than approaches using either regularization alone. This is because TV-based approaches model the spatial structure of an HS image, whereas LRM-based approaches model the spectral one. Naturally, these methods have to handle multiple regularization terms and data-fidelity term(s) simultaneously in one objective function, so they must carefully control the hyperparameter(s) balancing these terms. Moreover, such hyperparameters are interdependent: a suitable value of one hyperparameter varies depending on the other regularization terms used and on the noise intensities of a given observation. Hence, hyperparameter setting in such combined approaches is often a troublesome task. Table 1 summarizes the features of the methods reviewed in this section.
Based on the above discussion, we propose a new constrained convex optimization approach to HS image restoration. Our proposed method restores a desirable HS image by solving a convex optimization problem involving a new TV-based regularization and hard constraints on data-fidelity. The regularization, named Hybrid Spatio-Spectral Total Variation (HSSTV), is designed to evaluate two types of local differences, direct local spatial differences and local spatio-spectral differences, in a unified manner, so as to effectively exploit both the underlying spatial and spectral structures of an HS image. Thanks to this design, HSSTV has a strong noise and artifact removal ability while avoiding oversmoothing and spectral distortion, without being combined with LRM. Moreover, the constraint-type data-fidelity in the proposed method enables us to translate interdependent hyperparameters into upper bounds on the degree of data-fidelity that can be determined based only on the noise intensity. As a result, the proposed method has no interdependent hyperparameters. We also develop an efficient algorithm for solving the optimization problem based on the well-known alternating direction method of multipliers (ADMM) [23,24,25].
The remainder of the paper is organized as follows. Section 2 introduces notation and mathematical ingredients. Section 3 reviews existing methods related to ours. In Section 4, we define HSSTV, formulate HS image restoration as a convex optimization problem involving HSSTV and hard constraints on data-fidelity, and present an ADMM-based algorithm. Extensive experiments on denoising and compressed sensing (CS) reconstruction of HS images are given in Section 5, where we illustrate the advantages of our method over several state-of-the-art methods. In Section 6, we discuss the impact of the parameters in our proposed method. Section 7 concludes the paper. We have published preliminary versions of this work in conference proceedings [26,27], which include only limited experiments and discussions. In this paper, we newly conduct real noise removal experiments (see Section 5.2) and give much deeper discussions on the performance, utility, and parameter settings of the proposed method.

2. Preliminaries

2.1. Notation and Definitions

In this paper, let R be the set of real numbers. We use boldface lowercase and capital letters to represent vectors and matrices, respectively, and := to mean "is defined as". We denote the transpose of a vector/matrix by (·)^⊤ and the Euclidean norm (the ℓ2 norm) of a vector by ‖·‖.
For notational convenience, we treat an HS image U ∈ R^{Nv×Nh×B} as a vector u ∈ R^{NB} (N := Nv·Nh is the number of pixels in each band, and B is the number of bands) by stacking its columns on top of one another, i.e., the index of the component of the ith pixel in the kth band is i + (k − 1)N (for i = 1, …, N and k = 1, …, B).
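As a small numerical sketch of this vectorization (with hypothetical dimensions chosen only for illustration), each band is stacked column-wise and the bands are concatenated, so that, in 0-based indexing, pixel i of band k lands at index i + k·N:

```python
import numpy as np

# Hypothetical small HS cube U (Nv x Nh x B), vectorized band by band.
Nv, Nh, B = 4, 3, 2
N = Nv * Nh
U = np.arange(Nv * Nh * B, dtype=float).reshape(Nv, Nh, B)

# Column-major (Fortran-order) stacking of each band, bands concatenated.
u = np.concatenate([U[:, :, k].flatten(order="F") for k in range(B)])

# Check the index rule (0-based): component i + k*N is the ith pixel of band k.
i, k = 5, 1
assert u[i + k * N] == U[:, :, k].flatten(order="F")[i]
```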

2.2. Proximal Tools

A function f : R^L → (−∞, ∞] is called proper, lower semicontinuous, and convex if dom(f) := {x ∈ R^L | f(x) < ∞} ≠ ∅, lev_α(f) := {x ∈ R^L | f(x) ≤ α} is closed for every α ∈ R, and f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y) for every x, y ∈ R^L and λ ∈ (0, 1), respectively. Let Γ0(R^L) be the set of all proper lower semicontinuous convex functions on R^L.
The proximity operator [28] plays a central role in convex optimization based on proximal splitting. The proximity operator of f ∈ Γ0(R^L) with an index γ > 0 is defined by
prox_{γf}(x) := argmin_y f(y) + (1/(2γ))‖y − x‖².
We introduce the indicator function of a nonempty closed convex set C ⊂ R^L, which is defined as follows:
ι_C(x) := 0 if x ∈ C, and ∞ otherwise.
Then, for any γ > 0, its proximity operator is given by
prox_{γι_C}(x) = P_C(x) := argmin_{y∈C} ‖x − y‖,
where P_C(x) is the metric projection onto C.
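The identity prox_{γι_C} = P_C (independent of γ) can be illustrated with a minimal sketch, assuming for concreteness that C is the box [0, 1]^L, for which the projection reduces to element-wise clipping:

```python
import numpy as np

def proj_box(x, lo=0.0, hi=1.0):
    """Metric projection onto the box [lo, hi]^L; equals prox of its
    indicator function for any step size gamma > 0."""
    return np.clip(x, lo, hi)

x = np.array([-0.5, 0.3, 1.7])
print(proj_box(x))  # [0.  0.3 1. ]
```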

2.3. Alternating Direction Method of Multipliers (ADMM)

ADMM [23,24,25] is a popular proximal splitting method, and it can solve convex optimization problems of the form:
min_{x,z} f(x) + g(z)  s.t.  z = Gx,
where f ∈ Γ0(R^{L1}), g ∈ Γ0(R^{L2}), and G ∈ R^{L2×L1}. Here, we assume that f is quadratic, that g is proximable, i.e., the proximity operator of g can be computed in an efficient manner, and that G is a full-column-rank matrix. For arbitrarily chosen z^{(0)}, d^{(0)} and a step size γ > 0, ADMM iterates the following steps:
x^{(n+1)} = argmin_x f(x) + (1/(2γ))‖z^{(n)} − Gx − d^{(n)}‖²,
z^{(n+1)} = prox_{γg}(Gx^{(n+1)} + d^{(n)}),
d^{(n+1)} = d^{(n)} + Gx^{(n+1)} − z^{(n+1)}.
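The three steps above can be sketched on a toy problem. The following is a minimal, assumed example (not the paper's full model): 1D total-variation denoising min_x 0.5‖x − v‖² + λ‖Dx‖₁, split as f(x) = 0.5‖x − v‖² and g(z) = λ‖z‖₁ with z = Dx, so the x-update is a quadratic solve and the z-update is soft-thresholding:

```python
import numpy as np

rng = np.random.default_rng(0)
L = 100
x_true = np.repeat([0.0, 1.0, -0.5, 0.5], L // 4)   # piecewise-constant signal
v = x_true + 0.05 * rng.standard_normal(L)          # noisy observation

D = np.diff(np.eye(L), axis=0)                      # forward-difference operator
lam, gamma = 0.1, 1.0
z = np.zeros(L - 1)
d = np.zeros(L - 1)
A = np.eye(L) + (1.0 / gamma) * D.T @ D             # normal matrix of the x-update

for _ in range(200):
    # x-update: argmin_x 0.5*||x - v||^2 + (1/2g)*||z - Dx - d||^2
    x = np.linalg.solve(A, v + (1.0 / gamma) * D.T @ (z - d))
    # z-update: prox of gamma*g = soft-thresholding with threshold gamma*lam
    t = D @ x + d
    z = np.sign(t) * np.maximum(np.abs(t) - gamma * lam, 0.0)
    # d-update: dual ascent
    d = d + D @ x - z

# Denoising should reduce the error w.r.t. the groundtruth signal.
assert np.mean((x - x_true) ** 2) < np.mean((v - x_true) ** 2)
```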
The convergence property of ADMM is given as follows.
Theorem 1
(Convergence of ADMM [24]). Consider Problem (1), and assume that G^⊤G is invertible and that a saddle point of its unaugmented Lagrangian L0(x, z, d) := f(x) + g(z) + ⟨d, Gx − z⟩ exists, i.e., a triplet (x̂, ẑ, d̂) satisfying L0(x̂, ẑ, d) ≤ L0(x̂, ẑ, d̂) ≤ L0(x, z, d̂) for any (x, z, d) ∈ R^{L1} × R^{L2} × R^{L2}. Then the sequence (x^{(n)})_{n>0} generated by (2) converges to an optimal solution to Problem (1).

3. Related Works

In this section, we elaborate on existing HS image restoration methods based on optimization.

3.1. TV-Based Methods

The methods proposed in [6,7,8] restore a desirable HS image by solving a convex optimization problem involving TV-based regularization. Let ū ∈ R^{NB} be the desirable HS image; the authors assume that an observation v ∈ R^{NB} is modeled as follows:
v = ū + s + n,
where n and s are additive white Gaussian noise and sparse noise, respectively. Here, sparse noise corrupts only a few pixels of an HS image, but heavily; examples include impulse noise, salt-and-pepper noise, and line noise. The restoration problem of these methods is given in the following form:
min_{u,s} ‖v − u − s‖² + λ1 R_TV(u) + λ2‖s‖_1,
where R_TV is a regularization function based on TV, and λ1 and λ2 are hyperparameters. The first and third terms evaluate data-fidelity with respect to the Gaussian and sparse noise, respectively, and the hyperparameters λ1 and λ2 represent the priorities of each term. If suitable values of the hyperparameters can be chosen, this formulation yields high-quality restoration. However, the hyperparameters are interdependent: their suitable values vary depending on the TV-based regularization term used and on the noise intensities of a given observation. Therefore, setting the hyperparameters is an essential but troublesome task.
In the following, we explain each TV. Let D := (D_v^⊤ D_h^⊤)^⊤ ∈ R^{2NB×NB} be the spatial difference operator, with D_v and D_h being the vertical and horizontal difference operators, respectively, and let D_b ∈ R^{NB×NB} be the spectral difference operator. In [6,7,8], HTV, ASSTV, and SSTV are defined as follows:
HTV(u) := ‖Du‖_TV,
ASSTV(u) := τ_v‖D_v u‖_1 + τ_h‖D_h u‖_1 + τ_b‖D_b u‖_1,
SSTV(u) := ‖D D_b u‖_1,
where ‖·‖_TV is the TV norm, which takes the ℓ2 norm of the spatial difference vectors across all bands and then sums over all spatial pixels, and τ_v, τ_h, and τ_b are the weights of the vertical, horizontal, and spectral differences, respectively. HTV evaluates direct spatial piecewise-smoothness and can be seen as a generalization of the standard color TV [29]. However, HTV does not consider spectral correlation, resulting in spatial oversmoothing. To account for spectral correlation, the authors of [6] proposed SSAHTV, a weighted HTV whose weights are determined by spectral information. However, since SSAHTV does not directly evaluate spectral correlation, it still causes spatial oversmoothing. ASSTV evaluates direct spatial and spectral piecewise-smoothness (Figure 1, blue line); the weights τ_v, τ_h, and τ_b in (5) balance the smoothness of the vertical, horizontal, and spectral differences, respectively. Owing to this definition, ASSTV can evaluate spatial and spectral correlation, but it produces spectral oversmoothing even if τ_v, τ_h, and τ_b are carefully adjusted. SSTV evaluates a priori knowledge about HS images via spatio-spectral piecewise-smoothness; it is derived by calculating spatial differences of spectral differences (Figure 1, yellow line). SSTV can restore a desirable HS image without any weight, but it produces noise-like artifacts, especially when a given observation is contaminated by heavy noise and/or degradation.
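For intuition, the three regularizers can be sketched numerically. The following is an assumed illustration using the anisotropic (ℓ1) variants for simplicity (HTV as defined above actually takes band-wise ℓ2 norms of spatial differences) and periodic boundaries:

```python
import numpy as np

# Difference operators on an HS cube U (Nv x Nh x B), periodic boundaries.
def d_v(U): return np.roll(U, -1, axis=0) - U   # vertical differences
def d_h(U): return np.roll(U, -1, axis=1) - U   # horizontal differences
def d_b(U): return np.roll(U, -1, axis=2) - U   # spectral differences

def htv_aniso(U):
    # l1 simplification of HTV: sum of |spatial differences| over all bands.
    return np.abs(d_v(U)).sum() + np.abs(d_h(U)).sum()

def asstv(U, tv=1.0, th=1.0, tb=1.0):
    # Weighted sum of direct vertical, horizontal, and spectral differences.
    return tv * np.abs(d_v(U)).sum() + th * np.abs(d_h(U)).sum() + tb * np.abs(d_b(U)).sum()

def sstv(U):
    # Spatial differences applied after spectral differences.
    Ub = d_b(U)
    return np.abs(d_v(Ub)).sum() + np.abs(d_h(Ub)).sum()

U = np.random.default_rng(1).random((8, 8, 5))
print(htv_aniso(U), asstv(U), sstv(U))
```

Note that SSTV vanishes on any cube whose bands are identical up to a constant offset, which is why it preserves spectral structure but cannot penalize noise shared across adjacent bands.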

3.2. LRM-Based Method

LRMR [10] is one of the popular LRM-based methods for HS image restoration, which evaluates the low rankness of an HS image in the spectral direction. To preserve local details, LRMR restores a desirable HS image through patch-wise processing. Each patch is a local cube of size q × q × B, and LRMR handles it as a matrix of size q² × B obtained by lexicographically arranging the spatial vectors of the patch cube in the row direction. The observation model is the same as in Section 3.1, and the restoration problem is formulated as follows:
min_{U_{i,j}, S_{i,j}} ‖V_{i,j} − U_{i,j} − S_{i,j}‖_F²  s.t.  rank(U_{i,j}) ≤ r, card(S_{i,j}) ≤ k,
where U_{i,j}, V_{i,j}, and S_{i,j} represent the patches of a restored HS image, the observation, and the sparse noise, respectively, centered at pixel (i, j). Here, ‖·‖_F is the Frobenius norm, rank(·) is the rank function, and card(·) is the cardinality function. The method evaluates the low rankness of the estimated HS image and the sparsity of the sparse noise by limiting the rank of U_{i,j} and the cardinality of S_{i,j} with r and k, respectively. Thanks to this design, LRMR achieves high-quality restoration, especially of spectral information. Meanwhile, since LRMR does not fully consider spatial correlation, its results tend to have spatial artifacts when an observation is corrupted by heavy noise and/or degradation. Besides, the rank and cardinality functions are nonconvex, so seeking the global optimal solution of Problem (7) is a troublesome task.

3.3. Combined Method

The methods of [9,12,13,14,15,16,17,18,19,20,21,22,30] combine TV-based and LRM-based approaches. Since they can evaluate multiple types of a priori knowledge, i.e., piecewise-smoothness and low rankness, they can restore a more desirable HS image than approaches using only TV-based or LRM-based regularization. Some methods [9,12,14,15,17,18,19,20] approximate the rank and cardinality functions by their convex surrogates; as a result, the restoration problems become convex and can be solved by optimization methods based on proximal splitting.
However, these methods have to handle multiple regularization terms and/or data-fidelity term(s) simultaneously in one objective function, so they must carefully control the hyperparameters balancing these terms. Since suitable hyperparameter values depend on both the regularizations and the noise intensity of an observation, i.e., the hyperparameters are interdependent, hyperparameter setting is often a troublesome task.

4. Proposed Method

4.1. Hybrid Spatio-Spectral Total Variation

We propose a new regularization technique for HS image restoration, named HSSTV. HSSTV simultaneously handles both direct local spatial differences and local spatio-spectral differences of an HS image, and is defined by
HSSTV(u) := ‖A_ω u‖_{1,p}  with  A_ω := [D D_b; ωD],
where [ · ; · ] denotes vertical concatenation, ‖·‖_{1,p} is the mixed ℓ_{1,p} norm, and ω ≥ 0. We assume p = 1 or 2, i.e., the ℓ1 norm (‖·‖_{1,1} = ‖·‖_1) or the mixed ℓ_{1,2} norm, respectively. We would like to mention that ℓ1-HSSTV (p = 1) can be seen as anisotropic HSSTV and ℓ_{1,2}-HSSTV (p = 2) as isotropic HSSTV.
In (8), D D_b u and Du correspond to local spatio-spectral and direct local spatial differences, respectively, as shown in Figure 1 (red lines). The weight ω adjusts the relative importance of direct spatial piecewise-smoothness against spatio-spectral piecewise-smoothness. HSSTV evaluates the two kinds of smoothness by taking the ℓp norm (p = 1 or 2) of the differences associated with each pixel and then summing over all pixels, i.e., calculating the ℓ1 norm; thus, it can be defined via the mixed ℓ_{1,p} norm. When we set ω = 0 and p = 1, HSSTV is reduced to SSTV in (6), meaning that HSSTV can be seen as a generalization of SSTV.
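The definition above can be sketched directly: stack the four difference images per voxel (vertical/horizontal spatio-spectral differences and the ω-weighted vertical/horizontal spatial differences) and take either the ℓ1 norm or the group ℓ2-then-ℓ1 norm. This is an assumed illustration with periodic boundaries:

```python
import numpy as np

def d_v(U): return np.roll(U, -1, axis=0) - U   # vertical differences
def d_h(U): return np.roll(U, -1, axis=1) - U   # horizontal differences
def d_b(U): return np.roll(U, -1, axis=2) - U   # spectral differences

def hsstv(U, omega=0.05, p=1):
    """HSSTV of an HS cube U (Eq. (8)); p = 1 anisotropic, p = 2 isotropic."""
    Ub = d_b(U)
    # 4 difference components per voxel, as in the grouping of Eq. (8).
    diffs = np.stack([d_v(Ub), d_h(Ub), omega * d_v(U), omega * d_h(U)])
    if p == 1:
        return np.abs(diffs).sum()                 # l1-HSSTV
    return np.sqrt((diffs ** 2).sum(axis=0)).sum() # l1,2-HSSTV

U = np.random.default_rng(0).random((8, 8, 6))
# With omega = 0 and p = 1, HSSTV reduces to SSTV.
assert np.isclose(hsstv(U, omega=0.0, p=1),
                  np.abs(d_v(d_b(U))).sum() + np.abs(d_h(d_b(U))).sum())
```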
As reviewed in Section 3, since SSTV only evaluates spatio-spectral piecewise-smoothness, it cannot remove noise that is similar across adjacent bands. The direct spatial differences in HSSTV help to remove such noise. Figure 2 shows HS images restored from an observation contaminated by similar noise in adjacent bands (the upper half area) and random noise (the lower half area). One can see that large noise remains in the upper half of the result by SSTV; in contrast, HSSTV effectively removes all the noise. However, since minimizing the direct spatial differences strongly promotes spatial piecewise-smoothness, HSSTV produces spatial oversmoothing when the weight ω is large. Thus, ω should be set to less than one, as demonstrated in Section 5.

4.2. HS Image Restoration by HSSTV

We consider restoring a desirable HS image ū ∈ R^{NB} from an observation v ∈ R^M (M ≤ NB) contaminated by Gaussian-sparse mixed noise. The observation model is given in the following form:
v = Φū + n + s,
where Φ ∈ R^{M×NB} is a matrix representing a linear observation process, e.g., random sampling, n ∈ R^M is Gaussian noise with standard deviation σ, and s ∈ R^M is sparse noise.
Based on the above model, we formulate HS image restoration using HSSTV as the following optimization problem:
min_{u,s} HSSTV(u)  s.t.  Φu + s ∈ B_{2,ε}^v := {x ∈ R^M | ‖v − x‖ ≤ ε},  s ∈ B_{1,η} := {x ∈ R^M | ‖x‖_1 ≤ η},  u ∈ [μ_min, μ_max]^{NB},
where B_{2,ε}^v is the v-centered ℓ2-norm ball with radius ε > 0, B_{1,η} is the 0-centered ℓ1-norm ball with radius η > 0, and [μ_min, μ_max]^{NB} is the dynamic range of an HS image (μ_min < μ_max). This method simultaneously estimates the desirable HS image u and the sparse noise s for noise-robust restoration. The first and second constraints measure data fidelities to the observation v and the sparse noise s, respectively. As mentioned in [13,16,26,31,32,33,34,35,36,37,38,39], such constraint-type data-fidelity enables us to translate the hyperparameter(s) balancing regularization against data-fidelity, like λ1 and λ2 in (3), into the upper bounds ε and η on the degree of data-fidelity, which can be set in a much easier manner.
Since all constraints are closed convex sets and HSSTV is a convex function, Problem (10) is a constrained convex optimization problem. In this paper, we adopt ADMM (see Section 2.3) for solving the problem. In what follows, we reformulate Problem (10) into Problem (1).
By using the indicator functions of the constraints, Problem (10) can be rewritten as
min_{u,s} ‖A_ω u‖_{1,p} + ι_{B_{2,ε}^v}(Φu + s) + ι_{B_{1,η}}(s) + ι_{[μ_min,μ_max]^{NB}}(u).
Note that from the definition of the indicator function, Problem (11) is exactly equal to Problem (10). By letting
f : R^{NB} × R^M → R : (u, s) ↦ 0,
g : R^{5NB+2M} → R ∪ {∞} : (z_1, z_2, z_3, z_4) ↦ ‖z_1‖_{1,p} + ι_{B_{2,ε}^v}(z_2) + ι_{B_{1,η}}(z_3) + ι_{[μ_min,μ_max]^{NB}}(z_4),
G : R^{NB} × R^M → R^{5NB+2M} : (u, s) ↦ (A_ω u, Φu + s, s, u),
Problem (11) is reduced to Problem (1). The resulting algorithm based on ADMM is summarized in Algorithm 1.
Algorithm 1: ADMM method for Problem (10)
The update of u and s in Algorithm 1 comes down to the following forms:
u^{(n+1)} = (A_ω^⊤A_ω + (1/2)Φ^⊤Φ + I)^{−1} RHS,
RHS = A_ω^⊤(z_1^{(n)} − d_1^{(n)}) + (1/2)Φ^⊤(z_2^{(n)} − d_2^{(n)}) − (1/2)Φ^⊤(z_3^{(n)} − d_3^{(n)}) + (z_4^{(n)} − d_4^{(n)}),
s^{(n+1)} = (1/2)(z_2^{(n)} − Φu^{(n+1)} − d_2^{(n)} + z_3^{(n)} − d_3^{(n)}),
Since the update of u and s in Algorithm 1 is a strictly convex quadratic minimization, these update forms are obtained by differentiation. Here, we should consider the structure of Φ because it affects the matrix inversion in (15). If Φ is a block-circulant-with-circulant-blocks (BCCB) matrix [40], we can leverage the 3D FFT to efficiently solve the inversion in Step 2 with difference operators having periodic boundaries, i.e., A_ω^⊤A_ω + (1/2)Φ^⊤Φ + I can be diagonalized by the 3D FFT matrix and its inverse. If Φ is a semiorthogonal matrix, i.e., ΦΦ^⊤ = αI (α > 0), we leave it to the update of z_2, which means that we replace ι_{B_{2,ε}^v} by ι_{B_{2,ε}^v} ∘ Φ in (13) and Φu by u in (14). This is because the proximity operator of ι_{B_{2,ε}^v} ∘ Φ can, in this case, be computed by using [41] (Table 1.1-x) as follows:
prox_{γ ι_{B_{2,ε}^v} ∘ Φ}(x) = x + α^{−1}Φ^⊤(P_{B_{2,ε}^v}(Φx) − Φx).
If Φ is a sparse matrix, we suggest using a preconditioned conjugate gradient method [42] to approximately solve the inversion, or applying primal-dual splitting methods [43,44,45] instead of ADMM (primal-dual splitting methods require no matrix inversion, but in general their convergence speed is slower than that of ADMM). Alternatively, randomized image restoration methods using stochastic proximal splitting algorithms [46,47,48,49] might be useful for reducing the computational cost.
For the update of z_1, the proximity operator reduces to simple soft-thresholding-type operations: for γ > 0 and for i = 1, …, 4NB, (i) in the case of p = 1,
[prox_{γ‖·‖_1}(x)]_i = sgn(x_i) max{|x_i| − γ, 0},
where sgn is the sign function, and (ii) in the case of p = 2 ,
[prox_{γ‖·‖_{1,2}}(x)]_i = max{1 − γ(Σ_{j=0}^{3} x_{ĩ+jNB}²)^{−1/2}, 0} x_i,
where ĩ := ((i − 1) mod NB) + 1.
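Both shrinkage operations can be sketched in a few lines; the group operator is written here for columns of four stacked differences, matching the grouping of j = 0, …, 3 above:

```python
import numpy as np

def prox_l1(x, gamma):
    """Element-wise soft-thresholding: prox of gamma*||.||_1."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def prox_l12(X, gamma):
    """Group soft-thresholding: prox of gamma*||.||_{1,2}.
    X has shape (G, n): each column is one group of G stacked differences."""
    norms = np.linalg.norm(X, axis=0, keepdims=True)
    scale = np.maximum(1.0 - gamma / np.maximum(norms, 1e-12), 0.0)
    return scale * X

print(prox_l1(np.array([-2.0, 0.3, 1.5]), 0.5))  # [-1.5  0.   1. ]
```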
The updates of z_2, z_3, and z_4 require the proximity operators of the indicator functions of B_{2,ε}^v, B_{1,η}, and [μ_min, μ_max]^{NB}, respectively, which equal the metric projections onto these sets (see Section 2.2). Specifically, the metric projection onto B_{2,ε}^v is given by
P_{B_{2,ε}^v}(x) = x if x ∈ B_{2,ε}^v, and v + ε(x − v)/‖x − v‖ otherwise,
that onto B_{1,η} is given element-wise by
P_{B_{1,η}}(x) = sgn(x) ⊙ max{|x| − θ, 0},
where θ = 0 if ‖x‖_1 ≤ η, and otherwise θ > 0 is the threshold chosen so that ‖P_{B_{1,η}}(x)‖_1 = η, which can be computed by sorting-based methods,
and that onto [ μ min , μ max ] N B is given, for i = 1 , , N B , by
[P_{[μ_min,μ_max]^{NB}}(x)]_i = min{max{x_i, μ_min}, μ_max}.
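The ℓ2-ball and dynamic-range projections can be sketched directly from the two formulas above (the ℓ1-ball projection, which needs the data-dependent threshold, is omitted here):

```python
import numpy as np

def proj_l2_ball(x, v, eps):
    """Projection onto the v-centered l2 ball of radius eps."""
    r = np.linalg.norm(x - v)
    return x if r <= eps else v + eps * (x - v) / r

def proj_range(x, mu_min=0.0, mu_max=1.0):
    """Projection onto the dynamic-range box [mu_min, mu_max]^{NB}."""
    return np.clip(x, mu_min, mu_max)

v = np.zeros(3)
y = proj_l2_ball(np.array([3.0, 0.0, 0.0]), v, 1.0)
print(y)  # [1. 0. 0.]
```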

5. Results

We demonstrate the advantages of the proposed method by applying it to two specific HS image restoration problems: denoising and CS reconstruction. In these experiments, we used 13 HS images taken from SpecTIR [50], MultiSpec [51], and GIC [52], with their dynamic ranges normalized into [0, 1]. All the experiments were conducted in MATLAB R2018a.
The proposed method was compared with HTV [6], SSAHTV [6], SSTV [7], and ASSTV [38]. For a fair comparison, we replaced HSSTV in Problem (10) with HTV, SSAHTV, SSTV, or ASSTV and solved the problem by ADMM. In the denoising experiments, we also compared our proposed method with LRMR [10], TV-regularized low-rank matrix factorization (LRTV) [13], and local low-rank matrix recovery and global spatial-spectral TV (LLRGTV) [16]. Since LRMR, LRTV, and LLRGTV are customized to the mixed noise removal problem, we cannot adopt them for CS reconstruction. We did not compare our proposed method with recent CNN-based HS image denoising methods [53,54] for the following reasons. CNN-based methods cannot be represented as explicit regularization functions and are fully customized to denoising tasks; in contrast, our proposed regularization can be used as a building block in various optimization-based HS image restoration methods. Moreover, CNN-based methods strongly depend on the training data used, which means that they cannot adapt to a wide range of noise intensities. Thus, the design concept of these methods differs from that of TV- and LRM-based approaches.
To quantitatively evaluate restoration performance, we used the peak signal-to-noise ratio (PSNR) [dB] and the structural similarity (SSIM) [55] indices between a groundtruth HS image ū and a restored HS image u. PSNR is defined by 10 log10(MAX_I²/MSE), where MAX_I is the maximum value of the dynamic range of the HS images, and MSE is the mean squared error between the groundtruth and restored HS images. The higher the PSNR, the more similar the two images. SSIM is an image quality assessment index based on the human visual system, which is defined as follows:
SSIM(u, ū) = (1/P) Σ_{i=1}^{P} SSIM_i(u, ū),
SSIM_i(u, ū) = ((2μ_{u_i}μ_{ū_i} + C_1)(2σ_{u_iū_i} + C_2)) / ((μ_{u_i}² + μ_{ū_i}² + C_1)(σ_{u_i}² + σ_{ū_i}² + C_2)),
where u_i and ū_i are the ith pixel-centered local patches of a restored HS image and a groundtruth HS image, respectively, P is the number of patches, μ_{u_i} and μ_{ū_i} are the average values of the local patches of the restored and groundtruth HS images, respectively, σ_{u_i}² and σ_{ū_i}² represent the variances of u_i and ū_i, respectively, and σ_{u_iū_i} denotes the covariance between u_i and ū_i. Moreover, C_1 and C_2 are two constants that avoid numerical instability when either μ_{u_i}² + μ_{ū_i}² or σ_{u_i}² + σ_{ū_i}² is very close to zero. SSIM gives a normalized score between zero and one, where the maximum value means that u equals ū.
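The PSNR index above is a one-liner; the following sketch assumes MAX_I = 1, matching the [0, 1] normalization used in the experiments:

```python
import numpy as np

def psnr(u, u_ref, max_i=1.0):
    """PSNR [dB] = 10*log10(MAX_I^2 / MSE)."""
    mse = np.mean((u - u_ref) ** 2)
    return 10.0 * np.log10(max_i ** 2 / mse)

rng = np.random.default_rng(0)
u_ref = rng.random((32, 32, 8))
u = np.clip(u_ref + 0.01 * rng.standard_normal(u_ref.shape), 0, 1)
print(round(psnr(u, u_ref), 1))  # roughly 40 dB for noise std 0.01
```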
We set the maximum iteration number, the stepsize γ, and the stopping criterion of ADMM to 10,000, 0.05, and ‖u^{(n)} − u^{(n+1)}‖ < 0.01, respectively.

5.1. Denoising

First, we experimented on Gaussian-sparse mixed noise removal for HS images. The observed HS images included additive white Gaussian noise n with standard deviation σ and sparse noise s. In these experiments, we assumed that the sparse noise consists of salt-and-pepper noise and vertical and horizontal line noise, with the ratios of these noise types over all pixels being sp, lv, and lh, respectively. We generated noisy HS images by adding two types of mixed noise to the groundtruth HS images: (i) (σ, sp, lv = lh) = (0.05, 0.04, 0.04) and (ii) (0.1, 0.05, 0.05). In the denoising case, Φ = I in (9), and the radii ε and η in Problem (10) were set to 0.83√(NB(1 − (sp(1 − lv − lh) + lv + lh − lv·lh))σ²) and NB(0.45·sp + (lv + lh)·v_ave − lv·lh·v_ave), respectively, where v_ave is the average value of the observed image. Table 2 shows the parameter settings for ASSTV, LRMR, LRTV, LLRGTV, and the proposed method. We set these parameters to achieve the best performance for each method.
In Table 3, we show the PSNR and SSIM of the HS images denoised by each method for the two noise intensities and all HS images. The proposed method outperforms HTV, SSAHTV, SSTV, ASSTV, and LRMR. Besides, the PSNR and SSIM of the results by the proposed method are higher than those by LLRGTV in most situations. Meanwhile, LRTV outperforms the proposed method in some cases, presumably because LRTV combines TV-based and LRM-based regularization techniques. We would like to mention that the proposed method outperforms LRTV in over half of the situations, even though it uses only TV-based regularization.
Figure 3 shows the resulting images on Salinas (the noise level (i), top) and PaviaU (the noise level (ii), bottom), with their PSNR (left) and SSIM (right). Here, we depicted these HS images as RGB images (R = 8th, G = 16th, and B = 32nd bands). One can see that the results by HTV, SSAHTV, and ASSTV lose spatial details, and noise remains in the results by SSTV, LRMR, and LLRGTV. Besides, since the restored images by SSTV and LRTV lose color with large noise intensity, SSTV and LRTV change spectral variation. In contrast, the proposed method can restore HS images preserving both details and spectral information without artifacts.
Figure 4 plots the band-wise PSNR and SSIM (left) and the spatial and spectral responses (right) of the denoised Suwannee HS image in the case of noise level (ii). The graphs of band-wise PSNR and SSIM show that the proposed method achieves higher-quality restoration than HTV, SSAHTV, and SSTV for all bands, and than ASSTV and LRMR for most bands. Besides, even though the proposed method only utilizes HSSTV, its results outperform those by LRTV for some bands in the SSIM case and those by LLRGTV for many bands. Graph (c) plots the spatial response of the 243rd row of the 30th band; similarly, graph (d) plots the spectral response at the 243rd row and 107th column. We can see that the spatial responses of the results by HTV and SSAHTV are too smooth compared with the groundtruth, whereas undesirable variations exist in the spatial responses of the results by SSTV, LRMR, and LLRGTV. In contrast, ASSTV, LRTV, and the proposed method restore responses similar to the groundtruth. In graph (d), one can see that (i) HTV, SSAHTV, and LRMR produce spectral artifacts, (ii) the shape of the spectral response of the result by SSTV is similar to that of the groundtruth, but its mean value is larger, (iii) the spectral response of the result by ASSTV is too smooth and differs from the groundtruth, and (iv) LRTV, LLRGTV, and the proposed method restore spectral responses very similar to the groundtruth.

5.2. Real Noise Removal

We also examined HTV, SSAHTV, SSTV, ASSTV, LRMR, LRTV, LLRGTV, and the proposed method on an HS image with real noise. We selected 16 noisy bands from Suwannee and used them as a real observed HS image v. To maximize the performance of each method, we searched for suitable values of σ, sp, lv, and lh, and set the parameters as in Section 5.1. Specifically, we set ε = 3.1893 and η = 31893 for all TVs. Besides, the parameters τ_v, τ_h, and τ_b in ASSTV were set to 1, 1, and 3, respectively; r and k in LRMR were set to 3 and 0.1204, respectively; r and τ in LRTV were set to 2 and 0.008; and r, λ, and τ in LLRGTV were set to 2, 0.1, and 0.01.
Figure 5 shows the results, where the HS images are depicted as RGB images (R = 2nd, G = 6th, and B = 13th bands). The results by HTV and SSAHTV exhibit spatial oversmoothing, and SSTV, ASSTV, LRMR, and LLRGTV produce spatial artifacts. Besides, one can see that the results by LRMR and LRTV have spectral artifacts. On the other hand, the proposed method can restore a detail-preserved HS image without artifacts.

5.3. Compressed Sensing Reconstruction

We experimented on compressed sensing (CS) reconstruction [56,57]. CS theory states that high-dimensional signal information can be reconstructed from incomplete random measurements by exploiting sparsity in some domain, e.g., the gradient domain (TV). In general, HS imaging captures an HS image by scanning 1D spatial and spectral information because it senses spectral information by dispersing the incident light; capturing moving objects is therefore very difficult in HS imaging. To overcome this drawback, one-shot HS imaging based on CS has been actively studied [4,5].
In this experiment, we assume that $\boldsymbol{\Phi} \in \mathbb{R}^{M \times NB}$ in (9) is a random sampling matrix with the sampling rate m ($0 < m < 1$ and $M = mNB$). Here, since $\boldsymbol{\Phi}$ is a semiorthogonal matrix, we can efficiently solve the problem, as explained in Section 4.2. Moreover, since the main objective of this experiment is to verify the CS reconstruction performance of HSSTV, we assume that the observations are contaminated only by additive white Gaussian noise n with noise intensity σ = 0.1.
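As a concrete illustration of this observation model, the sketch below simulates random sampling with a semiorthogonal Φ. This is a toy example with our own variable names and sizes, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(0)
N, B = 64 * 64, 32          # number of pixels and bands (toy sizes)
m, sigma = 0.2, 0.1         # sampling rate and Gaussian noise intensity

u_bar = rng.random(N * B)   # stand-in for the vectorized groundtruth HS image

# Random sampling: keep M = round(m*N*B) entries chosen uniformly at random.
# As a 0/1 selection matrix, Phi satisfies Phi Phi^T = I (semiorthogonality).
M = round(m * N * B)
idx = rng.choice(N * B, size=M, replace=False)
Phi = lambda x: x[idx]                                           # Phi u
Phi_T = lambda y: np.bincount(idx, weights=y, minlength=N * B)   # Phi^T y

v = Phi(u_bar) + sigma * rng.standard_normal(M)   # observation v = Phi u + n

assert np.allclose(Phi(Phi_T(v)), v)              # checks Phi Phi^T = I
```

Because Φ selects distinct entries, ΦᵀΦ is a diagonal 0/1 matrix, which is what makes the u-update of the ADMM algorithm inexpensive for this measurement model.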
We formulate the CS reconstruction problem as follows:
$$
\min_{\mathbf{u}}\ \mathrm{HSSTV}(\mathbf{u}) \quad \text{s.t.}\quad \boldsymbol{\Phi}\mathbf{u}\in B_{2,\varepsilon}^{\mathbf{v}},\quad \mathbf{u}\in[\mu_{\min},\mu_{\max}]^{NB}.
$$
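In the ADMM solver, these two constraints are enforced through their metric projections: onto the v-centered ℓ2-ball and onto the dynamic-range box. Both have well-known closed forms; the sketch below is a generic illustration with our own function and variable names:

```python
import numpy as np

def proj_l2_ball(x, center, eps):
    """Projection onto the l2-ball {x : ||x - center||_2 <= eps}."""
    d = x - center
    norm = np.linalg.norm(d)
    return x if norm <= eps else center + (eps / norm) * d

def proj_box(x, lo, hi):
    """Projection onto the box [lo, hi]^n (dynamic-range constraint)."""
    return np.clip(x, lo, hi)

# Quick checks on small vectors.
v = np.array([1.0, 2.0, 2.0])
p = proj_l2_ball(np.array([4.0, 6.0, 2.0]), v, 2.5)
assert np.isclose(np.linalg.norm(p - v), 2.5)      # lands on the ball's surface
assert np.all(proj_box(np.array([-0.2, 0.5, 1.3]), 0.0, 1.0) == [0.0, 0.5, 1.0])
```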
This problem is derived by removing the second constraint and s from Problem (10). Therefore, we can solve it by removing s, z₃, and d₃ from Algorithm 1 and replacing z₄ and d₄ with z₃ and d₃, respectively. As in Section 4.2, the update of u is a strictly-convex quadratic minimization, and so it reduces to
$$
\mathbf{u}^{(n+1)} = \left(\mathbf{A}_{\omega}^{\top}\mathbf{A}_{\omega} + \boldsymbol{\Phi}^{\top}\boldsymbol{\Phi} + \mathbf{I}\right)^{-1}\mathrm{RHS},\qquad
\mathrm{RHS} = \mathbf{A}_{\omega}^{\top}\bigl(\mathbf{z}_1^{(n)} - \mathbf{d}_1^{(n)}\bigr) + \boldsymbol{\Phi}^{\top}\bigl(\mathbf{z}_2^{(n)} - \mathbf{d}_2^{(n)}\bigr) + \bigl(\mathbf{z}_3^{(n)} - \mathbf{d}_3^{(n)}\bigr).
$$
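The inverse above need not be formed explicitly. One generic option, shown below with small stand-in operators for A_ω and Φ (a toy example of ours, not the authors' implementation, which exploits the semiorthogonal structure of Φ), is to solve the symmetric positive-definite system by conjugate gradients:

```python
import numpy as np

def cg_solve(Q_mv, b, iters=500, tol=1e-10):
    """Plain conjugate gradients for Q x = b, with Q given as a mat-vec."""
    x = np.zeros_like(b)
    r = b - Q_mv(x)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Qp = Q_mv(p)
        alpha = rs / (p @ Qp)
        x += alpha * p
        r -= alpha * Qp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 200
rng = np.random.default_rng(1)
D = np.eye(n, k=1) - np.eye(n)          # 1-D forward difference, stand-in for A_omega
idx = rng.choice(n, size=n // 5, replace=False)
P = np.eye(n)[idx]                      # stand-in sampling matrix Phi (P P^T = I)

Q = D.T @ D + P.T @ P + np.eye(n)       # system matrix of the u-update (SPD)
rhs = rng.standard_normal(n)            # stand-in for the right-hand side above

u = cg_solve(lambda x: Q @ x, rhs)
assert np.allclose(Q @ u, rhs, atol=1e-6)
```

The identity term guarantees positive definiteness, so CG converges for any right-hand side; for an actual random sampling Φ, the diagonal structure of ΦᵀΦ can be exploited for even cheaper solves.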
We set m = 0.2 or 0.4 and $\varepsilon = \sqrt{mNB\sigma^2}$. For ASSTV, we set the parameters ( τ v , τ h , τ b ) = ( 1 , 1 , 0.5 ), which experimentally achieve the best performance.
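This choice of ε follows the usual rule of thumb that a length-M white Gaussian noise vector with standard deviation σ has squared ℓ2-norm concentrating around Mσ². A quick numerical check, with toy dimensions of our own choosing:

```python
import numpy as np

N, B = 256 * 256, 32            # spatial size and number of bands (example sizes)
m, sigma = 0.2, 0.1             # sampling rate and noise intensity

M = round(m * N * B)            # number of measurements
eps = np.sqrt(M * sigma ** 2)   # epsilon = sqrt(m*N*B*sigma^2)

# Sanity check: the norm of an actual noise draw concentrates tightly around eps.
rng = np.random.default_rng(0)
n = sigma * rng.standard_normal(M)
assert abs(np.linalg.norm(n) - eps) / eps < 0.01
```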
Table 4 shows the PSNR and SSIM of the reconstructed HS images. For all m and all HS images, both the PSNR and SSIM of the proposed method are higher than those of HTV, SSAHTV, SSTV, and ASSTV in almost all cases.
Figure 6 shows the reconstruction results on KSC and Reno with the random sampling rates m = 0.4 and 0.2, respectively. Here, the HS images are depicted as RGB images (R = 8th, G = 16th, and B = 32nd bands). One can see that (i) HTV and SSAHTV cause spatial oversmoothing, (ii) SSTV produces artifacts and spectral distortion, which appear as colors different from those of the groundtruth HS images, and (iii) the results by ASSTV suffer from both spatial oversmoothing and spectral distortion. In contrast, the proposed method reconstructs meaningful details without artifacts or spectral distortion.
Figure 7 plots band-wise PSNR and SSIM (left) and spatial and spectral responses (right) (Suwannee, m = 0.2). According to the band-wise PSNR and SSIM, the proposed method achieves higher-quality reconstruction than HTV, SSAHTV, SSTV, and ASSTV for all bands. Graphs (c) and (d) plot the spatial and spectral responses at the same position as in Section 5.1. Graph (c) shows that (i) the spatial responses of the results by HTV, SSAHTV, and ASSTV are oversmoothed, (ii) SSTV produces undesirable variations, and (iii) the spatial response reconstructed by the proposed method is similar to the groundtruth one. In graph (d), HTV and SSAHTV generate undesirable variations, and ASSTV causes oversmoothing. Thanks to the evaluation of spatio-spectral piecewise-smoothness, SSTV reconstructs a spectral response similar to the groundtruth one, although its mean value is larger than the groundtruth value. The proposed method achieves the spectral response most similar to the groundtruth one among all the TVs.

6. Discussion

In this section, we discuss the parameters of our proposed method.

6.1. The Impact of The Weight ω

The weight ω in (8) balances spatio-spectral piecewise-smoothness and direct spatial piecewise-smoothness. We verified the impact of ω on the mixed noise removal and CS reconstruction experiments, varying ω from 0.01 to 0.2 in increments of 0.01. The other settings were the same as in Section 5.1 and Section 5.3.
Figure 8 and Figure 9 plot the PSNR and SSIM of the results by the proposed method versus ω for the mixed noise removal and CS reconstruction experiments, respectively, where the PSNR and SSIM values are averaged over the 13 HS images. One can see that ω ∈ [0.03, 0.07] is a good choice for mixed noise removal, and ω ∈ [0.05, 0.1] is a good choice in most CS cases. The results show that the suitable range of ω in CS reconstruction is almost the same as in mixed noise removal. Whereas ASSTV and LRTV require re-adjusting the weight τb and the hyperparameter τ, respectively, for each noise intensity, the suitable value of ω in HSSTV is robust to the noise level.
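Sweeping ω as described amounts to a plain grid search over restorations. A minimal sketch follows; here `restore` is a placeholder denoiser standing in for the HSSTV-regularized solver (not the proposed method), and `psnr` follows the standard definition:

```python
import numpy as np

def psnr(ref, est, peak=1.0):
    """Peak signal-to-noise ratio in dB between a reference and an estimate."""
    mse = np.mean((ref - est) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

def restore(v, omega):
    # Placeholder denoiser: blends the observation with its band-wise means,
    # with omega controlling the smoothing strength (not the paper's solver).
    return (1 - omega) * v + omega * v.mean(axis=(0, 1), keepdims=True)

rng = np.random.default_rng(0)
u_bar = rng.random((32, 32, 8))                       # toy groundtruth HS cube
v = np.clip(u_bar + 0.1 * rng.standard_normal(u_bar.shape), 0.0, 1.0)

omegas = np.arange(0.01, 0.21, 0.01)                  # the sweep range in the paper
scores = {w: psnr(u_bar, restore(v, w)) for w in omegas}
best = max(scores, key=scores.get)                    # omega with the highest PSNR
```

In the actual experiments, the scores are additionally averaged over the 13 HS images before choosing the suitable range of ω.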

6.2. The Sensitivity of The Parameters ε and η

To verify the sensitivity of the parameters ε and η in (10), we conducted additional mixed noise removal experiments with various values of ε and η. Specifically, we set $\varepsilon = 0.83\,\alpha\sqrt{NB\bigl(1 - (s_p(1 - l_v l_h) + l_v + l_h - l_v l_h)\bigr)\sigma^2}$ and $\eta = \beta NB\bigl(0.45\,s_p + (l_v + l_h - l_v l_h)\,v_{\mathrm{ave}}\bigr)$, which are hand-optimized values of the parameters, and changed α and β from 0.9 to 1.1 in increments of 0.02 (the DC and KSC images, the noise level (ii)). Figure 10 plots the PSNR and SSIM of the results by HTV, SSAHTV, SSTV, ASSTV, and the proposed method versus α or β. For HTV, SSAHTV, ASSTV, and the proposed method, the graphs show that the suitable values of α and β do not vary significantly between the two images, implying that the settings of ε and η are insensitive to both the regularization technique and the observed image. In the SSTV case, the shapes of the plots differ between KSC and DC. This is because SSTV converges for all parameter settings in the DC case, while in the KSC case it does not converge when α, β > 1.

7. Conclusions

We have proposed a new constrained optimization approach to HS image restoration. Our method is formulated as a convex optimization problem in which we utilize a novel regularization technique named HSSTV and incorporate data-fidelity as hard constraints. HSSTV evaluates both direct spatial piecewise-smoothness and spatio-spectral piecewise-smoothness, giving it a strong HS image restoration ability. Thanks to the design of the constraint-type data-fidelity, the hyperparameters that balance regularization and data-fidelity are translated into upper bounds on the degree of data-fidelity, which can be set independently and much more easily. To solve the proposed problem, we developed an efficient algorithm based on ADMM. Experimental results on mixed noise removal, real noise removal, and CS reconstruction demonstrate the advantages of the proposed method over various existing HS image restoration methods.

Author Contributions

Conceptualization, S.T., S.O., and I.K.; methodology, S.T. and S.O.; software, S.T.; validation, S.T.; formal analysis, S.T.; investigation, S.T.; writing—original draft, S.T.; writing—review and editing, S.O. and I.K.; supervision, S.O., and I.K.; project administration, S.T., S.O., and I.K.; funding acquisition, S.T., S.O., and I.K. All authors read and agreed to the published version of the manuscript.

Funding

This work was supported in part by JST CREST under Grant JPMJCR1662 and JPMJCR1666, and in part by JSPS KAKENHI under Grant 18J20290, 18H05413, and 20H02145.

Conflicts of Interest

The authors declare that there is no conflict of interest.

References

  1. Chang, C.I. Hyperspectral Imaging: Techniques for Spectral Detection and Classification; Springer Science & Business Media: Berlin/Heidelberg, Germany, 2003; Volume 1.
  2. Plaza, A.; Benediktsson, J.A.; Boardman, J.W.; Brazile, J.; Bruzzone, L.; Camps-Valls, G.; Chanussot, J.; Fauvel, M.; Gamba, P.; Gualtieri, A.; et al. Recent advances in techniques for hyperspectral image processing. Remote Sens. Environ. 2009, 113, S110–S122.
  3. Rasti, B.; Scheunders, P.; Ghamisi, P.; Licciardi, G.; Chanussot, J. Noise reduction in hyperspectral imagery: Overview and application. Remote Sens. 2018, 10, 482.
  4. Willett, R.M.; Duarte, M.F.; Davenport, M.A.; Baraniuk, R.G. Sparsity and structure in hyperspectral imaging: Sensing, reconstruction, and target detection. IEEE Signal Process. Mag. 2014, 31, 116–126.
  5. Arce, G.R.; Brady, D.J.; Carin, L.; Arguello, H.; Kittle, D.S. Compressive coded aperture spectral imaging: An introduction. IEEE Signal Process. Mag. 2014, 31, 105–115.
  6. Yuan, Q.; Zhang, L.; Shen, H. Hyperspectral image denoising employing a spectral–spatial adaptive total variation model. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3660–3677.
  7. Aggarwal, H.K.; Majumdar, A. Hyperspectral Image Denoising Using Spatio-Spectral Total Variation. IEEE Geosci. Remote Sens. Lett. 2016, 13, 442–446.
  8. Chang, Y.; Yan, L.; Fang, H.; Luo, C. Anisotropic spectral-spatial total variation model for multispectral remote sensing image destriping. IEEE Trans. Image Process. 2015, 24, 1852–1866.
  9. Liu, H.; Sun, P.; Du, Q.; Wu, Z.; Wei, Z. Hyperspectral Image Restoration Based on Low-Rank Recovery with a Local Neighborhood Weighted Spectral-Spatial Total Variation Model. IEEE Trans. Geosci. Remote Sens. 2018, 57, 1–14.
  10. Zhang, H.; He, W.; Zhang, L.; Shen, H.; Yuan, Q. Hyperspectral image restoration using low-rank matrix recovery. IEEE Trans. Geosci. Remote Sens. 2014, 52, 4729–4743.
  11. Sun, L.; Ma, C.; Chen, Y.; Zheng, Y.; Shim, H.J.; Wu, Z.; Jeon, B. Low rank component induced spatial-spectral kernel method for hyperspectral image classification. IEEE Trans. Circuits Syst. Video Technol. 2019, 4133–4148.
  12. Li, H.; Sun, P.; Liu, H.; Wu, Z.; Wei, Z. Non-Convex Low-Rank Approximation for Hyperspectral Image Recovery with Weighted Total Variation Regularization. In Proceedings of the 2018 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Valencia, Spain, 22–27 July 2018; pp. 2733–2736.
  13. He, W.; Zhang, H.; Zhang, L.; Shen, H. Total-variation-regularized low-rank matrix factorization for hyperspectral image restoration. IEEE Trans. Geosci. Remote Sens. 2016, 54, 178–188.
  14. Aggarwal, H.K.; Majumdar, A. Hyperspectral unmixing in the presence of mixed noise using joint-sparsity and total variation. IEEE Sel. Top. Appl. Earth Obs. Remote Sens. 2016, 9, 4257–4266.
  15. Sun, L.; Wu, F.; Zhan, T.; Liu, W.; Wang, J.; Jeon, B. Weighted Nonlocal Low-Rank Tensor Decomposition Method for Sparse Unmixing of Hyperspectral Images. IEEE Sel. Top. Appl. Earth Obs. Remote Sens. 2020, 13, 1174–1188.
  16. He, W.; Zhang, H.; Shen, H.; Zhang, L. Hyperspectral image denoising using local low-rank matrix recovery and global spatial–spectral total variation. IEEE Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 713–729.
  17. Kong, X.; Zhao, Y.; Xue, J.; Chan, J.C.; Ren, Z.; Huang, H.; Zang, J. Hyperspectral Image Denoising Based on Nonlocal Low-Rank and TV Regularization. Remote Sens. 2020, 12, 1956.
  18. Cao, W.; Wang, K.; Han, G.; Yao, J.; Cichocki, A. A robust PCA approach with noise structure learning and spatial–spectral low-rank modeling for hyperspectral image restoration. IEEE Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 3863–3879.
  19. Wang, Y.; Peng, J.; Zhao, Q.; Leung, Y.; Zhao, X.; Meng, D. Hyperspectral image restoration via total variation regularized low-rank tensor decomposition. IEEE Sel. Top. Appl. Earth Obs. Remote Sens. 2017, 11, 1227–1243.
  20. Wang, Q.; Wu, Z.; Jin, J.; Wang, T.; Shen, Y. Low rank constraint and spatial spectral total variation for hyperspectral image mixed denoising. Signal Process. 2018, 142, 11–26.
  21. Sun, L.; Zhan, T.; Wu, Z.; Xiao, L.; Jeon, B. Hyperspectral mixed denoising via spectral difference-induced total variation and low-rank approximation. Remote Sens. 2018, 10, 1956.
  22. Ince, T. Hyperspectral Image Denoising Using Group Low-Rank and Spatial-Spectral Total Variation. IEEE Access 2019, 7, 52095–52109.
  23. Gabay, D.; Mercier, B. A dual algorithm for the solution of nonlinear variational problems via finite elements approximations. Comput. Math. Appl. 1976, 2, 17–40.
  24. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318.
  25. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122.
  26. Takeyama, S.; Ono, S.; Kumazawa, I. Hyperspectral Image Restoration by Hybrid Spatio-Spectral Total Variation. In Proceedings of the 2017 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), New Orleans, LA, USA, 5–9 March 2017; pp. 4586–4590.
  27. Takeyama, S.; Ono, S.; Kumazawa, I. Mixed Noise Removal for Hyperspectral Images Using Hybrid Spatio-Spectral Total Variation. In Proceedings of the 2019 IEEE International Conference on Image Processing (ICIP), Taipei, Taiwan, 22–25 September 2019; pp. 3128–3132.
  28. Moreau, J.J. Fonctions convexes duales et points proximaux dans un espace hilbertien. C. R. Acad. Sci. Paris Ser. A Math. 1962, 255, 2897–2899.
  29. Bresson, X.; Chan, T.F. Fast dual minimization of the vectorial total variation norm and applications to color image processing. Inverse Probl. Imag. 2008, 2, 455–484.
  30. Sun, L.; He, C.; Zheng, Y.; Tang, S. SLRL4D: Joint Restoration of Subspace Low-Rank Learning and Non-Local 4-D Transform Filtering for Hyperspectral Image. Remote Sens. 2020, 12, 2979.
  31. Afonso, M.; Bioucas-Dias, J.; Figueiredo, M. An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Trans. Image Process. 2011, 20, 681–695.
  32. Chierchia, G.; Pustelnik, N.; Pesquet, J.C.; Pesquet-Popescu, B. Epigraphical projection and proximal tools for solving constrained convex optimization problems. Signal Image Video Process. 2015, 9, 1737–1749.
  33. Ono, S.; Yamada, I. Signal recovery with certain involved convex data-fidelity constraints. IEEE Trans. Signal Process. 2015, 63, 6149–6163.
  34. Xie, Y.; Qu, Y.; Tao, D.; Wu, W.; Yuan, Q.; Zhang, W. Hyperspectral image restoration via iteratively regularized weighted schatten p-norm minimization. IEEE Trans. Geosci. Remote Sens. 2016, 54, 4642–4659.
  35. Ono, S. L0 gradient projection. IEEE Trans. Image Process. 2017, 26, 1554–1564.
  36. Takeyama, S.; Ono, S.; Kumazawa, I. Robust and effective hyperspectral pansharpening using spatio-spectral total variation. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1603–1607.
  37. Takeyama, S.; Ono, S.; Kumazawa, I. Hyperspectral Pansharpening Using Noisy Panchromatic Image. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Honolulu, HI, USA, 12–15 November 2018; pp. 880–885.
  38. Chan, S.H.; Khoshabeh, R.; Gibson, K.B.; Gill, P.E.; Nguyen, T.Q. An augmented Lagrangian method for total variation video restoration. IEEE Trans. Image Process. 2011, 20, 3097–3111.
  39. Takeyama, S.; Ono, S.; Kumazawa, I. Hyperspectral and Multispectral Data Fusion by a Regularization Considering. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 2152–2156.
  40. Hansen, P.C.; Nagy, J.G.; O’Leary, D.P. Deblurring Images: Matrices, Spectra, and Filtering; SIAM: Philadelphia, PA, USA, 2006.
  41. Combettes, P.L.; Pesquet, J.C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: Berlin/Heidelberg, Germany, 2011; pp. 185–212.
  42. Golub, G.H.; Loan, C.F.V. Matrix Computations, 4th ed.; Johns Hopkins University Press: Baltimore, MD, USA, 2012.
  43. Chambolle, A.; Pock, T. A first-order primal-dual algorithm for convex problems with applications to imaging. J. Math. Imaging Vis. 2010, 40, 120–145.
  44. Combettes, P.L.; Pesquet, J.C. Primal-dual splitting algorithm for solving inclusions with mixtures of composite, Lipschitzian, and parallel-sum type monotone operators. Set-Valued Var. Anal. 2012, 20, 307–330.
  45. Condat, L. A primal-dual splitting method for convex optimization involving Lipschitzian, proximable and linear composite terms. J. Optim. Theory Appl. 2013, 158, 460–479.
  46. Ono, S.; Yamagishi, M.; Miyata, T.; Kumazawa, I. Image restoration using a stochastic variant of the alternating direction method of multipliers. In Proceedings of the 2016 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Shanghai, China, 20–25 March 2016.
  47. Chambolle, A.; Ehrhardt, M.J.; Richtárik, P.; Schonlieb, C.B. Stochastic primal-dual hybrid gradient algorithm with arbitrary sampling and imaging applications. SIAM J. Optim. 2018, 28, 2783–2808.
  48. Combettes, P.L.; Pesquet, J.C. Stochastic forward-backward and primal-dual approximation algorithms with application to online image restoration. In Proceedings of the 2016 24th European Signal Processing Conference (EUSIPCO), Budapest, Hungary, 29 August–2 September 2016; pp. 1813–1817.
  49. Ono, S. Efficient constrained signal reconstruction by randomized epigraphical projection. In Proceedings of the 2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Brighton, UK, 12–17 May 2019; pp. 4993–4997.
  50. SpecTIR. Available online: http://www.spectir.com/free-data-samples/ (accessed on 4 July 2016).
  51. MultiSpec. Available online: https://engineering.purdue.edu/biehl/MultiSpec (accessed on 4 July 2016).
  52. GIC. Available online: http://www.ehu.eus/ccwintco/index.php?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 4 July 2016).
  53. Liu, W.; Lee, J. A 3-D Atrous Convolution Neural Network for Hyperspectral Image Denoising. IEEE Trans. Geosci. Remote Sens. 2019.
  54. Gou, Y.; Li, B.; Liu, Z.; Yang, S.; Peng, X. CLEARER: Multi-Scale Neural Architecture Search for Image Restoration. In Proceedings of the NeurIPS 2020: Thirty-Fourth Conference on Neural Information Processing Systems, Vancouver, BC, Canada, 6–12 December 2020.
  55. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612.
  56. Baraniuk, R.G. Compressive sensing. IEEE Signal Process. Mag. 2007, 24, 118–121.
  57. Candès, E.; Wakin, M. An introduction to compressive sampling. IEEE Signal Process. Mag. 2008, 25, 21–30.
Figure 1. Calculation of local differences in SSTV, ASSTV, and our HSSTV. SSTV evaluates the 1 norm of spatio-spectral differences (yellow line). ASSTV evaluates the 1 norm of direct spatial and spectral differences (blue line). HSSTV evaluates the mixed 1 , p norm of direct spatial and spatio-spectral differences (red line).
Figure 2. Restored HS images from an observation contaminated by similar noise in adjacent bands (the upper half area) and random noise (the lower half).
Figure 3. Resulting HS images with their PSNR (left) and SSIM (right) in the mixed noise removal experiment (top: Salinas, the noise level (i), bottom: PaviaU, the noise level (ii)).
Figure 4. Band-wise PSNR and SSIM and spatial and spectral responses in the mixed noise removal experiment (Suwannee).
Figure 5. The denoising results on the real noise removal experiments.
Figure 6. Resulting HS images with their PSNR (left) and SSIM (right) on the compressed sensing (CS) reconstruction experiment (top: KSC, m = 0.4 , bottom: Reno, m = 0.2 ).
Figure 7. Band-wise PSNR and SSIM and spatial and spectral responses on the CS reconstruction experiment (Suwannee).
Figure 8. PSNR or SSIM versus ω in (8) in the mixed noise removal experiment.
Figure 9. PSNR or SSIM versus ω in (8) on the CS reconstruction experiment.
Figure 10. PSNR or SSIM versus α or β on the mixed noise removal experiment (the noise level (ii), top: DC, bottom: KSC).
Table 1. The features of existing methods for hyperspectral (HS) image restoration.

| Method | Spatial Correlation | Spectral Correlation | Convexity | Hyperparameters |
|---|---|---|---|---|
| HTV [6] | ✓ | × | convex | interdependent |
| SSAHTV [6] | ✓ | ✓ | convex | interdependent |
| SSTV [7] | ✓ | ✓ | convex | interdependent |
| ASSTV [8] | ✓ | ✓ | convex | interdependent |
| LRM [10,11] | × | ✓ | nonconvex | independent |
| LNWTV + LRM [9,12] | ✓ | ✓ | convex | interdependent |
| HTV + LRM [13] | ✓ | ✓ | nonconvex | interdependent |
| HTV + LRM [14,15] | ✓ | ✓ | convex | interdependent |
| ASSTV + LRM [16,17] | ✓ | ✓ | nonconvex | interdependent |
| SSTV + LRM [18,19,20] | ✓ | ✓ | convex | interdependent |
| SSTV + LRM [21,22] | ✓ | ✓ | nonconvex | interdependent |
| proposed | ✓ | ✓ | convex | independent |
Table 2. Parameter settings for ASSTV, LRMR, LRTV, LLRGTV, and the proposed method.

| Method | Parameter | Noise Level (i): (σ, sp, lv = lh) = (0.05, 0.04, 0.04) | Noise Level (ii): (0.1, 0.05, 0.05) |
|---|---|---|---|
| ASSTV | τv = τh | 1 | 1 |
| | τb | 3 | 2 |
| LRMR | r | 3 | 3 |
| | k | sp + lv + lh − lv lh (the rate of sparse noise) | sp + lv + lh − lv lh |
| LRTV | r | 2 | 2 |
| | τ | 0.005 | 0.008 |
| LLRGTV | r | 2 | 2 |
| | λ | 0.1 | 0.1 |
| | τ | 0.01 | 0.01 |
| proposed | ω | 0.04 | 0.04 |
Table 3. Peak signal-to-noise ratio (PSNR) (top) and structural similarity (SSIM) (bottom) in the mixed noise removal experiments.

PSNR:

| HS Image | Noise Level | HTV | SSAHTV | SSTV | ASSTV | LRMR | LRTV | LLRGTV | Proposed (p = 1) | Proposed (p = 2) |
|---|---|---|---|---|---|---|---|---|---|---|
| Beltsville (256 × 256 × 32) | (i) | 29.43 | 29.47 | 33.66 | 27.16 | 30.91 | 35.32 | 33.61 | 34.25 | 34.16 |
| | (ii) | 26.40 | 26.43 | 28.42 | 24.60 | 27.13 | 31.22 | 28.51 | 29.79 | 29.62 |
| Suwannee (256 × 256 × 32) | (i) | 30.14 | 30.18 | 34.59 | 32.60 | 30.30 | 36.20 | 33.35 | 35.15 | 36.01 |
| | (ii) | 26.70 | 26.74 | 29.55 | 28.71 | 26.90 | 31.95 | 28.50 | 31.08 | 31.22 |
| DC (256 × 256 × 32) | (i) | 26.46 | 26.51 | 33.03 | 28.80 | 31.71 | 34.78 | 33.50 | 33.36 | 33.08 |
| | (ii) | 23.84 | 23.88 | 27.71 | 25.25 | 27.35 | 29.53 | 28.14 | 28.57 | 28.32 |
| Cuprite (256 × 256 × 32) | (i) | 31.67 | 31.68 | 34.42 | 29.14 | 30.16 | 28.39 | 32.67 | 34.96 | 36.20 |
| | (ii) | 28.20 | 28.21 | 29.86 | 26.57 | 27.32 | 27.94 | 27.99 | 31.63 | 31.73 |
| Reno (256 × 256 × 32) | (i) | 28.53 | 28.57 | 34.37 | 30.49 | 32.21 | 37.06 | 34.95 | 35.11 | 34.96 |
| | (ii) | 25.56 | 25.61 | 28.11 | 26.95 | 28.47 | 31.00 | 27.99 | 29.83 | 29.72 |
| Botswana (256 × 256 × 32) | (i) | 27.98 | 28.05 | 33.32 | 26.47 | 31.62 | 29.00 | 32.02 | 33.61 | 33.53 |
| | (ii) | 25.21 | 25.25 | 28.55 | 24.01 | 28.31 | 27.33 | 28.08 | 29.39 | 29.35 |
| IndianPines (145 × 145 × 32) | (i) | 31.05 | 31.06 | 31.45 | 29.07 | 28.96 | 26.16 | 29.74 | 31.90 | 31.80 |
| | (ii) | 28.57 | 28.57 | 27.82 | 26.72 | 25.14 | 29.82 | 26.70 | 29.26 | 29.18 |
| KSC (256 × 256 × 32) | (i) | 30.17 | 30.25 | 34.74 | 31.64 | 33.74 | 35.74 | 34.75 | 36.39 | 36.33 |
| | (ii) | 28.03 | 28.06 | 29.23 | 28.62 | 30.19 | 30.22 | 29.53 | 31.82 | 31.72 |
| PaviaLeft (216 × 216 × 32) | (i) | 27.62 | 27.70 | 35.57 | 30.91 | 33.01 | 36.49 | 34.75 | 35.98 | 35.81 |
| | (ii) | 24.74 | 24.78 | 29.93 | 26.71 | 29.46 | 29.02 | 29.22 | 30.47 | 30.24 |
| PaviaRight (256 × 256 × 32) | (i) | 26.93 | 27.35 | 34.54 | 31.13 | 33.33 | 35.82 | 34.17 | 35.68 | 35.23 |
| | (ii) | 24.90 | 25.16 | 30.70 | 27.23 | 29.82 | 29.08 | 29.24 | 31.59 | 31.39 |
| PaviaU (256 × 256 × 32) | (i) | 27.92 | 28.04 | 35.52 | 31.65 | 33.00 | 36.72 | 34.59 | 36.31 | 36.17 |
| | (ii) | 25.24 | 25.29 | 30.21 | 27.42 | 29.43 | 28.90 | 29.17 | 31.04 | 30.80 |
| Salinas (217 × 217 × 32) | (i) | 32.59 | 32.64 | 35.86 | 32.83 | 31.82 | 36.74 | 34.36 | 37.60 | 37.65 |
| | (ii) | 28.88 | 28.91 | 28.19 | 28.99 | 28.02 | 32.73 | 29.09 | 32.01 | 32.12 |
| SalinasA (83 × 86 × 32) | (i) | 32.54 | 32.65 | 35.29 | 28.12 | 31.18 | 28.49 | 34.07 | 36.27 | 36.23 |
| | (ii) | 28.69 | 28.80 | 29.67 | 25.19 | 27.67 | 26.10 | 27.87 | 31.68 | 31.64 |

SSIM:

| HS Image | Noise Level | HTV | SSAHTV | SSTV | ASSTV | LRMR | LRTV | LLRGTV | Proposed (p = 1) | Proposed (p = 2) |
|---|---|---|---|---|---|---|---|---|---|---|
| Beltsville | (i) | 0.7902 | 0.7904 | 0.8856 | 0.8111 | 0.8583 | 0.9372 | 0.9278 | 0.9132 | 0.9085 |
| | (ii) | 0.6954 | 0.6959 | 0.7057 | 0.7177 | 0.7083 | 0.8568 | 0.8248 | 0.8186 | 0.8088 |
| Suwannee | (i) | 0.8406 | 0.8410 | 0.9353 | 0.9052 | 0.8689 | 0.9502 | 0.9431 | 0.9559 | 0.9555 |
| | (ii) | 0.7542 | 0.7552 | 0.8146 | 0.8226 | 0.7470 | 0.8930 | 0.8622 | 0.9125 | 0.9158 |
| DC | (i) | 0.7622 | 0.7633 | 0.9274 | 0.8676 | 0.9248 | 0.9613 | 0.9548 | 0.9442 | 0.9394 |
| | (ii) | 0.6189 | 0.6201 | 0.8092 | 0.7211 | 0.8214 | 0.8810 | 0.8722 | 0.8611 | 0.8533 |
| Cuprite | (i) | 0.8550 | 0.8552 | 0.9179 | 0.8632 | 0.8495 | 0.9396 | 0.9411 | 0.9459 | 0.9426 |
| | (ii) | 0.7849 | 0.7852 | 0.7717 | 0.7953 | 0.7098 | 0.8814 | 0.8524 | 0.9031 | 0.9058 |
| Reno | (i) | 0.7818 | 0.7819 | 0.9322 | 0.8832 | 0.9012 | 0.9589 | 0.9523 | 0.9531 | 0.9515 |
| | (ii) | 0.6640 | 0.6645 | 0.8045 | 0.7539 | 0.7905 | 0.8816 | 0.8640 | 0.8679 | 0.8635 |
| Botswana | (i) | 0.7896 | 0.7900 | 0.9202 | 0.8199 | 0.9068 | 0.9282 | 0.9384 | 0.9343 | 0.9344 |
| | (ii) | 0.6810 | 0.6820 | 0.8175 | 0.7095 | 0.8201 | 0.8564 | 0.8756 | 0.8745 | 0.8765 |
| IndianPines | (i) | 0.8118 | 0.8120 | 0.8015 | 0.7671 | 0.7593 | 0.8190 | 0.8224 | 0.8335 | 0.8243 |
| | (ii) | 0.7713 | 0.7713 | 0.6229 | 0.7303 | 0.7893 | 0.7939 | 0.7433 | 0.7785 | 0.7689 |
| KSC | (i) | 0.8271 | 0.8278 | 0.9116 | 0.8922 | 0.8890 | 0.9385 | 0.9322 | 0.9542 | 0.9532 |
| | (ii) | 0.7598 | 0.7602 | 0.7885 | 0.8064 | 0.7529 | 0.8427 | 0.8216 | 0.8809 | 0.8747 |
| PaviaLeft | (i) | 0.7752 | 0.7770 | 0.9593 | 0.8828 | 0.9359 | 0.9612 | 0.9601 | 0.9661 | 0.9645 |
| | (ii) | 0.6102 | 0.6116 | 0.8755 | 0.7267 | 0.8565 | 0.8791 | 0.8882 | 0.8898 | 0.8815 |
| PaviaRight | (i) | 0.7769 | 0.7772 | 0.9494 | 0.8862 | 0.9256 | 0.9540 | 0.9470 | 0.9616 | 0.9598 |
| | (ii) | 0.6474 | 0.6471 | 0.8635 | 0.7493 | 0.8261 | 0.8507 | 0.8551 | 0.9086 | 0.9006 |
| PaviaU | (i) | 0.7973 | 0.7986 | 0.9452 | 0.8891 | 0.9124 | 0.9540 | 0.9493 | 0.9622 | 0.9610 |
| | (ii) | 0.6776 | 0.6785 | 0.8444 | 0.7678 | 0.8103 | 0.8627 | 0.8649 | 0.8935 | 0.8855 |
| Salinas | (i) | 0.8997 | 0.9002 | 0.9015 | 0.9163 | 0.8270 | 0.9509 | 0.9285 | 0.9561 | 0.9564 |
| | (ii) | 0.8570 | 0.8575 | 0.7117 | 0.8732 | 0.6670 | 0.9225 | 0.8333 | 0.9223 | 0.9240 |
| SalinasA | (i) | 0.9129 | 0.9137 | 0.9134 | 0.8468 | 0.8632 | 0.9384 | 0.9513 | 0.9448 | 0.9416 |
| | (ii) | 0.8793 | 0.8803 | 0.7789 | 0.8110 | 0.7266 | 0.8951 | 0.8549 | 0.9197 | 0.9195 |
Table 4. PSNR (top) and SSIM (bottom) in the CS reconstruction experiment.

PSNR:

| HS Image | m | HTV | SSAHTV | SSTV | ASSTV | Proposed (p = 1) | Proposed (p = 2) |
|---|---|---|---|---|---|---|---|
| Beltsville | 0.4 | 27.46 | 27.49 | 27.53 | 26.51 | 31.15 | 30.71 |
| | 0.2 | 26.23 | 26.25 | 24.34 | 24.12 | 29.63 | 29.18 |
| Suwannee | 0.4 | 27.97 | 28.02 | 28.49 | 27.68 | 32.98 | 33.04 |
| | 0.2 | 26.47 | 26.50 | 25.69 | 25.39 | 31.37 | 31.44 |
| DC | 0.4 | 24.69 | 24.73 | 27.33 | 24.71 | 29.70 | 29.29 |
| | 0.2 | 23.31 | 23.33 | 24.16 | 22.69 | 27.98 | 27.59 |
| Cuprite | 0.4 | 29.94 | 29.96 | 28.21 | 28.59 | 34.36 | 34.34 |
| | 0.2 | 28.77 | 28.77 | 25.79 | 26.38 | 32.97 | 32.95 |
| Reno | 0.4 | 26.99 | 27.05 | 27.82 | 26.49 | 31.80 | 31.61 |
| | 0.2 | 25.57 | 25.61 | 25.57 | 24.52 | 30.22 | 30.04 |
| Botswana | 0.4 | 26.10 | 26.15 | 27.81 | 25.13 | 30.32 | 30.15 |
| | 0.2 | 24.66 | 24.69 | 24.79 | 22.86 | 28.79 | 28.63 |
| IndianPines | 0.4 | 30.54 | 30.55 | 27.65 | 29.55 | 31.36 | 31.04 |
| | 0.2 | 29.99 | 29.99 | 25.11 | 28.19 | 30.71 | 30.46 |
| KSC | 0.4 | 29.30 | 29.33 | 28.34 | 28.63 | 34.10 | 34.03 |
| | 0.2 | 28.11 | 28.12 | 27.00 | 26.59 | 32.67 | 32.60 |
| PaviaLeft | 0.4 | 25.66 | 25.69 | 29.66 | 25.41 | 31.96 | 31.83 |
| | 0.2 | 24.26 | 24.27 | 27.17 | 23.24 | 30.20 | 30.08 |
| PaviaRight | 0.4 | 25.83 | 25.85 | 29.86 | 25.61 | 32.45 | 32.21 |
| | 0.2 | 24.30 | 24.30 | 27.54 | 23.61 | 30.56 | 30.38 |
| PaviaU | 0.4 | 26.49 | 26.53 | 30.02 | 26.38 | 32.88 | 32.70 |
| | 0.2 | 24.95 | 24.97 | 27.35 | 24.08 | 31.13 | 30.96 |
| Salinas | 0.4 | 31.19 | 31.24 | 27.69 | 30.18 | 35.43 | 35.51 |
| | 0.2 | 29.94 | 29.98 | 25.28 | 28.09 | 34.05 | 34.10 |
| SalinasA | 0.4 | 30.67 | 30.82 | 27.93 | 28.19 | 34.45 | 34.14 |
| | 0.2 | 28.68 | 28.75 | 24.15 | 24.94 | 32.71 | 32.36 |

SSIM:

| HS Image | m | HTV | SSAHTV | SSTV | ASSTV | Proposed (p = 1) | Proposed (p = 2) |
|---|---|---|---|---|---|---|---|
| Beltsville | 0.4 | 0.6829 | 0.6940 | 0.6013 | 0.6836 | 0.8105 | 0.7948 |
| | 0.2 | 0.6363 | 0.6493 | 0.4348 | 0.6108 | 0.7604 | 0.7427 |
| Suwannee | 0.4 | 0.7332 | 0.7497 | 0.7377 | 0.7367 | 0.8902 | 0.8909 |
| | 0.2 | 0.6810 | 0.7007 | 0.5739 | 0.6633 | 0.8531 | 0.8534 |
| DC | 0.4 | 0.6096 | 0.6242 | 0.7522 | 0.6245 | 0.8577 | 0.8460 |
| | 0.2 | 0.5215 | 0.5384 | 0.6120 | 0.5037 | 0.7970 | 0.7846 |
| Cuprite | 0.4 | 0.7665 | 0.7804 | 0.6826 | 0.7652 | 0.8882 | 0.8895 |
| | 0.2 | 0.7368 | 0.7525 | 0.5057 | 0.7207 | 0.8568 | 0.8578 |
| Reno | 0.4 | 0.6769 | 0.6868 | 0.7414 | 0.6730 | 0.8733 | 0.8705 |
| | 0.2 | 0.6202 | 0.6326 | 0.6276 | 0.5940 | 0.8263 | 0.8228 |
| Botswana | 0.4 | 0.6683 | 0.6803 | 0.7551 | 0.6460 | 0.8563 | 0.8598 |
| | 0.2 | 0.6014 | 0.6162 | 0.6225 | 0.5519 | 0.8119 | 0.8163 |
| IndianPines | 0.4 | 0.7497 | 0.7777 | 0.5066 | 0.7617 | 0.7806 | 0.7655 |
| | 0.2 | 0.7366 | 0.7658 | 0.3488 | 0.7465 | 0.7589 | 0.7491 |
| KSC | 0.4 | 0.7660 | 0.7742 | 0.6814 | 0.7544 | 0.9019 | 0.9002 |
| | 0.2 | 0.7318 | 0.7410 | 0.6008 | 0.7032 | 0.8698 | 0.8679 |
| PaviaLeft | 0.4 | 0.6082 | 0.6205 | 0.8386 | 0.5932 | 0.8900 | 0.8857 |
| | 0.2 | 0.5103 | 0.5251 | 0.7319 | 0.4434 | 0.8418 | 0.8364 |
| PaviaRight | 0.4 | 0.6357 | 0.6423 | 0.7962 | 0.6275 | 0.8937 | 0.8877 |
| | 0.2 | 0.5502 | 0.5584 | 0.6917 | 0.5069 | 0.8475 | 0.8392 |
| PaviaU | 0.4 | 0.6867 | 0.6956 | 0.7830 | 0.6818 | 0.8950 | 0.8901 |
| | 0.2 | 0.6138 | 0.6242 | 0.6623 | 0.5680 | 0.8557 | 0.8508 |
| Salinas | 0.4 | 0.8577 | 0.8672 | 0.6153 | 0.8566 | 0.9222 | 0.9245 |
| | 0.2 | 0.8404 | 0.8516 | 0.4620 | 0.8302 | 0.9052 | 0.9080 |
| SalinasA | 0.4 | 0.8647 | 0.8871 | 0.6595 | 0.8489 | 0.9178 | 0.9208 |
| | 0.2 | 0.8387 | 0.8655 | 0.4810 | 0.8005 | 0.8966 | 0.9002 |