Article

Fusing Multiple Multiband Images

Commonwealth Scientific and Industrial Research Organisation (CSIRO), Pullenvale QLD 4069, Australia
J. Imaging 2018, 4(10), 118; https://doi.org/10.3390/jimaging4100118
Submission received: 21 August 2018 / Revised: 5 October 2018 / Accepted: 8 October 2018 / Published: 12 October 2018
(This article belongs to the Special Issue The Future of Hyperspectral Imaging)

Abstract
High-resolution hyperspectral images are in great demand but hard to acquire due to several existing fundamental and technical limitations. A practical way around this is to fuse multiple multiband images of the same scene with complementary spatial and spectral resolutions. We propose an algorithm for fusing an arbitrary number of coregistered multiband, i.e., panchromatic, multispectral, or hyperspectral, images through estimating the endmembers and their abundances in the fused image. To this end, we use the forward observation and linear mixture models and formulate an appropriate maximum-likelihood estimation problem. Then, we regularize the problem via a vector total-variation penalty and the non-negativity/sum-to-one constraints on the endmember abundances and solve it using the alternating direction method of multipliers. The regularization facilitates exploiting the prior knowledge that natural images are mostly composed of piecewise smooth regions with limited abrupt changes, i.e., edges, as well as coping with the potential ill-posedness of the fusion problem. Experiments with multiband images constructed from real-world hyperspectral images reveal the superior performance of the proposed algorithm in comparison with the state-of-the-art algorithms, which need to be used in tandem to fuse more than two multiband images.

1. Introduction

The wealth of spectroscopic information provided by hyperspectral images containing hundreds or even thousands of contiguous bands can immensely benefit many remote sensing and computer vision applications, such as source/target detection [1,2,3], object recognition [4], change/anomaly detection [5,6], material classification [7], and spectral unmixing [8,9], commonly encountered in environmental monitoring, resource location, weather or natural disaster forecasting, etc. Therefore, finely-resolved hyperspectral images are in great demand [10,11,12,13,14]. However, limitations in light intensity as well as the efficiency of current sensors impose an inexorable trade-off between the spatial resolution, spectral sensitivity, and signal-to-noise ratio (SNR) of existing spectral imagers [15]. As a result, typical spectral imaging systems can capture multiband images of high spatial resolution at a small number of spectral bands or multiband images of high spectral resolution with a reduced spatial resolution. For example, imaging devices onboard the Pleiades or IKONOS satellites [16] provide single-band panchromatic images with spatial resolutions of less than a meter and multispectral images with a few bands and spatial resolutions of a few meters, while NASA's airborne visible/infrared imaging spectrometer (AVIRIS) [17] provides hyperspectral images with more than 200 bands but with a spatial resolution of several tens of meters.
One way to surmount the abovementioned technological limitation of acquiring high-resolution hyperspectral images is to capture multiple multiband images of the same scene with practical spatial and spectral resolutions, then fuse them together in a synergistic manner. Fusing multiband images combines their complementary information obtained through multiple sensors that may have different spatial and spectral resolutions and cover different spectral ranges.
Initial multiband image fusion algorithms were developed to fuse a panchromatic image with a multispectral image and the associated inverse problem was dubbed pansharpening [18,19,20,21,22]. Many pansharpening algorithms are based on either of the two popular pansharpening strategies: component substitution (CS) and multiresolution analysis (MRA). The CS-based algorithms substitute a component of the multispectral image obtained through a suitable transformation by the panchromatic image. The MRA-based algorithms inject the spatial detail of the panchromatic image obtained by a multiscale decomposition, e.g., using wavelets [23], into the multispectral image. There also exist hybrid methods that use both CS and MRA. Some of the algorithms originally proposed for pansharpening have been successfully extended to be used for fusing a panchromatic image with a hyperspectral image, a problem that is called hyperspectral pansharpening [21].
Recently, significant research effort has been expended to solve the problem of fusing a multispectral image with a hyperspectral one. This inverse problem is essentially different from the pansharpening and hyperspectral pansharpening problems since a multispectral image has multiple bands that are intricately related to the bands of its corresponding hyperspectral image. Unlike a panchromatic image that contains only one band of reflectance data usually covering parts of the visible and near-infrared spectral ranges, a multispectral image contains multiple bands each covering a smaller spectral range, some being in the shortwave-infrared (SWIR) region. Therefore, extending the pansharpening techniques so that they can be used to inject the spatial details of a multispectral image into a hyperspectral image is not straightforward. Nonetheless, an effort towards this end has led to the development of a framework called hypersharpening, which is based on adapting the MRA-based pansharpening methods to multispectral–hyperspectral image fusion. The main idea is to synthesize a high-spatial-resolution image for each band of the hyperspectral image by linearly combining the bands of the multispectral image using linear regression [24].
In some works on multispectral–hyperspectral image fusion, it is assumed that each pixel on the hyperspectral image, which has a lower spatial resolution than the target image, is the average of the pixels of the same area on the target image [25,26,27,28,29]. Clearly, the size of this area depends on the downsampling ratio. Based on this pixel-aggregation assumption, one can divide the problem of fusing two multiband images into subproblems dealing with smaller blocks and hence significantly decrease the complexity of the overall process. However, it is more realistic to allow the area on the target image corresponding to a pixel of the hyperspectral image to span as many pixels as determined by the point-spread function of the sensor, which induces spatial blurring. The downsampling ratio generally depends on the physical and optical characteristics of a sensor and is usually fixed. Therefore, spatial blurring and downsampling can be expressed as two separate linear operations. The spectral degradation of a panchromatic or multispectral image with respect to the target image can also be modeled as a linear transformation. Articulating the spatial and spectral degradations in terms of linear operations forms a realistic and convenient forward observation model to relate the observed multiband images to the target image.
Hyperspectral image data are generally known to have a low-rank structure and reside in a subspace whose dimension is usually much smaller than the number of spectral bands [8,30,31,32,33]. This is mainly due to correlations among the spectral bands and the fact that the spectrum of each pixel can often be represented as a linear combination of relatively few spectral signatures. These signatures, called endmembers, may be the spectra of the materials present in the scene. Consequently, a hyperspectral image can be linearly decomposed into its constituent endmembers and the fractional abundances of the endmembers for each pixel. This linear decomposition is called spectral unmixing and the corresponding data model is called the linear mixture model. Other linear decompositions that can be used to reduce the dimensionality of a hyperspectral image in the spectral domain are dictionary-learning-based sparse representation and principal component analysis.
Many recent works on multiband image fusion, which mostly deal with fusing a multispectral image with a hyperspectral image of the same scene, employ the abovementioned forward observation model and a form of linear spectral decomposition. They mostly extract the endmembers or the spectral dictionary from the hyperspectral image. Some of the works use the extracted endmember or dictionary matrix to reconstruct the multispectral image via sparse regression and calculate the endmember abundances or the representation coefficients [34]. Others cast the multiband image fusion problem as reconstructing a high-spatial-resolution hyperspectral datacube from two datacubes degraded according to the mentioned forward observation model. When the number of spectral bands in the multispectral image is smaller than the number of endmembers or dictionary atoms, the linear inverse problem associated with the multispectral–hyperspectral fusion problem is ill-posed and needs to be regularized to have a meaningful solution. Any prior knowledge about the target image can be used for regularization. Natural images are known to mostly consist of smooth segments with few abrupt changes corresponding to the edges and object boundaries [35,36,37]. Therefore, penalizing the total variation [38,39,40] and the sparse (low-rank) representation in the spatial domain [41,42,43,44] are two popular approaches to regularizing multiband image fusion problems. Some algorithms, developed within the framework of Bayesian estimation, incorporate the prior knowledge or conjecture about the probability distribution of the target image into the fusion problem [45,46,47]. The work of [48] obviates the need for regularization by dividing the observed multiband images into small spatial patches for spectral unmixing and fusion under the assumption that the target image is locally low-rank.
When the endmembers or dictionary atoms are induced from an observed hyperspectral image, the problem of fusing the hyperspectral image with a multispectral image boils down to estimating the endmember abundances or representation coefficients of the target image, a problem that is often tractable (due to being a convex optimization problem) and has a manageable size and complexity. The estimate of the target image is then obtained by mixing the induced endmembers/dictionary and the estimated abundances/coefficients. It is also possible to jointly estimate the endmembers/dictionary and the abundances/coefficients from the available multiband data. This joint estimation problem is usually formulated as a non-convex optimization problem of non-negative matrix factorization, which can be solved approximately using block coordinate-descent iterations [49,50,51,52].
To the best of our knowledge, all existing multiband image fusion algorithms are designed to fuse a pair of multiband images with complementary spatial and spectral resolutions. Therefore, fusing more than two multiband images using the existing algorithms can only be realized by performing a hierarchical procedure that combines multiple fusion processes possibly implemented via different algorithms as, for example, in [53,54]. In addition, there are potentially various ways to arrange the pairings and often it is not possible to know beforehand which way will provide the best overall fusion result. For instance, in order to fuse a panchromatic, a multispectral, and a hyperspectral image of a scene, one can first fuse the panchromatic and multispectral images, then fuse the resultant pansharpened multispectral image with the hyperspectral image. Another way would be to first fuse the multispectral and hyperspectral images, then pansharpen the resultant hyperspectral image with the panchromatic image. Apart from the said ambiguity of choice, such combined pair-wise fusions can be slow and inaccurate since they may require several runs of different algorithms and may suffer from propagation and accumulation of errors. Therefore, the increasing availability of multiband images with complementary characteristics captured by modern spectral imaging devices has brought about the demand for efficient and accurate fusion techniques that can handle multiple multiband images simultaneously.
In this paper, we propose an algorithm that can simultaneously fuse an arbitrary number of multiband images. We utilize the forward observation and linear mixture models to effectively model the data and reduce the dimensionality of the problem. Assuming a matrix normal distribution for the observation noise, we derive the likelihood function as well as the Fisher information matrix (FIM) associated with the problem of recovering the endmember abundance matrix of the target image from the observations. We study the properties of the FIM and the conditions for the existence of a unique maximum-likelihood estimate and the associated Cramer–Rao lower bound. We regularize the problem of maximum-likelihood estimation of the endmember abundances by adding a vector total-variation penalty term to the cost function and constraining the abundances to be non-negative and add up to one for each pixel. The total-variation penalty serves two major purposes. First, it helps us cope with the likely ill-posedness of the maximum-likelihood estimation problem. Second, it allows us to take into account the spatial characteristics of natural images, namely, that they mostly consist of piecewise-planar regions with few sharp variations. Regularization with a vector total-variation penalty can effectively promote this desired feature by inducing sparsity in the image gradient, i.e., local differences between adjacent pixels, while encouraging the local differences to be spatially aligned across different bands [37]. The non-negativity and sum-to-one constraints on the endmember abundances ensure that the abundances have practical values. They also implicitly promote sparsity in the estimated endmember abundances.
We solve the resultant constrained optimization problem using the alternating direction method of multipliers (ADMM) [55,56,57,58,59,60]. Simulation results indicate that the proposed algorithm outperforms several combinations of the state-of-the-art algorithms, which need to be cascaded to carry out the fusion of more than two multiband images.

2. Data Model

2.1. Forward Observation Model

Let us denote the target multiband image by $\mathbf{X} \in \mathbb{R}^{L \times N}$ where $L$ is the number of spectral bands and $N$ is the number of pixels in the image. We wish to recover $\mathbf{X}$ from $K$ observed multiband images $\mathbf{Y}_k \in \mathbb{R}^{L_k \times N_k}$, $k = 1, \dots, K$, that are spatially or spectrally downgraded and degraded versions of $\mathbf{X}$. We assume that these multiband images are geometrically coregistered and are related to $\mathbf{X}$ via the following forward observation model
$$\mathbf{Y}_k = \mathbf{R}_k \mathbf{X} \mathbf{B}_k \mathbf{S}_k + \mathbf{P}_k \quad (1)$$
where
$L_k \le L$ and $N_k = N / D_k^2$ with $D_k$ being the spatial downsampling ratio of the $k$th image;
$\mathbf{R}_k \in \mathbb{R}^{L_k \times L}$ is the spectral response of the sensor producing $\mathbf{Y}_k$;
$\mathbf{B}_k \in \mathbb{R}^{N \times N}$ is a band-independent spatial blurring matrix that represents a two-dimensional convolution with a blur kernel corresponding to the point-spread function of the sensor producing $\mathbf{Y}_k$;
$\mathbf{S}_k \in \mathbb{R}^{N \times N_k}$ is a sparse matrix with $N_k$ ones and zeros elsewhere that implements a two-dimensional uniform downsampling of ratio $D_k$ on both spatial dimensions and satisfies $\mathbf{S}_k^\top \mathbf{S}_k = \mathbf{I}_{N_k}$;
$\mathbf{P}_k \in \mathbb{R}^{L_k \times N_k}$ is an additive perturbation representing the noise or error associated with the observation of $\mathbf{Y}_k$.
We assume that the perturbations $\mathbf{P}_k$, $k = 1, \dots, K$, are independent of each other and have matrix normal distributions expressed by
$$\mathbf{P}_k \sim \mathcal{MN}_{L_k \times N_k}(\mathbf{0}_{L_k \times N_k}, \boldsymbol{\Sigma}_k, \mathbf{I}_{N_k}) \quad (2)$$
where $\mathbf{0}_{L_k \times N_k}$ is the $L_k \times N_k$ zero matrix, $\mathbf{I}_{N_k}$ is the $N_k \times N_k$ identity matrix, and $\boldsymbol{\Sigma}_k \in \mathbb{R}^{L_k \times L_k}$ is a diagonal row-covariance matrix whose diagonal entries are the noise variances of the rows of $\mathbf{P}_k$, which correspond to different spectral bands. Note that we consider the column-covariance matrices to be identity, assuming that the perturbations are independent and identically distributed in the spatial domain. By considering diagonal row-covariance matrices, we assume that the perturbations are also independent in the spectral domain but may have nonidentical variances at different bands. Moreover, the instrument noise of an optoelectronic device can also have a multiplicative nature. A prominent example is the shot noise that is generally modeled using the Poisson distribution. By virtue of the central limit theorem and since a Poisson distribution with a reasonably large mean can be well approximated by a Gaussian distribution, our assumption of additive Gaussian perturbation for the acquisition noise/error is a sensible working hypothesis given that the SNRs are adequately high.
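To make the forward observation model (1) and the noise model (2) concrete, the following sketch degrades a hypothetical image cube spectrally, blurs and downsamples it spatially, and adds band-dependent Gaussian noise. The cube size, spectral response, blur width, and noise levels are illustrative placeholders, not values used in this paper.

```python
# A minimal NumPy sketch of the forward observation model (1):
# Y_k = R_k X B_k S_k + P_k, applied to an image cube of shape (L, H, W).
import numpy as np
from scipy.ndimage import gaussian_filter

def observe(X, R, sigma_blur, D, band_noise_std, rng):
    """Spectrally degrade (R), blur (B), downsample (S), and add noise (P)."""
    Xs = np.tensordot(R, X, axes=([1], [0]))           # R_k X: (L_k, H, W)
    Xb = np.stack([gaussian_filter(b, sigma_blur, mode="wrap") for b in Xs])
    Yd = Xb[:, ::D, ::D]                               # uniform downsampling by D
    P = band_noise_std[:, None, None] * rng.standard_normal(Yd.shape)
    return Yd + P                                      # matrix-normal noise, diagonal row-covariance

rng = np.random.default_rng(0)
X = rng.random((100, 64, 64))                          # hypothetical target cube
R = np.ones((8, 100)) / 100.0                          # toy 8-band spectral response
Y = observe(X, R, sigma_blur=2.12, D=4, band_noise_std=0.01 * np.ones(8), rng=rng)
print(Y.shape)                                         # (8, 16, 16)
```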
Note that $\mathbf{Y}_k$, $k = 1, \dots, K$, in (1) contain the corrected (preprocessed) spectral values, not the raw measurements produced by the spectral imagers. The preprocessing usually involves several steps including radiometric calibration, geometric correction, and atmospheric compensation [61]. The radiometric calibration is generally performed to obtain radiance values at the sensor. It converts the sensor measurements in digital numbers into physical units of radiance. The reflected sunlight passing through the atmosphere is partially absorbed and scattered through a complex interaction between the light and various parts of the atmosphere. The atmospheric compensation counters these effects and converts the radiance values into ground-leaving radiance or surface reflectance values. To obtain accurate reflectance values, one additionally has to account for the effects of the viewing geometry and the sun's position as well as the surface's structural and optical properties [10]. This preprocessing is particularly important when the multiband images to be fused are acquired via different instruments, from different viewpoints, or at different times. After the preprocessing, the images should also be coregistered.

2.2. Linear Mixture Model

Under some mild assumptions, multiband images of natural scenes can be suitably described by a linear mixture model [8]. Specifically, the spectrum of each pixel can often be written as a linear mixture of a few archetypal spectral signatures known as endmembers. The number of endmembers, denoted by $M$, is usually much smaller than the spectral dimension of a hyperspectral image, i.e., $M \ll L$. Therefore, if we arrange the $M$ endmembers corresponding to $\mathbf{X}$ as columns of the matrix $\mathbf{E} \in \mathbb{R}^{L \times M}$, we can factorize $\mathbf{X}$ as
$$\mathbf{X} = \mathbf{E}\mathbf{A} + \mathbf{P} \quad (3)$$
where $\mathbf{A} \in \mathbb{R}^{M \times N}$ is the matrix of endmember abundances and $\mathbf{P} \in \mathbb{R}^{L \times N}$ is a perturbation matrix that accounts for any possible inaccuracy or mismatch in the linear mixture model. We assume that $\mathbf{P}$ is independent of $\mathbf{P}_k$, $k = 1, \dots, K$, and has a matrix normal distribution as
$$\mathbf{P} \sim \mathcal{MN}_{L \times N}(\mathbf{0}_{L \times N}, \boldsymbol{\Sigma}, \mathbf{I}_N) \quad (4)$$
where $\boldsymbol{\Sigma} \in \mathbb{R}^{L \times L}$ is its row-covariance matrix. Every column of $\mathbf{A}$ contains the fractional abundances of the endmembers at a pixel. The fractional abundances are non-negative and often assumed to add up to one for each pixel.
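As a toy illustration of the linear mixture model (3), the following sketch synthesizes pixel spectra as convex combinations of a few endmember spectra; the endmembers and abundances are random placeholders.

```python
# A toy sketch of the linear mixture model (3): each pixel spectrum is a
# convex combination of M endmember spectra.
import numpy as np

rng = np.random.default_rng(1)
L, M, N = 100, 5, 4096
E = rng.random((L, M))                       # endmember spectra as columns
A = rng.dirichlet(np.ones(M), size=N).T      # (M, N): non-negative, columns sum to 1
X = E @ A                                    # noise-free linear mixture
assert np.allclose(A.sum(axis=0), 1.0) and (A >= 0).all()
```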
The linear mixture model stated above has been widely used in various contexts and applications concerning multiband, particularly hyperspectral, images. Its popularity can mostly be attributed to its intuitiveness as well as its relative simplicity and ease of implementation. However, remotely-sensed images of the ground surface may suffer from strong nonlinear effects. These effects are generally due to ground characteristics such as a non-planar surface and bidirectional reflectance, artefacts left by atmospheric removal procedures, and the presence of considerable atmospheric absorbance in the neighborhood of the bands of interest. The use of images of the same scene captured by different sensors, although coregistered, can also induce nonlinearity, mainly owing to differences in observation geometry, lighting conditions, and miscalibration. Therefore, it should be taken into consideration that the mentioned nonlinear phenomena can impact the results of any procedure relying on the linear mixture model in any real-world application, and the scale of the impact depends on the severity of the nonlinearities.
There are a few other caveats regarding the linear mixture model that should also be kept in mind. First, $\mathbf{X}$ in model (3) corresponds to a matrix of corrected (preprocessed) values, not raw ones that would typically be captured by a spectral imager of the same spatial and spectral resolutions. However, whether these values are radiance or reflectance has no impact on the validity of the model, though it certainly matters for further processing of the data. Second, the model (3) does not necessarily require each endmember to be the spectral signature of only one (pure) material. An endmember may be composed of the spectral signatures of multiple materials or may be seen as the spectral signature of a composite material made of several constituent materials. Additionally, depending on the application, the endmembers may be purposely defined in particular subjective ways. Third, in practice, an endmember may have slightly different spectral manifestations at different parts of a scene due to variable illumination, environmental, atmospheric, or temporal conditions. This so-called endmember variability [62], along with possible nonlinearities in the actual underlying mixing process [63], may introduce inaccuracies or inconsistencies in the linear mixture model and consequently in the endmember extraction or spectral unmixing techniques that rely on this model. Lastly, the sum-to-one assumption on the abundances of each pixel may not always hold, especially when the linear mixture model is not able to account for every material in a pixel, possibly because of the effects of endmember variability or nonlinear mixing.

2.3. Fusion Model

Substituting (3) into (1) gives
$$\mathbf{Y}_k = \mathbf{R}_k \mathbf{E}\mathbf{A}\mathbf{B}_k \mathbf{S}_k + \check{\mathbf{P}}_k \quad (5)$$
where the aggregate perturbation of the $k$th image is
$$\check{\mathbf{P}}_k = \mathbf{P}_k + \mathbf{R}_k \mathbf{P}\mathbf{B}_k \mathbf{S}_k. \quad (6)$$
Instead of estimating the target multiband image $\mathbf{X}$ directly, we consider estimating its abundance matrix $\mathbf{A}$ from the observations $\mathbf{Y}_k$, $k = 1, \dots, K$, given the endmember matrix $\mathbf{E}$. We can then obtain an estimate of the target image by multiplying the estimated abundance matrix by the endmember matrix. This way, we reduce the dimensionality of the fusion problem and consequently the associated computational burden. In addition, by estimating $\mathbf{A}$ first, we attain an unmixed fused image, obviating the need to perform additional unmixing if demanded by any application utilizing the fused image. However, this approach requires prior knowledge of the endmember matrix $\mathbf{E}$. The columns of this matrix can be selected from a library of known spectral signatures, such as the U.S. Geological Survey digital spectral library [64], or extracted from the observed multiband images that have the appropriate spectral dimension.

3. Problem

3.1. Maximum-Likelihood Estimation

In order to facilitate our analysis, we define the following vectorized variables
$$\mathbf{y}_k = \operatorname{vec}\{\mathbf{Y}_k\} \in \mathbb{R}^{L_k N_k \times 1}, \quad \mathbf{a} = \operatorname{vec}\{\mathbf{A}\} \in \mathbb{R}^{MN \times 1}, \quad \mathbf{p}_k = \operatorname{vec}\{\check{\mathbf{P}}_k\} \in \mathbb{R}^{L_k N_k \times 1} \quad (7)$$
where $\operatorname{vec}\{\cdot\}$ is the vectorization operator that stacks the columns of its matrix argument on top of each other. Applying $\operatorname{vec}\{\cdot\}$ to both sides of (5) while using the property $\operatorname{vec}\{\mathbf{A}\mathbf{B}\mathbf{C}\} = (\mathbf{C}^\top \otimes \mathbf{A}) \operatorname{vec}\{\mathbf{B}\}$ gives
$$\mathbf{y}_k = (\mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{R}_k \mathbf{E})\,\mathbf{a} + \mathbf{p}_k \quad (8)$$
where $\otimes$ denotes the Kronecker product.
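The vectorization identity behind (8) is easy to verify numerically, provided the vectorization is column-major as assumed here; a quick check:

```python
# Numeric check of vec(ABC) = (C' ⊗ A) vec(B) with column-major
# (Fortran-order) vectorization, as used to obtain (8).
import numpy as np

rng = np.random.default_rng(2)
A_, B_, C_ = rng.random((3, 4)), rng.random((4, 5)), rng.random((5, 6))
vec = lambda M: M.flatten(order="F")         # stack columns, as in the paper
lhs = vec(A_ @ B_ @ C_)
rhs = np.kron(C_.T, A_) @ vec(B_)
print(np.allclose(lhs, rhs))                 # True
```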
Since $\mathbf{P}_k$ and $\mathbf{P}$ have independent matrix normal distributions (see (2) and (4)), $\mathbf{p}_k$ has a multivariate normal distribution expressed as
$$\mathbf{p}_k \sim \mathcal{N}_{L_k N_k}\!\left(\mathbf{0}_{L_k N_k},\; \mathbf{I}_{N_k} \otimes \boldsymbol{\Sigma}_k + \mathbf{S}_k^\top \mathbf{B}_k^\top \mathbf{B}_k \mathbf{S}_k \otimes \mathbf{R}_k \boldsymbol{\Sigma} \mathbf{R}_k^\top\right) \quad (9)$$
where $\mathbf{0}_{L_k N_k}$ stands for the $L_k N_k \times 1$ vector of zeros. Using the approximation $\mathbf{S}_k^\top \mathbf{B}_k^\top \mathbf{B}_k \mathbf{S}_k \approx \xi_k \mathbf{I}_{N_k}$ with $\xi_k > 0$, we get
$$\mathbf{p}_k \sim \mathcal{N}_{L_k N_k}\!\left(\mathbf{0}_{L_k N_k},\; \mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k\right) \quad (10)$$
where
$$\boldsymbol{\Lambda}_k = \boldsymbol{\Sigma}_k + \xi_k \mathbf{R}_k \boldsymbol{\Sigma} \mathbf{R}_k^\top. \quad (11)$$
In view of (9) and (10), we have
$$\mathbf{y}_k \sim \mathcal{N}_{L_k N_k}\!\left((\mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{R}_k \mathbf{E})\,\mathbf{a},\; \mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k\right). \quad (12)$$
Hence, the probability density function of $\mathbf{y}_k$ parametrized over the unknown $\mathbf{a}$ can be written as
$$f_{\mathbf{y}_k}(\mathbf{y}_k; \mathbf{a}) = \left|2\pi \mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k\right|^{-\frac{1}{2}} \exp\left\{-\frac{1}{2}\left[\mathbf{y}_k - (\mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{R}_k \mathbf{E})\mathbf{a}\right]^\top (\mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k)^{-1}\left[\mathbf{y}_k - (\mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{R}_k \mathbf{E})\mathbf{a}\right]\right\}. \quad (13)$$
Since the perturbations $\mathbf{p}_k$, $k = 1, \dots, K$, are independent of each other, the joint probability density function of the observations is written as
$$f_{\mathbf{y}_1, \dots, \mathbf{y}_K}(\mathbf{y}_1, \dots, \mathbf{y}_K; \mathbf{a}) = \prod_{k=1}^{K} f_{\mathbf{y}_k}(\mathbf{y}_k; \mathbf{a}) = \prod_{k=1}^{K} \left|2\pi \mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k\right|^{-\frac{1}{2}} \exp\left\{-\frac{1}{2} \sum_{k=1}^{K} \left\| (\mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k^{-1/2})\left[\mathbf{y}_k - (\mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{R}_k \mathbf{E})\mathbf{a}\right] \right\|^2\right\} \quad (14)$$
and the log-likelihood function of $\mathbf{a}$ given the observed data as
$$l(\mathbf{a} \,|\, \mathbf{y}_1, \dots, \mathbf{y}_K) = \ln f_{\mathbf{y}_1, \dots, \mathbf{y}_K}(\mathbf{y}_1, \dots, \mathbf{y}_K; \mathbf{a}) = -\frac{1}{2} \ln\!\left(\prod_{k=1}^{K} \left|2\pi \mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k\right|\right) - \frac{1}{2} \sum_{k=1}^{K} \left\| (\mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k^{-1/2})\left[\mathbf{y}_k - (\mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{R}_k \mathbf{E})\mathbf{a}\right] \right\|^2. \quad (15)$$
Accordingly, the maximum-likelihood estimate of $\mathbf{a}$ is found by solving the following optimization problem
$$\hat{\mathbf{a}} = \arg\max_{\mathbf{a}}\; l(\mathbf{a} \,|\, \mathbf{y}_1, \dots, \mathbf{y}_K) = \arg\min_{\mathbf{a}}\; \frac{1}{2} \sum_{k=1}^{K} \left\| (\mathbf{I}_{N_k} \otimes \boldsymbol{\Lambda}_k^{-1/2})\left[\mathbf{y}_k - (\mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{R}_k \mathbf{E})\mathbf{a}\right] \right\|^2. \quad (16)$$
This problem can be stated in terms of $\mathbf{A} = \operatorname{vec}^{-1}\{\mathbf{a}\}$ as
$$\hat{\mathbf{A}} = \arg\min_{\mathbf{A}}\; \frac{1}{2} \sum_{k=1}^{K} \left\| \boldsymbol{\Lambda}_k^{-1/2} (\mathbf{Y}_k - \mathbf{R}_k \mathbf{E}\mathbf{A}\mathbf{B}_k \mathbf{S}_k) \right\|_F^2. \quad (17)$$
The Fisher information matrix (FIM) of the maximum-likelihood estimator $\hat{\mathbf{a}}$ in (16) is calculated as
$$\mathbf{F} = -\mathrm{E}\left[\mathbf{H}_l(\mathbf{a})\right] \quad (18)$$
where $\mathbf{H}_l(\mathbf{a})$ denotes the Hessian, i.e., the Jacobian of the gradient, of the log-likelihood function $l(\mathbf{a} \,|\, \mathbf{y}_1, \dots, \mathbf{y}_K)$. The entry on the $i$th row and the $j$th column of $\mathbf{H}_l(\mathbf{a})$ is computed as
$$\frac{\partial^2}{\partial a_i \partial a_j}\, l(\mathbf{a} \,|\, \mathbf{y}_1, \dots, \mathbf{y}_K) \quad (19)$$
where $a_i$ and $a_j$ denote the $i$th and $j$th entries of $\mathbf{a}$, respectively. Accordingly, we can show that
$$\mathbf{F} = \sum_{k=1}^{K} \left(\mathbf{B}_k \mathbf{S}_k \mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{E}^\top \mathbf{R}_k^\top \boldsymbol{\Lambda}_k^{-1} \mathbf{R}_k \mathbf{E}\right). \quad (20)$$
If $\mathbf{F}$ is invertible, the optimization problem (16) has a unique solution given by
$$\hat{\mathbf{a}} = \left[\sum_{k=1}^{K} \left(\mathbf{B}_k \mathbf{S}_k \mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{E}^\top \mathbf{R}_k^\top \boldsymbol{\Lambda}_k^{-1} \mathbf{R}_k \mathbf{E}\right)\right]^{-1} \sum_{k=1}^{K} \left(\mathbf{B}_k \mathbf{S}_k \otimes \mathbf{E}^\top \mathbf{R}_k^\top \boldsymbol{\Lambda}_k^{-1}\right) \mathbf{y}_k \quad (21)$$
and the Cramer–Rao lower bound for the estimator $\hat{\mathbf{a}}$, which is a lower bound on the covariance of $\hat{\mathbf{a}}$, is the inverse of $\mathbf{F}$. The FIM $\mathbf{F}$ is guaranteed to be invertible when, for at least one image, the matrix $\mathbf{B}_k \mathbf{S}_k \mathbf{S}_k^\top \mathbf{B}_k^\top \otimes \mathbf{E}^\top \mathbf{R}_k^\top \boldsymbol{\Lambda}_k^{-1} \mathbf{R}_k \mathbf{E}$ is full-rank.
The matrix $\mathbf{S}_k \mathbf{S}_k^\top$ has a rank of $N_k$ and is hence rank-deficient for $D_k > 1$. The blurring matrix $\mathbf{B}_k$ does not change the rank of the matrix that it multiplies from the right. In addition, as $\boldsymbol{\Lambda}_k^{-1}$ is full-rank, $\mathbf{E}^\top \mathbf{R}_k^\top \boldsymbol{\Lambda}_k^{-1} \mathbf{R}_k \mathbf{E}$ has a full rank of $M$ when the rows of $\mathbf{R}_k \mathbf{E}$ are at least as many as its columns, i.e., $L_k \ge M$. Therefore, $\mathbf{A}$, and consequently $\mathbf{X}$, is guaranteed to be uniquely identifiable given $\mathbf{Y}_k$, $k = 1, \dots, K$, only when at least one observed image, say the $q$th image, has full spatial resolution, i.e., $N_q = N$, with the number of its spectral bands being equal to or larger than the number of endmembers, i.e., $L_q \ge M$, so that, at least for the $q$th image, $\mathbf{B}_q \mathbf{S}_q \mathbf{S}_q^\top \mathbf{B}_q^\top \otimes \mathbf{E}^\top \mathbf{R}_q^\top \boldsymbol{\Lambda}_q^{-1} \mathbf{R}_q \mathbf{E}$ is full-rank.
In practice, it is rarely possible to satisfy the abovementioned requirements, as multiband images with high spectral resolution are generally spatially downsampled and the number of bands of those with full spatial resolution, such as panchromatic or multispectral images, is often less than the number of endmembers. Hence, the inverse problem of recovering $\mathbf{A}$ from $\mathbf{Y}_k$, $k = 1, \dots, K$, is usually ill-posed or ill-conditioned. Thus, some prior knowledge needs to be injected into the estimation process to produce a unique and reliable estimate. The prior knowledge is intended to partially compensate for the information lost in spectral and spatial downsampling and usually stems from experimental evidence or common facts that may induce certain analytical properties or constraints. The prior information is commonly incorporated into the problem in the form of imposed constraints or additive regularization terms. Examples of prior knowledge about $\mathbf{A}$ that are regularly used in the literature are non-negativity and sum-to-one constraints, a matrix normal distribution with known or estimated parameters [45], sparse representation with a learned or known dictionary or basis [41], and minimal total variation [38].

3.2. Regularization

To develop an algorithm for effective fusion of multiple multiband images with arbitrary spatial and spectral resolutions, we employ two mechanisms to regularize the maximum-likelihood cost function in (17).
As the first regularization mechanism, we impose a constraint on $\mathbf{A}$ such that its entries are non-negative and sum to one in all columns. We express this constraint as $\mathbf{A} \ge 0$ and $\mathbf{1}_M^\top \mathbf{A} = \mathbf{1}_N^\top$ where $\mathbf{A} \ge 0$ means all the entries of $\mathbf{A}$ are greater than or equal to zero. As the second regularization mechanism, we add an isotropic vector total-variation penalty term, denoted by $\|\nabla\mathbf{A}\|_{2,1}$, to the cost function. Here, $\|\cdot\|_{2,1}$ is the $\ell_{2,1}$-norm operator that returns the sum of the $\ell_2$-norms of all the columns of its matrix argument. In addition, we define
$$\nabla\mathbf{A} = \begin{bmatrix} \mathbf{A}\mathbf{D}_h^\top \\ \mathbf{A}\mathbf{D}_v^\top \end{bmatrix} \in \mathbb{R}^{2M \times N} \quad (22)$$
where $\mathbf{D}_h$ and $\mathbf{D}_v$ are discrete differential matrix operators that, respectively, yield the horizontal and vertical first-order backward differences (gradients) of the row-vectorized image that they multiply from the right. Consequently, we formulate our regularized optimization problem for estimating $\mathbf{A}$ as
$$\min_{\mathbf{A}}\; \frac{1}{2} \sum_{k=1}^{K} \left\| \boldsymbol{\Lambda}_k^{-1/2} (\mathbf{Y}_k - \mathbf{R}_k \mathbf{E}\mathbf{A}\mathbf{B}_k \mathbf{S}_k) \right\|_F^2 + \alpha \|\nabla\mathbf{A}\|_{2,1} \quad \text{subject to: } \mathbf{A} \ge 0 \text{ and } \mathbf{1}_M^\top \mathbf{A} = \mathbf{1}_N^\top \quad (23)$$
where $\alpha \ge 0$ is the regularization parameter.
The non-negativity and sum-to-one constraints on $\mathbf{A}$, which force the columns of $\mathbf{A}$ to reside on the unit $(M-1)$-simplex, are naturally expected and help find a solution that is physically plausible. In addition, they implicitly induce sparseness in the solution. The total-variation penalty promotes solutions with a sparse gradient, a property that is known to be possessed by images of most natural scenes as they are usually made of piecewise homogeneous regions with few sudden changes at object boundaries or edges. Note that the subspace spanned by the endmembers is the one that the target image $\mathbf{X}$ lives in. Therefore, through the total-variation regularization of the abundance matrix $\mathbf{A}$, we regularize $\mathbf{X}$ indirectly.
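To illustrate the operator $\nabla$ of (22) and the penalty $\|\nabla\mathbf{A}\|_{2,1}$, the following sketch forms the stacked backward differences with periodic boundaries and sums the column $\ell_2$-norms; the abundance matrix and image size are hypothetical.

```python
# A minimal sketch of the vector total-variation penalty in (22)-(23):
# horizontal/vertical first-order backward differences with periodic
# boundaries, stacked into a 2M x N matrix, then the l_{2,1} norm.
import numpy as np

def grad_operator(A, H, W):
    """A: (M, N) abundance matrix with N = H*W pixels in row-major order."""
    M, N = A.shape
    cube = A.reshape(M, H, W)
    dh = cube - np.roll(cube, 1, axis=2)     # horizontal backward differences
    dv = cube - np.roll(cube, 1, axis=1)     # vertical backward differences
    return np.vstack([dh.reshape(M, N), dv.reshape(M, N)])  # (2M, N)

def vtv_norm(G):
    """l_{2,1} norm: sum of the l_2 norms of the columns."""
    return np.linalg.norm(G, axis=0).sum()

A = np.random.default_rng(3).random((5, 32 * 32))
print(vtv_norm(grad_operator(A, 32, 32)))
```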

4. Algorithm

Defining the set of values for $\mathbf{A}$ that satisfy the non-negativity and sum-to-one constraints as
$$\mathcal{S} = \left\{\mathbf{A} \,\middle|\, \mathbf{A} \ge 0,\; \mathbf{1}_M^\top \mathbf{A} = \mathbf{1}_N^\top\right\} \quad (24)$$
and making use of the indicator function $\imath_{\mathcal{S}}(\mathbf{A})$ defined as
$$\imath_{\mathcal{S}}(\mathbf{A}) = \begin{cases} 0 & \mathbf{A} \in \mathcal{S} \\ +\infty & \mathbf{A} \notin \mathcal{S}, \end{cases} \quad (25)$$
we rewrite (23) as
$$\min_{\mathbf{A}}\; \frac{1}{2} \sum_{k=1}^{K} \left\| \boldsymbol{\Lambda}_k^{-1/2} (\mathbf{Y}_k - \mathbf{R}_k \mathbf{E}\mathbf{A}\mathbf{B}_k \mathbf{S}_k) \right\|_F^2 + \alpha \|\nabla\mathbf{A}\|_{2,1} + \imath_{\mathcal{S}}(\mathbf{A}). \quad (26)$$

4.1. Iterations

We use the alternating direction method of multipliers (ADMM), also known as the split-Bregman method, to solve the convex but nonsmooth optimization problem of (26). We split the problem into smaller and more manageable pieces by defining the auxiliary variables $\mathbf{U}_k \in \mathbb{R}^{M \times N}$, $k = 1, \dots, K$, $\mathbf{V} \in \mathbb{R}^{2M \times N}$, and $\mathbf{W} \in \mathbb{R}^{M \times N}$, and changing (26) into
$$\min_{\mathbf{A}, \mathbf{U}_1, \dots, \mathbf{U}_K, \mathbf{V}, \mathbf{W}}\; \frac{1}{2} \sum_{k=1}^{K} \left\| \boldsymbol{\Lambda}_k^{-1/2} (\mathbf{Y}_k - \mathbf{R}_k \mathbf{E}\mathbf{U}_k \mathbf{S}_k) \right\|_F^2 + \alpha \|\mathbf{V}\|_{2,1} + \imath_{\mathcal{S}}(\mathbf{W}) \quad \text{subject to: } \mathbf{U}_k = \mathbf{A}\mathbf{B}_k,\; \mathbf{V} = \nabla\mathbf{A},\; \mathbf{W} = \mathbf{A}. \quad (27)$$
Then, we write the augmented Lagrangian function associated with (27) as
$$\mathcal{L}(\mathbf{A}, \mathbf{U}_1, \dots, \mathbf{U}_K, \mathbf{V}, \mathbf{W}, \mathbf{F}_1, \dots, \mathbf{F}_K, \mathbf{G}, \mathbf{H}) = \frac{1}{2} \sum_{k=1}^{K} \left\| \boldsymbol{\Lambda}_k^{-1/2} (\mathbf{Y}_k - \mathbf{R}_k \mathbf{E}\mathbf{U}_k \mathbf{S}_k) \right\|_F^2 + \alpha \|\mathbf{V}\|_{2,1} + \imath_{\mathcal{S}}(\mathbf{W}) + \frac{\mu}{2} \sum_{k=1}^{K} \left\| \mathbf{A}\mathbf{B}_k - \mathbf{U}_k - \mathbf{F}_k \right\|_F^2 + \frac{\mu}{2} \left\| \nabla\mathbf{A} - \mathbf{V} - \mathbf{G} \right\|_F^2 + \frac{\mu}{2} \left\| \mathbf{A} - \mathbf{W} - \mathbf{H} \right\|_F^2 \quad (28)$$
where $\mathbf{F}_k \in \mathbb{R}^{M \times N}$, $k = 1, \dots, K$, $\mathbf{G} \in \mathbb{R}^{2M \times N}$, and $\mathbf{H} \in \mathbb{R}^{M \times N}$ are the scaled Lagrange multipliers and $\mu > 0$ is the penalty parameter.
Using the ADMM, we minimize the augmented Lagrangian function in an iterative fashion. At each iteration, we alternate the minimization with respect to the main unknown variable $\mathbf{A}$ and the auxiliary variables; then, we update the scaled Lagrange multipliers. Hence, we compute the iterates as
$$\mathbf{A}^{(n)} = \arg\min_{\mathbf{A}}\; \mathcal{L}\!\left(\mathbf{A}, \mathbf{U}_1^{(n-1)}, \dots, \mathbf{U}_K^{(n-1)}, \mathbf{V}^{(n-1)}, \mathbf{W}^{(n-1)}, \mathbf{F}_1^{(n-1)}, \dots, \mathbf{F}_K^{(n-1)}, \mathbf{G}^{(n-1)}, \mathbf{H}^{(n-1)}\right) \quad (29)$$
$$\left\{\mathbf{U}_1^{(n)}, \dots, \mathbf{U}_K^{(n)}, \mathbf{V}^{(n)}, \mathbf{W}^{(n)}\right\} = \arg\min_{\mathbf{U}_1, \dots, \mathbf{U}_K, \mathbf{V}, \mathbf{W}}\; \mathcal{L}\!\left(\mathbf{A}^{(n)}, \mathbf{U}_1, \dots, \mathbf{U}_K, \mathbf{V}, \mathbf{W}, \mathbf{F}_1^{(n-1)}, \dots, \mathbf{F}_K^{(n-1)}, \mathbf{G}^{(n-1)}, \mathbf{H}^{(n-1)}\right) \quad (30)$$
$$\mathbf{F}_k^{(n)} = \mathbf{F}_k^{(n-1)} - \left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{U}_k^{(n)}\right), \quad k = 1, \dots, K$$
$$\mathbf{G}^{(n)} = \mathbf{G}^{(n-1)} - \left(\nabla\mathbf{A}^{(n)} - \mathbf{V}^{(n)}\right)$$
$$\mathbf{H}^{(n)} = \mathbf{H}^{(n-1)} - \left(\mathbf{A}^{(n)} - \mathbf{W}^{(n)}\right) \quad (31)$$
where the superscript $(n)$ denotes the value of an iterate at iteration number $n \ge 0$. We repeat the iterations until convergence is reached, up to a maximum allowed number of iterations.
Since we define the auxiliary variables independent of each other, the minimization of the augmented Lagrangian function (28) with respect to the auxiliary variables can be realized separately. Thus, (30) is equivalent to
$$\mathbf{U}_k^{(n)} = \arg\min_{\mathbf{U}_k}\; \frac{1}{2} \left\| \boldsymbol{\Lambda}_k^{-1/2} (\mathbf{Y}_k - \mathbf{R}_k \mathbf{E}\mathbf{U}_k \mathbf{S}_k) \right\|_F^2 + \frac{\mu}{2} \left\| \mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{U}_k - \mathbf{F}_k^{(n-1)} \right\|_F^2, \quad k = 1, \dots, K \quad (32)$$
$$\mathbf{V}^{(n)} = \arg\min_{\mathbf{V}}\; \alpha \|\mathbf{V}\|_{2,1} + \frac{\mu}{2} \left\| \nabla\mathbf{A}^{(n)} - \mathbf{V} - \mathbf{G}^{(n-1)} \right\|_F^2 \quad (33)$$
$$\mathbf{W}^{(n)} = \arg\min_{\mathbf{W}}\; \imath_{\mathcal{S}}(\mathbf{W}) + \frac{\mu}{2} \left\| \mathbf{A}^{(n)} - \mathbf{W} - \mathbf{H}^{(n-1)} \right\|_F^2. \quad (34)$$

4.2. Solutions of Subproblems

Considering (28), (29) can be written as
$$\mathbf{A}^{(n)} = \arg\min_{\mathbf{A}}\; \sum_{k=1}^{K} \left\| \mathbf{A}\mathbf{B}_k - \mathbf{U}_k^{(n-1)} - \mathbf{F}_k^{(n-1)} \right\|_F^2 + \left\| \nabla\mathbf{A} - \mathbf{V}^{(n-1)} - \mathbf{G}^{(n-1)} \right\|_F^2 + \left\| \mathbf{A} - \mathbf{W}^{(n-1)} - \mathbf{H}^{(n-1)} \right\|_F^2. \quad (35)$$
Calculating the gradient of the cost function in (35) with respect to $\mathbf{A}$ and setting it to zero gives
$$\mathbf{A}^{(n)} = \left[\sum_{k=1}^{K} \left(\mathbf{U}_k^{(n-1)} + \mathbf{F}_k^{(n-1)}\right)\mathbf{B}_k^\top + \mathbf{Q}_1^{(n-1)}\mathbf{D}_h + \mathbf{Q}_2^{(n-1)}\mathbf{D}_v + \mathbf{W}^{(n-1)} + \mathbf{H}^{(n-1)}\right] \left(\sum_{k=1}^{K} \mathbf{B}_k\mathbf{B}_k^\top + \mathbf{D}_h^\top\mathbf{D}_h + \mathbf{D}_v^\top\mathbf{D}_v + \mathbf{I}_N\right)^{-1} \quad (36)$$
where, for the convenience of presentation, we define $\mathbf{Q}_1^{(n-1)}$ and $\mathbf{Q}_2^{(n-1)}$ as
$$\begin{bmatrix} \mathbf{Q}_1^{(n-1)} \\ \mathbf{Q}_2^{(n-1)} \end{bmatrix} = \mathbf{V}^{(n-1)} + \mathbf{G}^{(n-1)}. \quad (37)$$
To make the computation of $\mathbf{A}^{(n)}$ in (36) more efficient, we assume that the two-dimensional convolutions represented by $\mathbf{B}_k$, $k = 1, \dots, K$, are cyclic. In addition, we assume that the differential matrix operators $\mathbf{D}_h$ and $\mathbf{D}_v$ apply with periodic boundaries. Consequently, multiplications by $\mathbf{B}_k$, $\mathbf{D}_h$, and $\mathbf{D}_v$ as well as by $\left(\sum_{k=1}^{K} \mathbf{B}_k\mathbf{B}_k^\top + \mathbf{D}_h^\top\mathbf{D}_h + \mathbf{D}_v^\top\mathbf{D}_v + \mathbf{I}_N\right)^{-1}$ can be performed through the use of the fast Fourier transform (FFT) algorithm and the circular convolution theorem. This theorem states that the Fourier transform of a circular convolution is the pointwise product of the Fourier transforms, i.e., a circular convolution can be expressed as the inverse Fourier transform of the product of the individual spectra [65].
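The following sketch illustrates this FFT-based computation for the inverse factor in (36): under the cyclic assumption, each operator is diagonalized by the 2-D DFT, so applying the inverse reduces to a pointwise division per abundance band. The kernels and array shapes are illustrative assumptions.

```python
# A sketch of applying the inverse in the A-update (36) via 2-D FFTs, under
# the cyclic-convolution assumption. psf_list, H, W, and T are placeholders.
import numpy as np

def fft_of_kernel(kernel, H, W):
    """Zero-pad a small kernel onto an H x W grid, shift it so that it is
    centered at the origin, and return its 2-D DFT."""
    k = np.zeros((H, W))
    kh, kw = kernel.shape
    k[:kh, :kw] = kernel
    k = np.roll(k, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(k)

def solve_A_update(T, psf_list, H, W):
    """Compute T (sum_k B_k B_k^T + D_h^T D_h + D_v^T D_v + I)^{-1},
    where T is M x N (N = H*W) and each row is an image in row-major order."""
    Dh = fft_of_kernel(np.array([[1.0, -1.0]]), H, W)      # horizontal difference
    Dv = fft_of_kernel(np.array([[1.0], [-1.0]]), H, W)    # vertical difference
    denom = 1.0 + np.abs(Dh) ** 2 + np.abs(Dv) ** 2        # real and positive
    for psf in psf_list:
        denom += np.abs(fft_of_kernel(psf, H, W)) ** 2     # |B_k-hat|^2 terms
    out = np.empty_like(T)
    for m, row in enumerate(T):                            # per abundance band
        img = row.reshape(H, W)
        out[m] = np.real(np.fft.ifft2(np.fft.fft2(img) / denom)).ravel()
    return out
```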
Equating the gradient of the cost function in (32) with respect to $\mathbf{U}_k$ to zero results in
$$\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{R}_k\mathbf{E}\,\mathbf{U}_k^{(n)}\mathbf{S}_k\mathbf{S}_k^\top + \mu\mathbf{U}_k^{(n)} = \mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{Y}_k\mathbf{S}_k^\top + \mu\left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{F}_k^{(n-1)}\right). \quad (38)$$
Multiplying both sides of (38) from the right by the masking matrix $\mathbf{M}_k = \mathbf{S}_k\mathbf{S}_k^\top$ and by its complement $\mathbf{I}_N - \mathbf{M}_k$ yields
$$\mathbf{U}_k^{(n)}\mathbf{M}_k = \left(\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{R}_k\mathbf{E} + \mu\mathbf{I}_M\right)^{-1}\left[\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{Y}_k\mathbf{S}_k^\top + \mu\left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{F}_k^{(n-1)}\right)\mathbf{M}_k\right] \quad (39)$$
and
$$\mathbf{U}_k^{(n)}(\mathbf{I}_N - \mathbf{M}_k) = \left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{F}_k^{(n-1)}\right)(\mathbf{I}_N - \mathbf{M}_k), \quad (40)$$
respectively. Note that we have $\mathbf{S}_k^\top\mathbf{S}_k = \mathbf{I}_{N_k}$ and $\mathbf{M}_k$ is idempotent, i.e., $\mathbf{M}_k\mathbf{M}_k = \mathbf{M}_k$. Summing both sides of (39) and (40) gives the solution of (32) for $k = 1, \dots, K$ as
$$\mathbf{U}_k^{(n)} = \mathbf{U}_k^{(n)}\mathbf{M}_k + \mathbf{U}_k^{(n)}(\mathbf{I}_N - \mathbf{M}_k) = \left(\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{R}_k\mathbf{E} + \mu\mathbf{I}_M\right)^{-1}\left[\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{Y}_k\mathbf{S}_k^\top + \mu\left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{F}_k^{(n-1)}\right)\mathbf{M}_k\right] + \left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{F}_k^{(n-1)}\right)(\mathbf{I}_N - \mathbf{M}_k). \quad (41)$$
The terms $\left(\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{R}_k\mathbf{E} + \mu\mathbf{I}_M\right)^{-1}$ and $\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{Y}_k\mathbf{S}_k^\top$ do not change during the iterations and can be precomputed.
The subproblem (33) can be decomposed pixelwise and its solution is linked to the so-called Moreau proximity operator of the $\ell_{2,1}$-norm, given by column-wise vector soft-thresholding [66,67]. If we define
$$\mathbf{Z}^{(n)} = \nabla\mathbf{A}^{(n)} - \mathbf{G}^{(n-1)}, \quad (42)$$
the $j$th column of $\mathbf{V}^{(n)}$, denoted by $\mathbf{v}_j^{(n)}$, is given in terms of the $j$th column of $\mathbf{Z}^{(n)}$, denoted by $\mathbf{z}_j^{(n)}$, as
$$\mathbf{v}_j^{(n)} = \frac{\max\left\{\left\|\mathbf{z}_j^{(n)}\right\|_2 - \frac{\alpha}{\mu},\, 0\right\}}{\left\|\mathbf{z}_j^{(n)}\right\|_2}\, \mathbf{z}_j^{(n)}. \quad (43)$$
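A compact sketch of the column-wise vector soft-thresholding in (43), i.e., the proximity operator of $(\alpha/\mu)\|\cdot\|_{2,1}$, with an illustrative input:

```python
# Column-wise vector soft-thresholding as in (43): shrink each column of Z
# toward zero by tau in l_2 norm. The input matrix and threshold are
# illustrative.
import numpy as np

def prox_l21(Z, tau):
    norms = np.linalg.norm(Z, axis=0)
    scale = np.maximum(norms - tau, 0.0) / np.maximum(norms, 1e-12)  # avoid 0/0
    return Z * scale                          # broadcast over rows

Z = np.random.default_rng(4).standard_normal((10, 6))
V = prox_l21(Z, tau=0.5)
print(np.linalg.norm(V, axis=0))              # column norms shrunk by 0.5 (floored at 0)
```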
The solution of (34) is the value of the proximity operator of the indicator function $\imath_{\mathcal{S}}(\mathbf{W})$ at the point $\mathbf{A}^{(n)} - \mathbf{H}^{(n-1)}$, which is the projection of $\mathbf{A}^{(n)} - \mathbf{H}^{(n-1)}$ onto the set $\mathcal{S}$ defined by (24). Therefore, we have
$$\mathbf{W}^{(n)} = \arg\min_{\mathbf{W} \in \mathcal{S}}\; \left\| \mathbf{A}^{(n)} - \mathbf{H}^{(n-1)} - \mathbf{W} \right\|_F^2 = \Pi_{\mathcal{S}}\left\{\mathbf{A}^{(n)} - \mathbf{H}^{(n-1)}\right\} \quad (44)$$
where $\Pi_{\mathcal{S}}\{\cdot\}$ denotes the projection onto $\mathcal{S}$. We implement this projection onto the unit $(M-1)$-simplex employing the algorithm proposed in [68].
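For reference, the following sketch implements a generic sorting-based Euclidean projection of each column onto the unit simplex; it realizes the same projection $\Pi_{\mathcal{S}}$ but is not necessarily the specific algorithm of [68].

```python
# Column-wise Euclidean projection onto the unit simplex
# {w : w >= 0, sum(w) = 1}, using the standard O(M log M) sorting approach.
# This is a generic routine; the paper cites a specific algorithm [68].
import numpy as np

def project_simplex_columns(Y):
    """Project each column of Y (M x N) onto the unit (M-1)-simplex."""
    M, N = Y.shape
    U = np.sort(Y, axis=0)[::-1]                  # columns sorted descending
    css = np.cumsum(U, axis=0) - 1.0              # cumulative sums minus 1
    idx = np.arange(1, M + 1)[:, None]
    cond = U - css / idx > 0                      # feasibility of each support size
    rho = M - np.argmax(cond[::-1], axis=0) - 1   # last index where cond holds
    theta = css[rho, np.arange(N)] / (rho + 1.0)  # per-column threshold
    return np.maximum(Y - theta, 0.0)

W = project_simplex_columns(np.random.default_rng(5).standard_normal((5, 4)))
print(W.min() >= 0, np.allclose(W.sum(axis=0), 1.0))   # True True
```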
Algorithm 1 presents a summary of the proposed algorithm.
Algorithm 1 The proposed algorithm
1: initialize
2:   $\mathbf{E} \leftarrow \mathrm{VCA}(\mathbf{Y}_l)$  % if $\mathbf{E}$ is not known and $\mathbf{Y}_l$ has full spectral resolution
3:   $\mathbf{A}^{(0)} \leftarrow$ upscale and interpolate the output of $\mathrm{SUnSAL}(\mathbf{Y}_l, \mathbf{E})$
4:   for $k = 1, \dots, K$
5:     $\mathbf{U}_k^{(0)} = \mathbf{A}^{(0)}$
6:     $\mathbf{F}_k^{(0)} = \mathbf{0}_{M \times N}$
7:   $\mathbf{V}^{(0)} = \nabla\mathbf{A}^{(0)}$, $\mathbf{W}^{(0)} = \mathbf{A}^{(0)}$
8:   $\mathbf{G}^{(0)} = \mathbf{0}_{2M \times N}$, $\mathbf{H}^{(0)} = \mathbf{0}_{M \times N}$
9: for $n = 1, 2, \dots$  % until a convergence criterion is met or a given maximum number of iterations is reached
10:   $\begin{bmatrix} \mathbf{Q}_1^{(n-1)} \\ \mathbf{Q}_2^{(n-1)} \end{bmatrix} = \mathbf{V}^{(n-1)} + \mathbf{G}^{(n-1)}$
11:   $\mathbf{A}^{(n)} = \left[\sum_{k=1}^{K} \left(\mathbf{U}_k^{(n-1)} + \mathbf{F}_k^{(n-1)}\right)\mathbf{B}_k^\top + \mathbf{Q}_1^{(n-1)}\mathbf{D}_h + \mathbf{Q}_2^{(n-1)}\mathbf{D}_v + \mathbf{W}^{(n-1)} + \mathbf{H}^{(n-1)}\right] \left(\sum_{k=1}^{K} \mathbf{B}_k\mathbf{B}_k^\top + \mathbf{D}_h^\top\mathbf{D}_h + \mathbf{D}_v^\top\mathbf{D}_v + \mathbf{I}_N\right)^{-1}$
12:   for $k = 1, \dots, K$
13:     $\mathbf{U}_k^{(n)} = \left(\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{R}_k\mathbf{E} + \mu\mathbf{I}_M\right)^{-1}\left[\mathbf{E}^\top\mathbf{R}_k^\top\boldsymbol{\Lambda}_k^{-1}\mathbf{Y}_k\mathbf{S}_k^\top + \mu\left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{F}_k^{(n-1)}\right)\mathbf{M}_k\right] + \left(\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{F}_k^{(n-1)}\right)(\mathbf{I}_N - \mathbf{M}_k)$
14:   $\mathbf{Z}^{(n)} = \nabla\mathbf{A}^{(n)} - \mathbf{G}^{(n-1)}$
15:   for $j = 1, \dots, N$
16:     $\mathbf{v}_j^{(n)} = \frac{\max\{\|\mathbf{z}_j^{(n)}\|_2 - \alpha/\mu,\, 0\}}{\|\mathbf{z}_j^{(n)}\|_2}\, \mathbf{z}_j^{(n)}$
17:   $\mathbf{W}^{(n)} = \Pi_{\mathcal{S}}\{\mathbf{A}^{(n)} - \mathbf{H}^{(n-1)}\}$
18:   for $k = 1, \dots, K$
19:     $\mathbf{F}_k^{(n)} = \mathbf{F}_k^{(n-1)} - (\mathbf{A}^{(n)}\mathbf{B}_k - \mathbf{U}_k^{(n)})$
20:   $\mathbf{G}^{(n)} = \mathbf{G}^{(n-1)} - (\nabla\mathbf{A}^{(n)} - \mathbf{V}^{(n)})$
21:   $\mathbf{H}^{(n)} = \mathbf{H}^{(n-1)} - (\mathbf{A}^{(n)} - \mathbf{W}^{(n)})$
22: calculate the fused image
23:   $\hat{\mathbf{X}} = \mathbf{E}\mathbf{A}^{(n)}$

4.3. Convergence

By defining
$$\mathcal{U} = \left(\mathbf{U}_1, \dots, \mathbf{U}_K, \mathbf{V}, \mathbf{W}\right) \quad (45)$$
and the linear operator
$$\mathcal{C}(\mathbf{A}) = \left(\mathbf{A}\mathbf{B}_1, \dots, \mathbf{A}\mathbf{B}_K, \nabla\mathbf{A}, \mathbf{A}\right), \quad (46)$$
(27) can be expressed as
$$\min_{\mathbf{A}}\; f(\mathcal{U}) \quad \text{subject to } \mathcal{U} = \mathcal{C}(\mathbf{A}) \quad (47)$$
where
$$f(\mathcal{U}) = \frac{1}{2} \sum_{k=1}^{K} \left\| \boldsymbol{\Lambda}_k^{-1/2} (\mathbf{Y}_k - \mathbf{R}_k \mathbf{E}\mathbf{U}_k \mathbf{S}_k) \right\|_F^2 + \alpha \|\mathbf{V}\|_{2,1} + \imath_{\mathcal{S}}(\mathbf{W}). \quad (48)$$
The function $f(\mathcal{U})$ is closed, proper, and convex as it is a sum of closed, proper, and convex functions, and the matrix representation of $\mathcal{C}$ has full column rank owing to its identity block. Therefore, according to Theorem 8 of [56], if (47) has a solution, the proposed algorithm converges to it regardless of the initial values, as long as the penalty parameter $\mu$ is positive. If no solution exists, at least one of the iterates $\mathbf{A}^{(n)}$ and $\mathcal{U}^{(n)}$ will diverge.
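In practice, convergence of the iterations can be monitored through the ADMM primal residuals, i.e., the extent to which the constraints of (27) are satisfied. The following stopping test is a hypothetical illustration; the operator handles and the tolerance are placeholders.

```python
# A sketch of a primal-residual stopping test for the ADMM iterations:
# stop when the constraints U_k = A B_k, V = grad(A), W = A of (27) hold
# to within a relative tolerance. apply_Bk and grad_operator stand in for
# the blurring and gradient operators used in the paper.
import numpy as np

def primal_residual(A, U_list, V, W, apply_Bk, grad_operator):
    r2 = sum(np.linalg.norm(apply_Bk(A, k) - U) ** 2 for k, U in enumerate(U_list))
    r2 += np.linalg.norm(grad_operator(A) - V) ** 2
    r2 += np.linalg.norm(A - W) ** 2
    return np.sqrt(r2)

def converged(A, U_list, V, W, apply_Bk, grad_operator, tol=1e-4):
    r = primal_residual(A, U_list, V, W, apply_Bk, grad_operator)
    return r <= tol * max(np.linalg.norm(A), 1.0)
```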

5. Simulations

To examine the performance of the proposed algorithm in comparison with the state-of-the-art, we simulate the fusion of three multiband images, viz., a panchromatic image, a multispectral image, and a hyperspectral image. To this end, we adopt the popular practice known as Wald's protocol [69], which is to use a reference image with high spatial and spectral resolutions to generate the lower-resolution images that are fused and then to evaluate the fusion performance by comparing the fused image with the reference image.
We obtain the reference images of our experiments by cropping five publicly available hyperspectral images to the spatial dimensions given in Table 1. These images are called Botswana [70], Indian Pines [71], Washington DC Mall [71], Moffett Field [17], and Kennedy Space Center [70]. The Botswana image has been captured by the Hyperion sensor aboard the Earth Observing 1 (EO-1) satellite, the Washington DC Mall image by the airborne-mounted Hyperspectral Digital Imagery Collection Experiment (HYDICE) sensor, and the Indian Pines, Moffett Field, and Kennedy Space Center images by the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) instrument. All images cover the visible near-infrared (VNIR) and short-wavelength-infrared (SWIR) ranges with uncalibrated, excessively noisy, and water-absorption bands removed. The spectral resolution of each image is also given in Table 1. The data as well as the MATLAB code used to produce the results of this paper can be found at [72].
We generate three multiband images (panchromatic, multispectral, and hyperspectral) using each reference image. We obtain the hyperspectral images by applying a rotationally-symmetric 2D Gaussian blur filter with a kernel size of 13 × 13 and a standard deviation of 2.12 to each reference image followed by downsampling with a ratio of 4 in both spatial dimensions for all bands. For the multispectral images, we use a Gaussian blur filter with a kernel size of 7 × 7 and a standard deviation of 1.06 and downsampling with a ratio of 2 in both spatial dimensions for all bands of each reference image. Afterwards, we downgrade the resultant images spectrally by applying the spectral responses of the Landsat 8 multispectral sensor. This sensor has eight multispectral bands and one panchromatic band. Figure 1 depicts the spectral responses of all the bands of this sensor [73]. We create the panchromatic images from the reference images using the panchromatic band of the Landsat 8 sensor without applying any spatial blurring or downsampling. We add zero-mean Gaussian white noise to each band of the produced multiband images such that the band-specific signal-to-noise ratio (SNR) is 30 dB for the multispectral and hyperspectral images and 40 dB for the panchromatic image. In practice, SNR may vary along the bands of a multiband sensor and the noise may be non-zero-mean or non-Gaussian. Our use of the same SNR for all bands and zero-mean Gaussian noise is a simplification adopted for the purpose of evaluating the proposed algorithm and comparing it with the considered benchmarks.
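The following sketch mimics the degradations described above (Gaussian blur, spatial downsampling, spectral integration, and additive noise at a prescribed per-band SNR); the reference cube and spectral responses are random placeholders, and the blur uses SciPy's truncated Gaussian filter rather than the exact 13 × 13 and 7 × 7 kernels.

```python
# A sketch of the Wald's-protocol degradations: blur, downsample, apply
# sensor spectral responses, and add zero-mean white Gaussian noise per
# band at a target SNR (dB).
import numpy as np
from scipy.ndimage import gaussian_filter

def add_noise_snr(Y, snr_db, rng):
    out = np.empty_like(Y)
    for b, band in enumerate(Y):
        sigma = np.sqrt(np.mean(band ** 2) / 10 ** (snr_db / 10.0))
        out[b] = band + sigma * rng.standard_normal(band.shape)
    return out

rng = np.random.default_rng(6)
ref = rng.random((150, 128, 128))                        # hypothetical reference cube
R_ms = rng.random((8, 150)); R_ms /= R_ms.sum(1, keepdims=True)  # toy responses

blurred4 = np.stack([gaussian_filter(b, 2.12, mode="wrap") for b in ref])
hs = add_noise_snr(blurred4[:, ::4, ::4], 30, rng)       # hyperspectral: blur + /4

blurred2 = np.stack([gaussian_filter(b, 1.06, mode="wrap") for b in ref])
ms_full = np.tensordot(R_ms, blurred2[:, ::2, ::2], axes=([1], [0]))
ms = add_noise_snr(ms_full, 30, rng)                     # multispectral: blur + /2 + R
```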
Note that we have selected the standard deviations of the abovementioned 2D Gaussian blur filters such that the normalized magnitude of the modulation transfer function (MTF) of both filters is approximately 0.25 at the Nyquist frequency in both spatial dimensions [74], as shown in Figure 2. We have also selected the filter kernel sizes in accordance with the downsampling ratios and the selected standard deviations. In our simulations, we use symmetric 2D Gaussian blur filters for simplicity and ease of computation. Gaussian blur filters are known to approximate well the real acquisition MTFs, which may be affected by a number of physical processes. With a pushbroom acquisition, the MTF is different in the across-track and along-track directions. This is because, in the along-track direction, the blurring is due to the effects of the spatial resolution and the apparent motion of the scene. On the other hand, in the across-track direction, the blurring is mainly attributable to the resolution of the instrument, fixed by both the detector and the optics. There is also a contribution to the MTF from propagation through a scattering atmosphere.
The current multiband image fusion algorithms published in the literature are designed to fuse two images at a time. In order to compare the performance of the proposed algorithm with the state-of-the-art, we consider fusing the abovementioned three multiband images in three different ways, which we refer to as Pan + HS, Pan + (MS + HS), and (Pan + MS) + HS, using the existing algorithms for pansharpening, hyperspectral pansharpening, and hyperspectral-multispectral fusion. In Pan + HS, we only fuse the panchromatic and hyperspectral images. In Pan + (MS + HS), and (Pan + MS) + HS, we fuse the given images in two cascading stages. In Pan + (MS + HS), first, we fuse the multispectral and hyperspectral images. Then, we fuse the resultant hyperspectral image with the panchromatic image. We use the same algorithm at both stages, albeit with different parameter values. In (Pan + MS) + HS, we first fuse the panchromatic image with the multispectral one. Then, we fuse the pansharpened multispectral image with the hyperspectral image. We use two different algorithms at each of the two stages resulting in four combined solutions.
For pansharpening, which is the fusion of a panchromatic image with a multispectral one, we use two algorithms called the band-dependent spatial detail (BDSD) [75] and the modulation-transfer-function generalized Laplacian pyramid with high-pass modulation (MTF-GLP-HPM) [76,77,78]. The BDSD algorithm belongs to the class of component substitution methods and the MTF-GLP-HPM algorithm falls into the category of multiresolution analysis. In [18], where several pansharpening algorithms are studied, it is shown that the BDSD and MTF-GLP-HPM algorithms exhibit the best performance among all the considered ones.
For fusing a panchromatic or multispectral image with a hyperspectral image, we use two algorithms proposed in [38,79,80], which are called HySure and R-FUSE-TV, respectively. These algorithms are based on total-variation regularization and are among the best performing and most efficient hyperspectral pansharpening and multispectral–hyperspectral fusion algorithms currently available [21,81].
We use three performance metrics for assessing the quality of a fused image with respect to its reference image. The metrics are the relative dimensionless global error in synthesis (ERGAS) [82], the spectral angle mapper (SAM) [83], and $Q2^n$ [84]. The metric $Q2^n$ is a generalization of the universal image quality index (UIQI) proposed in [85] and an extension of the $Q4$ index [86] to hyperspectral images based on hypercomplex numbers.
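For concreteness, below are sketches of SAM and ERGAS in their commonly used forms; the exact conventions follow the cited papers and may differ in minor details (e.g., units and constants).

```python
# Sketches of two of the quality metrics named above, SAM and ERGAS.
import numpy as np

def sam_degrees(X_ref, X_hat):
    """Mean spectral angle (degrees) between corresponding pixel spectra.
    Inputs are (L, N) matrices with spectra in columns."""
    num = np.sum(X_ref * X_hat, axis=0)
    den = np.linalg.norm(X_ref, axis=0) * np.linalg.norm(X_hat, axis=0)
    cos = np.clip(num / np.maximum(den, 1e-12), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()

def ergas(X_ref, X_hat, ratio):
    """ERGAS with spatial downsampling ratio `ratio` (e.g., 4):
    100/ratio * sqrt(mean over bands of RMSE_l^2 / mean_l^2)."""
    rmse2 = np.mean((X_ref - X_hat) ** 2, axis=1)
    mean2 = np.mean(X_ref, axis=1) ** 2
    return 100.0 / ratio * np.sqrt(np.mean(rmse2 / np.maximum(mean2, 1e-12)))
```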
We extract the endmembers (columns of E ) from each hyperspectral image using the vertex component analysis (VCA) algorithm [87]. The VCA is a fast unsupervised unmixing algorithm that assumes the endmembers as the vertices of a simplex encompassing the hyperspectral data cloud. We utilize the SUnSAL algorithm [88] together with the extracted endmembers to unmix each hyperspectral image and obtain its abundance matrix. Then, we upscale the resulting matrix by a factor of four and apply two-dimensional spline interpolation on each of its rows (abundance bands) to generate the initial estimate for the abundance matrix A ( 0 ) . We initialize the proposed algorithm as well as the HySure and R-FUSE-TV algorithms by this matrix.
To make our comparisons fair, we tune the values of the parameters in the HySure and R-FUSE-TV algorithms to yield the best possible performance in all experiments. In addition, in order to use the BDSD and MTF-GLP-HPM algorithms to their best potential, we provide these algorithms with the true point-spread function, i.e., the blurring kernel, used to generate the multispectral images.
Apart from the number of endmembers, which can be estimated using, for example, the HySime algorithm [31], the proposed algorithm has two tunable parameters: the total-variation regularization parameter $\alpha$ and the ADMM penalty parameter $\mu$. Automatic tuning of these parameters is an interesting and challenging subject. There are a number of strategies that can be employed, such as those proposed in [66,89]. We found through experimentation that, although the value of $\mu$ impacts the convergence speed of the proposed algorithm, as long as it is within an appropriate range, it has little influence on the accuracy of the algorithm. Therefore, we set it to $\mu = 1.5 \times 10^{3}$ in all experiments. The value of $\alpha$ affects the performance of the proposed algorithm in subtle ways, as shown in Figure 3 where we plot the performance metrics ERGAS, SAM, and $Q2^n$ against $\alpha$ for the Botswana and Washington DC Mall images. The results in Figure 3 suggest that, for different values of $\alpha$, there is a trade-off between the performance metrics, specifically ERGAS and $Q2^n$ on one side and SAM on the other. Therefore, we tune the value of $\alpha$ for each experiment only roughly to obtain a reasonable set of values for all three performance metrics. We give the values of $\alpha$ used in the proposed algorithm in Table 1.
In Table 2, we give the values of the performance metrics to assess the quality of the images fused using the proposed algorithm and the considered benchmarks. We provide the performance metrics for the case of considering only the bands within the spectrum of the panchromatic image as well as the case of considering all bands, i.e., the entire spectrum of the reference image. We also give the time taken by each algorithm to produce the fused images. We used MATLAB (The MathWorks, Natick, MA, USA) with a 2.9-GHz Core-i7 CPU and 24 GB of DDR3 RAM and ran each of the proposed, HySure, and R-FUSE-TV algorithms for 200 iterations as they always converged sufficiently after this number of iterations. According to the results in Table 2, the proposed algorithm significantly outperforms the considered benchmarks. It is also evident from the required processing times that the computational (time) complexity of the proposed algorithm is lower than those of its contenders.
In Figure 4 and Figure 5, we plot the sorted per-pixel normalized root-mean-square error (NRMSE) values of the proposed algorithm and the best-performing algorithms from each of the Pan + HS, Pan + (MS + HS), and (Pan + MS) + HS categories. Figure 4 corresponds to the case of considering only the spectrum of the panchromatic image and Figure 5 to the case of considering the entire spectrum. We define the per-pixel NRMSE as $\|\mathbf{x}_j - \hat{\mathbf{x}}_j\|_2 / \|\mathbf{x}_j\|_2$ where $\mathbf{x}_j$ and $\hat{\mathbf{x}}_j$ are the $j$th columns of the reference image $\mathbf{X}$ and the fused image $\hat{\mathbf{X}}$, respectively. We sort the NRMSE values in ascending order.
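A minimal sketch of this per-pixel NRMSE curve, for hypothetical $(L, N)$ reference and fused image matrices:

```python
# Sorted per-pixel NRMSE as defined above: one value per pixel (column),
# sorted in ascending order. X_ref and X_hat are hypothetical (L, N) arrays.
import numpy as np

def sorted_nrmse(X_ref, X_hat):
    err = np.linalg.norm(X_ref - X_hat, axis=0) / np.linalg.norm(X_ref, axis=0)
    return np.sort(err)
```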
In Figure 6, we show RGB renderings of the reference images together with the panchromatic, multispectral, and hyperspectral images generated from them and used for the fusion. We also show the fused images yielded by the proposed algorithm and by the Pan + (MS + HS) fusion using the HySure algorithm, which generally performs better than the other considered benchmarks. The multispectral images are depicted using their red, green, and blue bands. The RGB representations of the hyperspectral images are rendered by transforming the spectral data to the CIE XYZ color space and then transforming the XYZ values to the sRGB color space. Visual inspection of the reference and fused images shown in Figure 6 shows that the images fused by the proposed algorithm match their corresponding reference images better than those produced by the Pan + (MS + HS) fusion using the HySure algorithm.

6. Conclusions

We proposed a new image fusion algorithm that can simultaneously fuse multiple multiband images. We utilized the well-known forward observation model together with the linear mixture model to cast the fusion problem as a reduced-dimension linear inverse problem. We used a vector total-variation penalty as well as non-negativity and sum-to-one constraints on the endmember abundances to regularize the associated maximum-likelihood estimation problem. The regularization encourages the estimated fused image to have low rank with a sparse representation in the spectral domain while preserving the edges and discontinuities in the spatial domain. We solved the regularized problem using the alternating direction method of multipliers. We demonstrated the advantages of the proposed algorithm in comparison with the state-of-the-art via experiments with five real hyperspectral images conducted following Wald's protocol.

Author Contributions

The author was responsible for all aspects of this work.

Funding

This work received no external funding.

Conflicts of Interest

The author declares no conflict of interest.

References

1. Arablouei, R. Fusion of multiple multiband images with complementary spatial and spectral resolutions. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018.
2. Courbot, J.-B.; Mazet, V.; Monfrini, E.; Collet, C. Extended faint source detection in astronomical hyperspectral images. Signal Process. 2017, 135, 274–283.
3. Du, B.; Zhang, Y.; Zhang, L.; Zhang, L. A hypothesis independent subpixel target detector for hyperspectral images. Signal Process. 2015, 110, 244–249.
4. Mohammadzadeh, A.; Tavakoli, A.; Zoej, M.J.V. Road extraction based on fuzzy logic and mathematical morphology from pansharpened IKONOS images. Photogramm. Rec. 2006, 21, 44–60.
5. Souza, C., Jr.; Firestone, L.; Silva, L.M.; Roberts, D. Mapping forest degradation in the Eastern Amazon from SPOT 4 through spectral mixture models. Remote Sens. Environ. 2003, 87, 494–506.
6. Du, B.; Zhao, R.; Zhang, L.; Zhang, L. A spectral-spatial based local summation anomaly detection method for hyperspectral images. Signal Process. 2016, 124, 115–131.
7. Licciardi, G.A.; Villa, A.; Khan, M.M.; Chanussot, J. Image fusion and spectral unmixing of hyperspectral images for spatial improvement of classification maps. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 July 2012.
8. Bioucas-Dias, J.M.; Plaza, A.; Dobigeon, N.; Parente, M.; Du, Q.; Gader, P.; Chanussot, J. Hyperspectral unmixing overview: Geometrical, statistical, and sparse regression-based approaches. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2012, 5, 354–379.
9. Caiafa, C.F.; Salerno, E.; Proto, A.N.; Fiumi, L. Blind spectral unmixing by local maximization of non-Gaussianity. Signal Process. 2008, 88, 50–68.
10. Bioucas-Dias, J.M.; Plaza, A.; Camps-Valls, G.; Scheunders, P.; Nasrabadi, N.; Chanussot, J. Hyperspectral remote sensing data analysis and future challenges. IEEE Geosci. Remote Sens. Mag. 2013, 1, 6–36.
11. Shaw, G.A.; Burke, H.-H.K. Spectral imaging for remote sensing. Lincoln Lab. J. 2003, 14, 3–28.
12. Greer, J. Sparse demixing of hyperspectral images. IEEE Trans. Image Process. 2012, 21, 219–228.
13. Arablouei, R.; de Hoog, F. Hyperspectral image recovery via hybrid regularization. IEEE Trans. Image Process. 2016, 25, 5649–5663.
14. Arablouei, R. Spectral unmixing with perturbed endmembers. IEEE Trans. Geosci. Remote Sens. 2018, in press.
15. Arablouei, R.; Goan, E.; Gensemer, S.; Kusy, B. Fast and robust push-broom hyperspectral imaging via DMD-based scanning. In Novel Optical Systems Design and Optimization XIX, Proceedings of the Optical Engineering + Applications 2016—Part of SPIE Optics + Photonics, San Diego, CA, USA, 6–10 August 2016; SPIE: Bellingham, WA, USA, 2016; Volume 9948.
16. Satellite Sensors. Available online: https://www.satimagingcorp.com/satellite-sensors/ (accessed on 10 October 2018).
17. NASA's Airborne Visible/Infrared Imaging Spectrometer (AVIRIS). Available online: https://aviris.jpl.nasa.gov/data/free_data.html (accessed on 10 October 2018).
18. Vivone, G.; Alparone, L.; Chanussot, J.; Mura, M.D.; Garzelli, A.; Licciardi, G.A.; Restaino, R.; Wald, L. A critical comparison among pansharpening algorithms. IEEE Trans. Geosci. Remote Sens. 2015, 53, 2565–2586.
19. Alparone, L.; Wald, L.; Chanussot, J.; Thomas, C.; Gamba, P.; Bruce, L.M. Comparison of pansharpening algorithms: Outcome of the 2006 GRS-S data-fusion contest. IEEE Trans. Geosci. Remote Sens. 2007, 45, 3012–3021.
20. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. 25 years of pansharpening: A critical review and new developments. In Signal and Image Processing for Remote Sensing, 2nd ed.; Chen, C.H., Ed.; CRC Press: Boca Raton, FL, USA, 2011.
21. Loncan, L.; Almeida, L.B.; Bioucas-Dias, J.M.; Briottet, X.; Chanussot, J.; Dobigeon, N.; Fabre, S.; Liao, W.; Licciardi, G.A.; Simões, M.; et al. Hyperspectral pansharpening: A review. IEEE Geosci. Remote Sens. Mag. 2015, 3, 27–46.
22. Yin, H. Sparse representation based pansharpening with details injection model. Signal Process. 2015, 113, 218–227.
23. Pajares, G.; de la Cruz, J.M. A wavelet-based image fusion tutorial. Pattern Recognit. 2004, 37, 1855–1872.
24. Selva, M.; Aiazzi, B.; Butera, F.; Chiarantini, L.; Baronti, S. Hyper-sharpening: A first approach on SIM-GA data. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2015, 8, 3008–3024.
25. Eismann, M.T.; Hardie, R.C. Application of the stochastic mixing model to hyperspectral resolution enhancement. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1924–1933.
26. Huang, B.; Song, H.; Cui, H.; Peng, J.; Xu, Z. Spatial and spectral image fusion using sparse matrix factorization. IEEE Trans. Geosci. Remote Sens. 2014, 52, 1693–1704.
27. Zhang, Y.; de Backer, S.; Scheunders, P. Noise-resistant wavelet-based Bayesian fusion of multispectral and hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2009, 47, 3834–3843.
28. Kawakami, R.; Wright, J.; Tai, Y.-W.; Matsushita, Y.; Ben-Ezra, M.; Ikeuchi, K. High-resolution hyperspectral imaging via matrix factorization. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2011), Colorado Springs, CO, USA, 20–25 June 2011; pp. 2329–2336.
29. Wycoff, E.; Chan, T.-H.; Jia, K.; Ma, W.-K.; Ma, Y. A non-negative sparse promoting algorithm for high resolution hyperspectral imaging. In Proceedings of the 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, Vancouver, BC, Canada, 26–31 May 2013; pp. 1409–1413.
30. Cawse-Nicholson, K.; Damelin, S.; Robin, A.; Sears, M. Determining the intrinsic dimension of a hyperspectral image using random matrix theory. IEEE Trans. Image Process. 2013, 22, 1301–1310.
31. Bioucas-Dias, J.; Nascimento, J. Hyperspectral subspace identification. IEEE Trans. Geosci. Remote Sens. 2008, 46, 2435–2445.
32. Landgrebe, D. Hyperspectral image data analysis. IEEE Signal Process. Mag. 2002, 19, 17–28.
33. Yuan, Y.; Fu, M.; Lu, X. Low-rank representation for 3D hyperspectral images analysis from map perspective. Signal Process. 2015, 112, 27–33.
34. Zurita-Milla, R.; Clevers, J.G.P.W.; Schaepman, M.E. Unmixing-based Landsat TM and MERIS FR data fusion. IEEE Geosci. Remote Sens. Lett. 2008, 5, 453–457.
35. Rudin, L.; Osher, S.; Fatemi, E. Nonlinear total variation based noise removal algorithms. Phys. D Nonlinear Phenom. 1992, 60, 259–268.
36. Beck, A.; Teboulle, M. Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems. IEEE Trans. Image Process. 2009, 18, 2419–2434.
37. Bresson, X.; Chan, T. Fast dual minimization of the vectorial total variation norm and applications to color image processing. Inverse Probl. Imaging 2008, 2, 455–484.
38. Simões, M.; Bioucas-Dias, J.; Almeida, L.B.; Chanussot, J. A convex formulation for hyperspectral image superresolution via subspace-based regularization. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3373–3388.
39. He, X.; Condat, L.; Bioucas-Dias, J.; Chanussot, J.; Xia, J. A new pansharpening method based on spatial and spectral sparsity priors. IEEE Trans. Image Process. 2014, 23, 4160–4174.
40. Palsson, F.; Sveinsson, J.R.; Ulfarsson, M.O. A new pansharpening algorithm based on total variation. IEEE Geosci. Remote Sens. Lett. 2014, 11, 318–322.
41. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J. Hyperspectral and multispectral image fusion based on a sparse representation. IEEE Trans. Geosci. Remote Sens. 2015, 53, 3658–3668.
  42. Dong, W.; Fu, F.; Shi, G.; Cao, X.; Wu, J.; Li, G.; Li, X. Hyperspectral image super-resolution via non-negative structured sparse representation. IEEE Trans. Image Process. 2016, 25, 2337–2352. [Google Scholar] [CrossRef] [PubMed]
  43. Grohnfeldt, C.; Zhu, X.X.; Bamler, R. Jointly sparse fusion of hyperspectral and multispectral imagery. In Proceedings of the 2013 IEEE International Geoscience and Remote Sensing Symposium (IGARSS), Melbourne, Australia, 21–26 July 2013; pp. 4090–4093. [Google Scholar]
  44. Akhtar, N.; Shafait, F.; Mian, A. Bayesian sparse representation for hyperspectral image super resolution. In Proceedings of the 2015 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Boston, MA, USA, 7–12 June 2015; pp. 3631–3640. [Google Scholar]
  45. Wei, Q.; Dobigeon, N.; Tourneret, J.-Y. Bayesian fusion of multiband images. IEEE J. Sel. Top. Signal Process. 2015, 9, 1117–1127. [Google Scholar] [CrossRef]
  46. Hardie, R.C.; Eismann, M.T.; Wilson, G.L. MAP estimation for hyperspectral image resolution enhancement using an auxiliary sensor. IEEE Trans. Image Process. 2004, 13, 1174–1184. [Google Scholar] [CrossRef] [PubMed]
  47. Zhang, Y.; Duijster, A.; Scheunders, P. A Bayesian restoration approach for hyperspectral images. IEEE Trans. Geosci. Remote Sens. 2012, 50, 3453–3462. [Google Scholar] [CrossRef]
  48. Veganzones, M.A.; Simões, M.; Licciardi, G.; Yokoya, N.; Bioucas-Dias, J.M.; Chanussot, J. Hyperspectral super-resolution of locally low rank images from complementary multisource data. IEEE Trans. Image Process. 2016, 25, 274–288. [Google Scholar] [CrossRef] [PubMed]
  49. Berne, O.; Helens, A.; Pilleri, P.; Joblin, C. Non-negative matrix factorization pansharpening of hyperspectral data: An application to mid-infrared astronomy. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  50. Yokoya, N.; Yairi, T.; Iwasaki, A. Coupled nonnegative matrix factorization unmixing for hyperspectral and multispectral data fusion. IEEE Trans. Geosci. Remote Sens. 2012, 50, 528–537. [Google Scholar] [CrossRef]
  51. Lanaras, C.; Baltsavias, E.; Schindler, K. Hyperspectral superresolution by coupled spectral unmixing. In Proceedings of the IEEE ICCV, Santiago, Chile, 7–13 December 2015; pp. 3586–3594. [Google Scholar]
  52. Wei, Q.; Bioucas-Dias, J.; Dobigeon, N.; Tourneret, J.-Y.; Chen, M.; Godsill, S. Multiband image fusion based on spectral unmixing. IEEE Trans. Geosci. Remote Sens. 2016, 54, 7236–7249. [Google Scholar] [CrossRef]
  53. Kwan, C.; Budavari, B.; Bovik, A.C.; Marchisio, G. Blind quality assessment of fused WorldView-3 images by using the combinations of pansharpening and hypersharpening paradigms. IEEE Geosci. Remote Sens. Lett. 2017, 14, 1835–1839. [Google Scholar] [CrossRef]
  54. Yokoya, N.; Yairi, T.; Iwasaki, A. Hyperspectral, multispectral, and panchromatic data fusion based on coupled non-negative matrix factorization. In Proceedings of the 2011 3rd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Lisbon, Portugal, 6–9 June 2011; pp. 1–4. [Google Scholar]
  55. Afonso, M.; Bioucas-Dias, J.M.; Figueiredo, M. An augmented Lagrangian approach to the constrained optimization formulation of imaging inverse problems. IEEE Trans. Image Process. 2011, 20, 681–695. [Google Scholar] [CrossRef] [PubMed]
  56. Eckstein, J.; Bertsekas, D.P. On the Douglas–Rachford splitting method and the proximal point algorithm for maximal monotone operators. Math. Program. 1992, 55, 293–318. [Google Scholar] [CrossRef]
  57. Gabay, D.; Mercier, B. A dual algorithm for the solution of nonlinear variational problems via finite-element approximation. Comput. Math. Appl. 1976, 2, 17–40. [Google Scholar] [CrossRef]
  58. Glowinski, R.; Marroco, A. Sur l’approximation, par éléments finis d’ordre un, et la résolution, par pénalisation-dualité d’une classe de problèmes de Dirichlet non linéaires, Revue française d’automatique, informatique, recherche opérationnelle. Analyse Numérique 1975, 9, 41–76. [Google Scholar] [CrossRef]
  59. Boyd, S.; Parikh, N.; Chu, E.; Peleato, B.; Eckstein, J. Distributed optimization and statistical learning via the alternating direction method of multipliers. Found. Trends Mach. Learn. 2011, 3, 1–122. [Google Scholar] [CrossRef]
  60. Esser, E. Applications of Lagrangian-Based Alternating Direction Methods and Connections to Split-Bregman; CAM Reports 09-31; Center for Computational Applied Mathematics, University of California: Los Angeles, CA, USA, 2009. [Google Scholar]
  61. Gao, B.-C.; Montes, M.J.; Davis, C.O.; Goetz, A.F. Atmospheric correction algorithms for hyperspectral remote sensing data of land and ocean. Remote Sens. Environ. 2009, 113, S17–S24. [Google Scholar] [CrossRef]
  62. Zare, A.; Ho, K.C. Endmember variability in hyperspectral analysis. IEEE Signal Process. Mag. 2014, 31, 95–104. [Google Scholar] [CrossRef]
  63. Dobigeon, N.; Tourneret, J.-Y.; Richard, C.; Bermudez, J.C.M.; McLaughlin, S.; Hero, A.O. Nonlinear unmixing of hyperspectral images: Models and algorithms. IEEE Signal Process. Mag. 2014, 31, 89–94. [Google Scholar] [CrossRef]
  64. USGS Digital Spectral Library 06. Available online: https://speclab.cr.usgs.gov/spectral.lib06/ (accessed on 10 October 2018).
  65. Stockham, T.G., Jr. High-speed convolution and correlation. In Proceedings of the ACM Spring Joint Computer Conference, New York, NY, USA, 26–28 April 1966; pp. 229–233. [Google Scholar]
  66. Donoho, D.; Johnstone, I. Adapting to unknown smoothness via wavelet shrinkage. J. Am. Stat. Assoc. 1995, 90, 1200–1224. [Google Scholar] [CrossRef]
  67. Combettes, P.; Pesquet, J.-C. Proximal splitting methods in signal processing. In Fixed-Point Algorithms for Inverse Problems in Science and Engineering; Springer: New York, NY, USA, 2011; pp. 185–212. [Google Scholar]
  68. Condat, L. Fast projection onto the simplex and the l1 ball. Math. Program. 2016, 158, 575–585. [Google Scholar] [CrossRef] [Green Version]
  69. Wald, L.; Ranchin, T.; Mangolini, M. Fusion of satellite images of different spatial resolutions: Assessing the quality of resulting image. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402. [Google Scholar]
  70. Hyperspectral Remote Sensing Scenes. Available online: www.ehu.eus/ccwintco/?title=Hyperspectral_Remote_Sensing_Scenes (accessed on 10 October 2018).
  71. Baumgardner, M.; Biehl, L.; Landgrebe, D. 220 band AVIRIS Hyperspectral Image Data set: June 12, 1992 Indian Pine Test Site 3; Purdue University Research Repository; Purdue University: West Lafayette, IN, USA, 2015. [Google Scholar]
  72. Fusing multiple multiband images. Available online: https://github.com/Reza219/Multiple-multiband-image-fusion (accessed on 10 October 2018).
  73. Landsat 8. Available online: https://landsat.gsfc.nasa.gov/landsat-8/ (accessed on 10 October 2018).
  74. Alparone, L.; Baronti, S.; Aiazzi, B.; Garzelli, A. Spatial methods for multispectral pansharpening: Multiresolution analysis demystified. IEEE Trans. Geosci. Remote Sens. 2016, 54, 2563–2576. [Google Scholar] [CrossRef]
  75. Garzelli, A.; Nencini, F.; Capobianco, L. Optimal MMSE pan sharpening of very high resolution multispectral images. IEEE Trans. Geosci. Remote Sens. 2008, 46, 228–236. [Google Scholar] [CrossRef]
  76. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. MTF-tailored multiscale fusion of high-resolution MS and Pan imagery. Photogramm. Eng. Remote Sens. 2006, 72, 591–596. [Google Scholar] [CrossRef]
  77. Lee, J.; Lee, C. Fast and efficient panchromatic sharpening. IEEE Trans. Geosci. Remote Sens. 2010, 48, 155–163. [Google Scholar]
  78. Aiazzi, B.; Alparone, L.; Baronti, S.; Garzelli, A.; Selva, M. An MTF-based spectral distortion minimizing model for pan-sharpening of very high resolution multispectral images of urban areas. In Proceedings of the 2nd GRSS/ISPRS Joint Workshop Remote Sensing Data Fusion URBAN Areas, Berlin, Germany, 22–23 May 2003; pp. 90–94. [Google Scholar]
  79. Wei, Q.; Dobigeon, N.; Tourneret, J.-Y.; Bioucas-Dias, J.M.; Godsill, S. R-FUSE: Robust fast fusion of multiband images based on solving a Sylvester equation. IEEE Signal Process. Lett. 2016, 23, 1632–1636. [Google Scholar] [CrossRef]
  80. Wei, Q.; Dobigeon, N.; Tourneret, J.-Y. Fast fusion of multi-band images based on solving a Sylvester equation. IEEE Trans. Image Process. 2015, 24, 4109–4121. [Google Scholar] [CrossRef] [PubMed]
  81. Yokoya, N.; Grohnfeldt, C.; Chanussot, J. Hyperspectral and multispectral data fusion: A comparative review of the recent literature. IEEE Geosci. Remote Sens. Mag. 2017, 5, 29–56. [Google Scholar] [CrossRef]
  82. Wald, L. Quality of high resolution synthesised images: Is there a simple criterion? In Proceedings of the Fusion of Earth data: Merging point measurements, raster maps and remotely sensed images, Nice, France, 26–28 January 2000; pp. 99–103. [Google Scholar]
  83. Kruse, F.A.; Lefkoff, A.B.; Boardman, J.W.; Heidebrecht, K.B.; Shapiro, A.T.; Barloon, P.J.; Goetz, A.F.H. The spectral image processing system (SIPS): Interactive visualization and analysis of imaging spectrometer data. Remote Sens. Environ. 1993, 44, 145–163. [Google Scholar] [CrossRef]
  84. Garzelli, A.; Nencini, F. Hypercomplex quality assessment of multi/hyperspectral images. IEEE Geosci. Remote Sens. Lett. 2009, 6, 662–665. [Google Scholar] [CrossRef]
  85. Wang, Z.; Bovik, A.C. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84. [Google Scholar] [CrossRef]
  86. Alparone, L.; Baronti, S.; Garzelli, A.; Nencini, F. A global quality measurement of pan-sharpened multispectral imagery. IEEE Geosci. Remote Sens. Lett. 2004, 1, 313–317. [Google Scholar] [CrossRef]
  87. Nascimento, J.; Bioucas-Dias, J. Vertex component analysis: A fast algorithm to unmix hyperspectral data. IEEE Trans. Geosci. Remote Sens. 2005, 43, 898–910. [Google Scholar] [CrossRef]
  88. Bioucas-Dias, J.; Figueiredo, M. Alternating direction algorithms for constrained sparse regression: Application to hyperspectral unmixing. In Proceedings of the 2010 2nd Workshop on Hyperspectral Image and Signal Processing: Evolution in Remote Sensing, Reykjavik, Iceland, 14–16 June 2010; pp. 1–4. [Google Scholar]
  89. Golub, G.; Heath, M.; Wahba, G. Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 1979, 21, 215–223. [Google Scholar] [CrossRef]
Figure 1. The spectral responses of the Landsat 8 multispectral and panchromatic sensors.
Figure 2. The modulation transfer function (normalized spatial-frequency response) of the 2D Gaussian blur filters used, in both spatial dimensions. The solid curve corresponds to the filter used to generate the multispectral images; the dashed curve corresponds to the filter used to generate the hyperspectral images.
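As a side note, an MTF curve like those in Figure 2 can be traced by taking the DFT magnitude of a Gaussian blur kernel and normalizing it by its DC value. The following NumPy sketch illustrates this; the kernel size and standard deviations are illustrative assumptions, not the exact filter parameters used in the experiments.

```python
import numpy as np

def gaussian_kernel(size: int, sigma: float) -> np.ndarray:
    """Separable 2D Gaussian blur kernel, normalized to unit sum."""
    ax = np.arange(size) - (size - 1) / 2.0
    g = np.exp(-0.5 * (ax / sigma) ** 2)
    k = np.outer(g, g)
    return k / k.sum()

def mtf(kernel: np.ndarray, n_fft: int = 256) -> np.ndarray:
    """Normalized spatial-frequency response along one spatial dimension."""
    profile = kernel.sum(axis=0)           # project the separable kernel to 1D
    h = np.abs(np.fft.rfft(profile, n_fft))
    return h / h[0]                        # normalize so the DC response is 1

# Illustrative parameters only; sigma controls how fast the MTF rolls off.
k_ms = gaussian_kernel(size=11, sigma=1.5)   # stand-in for the MS blur filter
k_hs = gaussian_kernel(size=11, sigma=3.0)   # stand-in for the HS blur filter
print(mtf(k_ms)[:5], mtf(k_hs)[:5])
```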
Figure 3. The values of the performance metrics versus the regularization parameter α for the experiments with the Botswana and Washington DC Mall images. The left y-axis corresponds to ERGAS and SAM and the right y-axis to Q2^n.
Figure 4. The sorted per-pixel NRMSE of different algorithms measured only on the spectrum of the panchromatic image in experiments with different images.
Figure 5. The sorted per-pixel NRMSE of different algorithms measured on the entire spectrum in experiments with different images.
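For reference, curves such as those in Figures 4 and 5 can be produced by computing an NRMSE value per pixel and sorting the results. A minimal sketch follows, assuming the common convention of normalizing each pixel's band-wise RMSE by the root-mean-square of its reference spectrum; variable names are illustrative, and this is not the paper's own code.

```python
import numpy as np

def sorted_per_pixel_nrmse(estimate: np.ndarray, reference: np.ndarray) -> np.ndarray:
    """Per-pixel NRMSE for images of shape (rows, cols, bands), sorted ascending.

    Assumption: each pixel's RMSE over the bands is normalized by the
    root-mean-square of that pixel's reference spectrum.
    """
    err = estimate - reference
    rmse = np.sqrt(np.mean(err ** 2, axis=-1))           # (rows, cols)
    ref_rms = np.sqrt(np.mean(reference ** 2, axis=-1))  # (rows, cols)
    nrmse = rmse / np.maximum(ref_rms, np.finfo(float).eps)
    return np.sort(nrmse.ravel())

# Usage with random data standing in for the fused and reference images.
rng = np.random.default_rng(0)
ref = rng.random((400, 240, 145))
est = ref + 0.01 * rng.standard_normal(ref.shape)
curve = sorted_per_pixel_nrmse(est, ref)  # values plotted against pixel rank
```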
Figure 6. The panchromatic, multispectral, and hyperspectral images that are fused together, the reference hyperspectral image, and the fused images produced by the proposed algorithm and the Pan + (MS + HS) method using the HySure algorithm.
Table 1. The spatial and spectral dimensions of the considered hyperspectral datasets (reference images) and the value of the regularization parameter used in the proposed algorithm with each dataset.

Image                   No. of Rows   No. of Columns   No. of Bands   α
Botswana                400           240              145             5
Indian Pines            400           400              200             7
Washington DC Mall      400           300              191             5
Moffett Field           480           320              176            22
Kennedy Space Center    500           400              176            28
Table 2. The values of the performance metrics for assessing the fusion quality as well as the runtimes of the considered algorithms for different datasets.

Botswana
                                            ---- Spectrum of Pan ----   ---- Entire Spectrum ----
Fusion            Algorithm(s)              ERGAS   SAM (°)   Q2^n      ERGAS   SAM (°)   Q2^n      Time (s)
Pan + MS + HS     proposed                  0.900   1.355     0.980     1.637   1.575     0.956      47.01
Pan + HS          HySure                    1.273   1.975     0.967     1.839   2.435     0.946      61.20
                  R-FUSE-TV                 1.272   1.974     0.967     1.840   2.436     0.946      61.17
Pan + (MS + HS)   HySure                    1.256   1.721     0.962     1.992   2.101     0.937      78.28
                  R-FUSE-TV                 1.265   1.734     0.961     2.002   2.113     0.937      79.44
(Pan + MS) + HS   BDSD & HySure             1.393   1.971     0.955     2.458   2.359     0.912      62.58
                  BDSD & R-FUSE-TV          1.392   1.977     0.956     2.461   2.365     0.912      62.10
                  MTF-GLP-HPM & HySure      1.441   2.120     0.957     2.181   2.442     0.931      62.78
                  MTF-GLP-HPM & R-FUSE-TV   1.440   2.124     0.957     2.185   2.446     0.931      62.20

Indian Pines
                                            ---- Spectrum of Pan ----   ---- Entire Spectrum ----
Fusion            Algorithm(s)              ERGAS   SAM (°)   Q2^n      ERGAS   SAM (°)   Q2^n      Time (s)
Pan + MS + HS     proposed                  0.304   0.293     0.990     0.500   0.761     0.969      80.21
Pan + HS          HySure                    0.420   0.547     0.986     0.813   1.108     0.632     106.75
                  R-FUSE-TV                 0.425   0.555     0.986     0.813   1.113     0.632     106.47
Pan + (MS + HS)   HySure                    0.656   0.641     0.961     0.834   1.117     0.594     134.79
                  R-FUSE-TV                 0.695   0.642     0.953     0.875   1.120     0.573     134.32
(Pan + MS) + HS   BDSD & HySure             0.538   0.517     0.972     0.803   1.183     0.670     108.33
                  BDSD & R-FUSE-TV          0.539   0.520     0.972     0.794   1.182     0.674     107.34
                  MTF-GLP-HPM & HySure      0.566   0.563     0.972     0.959   1.268     0.626     108.48
                  MTF-GLP-HPM & R-FUSE-TV   0.567   0.567     0.972     0.947   1.270     0.628     107.51

Washington DC Mall
                                            ---- Spectrum of Pan ----   ---- Entire Spectrum ----
Fusion            Algorithm(s)              ERGAS   SAM (°)   Q2^n      ERGAS   SAM (°)   Q2^n      Time (s)
Pan + MS + HS     proposed                  0.731   1.116     0.997     2.484   2.795     0.970      59.52
Pan + HS          HySure                    1.171   2.047     0.992     3.822   4.539     0.930      79.02
                  R-FUSE-TV                 1.171   2.042     0.992     3.832   4.537     0.930      78.38
Pan + (MS + HS)   HySure                    0.937   1.718     0.994     3.233   3.592     0.949      99.74
                  R-FUSE-TV                 1.204   1.738     0.991     3.270   3.664     0.947     100.53
(Pan + MS) + HS   BDSD & HySure             1.114   2.039     0.992     4.174   5.048     0.918      79.68
                  BDSD & R-FUSE-TV          1.104   2.060     0.992     4.251   5.033     0.916      78.41
                  MTF-GLP-HPM & HySure      1.308   1.870     0.991     4.380   5.147     0.911      79.28
                  MTF-GLP-HPM & R-FUSE-TV   1.298   1.884     0.991     4.440   5.114     0.910      78.13

Moffett Field
                                            ---- Spectrum of Pan ----   ---- Entire Spectrum ----
Fusion            Algorithm(s)              ERGAS   SAM (°)   Q2^n      ERGAS   SAM (°)   Q2^n      Time (s)
Pan + MS + HS     proposed                  0.572   0.786     0.992     4.232   3.148     0.885      77.37
Pan + HS          HySure                    0.902   1.151     0.985     6.507   4.233     0.823     107.73
                  R-FUSE-TV                 0.914   1.152     0.984     6.416   4.210     0.827     106.20
Pan + (MS + HS)   HySure                    0.826   1.004     0.986     5.078   3.603     0.868     134.78
                  R-FUSE-TV                 0.964   1.014     0.977     5.100   3.670     0.845     135.20
(Pan + MS) + HS   BDSD & HySure             1.061   1.135     0.980     5.325   4.065     0.829     108.91
                  BDSD & R-FUSE-TV          1.058   1.134     0.980     5.244   4.039     0.834     106.12
                  MTF-GLP-HPM & HySure      1.396   1.122     0.968     5.924   4.384     0.824     108.98
                  MTF-GLP-HPM & R-FUSE-TV   1.396   1.123     0.969     5.835   4.360     0.830     106.28

Kennedy Space Center
                                            ---- Spectrum of Pan ----   ---- Entire Spectrum ----
Fusion            Algorithm(s)              ERGAS   SAM (°)   Q2^n      ERGAS   SAM (°)   Q2^n      Time (s)
Pan + MS + HS     proposed                  1.024   1.628     0.984     2.468   3.211     0.909      99.94
Pan + HS          HySure                    1.451   2.426     0.979     3.544   3.995     0.890     138.16
                  R-FUSE-TV                 1.518   2.496     0.974     3.680   3.795     0.886     134.97
Pan + (MS + HS)   HySure                    1.462   2.203     0.967     2.851   3.546     0.909     172.18
                  R-FUSE-TV                 1.875   2.343     0.939     2.986   4.155     0.878     172.25
(Pan + MS) + HS   BDSD & HySure             1.738   2.594     0.949     3.727   4.824     0.850     138.66
                  BDSD & R-FUSE-TV          1.691   2.547     0.953     3.534   4.584     0.865     135.74
                  MTF-GLP-HPM & HySure      6.801   3.250     0.912     9.532   5.183     0.805     138.60
                  MTF-GLP-HPM & R-FUSE-TV   8.143   3.264     0.914    11.130   5.197     0.816     135.58
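The ERGAS [82] and SAM [83] figures reported in Table 2 follow their standard definitions. The sketch below, a non-authoritative NumPy rendering of those definitions, shows one common way to compute them; the resolution ratio passed to ergas and the band layout are illustrative assumptions, and Q2^n, which relies on hypercomplex algebra [84], is omitted.

```python
import numpy as np

def sam_degrees(estimate: np.ndarray, reference: np.ndarray) -> float:
    """Mean spectral angle mapper (SAM), in degrees, over all pixels."""
    x = estimate.reshape(-1, estimate.shape[-1])
    y = reference.reshape(-1, reference.shape[-1])
    num = np.sum(x * y, axis=1)
    den = np.linalg.norm(x, axis=1) * np.linalg.norm(y, axis=1)
    cos = np.clip(num / np.maximum(den, np.finfo(float).eps), -1.0, 1.0)
    return float(np.degrees(np.arccos(cos)).mean())

def ergas(estimate: np.ndarray, reference: np.ndarray, ratio: float) -> float:
    """ERGAS; `ratio` is the high-to-low resolution pixel-size ratio (e.g., 1/4)."""
    x = estimate.reshape(-1, estimate.shape[-1])
    y = reference.reshape(-1, reference.shape[-1])
    rmse = np.sqrt(np.mean((x - y) ** 2, axis=0))        # RMSE per band
    mean = np.maximum(np.mean(y, axis=0), np.finfo(float).eps)  # mean per band
    return float(100.0 * ratio * np.sqrt(np.mean((rmse / mean) ** 2)))
```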
