
Image Fusion Based on the \({\Delta ^{ - 1}} - T{V_0}\) Energy Function

1 Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
2 Research Center for Smart Vehicles, Toyota Technological Institute, Nagoya, Aichi 468-8511, Japan
3 Department of General Education, Macau University of Science and Technology, Macau, China
4 Toyota Central R&D Labs, Yokomichi, Aichi 480-1192, Japan
* Author to whom correspondence should be addressed.
Entropy 2014, 16(11), 6099-6115; https://doi.org/10.3390/e16116099
Submission received: 9 October 2014 / Revised: 11 November 2014 / Accepted: 12 November 2014 / Published: 18 November 2014
(This article belongs to the Section Information Theory, Probability and Statistics)

Abstract

This article proposes a Δ−1 − TV0 energy function to fuse a multi-spectral image with a panchromatic image. The proposed energy function consists of two components, a TV0 component and a Δ−1 component. The TV0 term uses a sparsity prior to increase the detailed spatial information, while the Δ−1 term removes the block effect of the multi-spectral image. Furthermore, as the proposed energy function is non-convex, we adopt an alternating minimization algorithm together with L0 gradient minimization to solve it. Experimental results demonstrate the improved performance of the proposed method over existing methods.

1. Introduction

With the application of multi-source imaging in many areas, such as remote sensing [1,2], medical imaging [3] and quality and defect detection [4], image fusion has become an attractive and important research area in image processing. In remote sensing in particular, there is significant interest in fusing high spatial and high spectral resolution images to provide a better description and visual representation of a scene. However, high spatial and high spectral resolution are typically not present in a single image, due to commercial constraints: panchromatic images offer high spatial resolution with limited spectral resolution, while multi-spectral images offer high spectral resolution with low spatial resolution.
Consequently, many researchers have studied the fusion of multi-spectral and panchromatic images. The most common methods are based on the intensity-hue-saturation (IHS) transform [5,6]; however, the IHS method results in spectral degradation. To solve this problem, algorithms based on the wavelet transform were proposed. The wavelet transform is an important tool in signal processing [7] and image processing [8,9], especially for image fusion. Núñez et al. [10] fused a high spatial resolution panchromatic image (Satellite pour l'Observation de la Terre (SPOT)) with a low spatial resolution multi-spectral image (Landsat Thematic Mapper (TM)) using the additive wavelet algorithm (AW), in which the à trous wavelet approximation of the SPOT panchromatic image is substituted by the bands of the TM image. Li et al. [11] proposed a choose-max wavelet fusion algorithm (CMW) based on the wavelet transform. An algorithm based on the multi-scale first fundamental form (MFF) was presented by Scheunders [12], which used a multi-valued image wavelet representation to fuse images; however, this method is prone to large wavelet coefficients. Chen et al. [13] addressed this issue by proposing a weighted multi-scale first fundamental form method (WMFF). The wavelet-based fusion methods, while reporting good results, have three limitations: the predefined basis is selected empirically; the decomposition level must be chosen; and the methods focus on preserving spectral information. To address these issues, techniques such as the TV (total variation)−L1 model [14–16] were proposed as a balanced spatial and spectral information model, which uses the primal-dual and Bregman-splitting algorithms to maximize the matching measurement.
In the TV − L1 model, the TV regularization preserves the discontinuity of the magnitudes of the gradient difference, and the L1 data term measures the low frequency information by calculating the difference between the result and the multi-spectral image. Another class of literature utilizes population-based optimization to fuse multiple images. Saeedi [17] utilized population-based optimization to select the weights of the low-frequency wavelet coefficients to improve infrared and visible image fusion. Saeedi [18] found the optimal pan-sharpened image by applying the multi-objective particle swarm optimization algorithm to two initial pan-sharpened results generated by two different fusion rules. Finally, Lacewell [19] used genetic algorithms to obtain a more optimized combined image for visual and thermal satellite image fusion. While the reported literature produces good results, multi-spectral images are prone to the block effect, and the existing literature does not address this problem. In this article, we address the block effect within multi-spectral images by using the Δ−1 − TV0 energy function. The Δ−1 component not only removes the block effect, but also retains the spectral information. Additionally, the L0 gradient term of the proposed energy function increases the spatial resolution of the final fused result.
The contributions of our paper in image fusion can be summarized as follows:
  • The Δ−1 term within the fusion energy function is proposed to remove the block effect in multi-spectral images without affecting the spatial information.
  • To improve the image fusion accuracy, an alternating minimization algorithm using a non-convex L0 regularization term is proposed.
The rest of the paper is organized as follows: Section 2 presents a detailed description of the proposed fusion algorithm. Section 3 provides experimental results and discussions on IKONOS and QUICKBIRD images. Finally, in Section 4, our conclusions are discussed.

2. Computational Method

2.1. Flowchart of the Proposed Algorithm

The flowchart of the proposed algorithm is presented in Figure 1. In the proposed algorithm, the color multi-spectral image is transformed into the IHS color model [20], and the intensity component is selected as the primary fusion variable. The method minimizes an energy function by an iterative process, which is based on the inverse Laplacian operator and a sparse norm of the gradient term.

2.2. The Inverse Laplacian Operator Δ−1

The Δ−1 operator was proposed in [21,22] and has also been applied to image fusion quality assessment [16,23]. Additionally, [22] computes Δ−1 by the discrete Fourier transform. The discrete Laplacian operator Δ is given by:
\[ \Delta f(m,n) = f(m+1,n) + f(m-1,n) + f(m,n+1) + f(m,n-1) - 4 f(m,n), \quad (1) \]
where m and n are the pixel coordinates of the image f of size M × N. Applying the discrete Fourier transform to (1), we obtain:
\[ \mathcal{F}(\Delta f)(p,q) = 2\left( \cos\frac{2\pi p}{M} + \cos\frac{2\pi q}{N} - 2 \right) \mathcal{F}(f)(p,q), \quad (2) \]
where p and q are discrete Fourier sample points and \(\mathcal{F}\) denotes the discrete Fourier transform. In order to avoid singularity, we introduce a small constant ε and obtain Δ−1 as follows:
\[ \Delta^{-1}(f) = \mathcal{F}^{-1}\left( \frac{\mathcal{F}(f)}{2\left( \cos\frac{2\pi p}{M} + \cos\frac{2\pi q}{N} - 2 - \varepsilon \right)} \right). \quad (3) \]
As shown in Figure 2, the weighting function \( 1 / \left( 2\left( \cos\frac{2\pi p}{M} + \cos\frac{2\pi q}{N} - 2 - \varepsilon \right) \right) \) of Equation (3) focuses on the low frequency component of the image f. An example of applying Δ−1 to an IKONOS image is shown in Figure 3. It can be seen that Δ−1 removes the block effect, while preserving the overall information.
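As an illustration, the discrete Laplacian of Equation (1) and its regularized inverse of Equation (3) can be implemented with FFTs in a few lines. The following is our own NumPy sketch (periodic boundary conditions are assumed; the function names are ours, not from the paper):

```python
import numpy as np

def laplacian(f):
    """Discrete Laplacian of Equation (1), with periodic boundaries."""
    return (np.roll(f, -1, axis=0) + np.roll(f, 1, axis=0)
            + np.roll(f, -1, axis=1) + np.roll(f, 1, axis=1) - 4.0 * f)

def inv_laplacian(f, eps=1e-3):
    """Delta^{-1} of Equation (3): divide by the regularized Laplacian symbol."""
    M, N = f.shape
    p = np.arange(M).reshape(-1, 1)   # row frequencies
    q = np.arange(N).reshape(1, -1)   # column frequencies
    symbol = 2.0 * (np.cos(2 * np.pi * p / M)
                    + np.cos(2 * np.pi * q / N) - 2.0 - eps)
    return np.real(np.fft.ifft2(np.fft.fft2(f) / symbol))
```

For a tiny ε, applying `inv_laplacian` after `laplacian` recovers the input up to its mean (the DC component is suppressed by the regularization), which matches the low-frequency emphasis discussed above.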

2.3. Functional Form

Xie et al. [14] proposed a TV − L1 fusion method as follows:
\[ \min_R J(R) = \min_R \|R - T\|_1 + \lambda \|\nabla(R - G)\|_1, \quad (4) \]
where J is an energy function of the fusion result R, G is the panchromatic image, T is the intensity component of the IHS transformation of the multi-spectral image, λ is a weighting parameter and \(\|\nabla(\cdot)\|_1\) stands for the discrete total variation norm. Using the anisotropic total variation form, Equation (4) becomes:
\[ \min_R J(R) = \min_R \|R - T\|_1 + \lambda \|R_x - G_x\|_1 + \lambda \|R_y - G_y\|_1, \quad (5) \]
where Rx and Ry are the partial derivatives of R with respect to x and y, and Gx and Gy are the partial derivatives of G with respect to x and y.
In recent years, many studies have empirically demonstrated that non-convex potential functions are suitable for optimization-based computer vision problems. Krishnan [24] used a normalized sparsity measure similar to L0 for the image blind deconvolution model. An Lp norm [25] was proposed for optical flow and image denoising models. Fu and Zhang [26] proposed an adaptive non-convex model for image restoration. Consequently, in our article, we investigate the use of a sparse L0 norm to increase the image details of the fused result. A non-convex L0 norm is substituted for the L1 norm of the anisotropic total variation of the TV − L1 model as follows:
\[ \min_R J(R) = \min_R \|R - T\|_1 + \lambda \|R_x - G_x\|_0 + \lambda \|R_y - G_y\|_0, \quad (6) \]
where \( \|R_y - G_y\|_0 = \#\{ p : |R_{y,p} - G_{y,p}| \neq 0 \} \), p indexes the pixels and # is the counting operator. We use the L0 norm of the gradient to transfer the spatial information, which retains the spectral information whose tendency is inconsistent with the spatial information.
While the TV − L1 model demonstrates good results for multi-spectral fusion, the final fused images contain the block effect, as discussed earlier. To address this issue, we apply Δ−1 to the data fitting term and substitute \(\|R - T\|_1\) with \(\|\Delta^{-1}R - \Delta^{-1}T\|_2^2\):
\[ \min_R J(R) = \min_R \|\Delta^{-1}R - \Delta^{-1}T\|_2^2 + \lambda \|R_x - G_x\|_0 + \lambda \|R_y - G_y\|_0, \quad (7) \]
where R is the fused result, which retains both the spatial and the spectral information. Moreover, replacing the data term of the TV − L1 model with \(\|\Delta^{-1}R - \Delta^{-1}T\|_2^2\) makes the function focus on the low frequency information of R − T and neglect the high frequency information.
Finally, as Model (7) is non-convex, we solve it using an alternating minimization algorithm, as detailed in the next subsection.

2.4. Alternating Minimization Algorithm

We use a splitting scheme similar to [20,27] to solve Equation (7). It is well known that the L0 problem is strongly NP-hard, but we solve it approximately based on the conclusion reported in [26]. Firstly, we introduce two auxiliary variables p1 and p2 to transform Model (7) as follows:
\[ \min_{R, p_1, p_2} J(R, p_1, p_2) = \min_{R, p_1, p_2} \|\Delta^{-1}R - \Delta^{-1}T\|_2^2 + \beta \|R_x - G_x - p_1\|_2^2 + \beta \|R_y - G_y - p_2\|_2^2 + \lambda \|p_1\|_0 + \lambda \|p_2\|_0. \quad (8) \]
The alternating minimization algorithm [28] is employed to solve Model (8). The key step of the alternating minimization is to solve:
\[ \min_{p_1} J_1(p_1) = \beta \|R_x - G_x - p_1\|_2^2 + \lambda \|p_1\|_0, \qquad \min_{p_2} J_1(p_2) = \beta \|R_y - G_y - p_2\|_2^2 + \lambda \|p_2\|_0. \quad (9) \]
It is well known that Model (9) is strongly NP-hard. We use the technique of [27] to decompose it into the following model:
\[ \sum_i \min_{p_{1,i}} \left\{ \beta (R_{x,i} - G_{x,i} - p_{1,i})^2 + \lambda H(|p_{1,i}|) \right\}, \qquad \sum_i \min_{p_{2,i}} \left\{ \beta (R_{y,i} - G_{y,i} - p_{2,i})^2 + \lambda H(|p_{2,i}|) \right\}, \quad (10) \]
where i is the pixel position of the image and H is defined as:
\[ H(|x|) = \begin{cases} 1, & \text{if } |x| \neq 0; \\ 0, & \text{otherwise}. \end{cases} \quad (11) \]
Thus, Model (9) is rewritten as Model (10), which is solved by computing each term as follows:
\[ \min_{p_{1,i}} \left\{ (R_{x,i} - G_{x,i} - p_{1,i})^2 + \frac{\lambda}{\beta} H(|p_{1,i}|) \right\}, \qquad \min_{p_{2,i}} \left\{ (R_{y,i} - G_{y,i} - p_{2,i})^2 + \frac{\lambda}{\beta} H(|p_{2,i}|) \right\}. \quad (12) \]
To solve Model (12), we need Proposition 1.
Proposition 1. Model (12) obtains the minimum under the conditions:
\[ p_{1,i} = \begin{cases} 0, & \text{if } (R_{x,i} - G_{x,i})^2 \le \lambda/\beta; \\ R_{x,i} - G_{x,i}, & \text{otherwise}, \end{cases} \qquad p_{2,i} = \begin{cases} 0, & \text{if } (R_{y,i} - G_{y,i})^2 \le \lambda/\beta; \\ R_{y,i} - G_{y,i}, & \text{otherwise}. \end{cases} \quad (13) \]
The proof is similar to [27]. In order for this paper to be self-contained, we present the proof of Proposition 1 as follows.
Proof. If \((R_{x,i} - G_{x,i})^2 \le \lambda/\beta\) and \(p_{1,i} \neq 0\), we have:
\[ (R_{x,i} - G_{x,i} - p_{1,i})^2 + \frac{\lambda}{\beta} H(|p_{1,i}|) = (R_{x,i} - G_{x,i} - p_{1,i})^2 + \frac{\lambda}{\beta} \ge \frac{\lambda}{\beta} \ge (R_{x,i} - G_{x,i})^2. \quad (14) \]
If \((R_{x,i} - G_{x,i})^2 \le \lambda/\beta\) and \(p_{1,i} = 0\), we have:
\[ (R_{x,i} - G_{x,i} - p_{1,i})^2 + \frac{\lambda}{\beta} H(|p_{1,i}|) = (R_{x,i} - G_{x,i})^2. \quad (15) \]
Combining (14) and (15), we conclude that if \((R_{x,i} - G_{x,i})^2 \le \lambda/\beta\), Model (12) attains its minimum \((R_{x,i} - G_{x,i})^2\) at \(p_{1,i} = 0\).
If \((R_{x,i} - G_{x,i})^2 > \lambda/\beta\) and \(p_{1,i} \neq 0\), we have:
\[ (R_{x,i} - G_{x,i} - p_{1,i})^2 + \frac{\lambda}{\beta} H(|p_{1,i}|) = (R_{x,i} - G_{x,i} - p_{1,i})^2 + \frac{\lambda}{\beta} \ge \frac{\lambda}{\beta}, \quad (16) \]
with equality if \(p_{1,i} = R_{x,i} - G_{x,i}\).
If \((R_{x,i} - G_{x,i})^2 > \lambda/\beta\) and \(p_{1,i} = 0\), we obtain:
\[ (R_{x,i} - G_{x,i} - p_{1,i})^2 + \frac{\lambda}{\beta} H(|p_{1,i}|) = (R_{x,i} - G_{x,i})^2. \quad (17) \]
Combining (16) and (17), we conclude that if \((R_{x,i} - G_{x,i})^2 > \lambda/\beta\), Model (12) attains its minimum \(\lambda/\beta\) at \(p_{1,i} = R_{x,i} - G_{x,i}\). This completes the proof of Proposition 1. □
Because each pixel is treated individually, we simplify Equation (13) as follows:
\[ p_1 = \begin{cases} 0, & \text{if } (R_x - G_x)^2 \le \lambda/\beta; \\ R_x - G_x, & \text{otherwise}, \end{cases} \qquad p_2 = \begin{cases} 0, & \text{if } (R_y - G_y)^2 \le \lambda/\beta; \\ R_y - G_y, & \text{otherwise}. \end{cases} \quad (18) \]
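In code, the point-wise solution of Equation (18) amounts to hard thresholding the gradient difference. A minimal NumPy sketch (the function name is ours; `Rx` and `Gx` stand for the x-derivative images):

```python
import numpy as np

def update_p(Rx, Gx, lam, beta):
    """Hard-threshold solution of Equation (18): keep the gradient
    difference only where its square exceeds lambda/beta, else zero."""
    d = Rx - Gx
    return np.where(d ** 2 <= lam / beta, 0.0, d)
```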
Although Proposition 1 solves the L0 sub-problems approximately, the other sub-problem of the alternating minimization is:
\[ \min_R J_2(R) = \min_R \|\Delta^{-1}R - \Delta^{-1}T\|_2^2 + \beta \|R_x - G_x - p_1\|_2^2 + \beta \|R_y - G_y - p_2\|_2^2. \quad (19) \]
Based on the definition of Δ−1 and the discrete Fourier transform, we transform (19) as follows:
\[ \min_{\mathcal{F}(R)} J_2(\mathcal{F}(R)) = \|\mathcal{F}(\Delta^{-1}R) - \mathcal{F}(\Delta^{-1}T)\|_2^2 + \beta \|\mathcal{F}(\partial_x) \circ \mathcal{F}(R) - \mathcal{F}(\partial_x) \circ \mathcal{F}(G) - \mathcal{F}(p_1)\|_2^2 + \beta \|\mathcal{F}(\partial_y) \circ \mathcal{F}(R) - \mathcal{F}(\partial_y) \circ \mathcal{F}(G) - \mathcal{F}(p_2)\|_2^2, \quad (20) \]
where ○ represents the element-wise multiplication. Based on (3), Equation (20) can be transformed as follows:
\[ \min_{\mathcal{F}(R)} J_2(\mathcal{F}(R)) = \left\| \frac{\mathcal{F}(R) - \mathcal{F}(T)}{2\left( \cos\frac{2\pi p}{M} + \cos\frac{2\pi q}{N} - 2 - \varepsilon \right)} \right\|_2^2 + \beta \|\mathcal{F}(\partial_x) \circ \mathcal{F}(R) - \mathcal{F}(\partial_x) \circ \mathcal{F}(G) - \mathcal{F}(p_1)\|_2^2 + \beta \|\mathcal{F}(\partial_y) \circ \mathcal{F}(R) - \mathcal{F}(\partial_y) \circ \mathcal{F}(G) - \mathcal{F}(p_2)\|_2^2. \quad (21) \]
Then, we obtain the solution:
\[
\begin{aligned}
W_0 &= \frac{1}{2\left( \cos\frac{2\pi p}{M} + \cos\frac{2\pi q}{N} - 2 - \varepsilon \right)}, \\
W_1 &= W_0 \circ W_0 \circ \mathcal{F}(T) + \beta\,\mathcal{F}(\partial_x)^{*} \circ \left( \mathcal{F}(\partial_x) \circ \mathcal{F}(G) + \mathcal{F}(p_1) \right) + \beta\,\mathcal{F}(\partial_y)^{*} \circ \left( \mathcal{F}(\partial_y) \circ \mathcal{F}(G) + \mathcal{F}(p_2) \right), \\
W_2 &= W_0 \circ W_0 + \beta\,\mathcal{F}(\partial_x)^{*} \circ \mathcal{F}(\partial_x) + \beta\,\mathcal{F}(\partial_y)^{*} \circ \mathcal{F}(\partial_y), \\
R &= \mathcal{F}^{-1}\left( W_1 \,./\, W_2 \right), \quad (22)
\end{aligned}
\]
where * represents the complex conjugate, ./ is element-wise division, \(\mathcal{F}^{-1}\) is the inverse Fourier transform and \(\partial_x, \partial_y\) are the discrete difference operators.
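The closed-form R-update of Equation (22) can be sketched in NumPy. This is an illustrative implementation under assumed conventions (periodic boundaries and one-pixel difference kernels for ∂x and ∂y); the function name and defaults are ours:

```python
import numpy as np

def update_R(T, G, p1, p2, beta, eps=1e-3):
    """One R-update (Equation (22)), solved in the Fourier domain."""
    M, N = T.shape
    r = np.arange(M).reshape(-1, 1)
    c = np.arange(N).reshape(1, -1)
    W0 = 1.0 / (2.0 * (np.cos(2 * np.pi * r / M)
                       + np.cos(2 * np.pi * c / N) - 2.0 - eps))
    # Fourier transforms of one-pixel difference kernels (periodic boundary)
    dx = np.zeros((M, N)); dx[0, 0] = -1.0; dx[0, 1] = 1.0
    dy = np.zeros((M, N)); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    Fx, Fy = np.fft.fft2(dx), np.fft.fft2(dy)
    FG = np.fft.fft2(G)
    W1 = (W0 * W0 * np.fft.fft2(T)
          + beta * np.conj(Fx) * (Fx * FG + np.fft.fft2(p1))
          + beta * np.conj(Fy) * (Fy * FG + np.fft.fft2(p2)))
    W2 = W0 * W0 + beta * np.abs(Fx) ** 2 + beta * np.abs(Fy) ** 2
    return np.real(np.fft.ifft2(W1 / W2))
```

As a sanity check, with β = 0 the update reduces to R = T, since only the data term remains.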
Having solved the above sub-problems, we present our framework in Algorithms 1 and 2:
Algorithm 1 Image fusion algorithm
Algorithm 2 Kernel algorithm
Remark 1.
  • rgb2ihs indicates the transformation from RGB (red, green, blue) space to IHS (intensity, hue, saturation) space.
  • ihs2rgb indicates the transformation from IHS space to RGB space.
  • ι is a small positive constant.
To give convergence results for the sequence \(\{J(R^i, p_1^i, p_2^i)\}_{i \in \mathbb{N}}\) generated by Algorithm 2, we state Proposition 2.
Proposition 2. If ϵ > 0, the sequence J ( R n , p 1 n , p 2 n ) generated by Algorithm 2 converges monotonically.
Proof. By the Plancherel theorem, the energy of (20) equals the energy of (19). As Equation (22) shows, the minimizer of Equation (19) is:
\[ R^n = \mathcal{F}^{-1}\left( W_1^n \,./\, W_2^n \right), \quad (23) \]
where \(W_1^n = W_0 \circ W_0 \circ \mathcal{F}(T) + \beta\,\mathcal{F}(\partial_x)^{*} \circ (\mathcal{F}(\partial_x) \circ \mathcal{F}(G) + \mathcal{F}(p_1^n)) + \beta\,\mathcal{F}(\partial_y)^{*} \circ (\mathcal{F}(\partial_y) \circ \mathcal{F}(G) + \mathcal{F}(p_2^n))\) and \(W_2^n = W_0 \circ W_0 + \beta\,\mathcal{F}(\partial_x)^{*} \circ \mathcal{F}(\partial_x) + \beta\,\mathcal{F}(\partial_y)^{*} \circ \mathcal{F}(\partial_y)\). If \(W_2^n(i,j) \neq 0\) for all i, j, the solution \(R^{n+1}\) can be obtained from Equation (23).
If ε > 0, then \(2\left( \cos\frac{2\pi p}{M} + \cos\frac{2\pi q}{N} - 2 - \varepsilon \right) < 0\) for all (p, q). Therefore,
\[ W_2^n = W_0 \circ W_0 + \beta\,\mathcal{F}(\partial_x)^{*} \circ \mathcal{F}(\partial_x) + \beta\,\mathcal{F}(\partial_y)^{*} \circ \mathcal{F}(\partial_y) > 0, \quad (24) \]
since the first term is strictly positive and the remaining two terms are non-negative.
Accordingly, the optimization problem (19) can be solved, and we obtain \(J(R^{n+1}, p_1^n, p_2^n) \le J(R^n, p_1^n, p_2^n)\). Based on Proposition 1, we obtain \(J(R^{n+1}, p_1^{n+1}, p_2^{n+1}) \le J(R^{n+1}, p_1^n, p_2^n)\). From the above discussion, we obtain:
\[ J(R^{n+1}, p_1^{n+1}, p_2^{n+1}) \le J(R^n, p_1^n, p_2^n). \quad (25) \]
It is trivial to see that J(·,·,·) is bounded from below. By the monotone convergence theorem, the sequence \(\{J(R^i, p_1^i, p_2^i)\}_{i \in \mathbb{N}}\) converges to a limit. □
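Putting the two sub-problem updates together, the kernel iteration can be sketched end to end. This is a self-contained illustration, not the authors' code: the continuation schedule on β (starting value, growth factor κ and cap) is a common choice for L0 gradient solvers and is our assumption, as are the default parameter values.

```python
import numpy as np

def fuse_intensity(T, G, lam=0.01, beta0=0.02, beta_max=1e5, eps=1e-3):
    """Alternating minimization for Equation (7): a minimal sketch.
    T: intensity of the multi-spectral image; G: panchromatic image."""
    M, N = T.shape
    r = np.arange(M).reshape(-1, 1); c = np.arange(N).reshape(1, -1)
    W0 = 1.0 / (2.0 * (np.cos(2*np.pi*r/M) + np.cos(2*np.pi*c/N) - 2.0 - eps))
    dx = np.zeros((M, N)); dx[0, 0] = -1.0; dx[0, 1] = 1.0
    dy = np.zeros((M, N)); dy[0, 0] = -1.0; dy[1, 0] = 1.0
    Fx, Fy = np.fft.fft2(dx), np.fft.fft2(dy)
    FT, FG = np.fft.fft2(T), np.fft.fft2(G)
    Gx = np.real(np.fft.ifft2(Fx * FG)); Gy = np.real(np.fft.ifft2(Fy * FG))
    R, beta, kappa = T.copy(), beta0, 2.0
    while beta < beta_max:
        # p-subproblem: hard thresholding (Equation (18))
        Rx = np.real(np.fft.ifft2(Fx * np.fft.fft2(R)))
        Ry = np.real(np.fft.ifft2(Fy * np.fft.fft2(R)))
        p1 = np.where((Rx - Gx) ** 2 <= lam / beta, 0.0, Rx - Gx)
        p2 = np.where((Ry - Gy) ** 2 <= lam / beta, 0.0, Ry - Gy)
        # R-subproblem: closed-form Fourier solve (Equation (22))
        W1 = (W0 * W0 * FT + beta * np.conj(Fx) * (Fx * FG + np.fft.fft2(p1))
              + beta * np.conj(Fy) * (Fy * FG + np.fft.fft2(p2)))
        W2 = W0 * W0 + beta * np.abs(Fx) ** 2 + beta * np.abs(Fy) ** 2
        R = np.real(np.fft.ifft2(W1 / W2))
        beta *= kappa
    return R
```

When T and G coincide, every iteration leaves R unchanged, which is a quick consistency check of the two updates.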

2.5. The Selection of the Key Parameter λ

The parameter λ is very important for our fusion energy function: it determines how much detailed information from the panchromatic image and how much spectral information from the multi-spectral image is present in the final fused image. In this paper, we do not assign a fixed value to λ, but an image A. Image A is constructed from two parts: a constant-valued matrix A1 corresponding to the previous constant parameter, and a logical matrix A2 derived by applying the dilation operator of mathematical morphology to the edge image of the panchromatic image. Where A2(i) is larger, the objective function focuses on the panchromatic image; otherwise, the multi-spectral information is considered. An example of image A is shown in Figure 4. Because Equation (13) is a point-wise operator, we can substitute the matrix A for the value λ. Thus, we extend Equation (13) as follows:
\[ p_{1,i} = \begin{cases} 0, & \text{if } (R_{x,i} - G_{x,i})^2 \le A(i)/\beta; \\ R_{x,i} - G_{x,i}, & \text{otherwise}, \end{cases} \qquad p_{2,i} = \begin{cases} 0, & \text{if } (R_{y,i} - G_{y,i})^2 \le A(i)/\beta; \\ R_{y,i} - G_{y,i}, & \text{otherwise}, \end{cases} \quad (26) \]
where i is the pixel position of the image.
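The construction of the parameter image A can be sketched as follows. This is a pure-NumPy illustration: the edge threshold, the dilation radius, and the two λ levels are our assumptions, and the dilation uses a 4-connected structuring element with wrap-around borders for brevity.

```python
import numpy as np

def dilate(mask, iterations=2):
    """Binary dilation with a 4-connected (plus-shaped) structuring
    element; np.roll wraps at the borders, acceptable for a sketch."""
    out = mask.copy()
    for _ in range(iterations):
        m = out.copy()
        for ax, s in [(0, 1), (0, -1), (1, 1), (1, -1)]:
            m |= np.roll(out, s, axis=ax)
        out = m
    return out

def build_lambda_map(G, lam_small=0.005, lam_large=0.02, thresh=0.1):
    """Spatially varying lambda (image A): larger near panchromatic
    edges so the fused result follows G there; values illustrative."""
    gx = np.diff(G, axis=1, append=G[:, :1])
    gy = np.diff(G, axis=0, append=G[:1, :])
    mag = np.hypot(gx, gy)
    A2 = dilate(mag > thresh * mag.max())   # logical matrix A2
    return np.where(A2, lam_large, lam_small)
```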

3. Experimental Results

Experiments on IKONOS and QUICKBIRD images are used to validate the proposed algorithm. We use three metrics to quantify the performance. The correlation measure (CM) [10] is defined as follows:
\[ CM(A, B) = \frac{\sum (A - \bar{A})(B - \bar{B})}{\sqrt{\sum (A - \bar{A})^2 \sum (B - \bar{B})^2}}, \quad (27) \]
where A and B are the images in lexicographic order. CM computes the correlation coefficients of the red, green and blue channels between the multi-spectral image and the fused result, which assesses how well the spectral information is preserved.
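Equation (27) is the standard correlation coefficient applied per channel; a direct NumPy sketch (our own naming):

```python
import numpy as np

def correlation_measure(A, B):
    """Correlation measure (CM) of Equation (27) between two images."""
    a = A.ravel() - A.mean()
    b = B.ravel() - B.mean()
    return float(np.sum(a * b) / np.sqrt(np.sum(a ** 2) * np.sum(b ** 2)))
```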
A feature-based metric QAB/F [29] is adopted. The QAB/F metric is defined as follows:
\[ g_A(m,n) = \sqrt{s_A^x(m,n)^2 + s_A^y(m,n)^2}, \qquad \alpha_A(m,n) = \tan^{-1}\left( \frac{s_A^y(m,n)}{s_A^x(m,n)} \right), \quad (28) \]
where \(s_A^x(m,n)\) and \(s_A^y(m,n)\) are the outputs of the horizontal and vertical Sobel templates centered on pixel (m, n) and convolved with the corresponding pixels of image A. Similarly, \(g_B(m,n)\) and \(g_F(m,n)\) can be obtained by the above definitions. The relative strength and orientation values \(G^{AF}\) and \(A^{AF}\) of image A with respect to F are defined as:
\[ G^{AF}(m,n) = \begin{cases} \dfrac{g_F(m,n)}{g_A(m,n)}, & \text{if } g_A(m,n) > g_F(m,n); \\[6pt] \dfrac{g_A(m,n)}{g_F(m,n)}, & \text{otherwise}. \end{cases} \quad (29) \]
\[ A^{AF}(m,n) = \frac{\left| \left| \alpha_A(m,n) - \alpha_F(m,n) \right| - \pi/2 \right|}{\pi/2}. \quad (30) \]
Then, the edge strength and orientation preservation values are derived as:
\[ Q_g^{AF}(m,n) = \frac{\Gamma_g}{1 + e^{K_g \left( G^{AF}(m,n) - \sigma_g \right)}}, \quad (31) \]
\[ Q_\alpha^{AF}(m,n) = \frac{\Gamma_\alpha}{1 + e^{K_\alpha \left( A^{AF}(m,n) - \sigma_\alpha \right)}}, \quad (32) \]
where Γg, Kg, σg, Γα, Kα and σα are constants that determine the exact shape of the sigmoid functions used to form the edge strength and orientation preservation values. The edge information preservation value is then defined as:
\[ Q^{AF}(m,n) = Q_g^{AF}(m,n)\, Q_\alpha^{AF}(m,n). \quad (33) \]
Therefore, QAB/F is obtained as:
\[ Q^{AB/F} = \frac{\sum_{m=1}^{M} \sum_{n=1}^{N} Q^{AF}(m,n)\, w_A(m,n) + Q^{BF}(m,n)\, w_B(m,n)}{\sum_{i=1}^{M} \sum_{j=1}^{N} \left( w_A(i,j) + w_B(i,j) \right)}, \quad (34) \]
where \(w_A(m,n) = (g_A(m,n))^L\), \(w_B(m,n) = (g_B(m,n))^L\) and L is a constant. QAB/F measures the amount of edge information transferred from the input images to the fused image; a larger value indicates that the fused result contains more information from the inputs.
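Equations (28)–(34) can be assembled into a compact implementation. The sketch below is our own: the Sobel responses are computed with wrap-around shifts, and the sigmoid constants default to values commonly used with this metric (Γg = 0.9994, Kg = −15, σg = 0.5, Γα = 0.9879, Kα = −22, σα = 0.8), which are assumptions rather than values stated in this paper.

```python
import numpy as np

def _sobel(img):
    """Horizontal/vertical Sobel responses via shifts (periodic borders)."""
    r = lambda a, dy, dx: np.roll(np.roll(a, dy, axis=0), dx, axis=1)
    sx = (r(img, -1, -1) + 2 * r(img, 0, -1) + r(img, 1, -1)
          - r(img, -1, 1) - 2 * r(img, 0, 1) - r(img, 1, 1))
    sy = (r(img, -1, -1) + 2 * r(img, -1, 0) + r(img, -1, 1)
          - r(img, 1, -1) - 2 * r(img, 1, 0) - r(img, 1, 1))
    return sx, sy

def _strength_orientation(img, tiny=1e-12):
    sx, sy = _sobel(img)
    return np.hypot(sx, sy), np.arctan(sy / (sx + tiny))  # Eq. (28)

def q_af(A, F, gamma_g=0.9994, k_g=-15.0, s_g=0.5,
         gamma_a=0.9879, k_a=-22.0, s_a=0.8, tiny=1e-12):
    """Edge preservation Q^{AF} of Equations (28)-(33)."""
    gA, aA = _strength_orientation(A)
    gF, aF = _strength_orientation(F)
    GAF = np.where(gA > gF, gF / (gA + tiny), gA / (gF + tiny))  # Eq. (29)
    AAF = np.abs(np.abs(aA - aF) - np.pi / 2) / (np.pi / 2)      # Eq. (30)
    Qg = gamma_g / (1.0 + np.exp(k_g * (GAF - s_g)))             # Eq. (31)
    Qa = gamma_a / (1.0 + np.exp(k_a * (AAF - s_a)))             # Eq. (32)
    return Qg * Qa                                               # Eq. (33)

def q_abf(A, B, F, L=1.0):
    """Q^{AB/F} of Equation (34): edge information moved from A, B to F."""
    gA, _ = _strength_orientation(A)
    gB, _ = _strength_orientation(B)
    wA, wB = gA ** L, gB ** L
    num = np.sum(q_af(A, F) * wA + q_af(B, F) * wB)
    return float(num / (np.sum(wA + wB) + 1e-12))
```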
The spatial frequency (SF) measurement [30,31] reflects the overall activity level of an image. The spatial frequency is defined as follows:
\[ RF = \sqrt{ \frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=2}^{N} \left( I(i,j) - I(i,j-1) \right)^2 }, \quad (35) \]
\[ CF = \sqrt{ \frac{1}{M \times N} \sum_{i=2}^{M} \sum_{j=1}^{N} \left( I(i,j) - I(i-1,j) \right)^2 }, \quad (36) \]
\[ SF = \sqrt{ (RF)^2 + (CF)^2 }, \quad (37) \]
where SF is defined on an M × N image I, RF is the row frequency, CF is the column frequency and I(i,j) denotes the pixel value. A large SF value means that the image contains strong high frequency components; the spatial frequency measurement can thus be used to reflect the clarity of the result.
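The SF measurement of Equations (35)–(37) translates directly into NumPy (the function name is ours):

```python
import numpy as np

def spatial_frequency(I):
    """Spatial frequency of Equations (35)-(37) on an M x N image."""
    M, N = I.shape
    rf = np.sqrt(np.sum((I[:, 1:] - I[:, :-1]) ** 2) / (M * N))  # row freq.
    cf = np.sqrt(np.sum((I[1:, :] - I[:-1, :]) ** 2) / (M * N))  # col freq.
    return np.sqrt(rf ** 2 + cf ** 2)
```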
The proposed method is compared to several wavelet-based methods: the additive wavelet fusion method (AW) [10], the choose-max wavelet fusion method (CMW) [11], the multi-scale fundamental forms fusion method (MFF) [12] and the weighted multi-scale fundamental forms fusion method (WMFF) [13]. In order to illustrate the superiority of the proposed L0 gradient minimization, we also perform comparative experiments with L1 gradient minimization-based methods [14,15]; the results are shown in Tables 1 and 2.
From the enlarged portion of the fused results, Figure 5 shows that our proposed algorithm removes the block effect. We selected the parameters so that the CM (r) values of the compared methods are as close as possible, meaning the amount of retained spectral information is comparable. The above metrics are applied to quantify the results in Table 1. In Table 1, CM (r), CM (g) and CM (b) correspond to the correlation measures of the red, green and blue channels; they show that our fused result preserves more spectral information than the other methods. The QAB/F column of Table 1 illustrates that the image obtained by the proposed method captures more feature information from the original images than the other methods. The SF column of Table 1 indicates that our method obtains clearer results than the other methods.
Figure 6 shows the fused QUICKBIRD images obtained by CMW, MFF, WMFF, the methods of [14,15] and the proposed method. The above metrics are applied to quantify the results in Table 2. In Table 2, CM (r), CM (g) and CM (b) correspond to the correlation measures of the red, green and blue channels; they show that our fused result preserves more spectral information than the other methods. The QAB/F column of Table 2 illustrates that the image obtained by the proposed method captures more feature information from the original images than the other methods. The SF column of Table 2 indicates that our method obtains clearer results than the other methods.

4. Conclusion

The proposed algorithm removes the block effect efficiently by using the Δ−1 operator. In addition, we use a non-convex functional form to improve the fusion result. Our method also presents a new energy minimization algorithm that retains the spectral and high frequency information of the original images. Experiments on IKONOS and QUICKBIRD images show that the proposed algorithm obtains better fusion results than other fusion algorithms, in terms of both visual quality and quantitative fusion metrics.

Acknowledgments

The authors would like to thank the anonymous referees for valuable comments that have been implemented in the final version of the paper. This work is supported by the National Natural Science Foundation of China under Grant Nos. 61101219, 61201050, 61201375 and 21307150 and the Science and Technology Development Fund of Macau (No. 069/2011/A).

Author Contributions

Qiwei Xie, Seiichi Mita, Vijay John and Qian Long cooperated with each other to conceive of and design this study. They drafted the article together. Chao Ma collected and analyzed the data and revised the study critically for important intellectual content. Chunzhao Guo performed the programming to calculate the data. All authors have read and approved the final manuscript.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Nichol, J.; Wong, M.S. Satellite remote sensing for detailed landslides inventories using change detection and image fusion. Int. J. Remote Sens 2005, 26, 1913–1926. [Google Scholar]
  2. Schowengerdt, R.A. The nature of remote sensing. In Remote Sensing: Models and Methods for Image Processing; Academic Press: New York, NY, USA, 2006; pp. 1–44. [Google Scholar]
  3. Bulanon, D.M.; Burks, T.F.; Alchanatis, V. Image fusion of visible and thermal images for fruit detection. Biosyst. Eng 2009, 103, 12–22. [Google Scholar]
  4. Fischer, M.A.; Nanz, D.; Hany, T.; Reiner, C.S.; Stolzmann, P.; Donati, O.F.; Breitenstein, S.; Schneider, P.; Weishaupt, D.; Schulthess, G.K.; et al. Diagnostic accuracy of whole-body MRI/DWI image fusion for detection of malignant tumours: A comparison with PET/CT. Mol. Imaging 2011, 21, 246–255. [Google Scholar]
  5. Shettigara, V.K. A generalized component substitution technique for spatial enhancement of multispectral images using a higher resolution data set. Photogramm. Eng. Remote Sens 1992, 58, 561–566. [Google Scholar]
  6. Tu, T.M.; Su, S.C.; Shyu, H.C.; Huang, P.S. A new look at IHS-like image fusion methods. Inf. Fusion 2001, 2, 177–186. [Google Scholar]
  7. Bayram, I.; Selesnick, I.W. A dual-tree rational-dilation complex wavelet transform. IEEE Trans. Signal Process 2011, 59, 6251–6256. [Google Scholar]
  8. Demirel, H.; Anbarjafari, G. Image resolution enhancement by using discrete and stationary wavelet decomposition. IEEE Trans. Signal Process 2011, 20, 1458–1460. [Google Scholar]
  9. You, X.G.; Du, L.; Cheung, Y.M.; Chen, Q.H. A blind watermarking scheme using new nontensor product wavelet filter banks. IEEE Trans. Signal Process 2010, 19, 3271–3284. [Google Scholar]
  10. Núñez, J.; Otazu, X.; Fors, O.; Prades, A.; Pala, V.; Arbiol, R. Multiresolution based image fusion with additive wavelet decomposition. IEEE Trans. Geosci. Remote Sens 1999, 32, 1204–1211. [Google Scholar]
  11. Li, H.; Manjunath, B.S.; Mitra, S.K. Multisensor image fusion using the wavelet transform. Graph. Model. Image Process 1995, 57, 235–245. [Google Scholar]
  12. Scheunders, P. A multivalued image wavelet representation based on multiscale fundamental forms. IEEE Trans. Image Process 2002, 10, 568–575. [Google Scholar]
  13. Chen, T.; Guo, R.S.; Peng, S.L. Image fusion using weighted multiscale fundamental form. Proceedings of IEEE International Conference on Image Processing, 24–27 October 2004; pp. 3319–3322.
  14. Xie, Q.W.; He, J.C.; Long, Q.; Mita, S.; Chen, X.; Jiang, A. Image fusion based on TV − L1 function. Proceedings of the 2013 International Conference on Wavelet Analysis and Pattern Recognition, Tianjin, China, 14–17 July 2013; pp. 173–177.
  15. Xie, Q.W.; Long, Q.; Mita, S.; Chen, X.; Jiang, A. Image fusion based on TV − L1 – convex constrained algorithm. Proceedings of the 2013 International Conference on Wireless Communications & Signal Processing, Hangzhou, China, 24–26 October 2013; pp. 1–5.
  16. Xie, Q.W.; Long, Q.; Mita, S.; Guo, C.Z.; Jiang, A. Image fusion based on multi-objective optimization. Int. J. Wavelets Multiresolut. Inf. Process 2014, 12, 1450017. [Google Scholar]
  17. Saeedi, J.; Faez, K. Infrared and visible image fusion using fuzzy logic and population-based optimization. Appl. Soft Comput 2012, 12, 1041–1054. [Google Scholar]
  18. Saeedi, J.; Faez, K. A new pan-sharpening method using multi-objective particle swarm optimization and the shiftable contourlet transform. ISPRS J. Photogramm. Remote Sens 2011, 66, 365–381. [Google Scholar]
  19. Lacewell, C.W.; Gebril, M.; Buaba, R.; Homaifar, A. Optimization of image fusion using genetic algorithms and discrete wavelet transform. Proceedings of IEEE conference on Aerospace and Electronics, Fairborn, OH, USA, 14–16 July 2010; pp. 116–121.
  20. Smith, A.R. Color gamut transform pairs. ACM Comput. Graph. (SIGGRAPH) 1978, 12, 12–19. [Google Scholar]
  21. Osher, S.J.; Sole, A.; Vese, L.A. Image decomposition and restoration using total variation minimization and the H−1 norm. Multiscale Model. Simul 2003, 1, 349–370. [Google Scholar]
  22. Aujol, J.; Gilboa, G. Constrained and SNR-based solutions for TV-Hilbert space image denoising. J. Math. Imaging Vis 2006, 26, 217–237. [Google Scholar]
  23. Xie, Q.W.; Liu, Z.; Long, Q.; Mita, S.; Jiang, A. Remote sensing image fusion through kernel estimation based on energy minimization. Proceedings of International IEEE Conference on Intelligent Transportation Systems, Qingdao, China, 8–11 October 2014.
  24. Krishnan, D.; Tay, T.; Fergus, R. Blind deconvolution using a normalized sparsity measure. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 233–240.
  25. Ochs, P.; Dosovitskiy, A.; Brox, T.; Pock, T. An iterated l1 algorithm for non-smooth non-convex optimization in computer vision. Proceedings of IEEE Conference on Computer Vision and Pattern Recognition, Portland, OR, USA, 23–28 June 2013; pp. 1759–1766.
  26. Fu, S.; Zhang, C. Adaptive non-convex total variation regularisation for image restoration. Electron. Lett 2010, 46, 907–908. [Google Scholar]
  27. Li, X.; Lu, C.W.; Xu, Y.; Jia, J.Y. Image smoothing via L0 gradient minimization. ACM Trans. Graph. (SIGGRAPH Asia) 2011, 30, 174. [Google Scholar]
  28. Wang, Y.; Yang, J.; Yin, W.; Zhang, Y. A new alternating minimization algorithm for total variation image reconstruction. SIAM J. Imaging Sci 2011, 1, 248–272. [Google Scholar]
  29. Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett 2000, 36, 308–309. [Google Scholar]
  30. Li, S.; Kwok, J.T.; Wang, Y. Combination of images with diverse focuses using the spatial frequency. Inf. Fusion 2001, 2, 169–176. [Google Scholar]
  31. Zheng, Y.; Essock, E.A.; Hansen, B.C.; Huan, A.M. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf. Fusion 2007, 8, 177–192. [Google Scholar]
Figure 1. The flowchart of the fusion process.
Figure 2. The weighting function \( 1 / \left( 2\left( \cos\frac{2\pi p}{M} + \cos\frac{2\pi q}{N} - 2 - \varepsilon \right) \right) \).
Figure 3. (Top) One channel of the multi-spectral image S (left) and −Δ−1S (right); (bottom) an enlarged portion of the above images.
Figure 4. Generation of the parameter λ = A.
Figure 5. Fused IKONOS images using AW, CMW, MFF, WMFF, method of [14], method of [15] and the proposed method.
Figure 6. Fused QUICKBIRD images using CMW, MFF, WMFF, method of [14], method of [15] and the proposed method.
Table 1. Quantitative comparison results of Figure 5. AW, additive wavelet fusion method; CMW, choose-max wavelet fusion method; MFF, multi-scale fundamental forms fusion method; WMFF, weighted multi-scale fundamental forms fusion method.
Fusion Method | CM (red) | CM (green) | CM (blue) | QAB/F | SF
AW | 0.9341 | 0.9345 | 0.9402 | 0.4109 | 0.0642
CMW | 0.9426 | 0.9443 | 0.9501 | 0.3491 | 0.0640
MFF | 0.8281 | 0.8255 | 0.8497 | 0.3700 | 0.0810
WMFF | 0.9268 | 0.9265 | 0.9353 | 0.4304 | 0.0738
Method [14] | 0.9503 | 0.9528 | 0.9558 | 0.4311 | 0.0768
Method [15] | 0.9502 | 0.9527 | 0.9555 | 0.4157 | 0.0776
Proposed method | 0.9505 | 0.9534 | 0.9570 | 0.4485 | 0.0842
Table 2. Quantitative comparison results of Figure 6.
Fusion Method | CM (red) | CM (green) | CM (blue) | QAB/F | SF
CMW | 0.8835 | 0.8704 | 0.8616 | 0.3576 | 0.1129
MFF | 0.8321 | 0.8130 | 0.7980 | 0.3542 | 0.0959
WMFF | 0.8920 | 0.8745 | 0.8704 | 0.4075 | 0.0941
Method [14] | 0.8921 | 0.8850 | 0.8743 | 0.4105 | 0.1360
Method [15] | 0.8921 | 0.8847 | 0.8746 | 0.4073 | 0.1372
Proposed method | 0.8927 | 0.8849 | 0.8751 | 0.4260 | 0.1478
