# Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter


## Abstract


## 1. Introduction

- To improve the low information entropy caused by redundant information shared between source images, only the innovation images undergo edge-preserving multi-scale decomposition (MSD) via RGF.
- To suppress artifacts that JSR may introduce, weight maps are used to balance the contributions of the innovation images.
- To give the fused image high contrast, the innovation images guide the optimization of the weight maps.
- To ensure the spatial consistency of the fused image, the innovation images are fused according to the optimized weight maps.

## 2. Related Work

#### 2.1. Joint Sparse Representation

#### 2.2. Rolling Guidance Filter

**Algorithm 1:** The iteration process of RGF.

Input: Input image ${I}_{in}$; spatial standard deviation ${\sigma}_{s}$; range standard deviation ${\sigma}_{r}$; iteration number $M$.

1: Set ${J}^{0}$ as a constant image, i.e., $\forall p, {J}^{0}\left(p\right)=C$, where $C$ is a constant value.

2: **for** $t=1:1:M$ **do**

3: $\quad {J}^{t}=JBF\left({I}_{in},{J}^{t-1},{\sigma}_{s},{\sigma}_{r}\right)$.

4: **end for**

Output: Output image ${I}_{out}={J}^{M}$.
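As a sketch of the loop above, the following NumPy implementation (our own illustration, not the authors' code) iterates a joint bilateral filter `JBF` starting from a constant guidance image; the window radius and reflect padding are our assumptions:

```python
import numpy as np

def joint_bilateral_filter(I_in, J_guide, sigma_s, sigma_r):
    """Joint bilateral filter: spatial weights from pixel distance,
    range weights from the guidance image J_guide (assumed same shape)."""
    r = int(2 * sigma_s)  # window radius, a common truncation choice
    H, W = I_in.shape
    pad_I = np.pad(I_in, r, mode='reflect')
    pad_J = np.pad(J_guide, r, mode='reflect')
    out = np.zeros((H, W))
    norm = np.zeros((H, W))
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            w_s = np.exp(-(dy * dy + dx * dx) / (2 * sigma_s ** 2))
            shift_I = pad_I[r + dy:r + dy + H, r + dx:r + dx + W]
            shift_J = pad_J[r + dy:r + dy + H, r + dx:r + dx + W]
            w_r = np.exp(-(J_guide - shift_J) ** 2 / (2 * sigma_r ** 2))
            w = w_s * w_r
            out += w * shift_I
            norm += w
    return out / norm  # center weight is 1, so norm is never zero

def rolling_guidance_filter(I_in, sigma_s, sigma_r, M):
    """Algorithm 1: set J^0 to a constant image, then iterate JBF."""
    J = np.zeros_like(I_in, dtype=float)  # J^0 = C, here C = 0
    for _ in range(M):
        J = joint_bilateral_filter(I_in, J, sigma_s, sigma_r)
    return J  # I_out = J^M
```

In the first iteration the guidance is constant, so the range weights are uniform and the step reduces to a Gaussian blur; later iterations progressively recover large-scale edges, which is the defining behavior of RGF.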

## 3. Proposed Method

#### 3.1. JSR Decomposition

#### 3.2. Weight Map Construction

#### 3.3. Multi-Scale Decomposition

#### 3.4. Fused Image Reconstruction

#### 3.5. Workflow of Our Proposed Method

**Algorithm 2:** Pseudo code of our proposed method.

Input: Source images ${S}_{A},{S}_{B}$; dictionary $D$; decomposition level $K$; JBF parameters ${\delta}_{s}^{i},{\delta}_{r}^{i}$; RGF parameters ${\sigma}_{s}^{i},{\sigma}_{r},M$.

1: Use $D$ to perform JSR decomposition on ${S}_{A},{S}_{B}$ to get $C,{I}_{A},{I}_{B}$.

2: Process ${S}_{A},{S}_{B}$ with the Kirsch operator to get ${R}_{A},{R}_{B}$.

3: **for** $q$ in pixel coordinate range of ${R}_{A}$ **do**

4: $\quad$ **if** ${R}_{A}\left(q\right)\le {R}_{B}\left(q\right)$ **then**

5: $\quad\quad {P}_{A}\left(q\right)=0,{P}_{B}\left(q\right)=1$.

6: $\quad$ **else**

7: $\quad\quad {P}_{A}\left(q\right)=1,{P}_{B}\left(q\right)=0$.

8: $\quad$ **end if**

9: **end for**

10: **for** $i=1:1:K-1$ **do**

11: $\quad {W}_{A}^{i}=JBF\left({P}_{A},{I}_{A},{\delta}_{s}^{i},{\delta}_{r}^{i}\right)$, ${W}_{B}^{i}=JBF\left({P}_{B},{I}_{B},{\delta}_{s}^{i},{\delta}_{r}^{i}\right)$.

12: $\quad {O}_{A}^{i}=RGF\left({I}_{A},{\sigma}_{s}^{i},{\sigma}_{r},M\right)$, ${O}_{B}^{i}=RGF\left({I}_{B},{\sigma}_{s}^{i},{\sigma}_{r},M\right)$.

13: **end for**

14: ${B}_{A}={O}_{A}^{K-1}$, ${B}_{B}={O}_{B}^{K-1}$.

15: ${O}_{A}^{0}={I}_{A}$, ${O}_{B}^{0}={I}_{B}$.

16: **for** $i=1:1:K-1$ **do**

17: $\quad {H}_{A}^{i}={O}_{A}^{i-1}-{O}_{A}^{i}$, ${H}_{B}^{i}={O}_{B}^{i-1}-{O}_{B}^{i}$.

18: **end for**

19: ${E}_{A}=var\left({B}_{A}\right)$, ${E}_{B}=var\left({B}_{B}\right)$.

20: ${F}_{B}=\frac{1}{{E}_{A}+{E}_{B}}\left({E}_{A}{B}_{A}+{E}_{B}{B}_{B}\right)$.

21: ${F}_{H}={\sum}_{i=1}^{K-1}\left({H}_{A}^{i}{W}_{A}^{i}+{H}_{B}^{i}{W}_{B}^{i}\right)$.

22: ${F}_{I}={F}_{B}+{F}_{H}$.

23: $F={F}_{I}+C$.

Output: Fused image $F$.
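Steps 2–9 (initial weight maps from Kirsch edge responses) and steps 19–20 (variance-weighted base-layer fusion) can be sketched as follows. This is a hedged NumPy illustration: the exact Kirsch kernel set and the reflect padding are our assumptions, since the pseudocode does not spell them out.

```python
import numpy as np

def kirsch_kernels():
    """The 8 directional 3x3 Kirsch kernels, generated by rotating
    the border values [5,5,5,-3,-3,-3,-3,-3] around the center."""
    pos = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    base = [5, 5, 5, -3, -3, -3, -3, -3]
    kernels = []
    for s in range(8):
        k = np.zeros((3, 3))
        for (i, j), v in zip(pos, base[s:] + base[:s]):
            k[i, j] = v
        kernels.append(k)
    return kernels

def conv3(img, k):
    """3x3 correlation with reflect padding (avoids a SciPy dependency)."""
    H, W = img.shape
    p = np.pad(img, 1, mode='reflect')
    out = np.zeros((H, W))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * p[i:i + H, j:j + W]
    return out

def kirsch_response(img):
    """Edge response R: maximum over the 8 directional responses."""
    return np.max([conv3(img, k) for k in kirsch_kernels()], axis=0)

def initial_weight_maps(SA, SB):
    """Steps 3-9: binary maps P_A, P_B from the larger edge response."""
    RA, RB = kirsch_response(SA), kirsch_response(SB)
    PA = (RA > RB).astype(float)  # P_A(q)=1 where R_A(q) > R_B(q)
    return PA, 1.0 - PA

def fuse_base(BA, BB):
    """Steps 19-20: variance-weighted fusion of the base layers."""
    EA, EB = BA.var(), BB.var()
    return (EA * BA + EB * BB) / (EA + EB)
```

The binary maps are then refined by joint bilateral filtering with the innovation images as guidance (step 11), which is what gives the final weight maps their spatial consistency.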

## 4. Experimental Results and Analysis

#### 4.1. Experimental Settings and Objective Evaluations

- Mutual information ($MI$) [29] is based on Shannon entropy and relative entropy. It measures the correlation between the source images and the fused image, indicating how much information is retained.
- Feature mutual information ($FMI$) [30] reflects the entropy of features in the fused image. It measures the amount of feature information carried from the source images to the fused image, and it is a non-reference image fusion metric.
- The normalized weighted edge preservation value (${Q}^{AB/F}$) [31] measures the visual information quality of the fusion; more preserved edge information yields higher values.
- Nonlinear correlation information entropy ($NCIE$) [32] is based on nonlinear joint entropy. It measures the general correlation between the source images and the fused image.
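As an illustration of the first metric, $MI$ between a source image and the fused image can be estimated from joint histograms; the sketch below is our own (the bin count and base-2 logarithm are choices on our part, not specified by [29]):

```python
import numpy as np

def mutual_information(img_a, img_f, bins=64):
    """MI(A;F) = sum over bins of p(a,f) * log2( p(a,f) / (p(a) p(f)) )."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_f.ravel(), bins=bins)
    p = joint / joint.sum()
    pa = p.sum(axis=1, keepdims=True)  # marginal of A
    pf = p.sum(axis=0, keepdims=True)  # marginal of F
    nz = p > 0                         # skip empty bins to avoid log(0)
    return float(np.sum(p[nz] * np.log2(p[nz] / (pa * pf)[nz])))

def fusion_mi(src_a, src_b, fused, bins=64):
    """Fusion MI: information the fused image shares with both sources."""
    return (mutual_information(src_a, fused, bins)
            + mutual_information(src_b, fused, bins))
```

A fused image identical to one source achieves $MI$ equal to that source's entropy, while statistically independent images give $MI$ near zero, which is why larger values indicate better information retention.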

#### 4.2. Discussions about Parameters

#### 4.2.1. Size of JSR Dictionary

#### 4.2.2. Number of Decomposition Level

#### 4.3. Validity of Our Combination Strategy

#### 4.4. Comparison with Other Methods

#### 4.4.1. Analysis of Infrared–Visible Results

#### 4.4.2. Analysis of Medical Results

#### 4.4.3. Analysis of Multi-Focus Results

(**a**) is hardly fused. CVT, DTCWT, CNN, and LP perform similarly to our method.

#### 4.4.4. Analysis of Remote Sensing Results

#### 4.4.5. Summary of the Analysis

## 5. Conclusions

## Author Contributions

## Funding

## Conflicts of Interest

## References

- Goshtasby, A.A.; Nikolov, S. Image fusion: Advances in the state of the art. Inf. Fusion **2007**, 8, 114–118.
- James, A.P.; Dasarathy, B.V. Medical image fusion: A survey of the state of the art. Inf. Fusion **2014**, 19, 4–19.
- Ma, J.; Yong, M.; Chang, L. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion **2019**, 45, 153–178.
- Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion **2017**, 33, 100–112.
- Sahu, A.; Bhateja, V.; Krishn, A.; Himanshi. Medical image fusion with Laplacian Pyramids. In Proceedings of the 2014 International Conference on Medical Imaging, m-Health and Emerging Communication Systems (MedCom), Greater Noida, India, 7–8 November 2014; pp. 448–453.
- Petrović, V.S.; Xydeas, C.S. Gradient-Based Multiresolution Image Fusion. IEEE Trans. Image Process. **2004**, 13, 228–237.
- Li, H.; Manjunath, B.; Mitra, S. Multisensor Image Fusion Using the Wavelet Transform. Graph. Model. Image Process. **1995**, 57, 235–245.
- Lewis, J.J.; O’Callaghan, R.J.; Nikolov, S.G.; Bull, D.R.; Canagarajah, N. Pixel- and region-based image fusion with complex wavelets. Inf. Fusion **2007**, 8, 119–130.
- Zhang, Q.; Guo, B.L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. **2009**, 89, 1334–1346.
- Lei, W.; Li, B.; Tian, L.F. Multi-modal medical image fusion using the inter-scale and intra-scale dependencies between image shift-invariant shearlet coefficients. Inf. Fusion **2014**, 19, 20–28.
- Liu, S.; Wang, J.; Lu, Y.; Li, H.; Zhao, J.; Zhu, Z. Multi-Focus Image Fusion Based on Adaptive Dual-Channel Spiking Cortical Model in Non-Subsampled Shearlet Domain. IEEE Access **2019**, 7, 56367–56388.
- Liu, S.; Shi, M.; Zhu, Z.; Zhao, J. Image fusion based on complex-shearlet domain with guided filtering. Multidimens. Syst. Signal Process. **2017**, 28, 207–224.
- He, K.; Sun, J.; Tang, X. Guided Image Filtering. IEEE Trans. Pattern Anal. Mach. Intell. **2013**, 35, 1397–1409.
- Petschnigg, G.; Szeliski, R.; Agrawala, M.; Cohen, M.; Hoppe, H.; Toyama, K. Digital photography with flash and no-flash image pairs. ACM Trans. Graph. **2004**, 23, 664–672.
- Qi, Z.; Shen, X.; Li, X.; Jia, J. Rolling Guidance Filter. In Proceedings of the European Conference on Computer Vision, Zurich, Switzerland, 6–12 September 2014.
- Shutao, L.; Xudong, K.; Jianwen, H. Image fusion with guided filtering. IEEE Trans. Image Process. **2013**, 22, 2864–2875.
- Chen, L.; Yang, X.; Lu, L.; Liu, K.; Jeon, G.; Wu, W. An image fusion algorithm of infrared and visible imaging sensors for cyber-physical systems. J. Intell. Fuzzy Syst. **2019**, 36, 4277–4291.
- Jian, L.; Yang, X.; Zhou, Z.; Zhou, K.; Liu, K. Multi-scale image fusion through rolling guidance filter. Future Gener. Comput. Syst. **2018**, 83, 310–325.
- Olshausen, B.A.; Field, D.J. Emergence of simple-cell receptive field properties by learning a sparse code for natural images. Nature **1996**, 381, 607–609.
- Yue, D.; Dong, L.; Xie, X.; Lam, K.M.; Dai, Q. Partially occluded face completion and recognition. In Proceedings of the 2009 16th IEEE International Conference on Image Processing (ICIP), Cairo, Egypt, 7–10 November 2009; pp. 4145–4148.
- Tu, B.; Zhang, X.; Kang, X.; Zhang, G.; Wang, J.; Wu, J. Hyperspectral Image Classification via Fusing Correlation Coefficient and Joint Sparse Representation. IEEE Geosci. Remote Sens. Lett. **2018**, 15, 340–344.
- Liu, S.; Liu, M.; Li, P.; Zhao, J.; Zhu, Z.; Wang, X. SAR Image Denoising via Sparse Representation in Shearlet Domain Based on Continuous Cycle Spinning. IEEE Trans. Geosci. Remote Sens. **2017**, 55, 2985–2992.
- Qin, Z.; Fan, J.; Liu, Y.; Gao, Y.; Li, G.Y. Sparse Representation for Wireless Communications: A Compressive Sensing Approach. IEEE Signal Process. Mag. **2018**, 35, 40–58.
- Fang, L.; Zhuo, H.; Li, S. Super-resolution of hyperspectral image via superpixel-based sparse representation. Neurocomputing **2018**, 273, 171–177.
- Yang, B.; Li, S. Multifocus Image Fusion and Restoration With Sparse Representation. IEEE Trans. Instrum. Meas. **2010**, 59, 884–892.
- Yu, N.; Qiu, T.; Feng, B.; Wang, A. Image Features Extraction and Fusion Based on Joint Sparse Representation. IEEE J. Sel. Top. Signal Process. **2011**, 5, 1074–1082.
- Ma, X.; Hu, S.; Liu, S.; Fang, J.; Xu, S. Multi-focus image fusion based on joint sparse representation and optimum theory. Signal Process. Image Commun. **2019**, 78, 125–134.
- Aharon, M.; Elad, M.; Bruckstein, A. K-SVD: An Algorithm for Designing Overcomplete Dictionaries for Sparse Representation. IEEE Trans. Signal Process. **2006**, 54, 4311–4322.
- Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. **2002**, 38, 313–315.
- Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. A non-reference image fusion metric based on mutual information of image features. Comput. Electr. Eng. **2011**, 37, 744–756.
- Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. **2000**, 36, 308–309.
- Qiang, W.; Yi, S. Performances evaluation of image fusion techniques based on nonlinear correlation measurement. In Proceedings of the IEEE Instrumentation and Measurement Technology Conference, Como, Italy, 18–20 May 2004; Volume 1, pp. 472–475.
- Qu, X.B.; Yan, J.W.; Xiao, H.Z.; Zhu, Z.Q. Image Fusion Algorithm Based on Spatial Frequency-Motivated Pulse Coupled Neural Networks in Nonsubsampled Contourlet Transform Domain. Acta Autom. Sin. **2008**, 34, 1508–1514.
- Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion **2015**, 24, 147–164.
- Liu, Y.; Wang, Z. Simultaneous image fusion and denoising with adaptive sparse representation. IET Image Process. **2015**, 9, 347–357.
- Liu, Y.; Chen, X.; Ward, R.K.; Jane Wang, Z. Image Fusion With Convolutional Sparse Representation. IEEE Signal Process. Lett. **2016**, 23, 1882–1886.
- Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion **2007**, 8, 143–156.
- Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion **2016**, 31, 100–109.
- Zhou, Z.; Wang, B.; Li, S.; Dong, M. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters. Inf. Fusion **2016**, 30, 15–26.
- Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolution Inf. Process. **2018**, 16, 1850018.
- Naidu, V.P.S. Image Fusion Technique using Multi-resolution Singular Value Decomposition. Def. Sci. J. **2011**, 61, 479–484.
- Ma, J.; Zhou, Z.; Wang, B.; Zong, H. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Phys. Technol. **2017**, 82, 8–17.
- Zhan, K.; Xie, Y.; Wang, H.; Min, Y. Fast filtering image fusion. J. Electron. Imaging **2017**, 26, 1–18.

**Figure 3.** An example of JSR decomposition. (**a**,**b**) Source images; (**c**) The common image; (**d**) The innovation image of (**a**); (**e**) The innovation image of (**b**).

**Figure 8.** Objective evaluation of different decomposition level $K$. (**a**–**d**) The values of $MI$, $FMI$, ${Q}^{AB/F}$ and $NCIE$, respectively.

**Figure 9.** Some fused images of JSR, RGF and our proposed method. (**a**,**b**) Source images; (**c**) The fused results of JSR; (**d**) The fused results of RGF; (**e**) The fused results of our proposed method.

**Figure 10.** Examples of the fusion results of infrared-visible images. (**a**,**b**) Source images; (**c**–**p**) The fused results of ASR, CSR, CVT, DTCWT, GTF, H-MSD, CNN, LP, MSSR, MSVD, NSCT, WLS, FFIF, and our proposed method, respectively.

**Figure 11.** Examples of the fusion results of medical images. (**a**,**b**) Source images; (**c**–**p**) The fused results of ASR, CSR, CVT, DTCWT, GTF, H-MSD, CNN, LP, MSSR, MSVD, NSCT, WLS, FFIF, and our proposed method, respectively.

**Figure 12.** Examples of the fusion results of multi-focus images. (**a**,**b**) Source images; (**c**–**p**) The fused results of ASR, CSR, CVT, DTCWT, GTF, H-MSD, CNN, LP, MSSR, MSVD, NSCT, WLS, FFIF, and our proposed method, respectively.

**Figure 13.** Examples of the fusion results of remote sensing images. (**a**,**b**) Source images; (**c**–**p**) The fused results of ASR, CSR, CVT, DTCWT, GTF, H-MSD, CNN, LP, MSSR, MSVD, NSCT, WLS, FFIF, and our proposed method, respectively.

**Table 1.** Objective evaluation of different n values for the JSR dictionary. The best and second best results of each metric are marked in red and bold, respectively.

| Metric | $36\times 512$ | $64\times 512$ | $100\times 512$ |
|---|---|---|---|
| $MI$ | 5.3075 | 5.3377 | 5.3179 |
| $FMI$ | 0.5650 | 0.5600 | 0.5556 |
| ${Q}^{AB/F}$ | 0.6971 | 0.6978 | 0.6974 |
| $NCIE$ | 0.8224 | 0.8226 | 0.8225 |
| Time | 163.74 | 398.33 | 1182.71 |

**Table 2.** Objective evaluation of different m values for the JSR dictionary. The best and second best results of each metric are marked in red and bold, respectively.

| Metric | $64\times 128$ | $64\times 256$ | $64\times 512$ |
|---|---|---|---|
| $MI$ | 5.2805 | 5.3309 | 5.3377 |
| $FMI$ | 0.5538 | 0.5638 | 0.5600 |
| ${Q}^{AB/F}$ | 0.6936 | 0.6973 | 0.6978 |
| $NCIE$ | 0.8223 | 0.8225 | 0.8226 |
| Time | 176.07 | 251.15 | 398.33 |

| K | 2 | 3 | 4 | 5 |
|---|---|---|---|---|
| Time | 374.70 | 376.97 | 381.58 | 398.33 |

**Table 4.** Objective evaluation of JSR, RGF and our method. The best and second best results of each metric are marked in red and bold, respectively.

| Category | Metric | JSR | RGF | OURS |
|---|---|---|---|---|
| Infrared-visible | $MI$ | 3.6597 | 3.7839 | 4.2105 |
| | $FMI$ | 0.4946 | 0.5456 | 0.5477 |
| | ${Q}^{AB/F}$ | 0.6130 | 0.6644 | 0.6652 |
| | $NCIE$ | 0.8106 | 0.8113 | 0.8143 |
| Medical | $MI$ | 4.1862 | 4.0194 | 4.2164 |
| | $FMI$ | 0.5439 | 0.5278 | 0.5228 |
| | ${Q}^{AB/F}$ | 0.6177 | 0.6768 | 0.6800 |
| | $NCIE$ | 0.8133 | 0.8119 | 0.8130 |
| Multi-focus | $MI$ | 6.9542 | 8.8911 | 8.9213 |
| | $FMI$ | 0.5475 | 0.6316 | 0.6324 |
| | ${Q}^{AB/F}$ | 0.7449 | 0.7890 | 0.7891 |
| | $NCIE$ | 0.8316 | 0.8465 | 0.8467 |
| Remote sensing | $MI$ | 2.9600 | 3.7494 | 4.0035 |
| | $FMI$ | 0.4555 | 0.5337 | 0.5370 |
| | ${Q}^{AB/F}$ | 0.5846 | 0.6508 | 0.6567 |
| | $NCIE$ | 0.8082 | 0.8145 | 0.8165 |

**Table 5.** Objective evaluation of infrared-visible image fusion. The best and second best results of each metric are marked in red and bold, respectively.

| Metric | ASR | CSR | CVT | DTCWT | GTF | H-MSD | CNN |
|---|---|---|---|---|---|---|---|
| $MI$ | 2.7134 | 2.7878 | 2.2697 | 2.3902 | 2.5465 | 2.6970 | 2.9490 |
| $FMI$ | 0.5202 | 0.4510 | 0.4596 | 0.4892 | 0.4874 | 0.4324 | 0.4818 |
| ${Q}^{AB/F}$ | 0.5986 | 0.5890 | 0.5512 | 0.5796 | 0.4994 | 0.5686 | 0.6290 |
| $NCIE$ | 0.8064 | 0.8066 | 0.8052 | 0.8055 | 0.8061 | 0.8064 | 0.8072 |

| Metric | LP | MSSR | MSVD | NSCT | WLS | FFIF | OURS |
|---|---|---|---|---|---|---|---|
| $MI$ | 2.6575 | 3.4726 | 2.9739 | 2.4802 | 2.7887 | 4.9717 | 4.2105 |
| $FMI$ | 0.5003 | 0.5044 | 0.3972 | 0.4988 | 0.4339 | 0.5775 | 0.5477 |
| ${Q}^{AB/F}$ | 0.6366 | 0.6065 | 0.4123 | 0.6144 | 0.5574 | 0.6405 | 0.6652 |
| $NCIE$ | 0.8062 | 0.8107 | 0.8072 | 0.8057 | 0.8065 | 0.8226 | 0.8143 |

**Table 6.** Objective evaluation of medical image fusion. The best and second best results of each metric are marked in red and bold, respectively.

| Metric | ASR | CSR | CVT | DTCWT | GTF | H-MSD | CNN |
|---|---|---|---|---|---|---|---|
| $MI$ | 3.4473 | 3.3705 | 2.6794 | 2.9084 | 2.9051 | 3.2624 | 3.5522 |
| $FMI$ | 0.5638 | 0.5087 | 0.3534 | 0.4478 | 0.5125 | 0.4738 | 0.5152 |
| ${Q}^{AB/F}$ | 0.6037 | 0.5976 | 0.5170 | 0.5488 | 0.4288 | 0.5639 | 0.6416 |
| $NCIE$ | 0.8092 | 0.8090 | 0.8069 | 0.8075 | 0.8076 | 0.8086 | 0.8097 |

| Metric | LP | MSSR | MSVD | NSCT | WLS | FFIF | OURS |
|---|---|---|---|---|---|---|---|
| $MI$ | 3.2668 | 3.6737 | 3.5279 | 3.1658 | 3.5519 | 4.6729 | 4.2164 |
| $FMI$ | 0.5243 | 0.5406 | 0.4731 | 0.5063 | 0.4907 | 0.6086 | 0.5228 |
| ${Q}^{AB/F}$ | 0.6384 | 0.6422 | 0.4713 | 0.6220 | 0.5914 | 0.6535 | 0.6800 |
| $NCIE$ | 0.8085 | 0.8102 | 0.8097 | 0.8082 | 0.8097 | 0.8151 | 0.8130 |

**Table 7.** Objective evaluation of multi-focus image fusion. The best and second best results of each metric are marked in red and bold, respectively.

| Metric | ASR | CSR | CVT | DTCWT | GTF | H-MSD | CNN |
|---|---|---|---|---|---|---|---|
| $MI$ | 7.5714 | 7.6874 | 7.2197 | 7.4468 | 7.6793 | 7.5452 | 8.5647 |
| $FMI$ | 0.6022 | 0.4586 | 0.5643 | 0.5942 | 0.5963 | 0.5534 | 0.6065 |
| ${Q}^{AB/F}$ | 0.7746 | 0.7570 | 0.7571 | 0.7710 | 0.6210 | 0.7477 | 0.7835 |
| $NCIE$ | 0.8367 | 0.8363 | 0.8343 | 0.8360 | 0.8397 | 0.8363 | 0.8441 |

| Metric | LP | MSSR | MSVD | NSCT | WLS | FFIF | OURS |
|---|---|---|---|---|---|---|---|
| $MI$ | 7.9235 | 7.8695 | 6.5958 | 7.5796 | 7.3741 | 9.2663 | 8.9213 |
| $FMI$ | 0.6057 | 0.5987 | 0.4236 | 0.5963 | 0.5680 | 0.6558 | 0.6324 |
| ${Q}^{AB/F}$ | 0.7829 | 0.7807 | 0.6212 | 0.7795 | 0.7647 | 0.7403 | 0.7891 |
| $NCIE$ | 0.8390 | 0.8385 | 0.8299 | 0.8367 | 0.8349 | 0.8533 | 0.8467 |

**Table 8.** Objective evaluation of remote sensing image fusion. The best and second best results of each metric are marked in red and bold, respectively.

| Metric | ASR | CSR | CVT | DTCWT | GTF | H-MSD | CNN |
|---|---|---|---|---|---|---|---|
| $MI$ | 2.0875 | 2.2285 | 1.8935 | 1.9541 | 1.7620 | 2.1677 | 3.6505 |
| $FMI$ | 0.5057 | 0.4519 | 0.4305 | 0.4590 | 0.4967 | 0.4141 | 0.4741 |
| ${Q}^{AB/F}$ | 0.5375 | 0.5908 | 0.5526 | 0.5802 | 0.4661 | 0.5428 | 0.6194 |
| $NCIE$ | 0.8047 | 0.8052 | 0.8044 | 0.8045 | 0.8037 | 0.8055 | 0.8143 |

| Metric | LP | MSSR | MSVD | NSCT | WLS | FFIF | OURS |
|---|---|---|---|---|---|---|---|
| $MI$ | 2.1795 | 3.1134 | 2.0510 | 2.0314 | 2.1937 | 4.5241 | 4.0035 |
| $FMI$ | 0.4784 | 0.4646 | 0.3010 | 0.4757 | 0.4199 | 0.5473 | 0.5370 |
| ${Q}^{AB/F}$ | 0.6173 | 0.5941 | 0.4055 | 0.6119 | 0.5389 | 0.6298 | 0.6567 |
| $NCIE$ | 0.8051 | 0.8110 | 0.8044 | 0.8046 | 0.8050 | 0.8209 | 0.8165 |

© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Liu, Y.; Yang, X.; Zhang, R.; Albertini, M.K.; Celik, T.; Jeon, G.
Entropy-Based Image Fusion with Joint Sparse Representation and Rolling Guidance Filter. *Entropy* **2020**, *22*, 118.
https://doi.org/10.3390/e22010118
