# Exploiting Superpixels for Multi-Focus Image Fusion


## Abstract


## 1. Introduction

## 2. Background and Literature Review

## 3. Proposed Method

#### 3.1. Superpixels Computation

#### 3.2. Focus Map Estimation

#### 3.3. Fusion Rule

## 4. Experiments and Results

#### 4.1. Compared Methods

#### 4.2. Qualitative Performance Evaluation

#### 4.3. Quantitative Performance Evaluation

- Information theory based metrics assess the quality of the fused image by measuring the amount of information transferred from the source images to the fused image. These metrics include normalized mutual information $\left(NMI\right)$ [79], visual information fidelity $\left(VIFF\right)$ [81], and the nonlinear correlation information entropy metric $\left({Q}_{NCIE}\right)$ [80].
- Image feature based metrics judge the quality of the fused image by analyzing the features transferred from the source images to the fused image. Examples include the gradient based fusion performance metrics ${Q}^{AB/F}$ [82] and ${Q}_{G}$ [83], the multi-scale fusion metric ${Q}_{M}$ [85], the spatial frequency based fusion metric ${Q}_{SF}$ [84], and the phase congruency based fusion metric ${Q}_{P}$ [86,91].
- Image structural similarity based metrics build on the evidence that the human visual system is highly adapted to structural information, so the loss of structural information is a good estimate of image distortion; these metrics exploit this observation to assess the quality of a fused image. They include Piella’s metric [87], Cvejie’s image fusion metric $\left({Q}_{C}\right)$ [88], and Yang’s image fusion metric $\left({Q}_{Y}\right)$ [89].
- Human perception inspired fusion metrics, e.g., the Chen-Blum metric ${Q}_{CB}$ [90], measure the quality of fused images on the basis of a saliency map, which is generated by filtering, and calculate the preservation of contrast.
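To make the first family concrete, a minimal NumPy sketch of normalized mutual information is given below. It follows the common definition $NMI = 2\,I(A;F)/(H(A)+H(F))$; this is an illustration only, not the evaluation code used in the experiments, and the histogram bin count is an assumed free parameter.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (bits) of a normalized probability array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def nmi(src, fused, bins=256):
    """Normalized mutual information between a source image and the
    fused image: NMI = 2 * I(src; fused) / (H(src) + H(fused)).
    Both inputs are 2D uint8 arrays; `bins` sets histogram resolution."""
    joint, _, _ = np.histogram2d(src.ravel(), fused.ravel(),
                                 bins=bins, range=[[0, 256], [0, 256]])
    joint /= joint.sum()                    # joint probability p(a, f)
    h_src = entropy(joint.sum(axis=1))      # H(src) from the marginal
    h_fused = entropy(joint.sum(axis=0))    # H(fused) from the marginal
    mi = h_src + h_fused - entropy(joint.ravel())
    return 2.0 * mi / (h_src + h_fused)
```

For identical images the score is 1 and for unrelated images it approaches 0; published NMI implementations differ in binning and normalization, so absolute scores are only comparable within one protocol.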

#### 4.4. Computational Complexity

All experiments were performed on a machine with an Intel® Core™ i7 $2.5$ GHz processor, 8 GB RAM, and a 64 bit operating system. The results show that the PCA algorithm is the fastest among all approaches; the proposed method takes an average of $122.61$ seconds to generate a fused image. The main contributors to this time are the complex computations involved in estimating the superpixels and in evaluating the CIEDE2000 color distance model. An efficient implementation can help reduce this computational cost.

#### 4.5. Summary

## 5. Performance Analysis Using Different Color Distance Models and the Impact of Other Parameters

#### 5.1. Fusion Performance with Different Color Distance Models

- Euclidean color distance in the RGB color space: The simplest and the most well-known way to compute the distance between two colors is using the Euclidean distance in the RGB color space. The distance between two pixel colors ${C}_{1}({R}_{1},{G}_{1},{B}_{1})$ and ${C}_{2}({R}_{2},{G}_{2},{B}_{2})$ is computed as,$$d=\sqrt{{({R}_{2}-{R}_{1})}^{2}+{({G}_{2}-{G}_{1})}^{2}+{({B}_{2}-{B}_{1})}^{2}}$$
- Color approximation distance (CAD): The perception of brightness by the human eye is non-linear, and the non-linearity differs from color to color, as shown by several experiments [92,93,94]. There have therefore been many attempts to weight the RGB values to better fit human perception so that the color approximation is more appropriate. The CAD between two colors ${C}_{1}({R}_{1},{G}_{1},{B}_{1})$ and ${C}_{2}({R}_{2},{G}_{2},{B}_{2})$ is computed as:$$d=\sqrt{\left(2+\frac{\overline{r}}{256}\right)\Delta {R}^{2}+4\Delta {G}^{2}+\left(2+\frac{255-\overline{r}}{256}\right)\Delta {B}^{2}},$$ where $\overline{r}=({R}_{1}+{R}_{2})/2$ and $\Delta R$, $\Delta G$, $\Delta B$ are the channel-wise differences.
- CIEXYZ: The RGB color space cannot reproduce all colors perceivable by the human eye, because the gamut of human vision is larger than that established by the CIERGB experiments; the Commission Internationale de L’éclairage (CIE) therefore introduced an updated color model, CIEXYZ [95]. It was mathematically designed so that the tristimulus values avoid negative numbers. It consists of three parameters: the non-negative cone response curve $\left(X\right)$, luminance $\left(Y\right)$, and a component quasi-equal to blue $\left(Z\right)$.$$\left[\begin{array}{c}X\\ Y\\ Z\end{array}\right]=\left[\begin{array}{ccc}0.431& 0.342& 0.178\\ 0.222& 0.707& 0.071\\ 0.020& 0.130& 0.939\end{array}\right]\times \left[\begin{array}{c}R\\ G\\ B\end{array}\right]$$The normalized tristimulus values x, y, and z are calculated from X, Y, and Z as $\left[x\phantom{\rule{3.33333pt}{0ex}}y\phantom{\rule{3.33333pt}{0ex}}z\right]=\left[\frac{X}{S}\phantom{\rule{3.33333pt}{0ex}}\frac{Y}{S}\phantom{\rule{3.33333pt}{0ex}}\frac{Z}{S}\right],$ where $S=X+Y+Z$. The difference between the tristimulus values of two colors is then computed using the Euclidean distance formula.
- CIE76: The CIE76 color distance model operates in the CIELAB color space, which is considered among the most accurate representations of color [96]. The color difference between two colors ${C}_{1}({L}_{1},{a}_{1},{b}_{1})$ and ${C}_{2}({L}_{2},{a}_{2},{b}_{2})$ in the CIELAB color space is calculated by the following formula:$$d=\sqrt{{({L}_{1}-{L}_{2})}^{2}+{({a}_{1}-{a}_{2})}^{2}+{({b}_{1}-{b}_{2})}^{2}}$$
- CMC l:c: The CMC (Colour Measurement Committee of the Society of Dyers and Colourists of Great Britain) l:c color difference is calculated in the LCh color space and has two parameters, lightness (l) and chroma (c); the ratio $l:c$ lets users weight these components as deemed appropriate for the application.
- CIEDE2000: The CIE Delta E (CIEDE2000) color distance model was introduced by the CIE and provides an improved method for the calculation of color differences. A detailed discussion is presented in Section 3.1.
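For reference, the first three distance models above can be sketched as follows. This is an illustrative NumPy implementation, not the code used in the paper: the XYZ-to-Lab conversion constants and the reference white (taken here as the row sums of the RGB-to-XYZ matrix quoted above) are assumptions, and the CIE94, CMC l:c, and CIEDE2000 formulas are omitted for brevity.

```python
import numpy as np

def dist_rgb(c1, c2):
    """Euclidean distance in the RGB color space."""
    return float(np.linalg.norm(np.asarray(c1, float) - np.asarray(c2, float)))

def dist_cad(c1, c2):
    """Color approximation distance: RGB differences weighted by the
    mean red level to better match perceived brightness."""
    r1, g1, b1 = (float(v) for v in c1)
    r2, g2, b2 = (float(v) for v in c2)
    rbar = (r1 + r2) / 2.0
    dr, dg, db = r1 - r2, g1 - g2, b1 - b2
    return float(np.sqrt((2 + rbar / 256) * dr ** 2 + 4 * dg ** 2
                         + (2 + (255 - rbar) / 256) * db ** 2))

def rgb_to_xyz(c):
    """RGB -> CIEXYZ with the linear transform quoted in the text."""
    m = np.array([[0.431, 0.342, 0.178],
                  [0.222, 0.707, 0.071],
                  [0.020, 0.130, 0.939]])
    return m @ (np.asarray(c, float) / 255.0)

def dist_cie76(c1, c2, white=(0.951, 1.000, 1.089)):
    """CIE76: Euclidean distance in CIELAB. The XYZ->Lab constants and
    the reference white used here are assumptions."""
    def to_lab(c):
        x, y, z = rgb_to_xyz(c) / np.asarray(white)
        f = lambda t: t ** (1 / 3) if t > 0.008856 else 7.787 * t + 16 / 116
        fx, fy, fz = f(x), f(y), f(z)
        return np.array([116 * fy - 16, 500 * (fx - fy), 200 * (fy - fz)])
    return float(np.linalg.norm(to_lab(c1) - to_lab(c2)))
```

Swapping the color distance model in the superpixel similarity computation then only requires changing which of these functions is called.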

#### 5.2. Impact of the Number of Superpixels on the Fusion Quality

#### 5.3. Analysis of the Relative Contribution of Spatial and Color Distances

## 6. Conclusions

## Author Contributions

## Funding

## Institutional Review Board Statement

## Informed Consent Statement

## Data Availability Statement

## Conflicts of Interest

## References

- Wan, T.; Zhu, C.; Qin, Z. Multifocus image fusion based on robust principal component analysis. Pattern Recognit. Lett. **2013**, 34, 1001–1008.
- Campbell, J.B.; Wynne, R.H. Introduction to Remote Sensing; Guilford Press: New York, NY, USA, 2011.
- Ardeshir Goshtasby, A.; Nikolov, S. Guest editorial: Image fusion: Advances in the state of the art. Inf. Fusion **2007**, 8, 114–118.
- Wang, X.; Bai, S.; Li, Z.; Sui, Y.; Tao, J. The PAN and MS image fusion algorithm based on adaptive guided filtering and gradient information regulation. Inf. Sci. **2021**, 545, 381–402.
- Xu, Y.; Yang, C.; Sun, B.; Yan, X.; Chen, M. A novel multi-scale fusion framework for detail-preserving low-light image enhancement. Inf. Sci. **2021**, 548, 378–397.
- Du, J.; Li, W.; Tan, H. Three-layer medical image fusion with tensor based features. Inf. Sci. **2020**, 525, 93–108.
- Li, S.; Yang, B.; Hu, J. Performance comparison of different multi-resolution transforms for image fusion. Inf. Fusion **2011**, 12, 74–84.
- Zafar, R.; Farid, M.S.; Khan, M.H. Multi-Focus Image Fusion: Algorithms, Evaluation, and a Library. J. Imaging **2020**, 6, 60.
- Saha, A.; Bhatnagar, G.; Wu, Q.J. Mutual spectral residual approach for multifocus image fusion. Digit. Signal Process. **2013**, 23, 1121–1135.
- Goshtasby, A.A.; Nikolov, S. Image fusion: advances in the state of the art. Inf. Fusion **2007**, 2, 114–118.
- Li, S.; Kwok, J.T.; Wang, Y. Multifocus image fusion using artificial neural networks. Pattern Recognit. Lett. **2002**, 23, 985–997.
- Eltoukhy, H.A.; Kavusi, S. Computationally efficient algorithm for multifocus image reconstruction. In Proceedings of the Sensors and Camera Systems for Scientific, Industrial, and Digital Photography Applications IV, Santa Clara, CA, USA, 20–24 January 2003; Volume 5017, pp. 332–341.
- Liu, Y.; Liu, S.; Wang, Z. Multi-focus image fusion with dense SIFT. Inf. Fusion **2015**, 23, 139–155.
- Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion **2017**, 33, 100–112.
- Li, S.; Kwok, J.Y.; Tsang, I.H.; Wang, Y. Fusing images with different focuses using support vector machines. IEEE Trans. Neural Netw. **2004**, 15, 1555–1561.
- Huang, W.; Jing, Z. Multi-focus image fusion using pulse coupled neural network. Pattern Recognit. Lett. **2007**, 28, 1123–1132.
- Agrawal, D.; Singhai, J. Multifocus image fusion using modified pulse coupled neural network for improved image quality. IET Image Process. **2010**, 4, 443–451.
- Li, S.; Kang, X.; Hu, J.; Yang, B. Image matting for fusion of multi-focus images in dynamic scenes. Inf. Fusion **2013**, 14, 147–162.
- Kumar, B.S. Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process. **2015**, 9, 1193–1204.
- Liu, Y.; Wang, L.; Cheng, J.; Li, C.; Chen, X. Multi-focus image fusion: A Survey of the state of the art. Inf. Sci. **2020**, 64, 71–91.
- Piella, G. A general framework for multiresolution image fusion: From pixels to regions. Inf. Fusion **2003**, 4, 259–280.
- Petrovic, V.; Xydeas, C.S. Gradient based multiresolution image fusion. IEEE Trans. Image Process. **2004**, 13, 228–237.
- Li, H.; Manjunath, B.; Mitra, S.K. Multisensor image fusion using the wavelet transform. Graph Model Image Proc. **1995**, 57, 235–245.
- Kumar, B.S. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Process. **2013**, 7, 1125–1143.
- Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion **2007**, 8, 143–156.
- Xiao-Bo, Q.; Jing-Wen, Y.; Hong-Zhi, X.; Zi-Qian, Z. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Autom. Sin. **2008**, 34, 1508–1514.
- Zhang, Q.; Guo, B.L. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. **2009**, 89, 1334–1346.
- Bavirisetti, D.P.; Dhuli, R. Multi-focus image fusion using multi-scale image decomposition and saliency detection. Ain Shams Eng. J. **2018**, 9, 1103–1117.
- Zheng, S.; Shi, W.Z.; Liu, J.; Zhu, G.X.; Tian, J.W. Multisource image fusion method using support value transform. IEEE Trans. Image Process. **2007**, 16, 1831–1839.
- Haghighat, M.B.A.; Aghagolzadeh, A.; Seyedarabi, H. Multi-focus image fusion for visual sensor networks in DCT domain. Comput. Electr. Eng. **2011**, 37, 789–797.
- Naidu, V.; Elias, B. A novel image fusion technique using DCT based Laplacian pyramid. Int. J. Invent. Eng. Sci. **2013**, 5, 2319–9598.
- Bai, X.; Liu, M.; Chen, Z.; Wang, P.; Zhang, Y. Morphology and active contour model for multi-focus image fusion. In Proceedings of the International Conference on Digital Image Computing: Techniques and Applications (DICTA), Adelaide, SA, Australia, 23–25 November 2015; pp. 1–6.
- Yang, B.; Li, S. Multifocus image fusion and restoration with sparse representation. IEEE Trans. Instrum. Meas. **2009**, 59, 884–892.
- Liang, J.; He, Y.; Liu, D.; Zeng, X. Image fusion using higher order singular value decomposition. IEEE Trans. Image Process. **2012**, 21, 2898–2909.
- Luo, X.; Zhang, Z.; Zhang, C.; Wu, X. Multi-focus image fusion using HOSVD and edge intensity. J. Vis. Commun. Image Represent. **2017**, 45, 46–61.
- Zhang, X.; Li, X.; Liu, Z.; Feng, Y. Multi-focus image fusion using image-partition based focus detection. Signal Process. **2014**, 102, 64–76.
- Li, S.; Kwok, J.T.; Wang, Y. Combination of images with diverse focuses using the spatial frequency. Inf. Fusion **2001**, 2, 169–176.
- Aslantas, V.; Kurban, R. Fusion of multi-focus images using differential evolution algorithm. Expert Syst. Appl. **2010**, 37, 8861–8870.
- Siddique, A.; Xiao, B.; Li, W.; Nawaz, Q.; Hamid, I. Multi-Focus Image Fusion Using Block-Wise Color-Principal Component Analysis. In Proceedings of the IEEE 3rd International Conference on Image, Vision and Computing (ICIVC), Chongqing, China, 27–29 June 2018; pp. 458–462.
- Li, M.; Cai, W.; Tan, Z. A region based multi-sensor image fusion scheme using pulse-coupled neural network. Pattern Recognit. Lett. **2006**, 27, 1948–1956.
- Li, S.; Yang, B. Multifocus image fusion using region segmentation and spatial frequency. Image Vis. Comput. **2008**, 26, 971–979.
- Lewis, J.J.; O’Callaghan, R.J.; Nikolov, S.G.; Bull, D.R.; Canagarajah, N. Pixel- and region based image fusion with complex wavelets. Inf. Fusion **2007**, 8, 119–130.
- Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary based sparse representation. Inf. Fusion **2015**, 25, 72–84.
- Huang, W.; Jing, Z. Evaluation of focus measures in multi-focus image fusion. Pattern Recognit. Lett. **2007**, 28, 493–500.
- Zhou, Z.; Li, S.; Wang, B. Multi-scale weighted gradient based fusion for multi-focus images. Inf. Fusion **2014**, 20, 60–72.
- Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. **2013**, 22, 2864–2875.
- Bouzos, O.; Andreadis, I.; Mitianoudis, N. Conditional random field model for robust multi-focus image fusion. IEEE Trans. Image Process. **2019**, 28, 5636–5648.
- Chen, Y.; Guan, J.; Cham, W.K. Robust multi-focus image fusion using edge model and multi-matting. IEEE Trans. Image Process. **2017**, 27, 1526–1541.
- Hua, K.L.; Wang, H.C.; Rusdi, A.H.; Jiang, S.Y. A novel multi-focus image fusion algorithm based on random walks. J. Vis. Commun. Image Represent. **2014**, 25, 951–962.
- Li, H.; Nie, R.; Cao, J.; Guo, X.; Zhou, D.; He, K. Multi-Focus Image Fusion Using U-Shaped Networks With a Hybrid Objective. IEEE Sens. J. **2019**, 19, 9755–9765.
- Liu, Y.; Chen, X.; Wang, Z.; Wang, Z.J.; Ward, R.K.; Wang, X. Deep learning for pixel-level image fusion: Recent advances and future prospects. Inf. Fusion **2018**, 42, 158–173.
- Tang, H.; Xiao, B.; Li, W.; Wang, G. Pixel convolutional neural network for multi-focus image fusion. Inf. Sci. **2018**, 433–434, 125–141.
- Mustafa, H.T.; Yang, J.; Zareapoor, M. Multi-scale convolutional neural network for multi-focus image fusion. Image Vis. Comput. **2019**, 85, 26–35.
- Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion **2017**, 36, 191–207.
- Mustafa, H.T.; Zareapoor, M.; Yang, J. MLDNet: Multi-level dense network for multi-focus image fusion. Signal Process. Image Commun. **2020**, 85, 115864.
- Wang, C.; Zhao, Z.; Ren, Q.; Xu, Y.; Yu, Y. A novel multi-focus image fusion by combining simplified very deep convolutional networks and patch based sequential reconstruction strategy. Appl. Soft Comput. **2020**, 91, 106253.
- Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Medical Image Fusion via Convolutional Sparsity Based Morphological Component Analysis. IEEE Signal Process. Lett. **2019**, 26, 485–489.
- Zhang, C. Multi-focus Image Fusion Based on Convolutional Sparse Representation with Mask Simulation. In Advances in 3D Image and Graphics Representation, Analysis, Computing and Information Technology; Kountchev, R., Patnaik, S., Shi, J., Favorskaya, M.N., Eds.; Springer: Singapore, 2020; pp. 159–168.
- Xing, C.; Wang, M.; Dong, C.; Duan, C.; Wang, Z. Using Taylor Expansion and Convolutional Sparse Representation for Image Fusion. Neurocomputing **2020**, 402, 437–455.
- Zhang, H.; Le, Z.; Shao, Z.; Xu, H.; Ma, J. MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion. Inf. Fusion **2021**, 66, 40–53.
- Huang, J.; Le, Z.; Ma, Y.; Mei, X.; Fan, F. A generative adversarial network with adaptive constraints for multi-focus image fusion. Neural Comput. Appl. **2020**, 32, 15119–15129.
- Murugan, A.; Arumugam, G.; Gobinath, D. Multi-Focus Image Fusion Using Conditional Generative Adversarial Networks. In Intelligent Computing and Applications; Dash, S.S., Das, S., Panigrahi, B.K., Eds.; Springer: Singapore, 2021; pp. 559–566.
- Veksler, O.; Boykov, Y.; Mehrani, P. Superpixels and Supervoxels in an Energy Optimization Framework. In Computer Vision—ECCV 2010, Proceedings of the European Conference on Computer Vision (ECCV 2010), Heraklion, Crete, Greece, 5–11 September 2010; Daniilidis, K., Maragos, P., Paragios, N., Eds.; Springer: Berlin/Heidelberg, Germany, 2010; pp. 211–224.
- Levinshtein, A.; Stere, A.; Kutulakos, K.N.; Fleet, D.J.; Dickinson, S.J.; Siddiqi, K. TurboPixels: Fast Superpixels Using Geometric Flows. IEEE Trans. Pattern Anal. Mach. Intell. **2009**, 31, 2290–2297.
- Achanta, R.; Shaji, A.; Smith, K.; Lucchi, A.; Fua, P.; Süsstrunk, S. SLIC Superpixels Compared to State-of-the-Art Superpixel Methods. IEEE Trans. Pattern Anal. Mach. Intell. **2012**, 34, 2274–2282.
- Sharma, G.; Wu, W.; Dalal, E.N. The CIEDE2000 color-difference formula: Implementation notes, supplementary test data, and mathematical observations. Color Res. Appl. **2005**, 30, 21–30.
- Kumar, V.; Gupta, P. Importance of statistical measures in digital image processing. Int. J. Emerg. Technol. Adv. Eng. **2012**, 2, 56–62.
- Wang, W.; Chang, F. A Multi-focus Image Fusion Method Based on Laplacian Pyramid. J. Comput. **2011**, 6, 2559–2566.
- Farid, M.S.; Mahmood, A.; Al-Maadeed, S.A. Multi-focus image fusion using Content Adaptive Blurring. Inf. Fusion **2019**, 45, 96–112.
- Lytro Multi-Focus Dataset. Available online: http://mansournejati.ece.iut.ac.ir/content/lytro-multi-focus-dataset (accessed on 31 October 2019).
- Tian, J.; Chen, L. Adaptive multi-focus image fusion using a wavelet based statistical sharpness measure. Signal Process. **2012**, 92, 2137–2146.
- Qu, X.; Hu, C.; Yan, J. Image fusion algorithm based on orientation information motivated pulse coupled neural networks. In Proceedings of the 7th World Congress on Intelligent Control and Automation, Chongqing, China, 25–27 June 2008; pp. 2437–2441.
- Mitianoudis, N.; Stathaki, T. Pixel-based and region-based image fusion schemes using ICA bases. Inf. Fusion **2007**, 8, 131–142.
- Paul, S.; Sevcenco, I.S.; Agathoklis, P. Multi-exposure and multi-focus image fusion in gradient domain. J. Circuits Syst. Comput. **2016**, 25, 1650123.
- Naidu, V.; Raol, J.R. Pixel-level image fusion using wavelets and principal component analysis. Def. Sci. J. **2008**, 58, 338–352.
- Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Image fusion with convolutional sparse representation. IEEE Signal Process. Lett. **2016**, 23, 1882–1886.
- Liu, Z.; Blasch, E.; Xue, Z.; Zhao, J.; Laganiere, R.; Wu, W. Objective Assessment of Multiresolution Image Fusion Algorithms for Context Enhancement in Night Vision: A Comparative Study. IEEE Trans. Pattern Anal. Mach. Intell. **2012**, 34, 94–109.
- Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. **2002**, 38, 313–315.
- Hossny, M.; Nahavandi, S.; Creighton, D. Comments on 'Information measure for performance of image fusion'. Electron. Lett. **2008**, 44, 1066–1067.
- Wang, Q.; Shen, Y.; Jin, J. Performance evaluation of image fusion techniques. In Image Fusion; Stathaki, T., Ed.; Academic Press: Oxford, UK, 2008; pp. 469–492.
- Han, Y.; Cai, Y.; Cao, Y.; Xu, X. A new image fusion performance metric based on visual information fidelity. Inf. Fusion **2013**, 14, 127–135.
- Xydeas, C.S.; Petrovic, V. Objective image fusion performance measure. Electron. Lett. **2000**, 36, 308–309.
- Xydeas, C.S.; Petrovic, V. Objective pixel-level image fusion performance measure. Int. Soc. Opt. Photonics **2000**, 4051, 89–98.
- Zheng, Y.; Essock, E.A.; Hansen, B.C.; Haun, A.M. A new metric based on extended spatial frequency and its application to DWT based fusion algorithms. Inf. Fusion **2007**, 8, 177–192.
- Wang, P.W.; Liu, B. A novel image fusion metric based on multi-scale analysis. In Proceedings of the 2008 9th International Conference on Signal Processing, Beijing, China, 26–29 October 2008; pp. 965–968.
- Liu, Z.; Forsyth, D.S.; Laganière, R. A feature based metric for the quantitative evaluation of pixel-level image fusion. Comput. Vis. Image Underst. **2008**, 109, 56–68.
- Piella, G.; Heijmans, H. A new quality metric for image fusion. In Proceedings of the 2003 International Conference on Image Processing (Cat. No.03CH37429), Barcelona, Spain, 14–17 September 2003.
- Cvejic, N.; Loza, A.; Bull, D.; Canagarajah, N. A similarity metric for assessment of image fusion algorithms. Int. J. Signal Process. **2005**, 2, 178–182.
- Yang, C.; Zhang, J.Q.; Wang, X.R.; Liu, X. A novel similarity based quality metric for image fusion. Inf. Fusion **2008**, 9, 156–160.
- Chen, Y.; Blum, R.S. A new automated quality assessment algorithm for image fusion. Image Vis. Comput. **2009**, 27, 1421–1432.
- Zhao, J.; Laganiere, R.; Liu, Z. Performance assessment of combinative pixel-level image fusion based on an absolute feature measurement. Int. J. Innov. Comput. Inf. Control **2007**, 3, 1433–1447.
- Industrial Colour-Difference Evaluation. Available online: http://cie.co.at/publications/industrial-colour-difference-evaluation (accessed on 1 January 2021).
- Colour Metric. Available online: www.compuphase.com/cmetric.htm (accessed on 15 October 2020).
- Kuehni, R.G. Industrial color difference: Progress and problems. Color Res. Appl. **1990**, 15, 261–265.
- Smith, T.; Guild, J. The C.I.E. colorimetric standards and their use. Trans. Opt. Soc. **1931**, 33, 73–134.
- Sharma, G.; Bala, R. Digital Color Imaging Handbook; CRC Press: Boca Raton, FL, USA, 2017.
- Melgosa, M. CIE94, History, Use, and Performance. In Encyclopedia of Color Science and Technology; Springer: New York, NY, USA, 2014; pp. 1–5.
- McDonald, R.; Smith, K.J. CIE94—a new colour-difference formula. J. Soc. Dye. Colour. **1995**, 111, 376–379.

**Figure 1.** Results of the proposed algorithm on sample multi-focus images from the test dataset. (**a**) Source images with foreground focused, (**b**) source images with background focused, and (**c**) fused images using the proposed method.

**Figure 3.** Superpixel computation using the proposed strategy. (**a**) Superpixels of ${I}_{1}$ (foreground focused). (**b**) Superpixels of ${I}_{2}$ (background focused).

**Figure 4.** Initial focus map generated using the proposed strategy for the images shown in Figure 3.

**Figure 5.** Initial focus map refinement: (**a**) a sample multi-focus image with superpixel structures overlaid on it, (**b**) the obtained focus map, and (**c**) the refined focus map.

**Figure 6.** Fusion results using the refined focus map: (**a**) first source image ${I}_{1}$, (**b**) second source image ${I}_{2}$, and (**c**) fused image ${I}_{f}$.

**Figure 8.**Visual quality assessment of the fusion results achieved by the proposed and the compared methods on the “Swimmer” multi-focus image pair from the test dataset. WSSM, wavelet based statistical sharpness measure; PCNN, pulse coupled neural network; DCTLP, discrete cosine transform Laplacian pyramid; IFGD, image fusion using luminance gradients; CSR, convolutional sparse representation.

**Figure 9.**Visual quality assessment of the fusion results achieved by the proposed and the compared methods on the “Cookie” multi-focus image pair from the test dataset.

**Figure 10.** Impact of the number of superpixels k on the fusion quality of the proposed method. In all graphs, the x-axis represents k and the y-axis the quality score. The x-axis labels are scaled by ${10}^{3}$.

**Figure 12.**Impact of the weight factor m on the fusion quality of the proposed method. In all graphs, the x-axis represents m and the y-axis the quality score.

**Table 1.** Objective fusion quality metrics used in the quantitative evaluation.

Metric | Metric Description and Reference
---|---
Information theory based metrics |
$NMI$ | Normalized mutual information [78,79]
${Q}_{NCIE}$ | Non-linear correlation metric [80]
$VIFF$ | Visual information fidelity metric [81]
Feature based metrics |
${Q}^{AB/F}$ | Gradient based metric [82]
${Q}_{G}$ | Gradient based metric [83]
${Q}_{SF}$ | Spatial frequency based metric [84]
${Q}_{M}$ | Multi-scale scheme metric [85]
${Q}_{P}$ | Phase congruency based metric [86]
Structural similarity based metrics |
${Q}_{S}$ | Piella’s metric [87]
${Q}_{C}$ | Cvejie’s metric [88]
${Q}_{Y}$ | Yang’s metric [89]
Human perception based metrics |
${Q}_{CB}$ | Chen-Blum’s metric [90]

**Table 2.**Objective performance evaluation of the proposed and the compared methods on the Lytro dataset. The best scores for each metric are highlighted in bold.

Metrics | DCHWT | WSSM | PCNN | DCTLP | IFGD | ICA | NSCT | PCA | CNN | CSR | Proposed
---|---|---|---|---|---|---|---|---|---|---|---
${Q}^{AB/F}$ | 0.7212 | 0.7305 | 0.7036 | 0.6526 | 0.7174 | 0.7445 | 0.2314 | 0.5992 | 0.7514 | 0.7366 | 0.7539
$VIFF$ | 0.9021 | 0.9333 | 0.8565 | 0.8388 | 1.0456 | 0.9386 | 0.5851 | 0.7809 | 0.9548 | 0.9329 | 0.9344
$NMI$ | 0.9176 | 0.9748 | 1.2068 | 0.8296 | 0.5223 | 0.9374 | 0.5645 | 0.8939 | 1.0725 | 0.9921 | 1.1972
${Q}_{G}$ | 0.6153 | 0.6758 | 0.6745 | 0.5390 | 0.6181 | 0.6787 | 0.2070 | 0.5340 | 0.7029 | 0.6447 | 0.7291
${Q}_{NCIE}$ | 0.8291 | 0.8321 | 0.8497 | 0.8252 | 0.8145 | 0.8301 | 0.8159 | 0.8277 | 0.8384 | 0.8341 | 0.8472
${Q}_{M}$ | 0.8821 | 0.9331 | 2.2555 | 0.6983 | 0.5633 | 1.0264 | 0.1796 | 0.4820 | 2.3431 | 1.3400 | 2.5074
${Q}_{SF}$ | −0.0900 | −0.0899 | −0.1050 | −0.0941 | −0.0744 | −0.0684 | −0.1522 | −0.3890 | −0.0321 | −0.0490 | −0.0314
${Q}_{P}$ | 0.7839 | 0.8201 | 0.7483 | 0.6892 | 0.7632 | 0.8197 | 0.0486 | 0.7500 | 0.8428 | 0.8283 | 0.8289
${Q}_{S}$ | 0.9477 | 0.9438 | 0.9133 | 0.9298 | 0.8901 | 0.9547 | 0.5802 | 0.9217 | 0.9485 | 0.9467 | 0.9463
${Q}_{C}$ | 0.8019 | 0.8097 | 0.8183 | 0.7698 | 0.7485 | 0.8334 | 0.3853 | 0.7978 | 0.8253 | 0.7980 | 0.8258
${Q}_{Y}$ | 0.9219 | 0.9567 | 0.9672 | 0.8747 | 0.8526 | 0.9515 | 0.3792 | 0.8490 | 0.9653 | 0.9340 | 0.9838
${Q}_{CB}$ | 0.6977 | 0.7887 | 0.7476 | 0.6185 | 0.6118 | 0.7130 | 0.4778 | 0.6325 | 0.7816 | 0.7638 | 0.8054

**Table 3.** Average execution time (in seconds) of the proposed and the compared methods.

Method | DCHWT | WSSM | PCNN | DCTLP | IFGD | ICA | NSCT | PCA | CNN | CSR | Proposed
---|---|---|---|---|---|---|---|---|---|---|---
Time | 9.39 | 215.49 | 1.31 | 0.33 | 1.01 | 11.30 | 174.00 | 0.04 | 106.32 | 345.76 | 122.61

**Table 4.**Comparison of different color distance models for the proposed multi-focus image fusion algorithm. The best values are in bold. Time (in seconds) is the average execution time of the proposed method with each color distance model. CAD, color approximation distance.

Metric | Euclidean | CAD | CIE76 | CIE94 | CIEXYZ | CMC | CIEDE2000
---|---|---|---|---|---|---|---
${Q}^{AB/F}$ | 0.7483 | 0.7477 | 0.7520 | 0.7510 | 0.7491 | 0.7533 | 0.7539
$VIFF$ | 0.9192 | 0.9182 | 0.9256 | 0.9240 | 0.9182 | 0.9287 | 0.9344
$NMI$ | 1.1919 | 1.1919 | 1.1945 | 1.1953 | 1.1953 | 1.1946 | 1.1972
${Q}_{G}$ | 0.7252 | 0.7242 | 0.7291 | 0.7287 | 0.7287 | 0.7295 | 0.7291
${Q}_{NCIE}$ | 0.8478 | 0.8479 | 0.8476 | 0.8478 | 0.8479 | 0.8472 | 0.8472
${Q}_{M}$ | 2.4243 | 2.4406 | 2.4445 | 2.4363 | 2.3985 | 2.4603 | 2.5074
${Q}_{SF}$ | −0.0403 | −0.0387 | −0.0393 | −0.0412 | −0.0457 | −0.0335 | −0.0314
${Q}_{P}$ | 0.8085 | 0.8077 | 0.8239 | 0.8239 | 0.8214 | 0.8224 | 0.8289
${Q}_{S}$ | 0.9449 | 0.9449 | 0.9465 | 0.9463 | 0.9456 | 0.9467 | 0.9463
${Q}_{C}$ | 0.8199 | 0.8190 | 0.8240 | 0.8238 | 0.8226 | 0.8250 | 0.8258
${Q}_{Y}$ | 0.9803 | 0.9796 | 0.9827 | 0.9825 | 0.9822 | 0.9826 | 0.9838
${Q}_{CB}$ | 0.7904 | 0.7881 | 0.8006 | 0.7994 | 0.7950 | 0.8005 | 0.8054
Time | 52.08 | 56.87 | 44.71 | 45.31 | 40.94 | 53.67 | 122.61

**Table 5.**Execution time of the proposed method with different numbers of superpixels k. Time (in seconds) is the average execution time. The best time is marked in bold.

k | 1500 | 2000 | 2500 | 3000 | 3500 | 4000 | 4500
---|---|---|---|---|---|---|---
Time | 97.86 | 109.16 | 118.69 | 122.61 | 119.98 | 132.39 | 131.43

**Table 6.**Impact of weight factor m on execution time. Time (in seconds) is the average execution time of the proposed method with different values of m. The best values are in bold.

m | 10 | 15 | 20 | 25 | 30 | 35 | 40
---|---|---|---|---|---|---|---
Time | 151.72 | 141.16 | 125.76 | 122.61 | 110.32 | 110.32 | 98.54

Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.

© 2021 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).

## Share and Cite

**MDPI and ACS Style**

Ilyas, A.; Farid, M.S.; Khan, M.H.; Grzegorzek, M.
Exploiting Superpixels for Multi-Focus Image Fusion. *Entropy* **2021**, *23*, 247.
https://doi.org/10.3390/e23020247
