Fractal Dimension-Based Multi-Focus Image Fusion via AGPCNN and Consistency Verification in NSCT Domain
Abstract
1. Introduction
2. Related Works
2.1. NSCT
2.2. AGPCNN
3. Proposed Image Fusion Method
3.1. NSCT Decomposition
3.2. Low-Frequency Fusion
3.3. High-Frequency Fusion
3.4. NSCT Reconstruction
Algorithm 1. Proposed image fusion method.

Input: the source images A and B.
Parameters: the number of NSCT decomposition levels; the number of directions at each decomposition level; the number of AGPCNN iterations.
Step 1 (NSCT decomposition): perform NSCT decomposition on each source image to obtain its low-frequency sub-band and high-frequency sub-bands.
Step 2 (Low-frequency fusion): merge the two low-frequency sub-bands using Equation (10).
Step 3 (High-frequency fusion): for each decomposition level and each direction:
  (a) for each source image, compute the fractal dimension-based feature using Equation (11), initialize the AGPCNN model, estimate the AGPCNN parameters using Equations (6)–(9), and iterate the AGPCNN model using Equations (2)–(5) and (9);
  (b) obtain the decision map using Equation (12);
  (c) perform consistency verification on the decision map via Equation (13);
  (d) compute the fused high-frequency sub-band via Equation (14).
Step 4 (NSCT reconstruction): perform the inverse NSCT on the fused low- and high-frequency sub-bands to obtain the fused image.
Output: the fused image.
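The decision-map, consistency-verification, and sub-band-selection logic of Step 3 can be sketched in NumPy. This is a minimal stand-in, not the paper's exact Equations (11)–(14): the activity measure is generic, the consistency check is a simple majority vote in a square window, and all function names and the window radius are illustrative.

```python
import numpy as np

def decision_map(act_a, act_b):
    """Binary map: 1 where source A is more 'active' than source B."""
    return (act_a >= act_b).astype(np.uint8)

def consistency_verify(dmap, radius=1):
    """Majority vote over a (2r+1)x(2r+1) window: isolated pixels that
    disagree with their neighborhood are flipped (a common consistency
    verification scheme; the paper's Eq. (13) may differ in detail)."""
    h, w = dmap.shape
    padded = np.pad(dmap, radius, mode="edge")
    votes = np.zeros_like(dmap, dtype=np.int32)
    win = 2 * radius + 1
    for dy in range(win):
        for dx in range(win):
            votes += padded[dy:dy + h, dx:dx + w]
    return (votes * 2 > win * win).astype(np.uint8)

def fuse_subband(sub_a, sub_b, act_a, act_b):
    """Choose-max fusion of one high-frequency sub-band pair, guided by
    the consistency-verified decision map."""
    dmap = consistency_verify(decision_map(act_a, act_b))
    return np.where(dmap == 1, sub_a, sub_b)
```

With `radius=1`, a single pixel whose label disagrees with all eight neighbors is always corrected, which is exactly the artifact the verification step targets in focused/defocused boundary regions.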
4. Experimental Results and Discussion
4.1. Discussion of NSCT Decomposition Levels
4.2. Results on Lytro Dataset
4.3. Results on MFI-WHU Dataset
4.4. Ablation Experiment
4.5. Application Extension
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Wang, Z.; Zhao, L.; Zhang, J. Multi-Text Guidance Is Important: Multi-modality image fusion via large generative vision-language model. Int. J. Comput. Vis. 2025, 133, 4646–4668.
- Xie, X.; Lin, Z.; Guo, B.; He, S.; Gu, Y.; Bai, Y.; Li, P. LightMFF: A simple and efficient ultra-lightweight multi-focus image fusion network. Appl. Sci. 2025, 15, 7500.
- Zhao, L.; Zhang, X.; Wang, Z. Focusing on neglected natural images: A self-supervised learning model for pan-sharpening. Inf. Process. Manag. 2025, 62, 104246.
- Tang, L.; Li, C.; Ma, J. Mask-DiFuser: A masked diffusion model for unified unsupervised image fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2025. Early Access.
- Cao, Z.; Liang, Y.; Deng, L.; Vivone, G. An efficient image fusion network exploiting unifying language and mask guidance. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 9845–9862.
- Yan, H.; Zhang, J.; Zhang, X. Injected infrared and visible image fusion via L1 decomposition model and guided filtering. IEEE Trans. Comput. Imaging 2022, 8, 162–173.
- Zhu, Z.; Wang, Z.; Qi, G.; Mazur, N.; Yang, P.; Liu, Y. Brain tumor segmentation in MRI with multi-modality spatial information enhancement and boundary shape correction. Pattern Recognit. 2024, 153, 110553.
- Liu, Y.; Chen, X.; Ward, R.K.; Wang, Z.J. Image fusion with convolutional sparse representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886.
- Xie, X.; Guo, B.; He, S.; Gu, Y.; Li, Y.; Li, P. One-shot multi-focus image stack fusion via focal depth regression. Eng. Appl. Artif. Intell. 2025, 162, 112667.
- Xie, X.; Guo, B.; Li, P. Multi-focus image fusion with visual state space model and dual adversarial learning. Comput. Electr. Eng. 2025, 123, 110238.
- Zhu, Z.; He, X.; Qi, G.; Li, Y.; Cong, B.; Liu, Y. Brain tumor segmentation based on the fusion of deep semantics and edge information in multimodal MRI. Inf. Fusion 2023, 91, 376–387.
- Li, L.; Ma, H. Pulse coupled neural network-based multimodal medical image fusion via guided filtering and WSEML in NSCT domain. Entropy 2021, 23, 591.
- Wang, Z.; Li, X.; Duan, H.; Zhang, X. A self-supervised residual feature learning model for multifocus image fusion. IEEE Trans. Image Process. 2022, 31, 4527–4542.
- Zhao, L.; Zhang, X.; Huang, B.; Tian, M.; Wang, Z. MFANet: Multi-feature aggregation network for multi-focus image fusion. In Proceedings of the 2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 6–11 April 2025; pp. 1–5.
- Xie, X.; Jiang, Q.; Chen, D. StackMFF: End-to-end multi-focus image stack fusion network. Appl. Intell. 2025, 55, 503.
- Wu, X.; Cao, Z.; Huang, T.; Deng, L.; Chanussot, J.; Vivone, G. Fully-connected transformer for multi-source image fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 2071–2088.
- Vivone, G.; Deng, L. Deep learning in remote sensing image fusion: Methods, protocols, data, and future perspectives. IEEE Geosci. Remote Sens. Mag. 2025, 13, 269–310.
- Ciotola, M.; Guarino, G.; Vivone, G. Hyperspectral pansharpening: Critical review, tools, and future perspectives. IEEE Geosci. Remote Sens. Mag. 2025, 13, 311–338.
- Vivone, G.; Garzelli, A.; Xu, Y.; Liao, W.; Chanussot, J. Panchromatic and hyperspectral image fusion: Outcome of the 2022 WHISPERS hyperspectral pansharpening challenge. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2023, 16, 166–179.
- Wen, X.; Ma, H.; Li, L. A three-branch pansharpening network based on spatial and frequency domain interaction. Remote Sens. 2025, 17, 13.
- Wen, X.; Ma, H.; Li, L. A multi-stage progressive pansharpening network based on detail injection with redundancy reduction. Sensors 2024, 24, 6039.
- Jin, X.; Zhu, P.; Yu, D.; Wozniak, M.; Jiang, Q.; Wang, P.; Zhou, W. Combining depth and frequency features with Mamba for multi-focus image fusion. Inf. Fusion 2025, 124, 103355.
- Zhai, H.; Ouyang, Y.; Luo, N. MSI-DTrans: A multi-focus image fusion using multilayer semantic interaction and dynamic transformer. Displays 2024, 85, 102837.
- Ouyang, Y.; Zhai, H.; Hu, H. FusionGCN: Multi-focus image fusion using superpixel features generation GCN and pixel-level feature reconstruction CNN. Expert Syst. Appl. 2025, 262, 125665.
- Liu, J.; Wu, G.; Liu, Z.; Wang, D.; Jiang, Z.; Ma, L.; Zhong, W.; Fan, X.; Liu, R. Infrared and visible image fusion: From data compatibility to task adaption. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 2349–2369.
- Li, L.; Lv, M.; Jia, Z.; Jin, Q.; Liu, M.; Chen, L.; Ma, H. An effective infrared and visible image fusion approach via rolling guidance filtering and gradient saliency map. Remote Sens. 2023, 15, 2486.
- Zhang, X.; Chen, S.; Zhang, J. Adaptive sliding mode consensus control based on neural network for singular fractional order multi-agent systems. Appl. Math. Comput. 2022, 434, 127442.
- Zhang, J.; Ding, J.; Chai, T. Cyclic performance monitoring-based fault-tolerant funnel control of unknown nonlinear systems with actuator failures. IEEE Trans. Autom. Control 2025, 70, 6111–6118.
- Zhang, J.; Yang, G. Low-complexity tracking control of strict-feedback systems with unknown control directions. IEEE Trans. Autom. Control 2019, 64, 5175–5182.
- Li, L.; Shi, Y.; Lv, M.; Jia, Z.; Liu, M.; Zhao, X.; Zhang, X.; Ma, H. Infrared and visible image fusion via sparse representation and guided filtering in Laplacian pyramid domain. Remote Sens. 2024, 16, 3804.
- Li, H.; Yang, Z.; Zhang, Y.; Jia, W.; Yu, Z.; Liu, Y. MulFS-CAP: Multimodal fusion-supervised cross-modality alignment perception for unregistered infrared-visible image fusion. IEEE Trans. Pattern Anal. Mach. Intell. 2025, 47, 3673–3690.
- Zhang, X.; Yan, H.; He, H. Multi-focus image fusion based on fractional-order derivative and intuitionistic fuzzy sets. Front. Inf. Technol. Electron. Eng. 2020, 21, 834–843.
- Zhang, X. Deep learning-based multi-focus image fusion: A survey and a comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 4819–4838.
- Fang, L.; Wang, X. An unsupervised multi-focus image fusion method via dual-channel convolutional network and discriminator. Comput. Vis. Image Underst. 2024, 244, 104029.
- Liu, Y.; Wang, L. Multi-focus image fusion: A survey of the state of the art. Inf. Fusion 2020, 64, 71–91.
- Li, B.; Zhang, L.; Liu, J.; Peng, H. Multi-focus image fusion with parameter adaptive dual channel dynamic threshold neural P systems. Neural Netw. 2024, 179, 106603.
- Lv, M.; Jia, Z.; Li, L.; Ma, H. Multi-focus image fusion via PAPCNN and fractal dimension in NSST domain. Mathematics 2023, 11, 3803.
- Lv, M.; Li, L.; Jin, Q.; Jia, Z.; Chen, L.; Ma, H. Multi-focus image fusion via distance-weighted regional energy and structure tensor in NSCT domain. Sensors 2023, 23, 6135.
- Lin, H.; Lin, Y. Fusion2Void: Unsupervised multi-focus image fusion based on image inpainting. IEEE Trans. Circuits Syst. Video Technol. 2025, 35, 3328–3341.
- Li, L.; Ma, H.; Jia, Z. A novel multiscale transform decomposition based multi-focus image fusion framework. Multimed. Tools Appl. 2021, 80, 12389–12409.
- Li, L.; Lv, M.; Jia, Z.; Ma, H. Sparse representation-based multi-focus image fusion method via local energy in shearlet domain. Sensors 2023, 23, 2888.
- Li, L.; Si, Y.; Wang, L. A novel approach for multi-focus image fusion based on SF-PAPCNN and ISML in NSST domain. Multimed. Tools Appl. 2020, 79, 24303–24328.
- Li, H.; Yuan, M.; Li, J.; Liu, Y.; Lu, G.; Xu, Y.; Yu, Z.; Zhang, D. Focus affinity perception and super-resolution embedding for multifocus image fusion. IEEE Trans. Neural Netw. Learn. Syst. 2025, 36, 4311–4325.
- Li, L.; Song, S.; Lv, M.; Jia, Z.; Ma, H. Multi-focus image fusion based on fractal dimension and parameter adaptive unit-linking dual-channel PCNN in curvelet transform domain. Fractal Fract. 2025, 9, 157.
- Lv, M.; Song, S.; Jia, Z.; Li, L.; Ma, H. Multi-focus image fusion based on dual-channel Rybak neural network and consistency verification in NSCT domain. Fractal Fract. 2025, 9, 432.
- Li, L.; Ma, H. Saliency-guided nonsubsampled shearlet transform for multisource remote sensing image fusion. Sensors 2021, 21, 1756.
- Sengan, S.; Gugulothu, P.; Alroobaea, R. A non-sub-sampled shearlet transform-based deep learning sub band enhancement and fusion method for multi-modal images. Sci. Rep. 2025, 15, 29472.
- Vajpayee, P.; Panigrahy, C.; Kumar, A. Medical image fusion by adaptive Gaussian PCNN and improved Roberts operator. SIViP 2023, 17, 3565–3573.
- Satyanarayana, V.; Mohanaiah, P. A novel MRI PET image fusion using shearlet transform and pulse coded neural network. Sci. Rep. 2025, 15, 6349.
- Zebari, D.A.; Ibrahim, D.A.; Zeebaree, D.Q.; Mohammed, M.A.; Haron, H.; Zebari, N.A.; Damaševičius, R.; Maskeliūnas, R. Breast cancer detection using mammogram images with improved multi-fractal dimension approach and feature fusion. Appl. Sci. 2021, 11, 12122.
- Zhang, J.; Zhang, X.; Boutat, D.; Liu, D. Fractional-order complex systems: Advanced control, intelligent estimation and reinforcement learning image-processing algorithms. Fractal Fract. 2025, 9, 67.
- Wang, X.; Zhang, X.; Pedrycz, W.; Yang, S.; Boutat, D. Consensus of T-S fuzzy fractional-order, singular perturbation, multi-agent systems. Fractal Fract. 2024, 8, 523.
- Zhang, X.; Chen, Y. Admissibility and robust stabilization of continuous linear singular fractional order systems with the fractional order α: The 0 < α < 1 case. ISA Trans. 2018, 82, 42–50.
- Li, L.; Zhao, X.; Hou, H.; Zhang, X.; Lv, M.; Jia, Z.; Ma, H. Fractal dimension-based multi-focus image fusion via coupled neural P systems in NSCT domain. Fractal Fract. 2024, 8, 554.
- Zhang, X.; Liu, R.; Ren, J.; Gui, Q. Adaptive fractional image enhancement algorithm based on rough set and particle swarm optimization. Fractal Fract. 2022, 6, 100.
- Zhang, X.; Dai, L. Image enhancement based on rough set and fractional order differentiator. Fractal Fract. 2022, 6, 214.
- Gwendal, B.; Godefroy, B.; Sébastien, R.; Mohsen, A.; Emmanuel, D. A comprehensive survey on image fusion: Which approach fits which need. Inf. Fusion 2026, 126, 103594.
- Song, Y.; Xie, X.; Guo, B.; Xiong, X.; Li, P. MLP-MFF: Lightweight pyramid fusion MLP for ultra-efficient end-to-end multi-focus image fusion. Sensors 2025, 25, 5146.
- Liu, J.; Li, S.; Liu, H.; Dian, R.; Wei, X. A lightweight pixel-level unified image fusion network. IEEE Trans. Neural Netw. Learn. Syst. 2024, 35, 18120–18132.
- Cheng, C.; Xu, T.; Wu, X. FusionBooster: A unified image fusion boosting paradigm. Int. J. Comput. Vis. 2025, 133, 3041–3058.
- Jiang, S.; Yu, S. Optimizing multi-focus image fusion through convolutional attention vision transformers and spatial consistency models. Appl. Soft Comput. 2025, 181, 113507.
- Wu, P.; Tang, J. MHDBN: Mamba-based hybrid dual-branch network for multi-focus image fusion. Neural Netw. 2025, 192, 107916.
- Bai, H.; Zhao, Z.; Zhang, J. ReFusion: Learning image fusion from reconstruction with learnable loss via meta-learning. Int. J. Comput. Vis. 2025, 133, 2547–2567.
- da Cunha, A.L.; Zhou, J.; Do, M.N. The nonsubsampled contourlet transform: Theory, design, and applications. IEEE Trans. Image Process. 2006, 15, 3089–3101.
- Yang, B.; Sun, Y.; Li, Y. Image fusion with structural saliency measure and content adaptive consistency verification. J. Electron. Imaging 2020, 29, 013014.
- Panigrahy, C.; Seal, A.; Mahato, N.K. Fractal dimension based parameter adaptive dual channel PCNN for multi-focus image fusion. Opt. Lasers Eng. 2020, 133, 106141.
- Li, X.; Zhou, F.; Tan, H. Multi-focus image fusion based on nonsubsampled contourlet transform and residual removal. Signal Process. 2021, 184, 108062.
- Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary-based sparse representation. Inf. Fusion 2015, 25, 72–84.
- Zhang, H.; Le, Z. MFF-GAN: An unsupervised generative adversarial network with adaptive and gradient joint constraints for multi-focus image fusion. Inf. Fusion 2021, 66, 40–53.
- Das, S.; Kundu, M. A neuro-fuzzy approach for medical image fusion. IEEE Trans. Biomed. Eng. 2013, 60, 3347–3353.
- Xu, H.; Ma, J.; Jiang, J. U2Fusion: A unified unsupervised image fusion network. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 502–518.
- Hu, X.; Jiang, J.; Wang, C.; Jiang, K.; Liu, X.; Ma, J. Balancing task-invariant interaction and task-specific adaptation for unified image fusion. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), Honolulu, HI, USA, 19–23 October 2025; pp. 1–13.
- Wang, X.; Fang, L.; Zhao, J.; Pan, Z.; Li, H.; Li, Y. MMAE: A universal image fusion method via mask attention mechanism. Pattern Recognit. 2025, 158, 111041.
- Xie, X.; Guo, B.; Li, P. SwinMFF: Toward high-fidelity end-to-end multi-focus image fusion via swin transformer-based network. Vis. Comput. 2025, 41, 3883–3906.
- Zhang, Z.; Li, H.; Xu, T.; Wu, X.; Kittler, J. DDBFusion: A unified image decomposition and fusion framework based on dual decomposition and Bézier curves. Inf. Fusion 2025, 114, 102655.
- Qu, X.; Yan, J.; Xiao, H. Image fusion algorithm based on spatial frequency-motivated pulse coupled neural networks in nonsubsampled contourlet transform domain. Acta Autom. Sin. 2008, 34, 1508–1514.
- Liu, Z.; Blasch, E.; Xue, Z. Objective assessment of multiresolution image fusion algorithms for context enhancement in night vision: A comparative study. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 94–109.
- Haghighat, M.; Razian, M. Fast-FMI: Non-reference image fusion metric. In Proceedings of the IEEE 8th International Conference on Application of Information and Communication Technologies, Astana, Kazakhstan, 15–17 October 2014; pp. 424–426.
- Wu, T.; Zhao, R. Efficient Mamba-attention network for remote sensing image super-resolution. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5627814.
- Wu, T.; Zhao, R. Lightweight remote sensing image super-resolution via background-based multiscale feature enhancement network. IEEE Geosci. Remote Sens. Lett. 2024, 21, 7509405.
- Wang, Z.; Li, L.; Xue, Y. FeNet: Feature enhancement network for lightweight remote-sensing image super-resolution. IEEE Trans. Geosci. Remote Sens. 2022, 60, 5622112.
- Zhang, X. Benchmarking and comparing multi-exposure image fusion algorithms. Inf. Fusion 2021, 74, 111–131.
- Yin, M.; Liu, X.; Liu, Y.; Chen, X. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans. Instrum. Meas. 2019, 68, 49–64.
- Tang, W.; Liu, Y.; Cheng, J.; Li, C.; Chen, X. Green fluorescent protein and phase contrast image fusion via detail preserving cross network. IEEE Trans. Comput. Imaging 2021, 7, 584–597.
- Liu, Y.; Wang, Z. A practical pan-sharpening method with wavelet transform and sparse representation. In Proceedings of the 2013 IEEE International Conference on Imaging Systems and Techniques (IST), Beijing, China, 22–23 October 2013; pp. 288–293.
- Li, J.; Zhang, J.; Yang, C.; Liu, H.; Zhao, Y.; Ye, Y. Comparative analysis of pixel-level fusion algorithms and a new high-resolution dataset for SAR and optical image fusion. Remote Sens. 2023, 15, 5514.
- Li, J.; Zheng, K.; Gao, L.; Han, Z.; Li, Z.; Chanussot, J. Enhanced deep image prior for unsupervised hyperspectral image super-resolution. IEEE Trans. Geosci. Remote Sens. 2025, 63, 5504218.
- Vivone, G. Multispectral and hyperspectral image fusion in remote sensing: A survey. Inf. Fusion 2023, 89, 405–417.
- Li, L.; Ma, H.; Zhang, X.; Zhao, X.; Lv, M.; Jia, Z. Synthetic aperture radar image change detection based on principal component analysis and two-level clustering. Remote Sens. 2024, 16, 1861.
| Dataset | Levels | Direction Number | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| Lytro | 1 | 2 | 0.6149 | 0.6205 | 0.8884 | 0.6085 | 6.2476 | 0.8247 | 0.8352 | 0.8707 | 0.6780 | 35.0198 |
| Lytro | 2 | 2, 2 | 0.6894 | 0.6399 | 0.8941 | 0.6840 | 6.4190 | 0.8259 | 0.8567 | 0.9207 | 0.7432 | 34.8660 |
| Lytro | 3 | 2, 2, 4 | 0.7255 | 0.7009 | 0.8987 | 0.7214 | 6.7249 | 0.8279 | 0.8965 | 0.9425 | 0.7916 | 34.9951 |
| Lytro | 4 | 2, 2, 4, 4 | 0.7307 | 0.7223 | 0.8991 | 0.7268 | 6.8351 | 0.8287 | 0.9107 | 0.9472 | 0.7966 | 35.0579 |
| MFI-WHU | 1 | 2 | 0.6488 | 0.7493 | 0.8664 | 0.6423 | 6.2429 | 0.8253 | 0.8558 | 0.9380 | 0.6978 | 35.0114 |
| MFI-WHU | 2 | 2, 2 | 0.7093 | 0.7644 | 0.8755 | 0.7044 | 6.8523 | 0.8294 | 0.9376 | 0.9646 | 0.7590 | 35.2781 |
| MFI-WHU | 3 | 2, 2, 4 | 0.7137 | 0.7672 | 0.8763 | 0.7086 | 7.0191 | 0.8305 | 0.9596 | 0.9606 | 0.7621 | 35.3111 |
| MFI-WHU | 4 | 2, 2, 4, 4 | 0.7136 | 0.7670 | 0.8763 | 0.7087 | 7.0174 | 0.8307 | 0.9593 | 0.9596 | 0.7592 | 35.3750 |
| Method | Year | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RPCNN | 2013 | 0.7103 | 0.6799 | 0.8972 | 0.7058 | 6.7075 | 0.8280 | 0.8945 | 0.9208 | 0.7616 | 34.7486 |
| MFFGAN | 2021 | 0.6642 | 0.6457 | 0.8915 | 0.6592 | 6.0604 | 0.8237 | 0.8047 | 0.8821 | 0.7125 | 33.5508 |
| U2Fusion | 2022 | 0.6143 | 0.5682 | 0.8844 | 0.6093 | 5.7765 | 0.8221 | 0.7725 | 0.7912 | 0.6657 | 31.2098 |
| TITA | 2025 | 0.7266 | 0.6953 | 0.8982 | 0.7225 | 6.7304 | 0.8279 | 0.8976 | 0.9219 | 0.7935 | 34.7206 |
| MMAE | 2025 | 0.6326 | 0.6419 | 0.8846 | 0.6279 | 5.2676 | 0.8197 | 0.6995 | 0.8608 | 0.6952 | 33.7963 |
| SwinMFF | 2025 | 0.7006 | 0.6419 | 0.8943 | 0.6969 | 5.7303 | 0.8219 | 0.7627 | 0.8815 | 0.7705 | 30.2613 |
| DDBFusion | 2025 | 0.6664 | 0.6335 | 0.8819 | 0.6596 | 6.1463 | 0.8242 | 0.8233 | 0.8738 | 0.7007 | 36.3935 |
| ReFusion | 2025 | 0.7055 | 0.6811 | 0.8949 | 0.7000 | 6.6689 | 0.8275 | 0.8878 | 0.9121 | 0.7663 | 34.0662 |
| Proposed | | 0.7307 | 0.7223 | 0.8991 | 0.7268 | 6.8351 | 0.8287 | 0.9107 | 0.9472 | 0.7966 | 35.0579 |
| Method | Year | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| RPCNN | 2013 | 0.7003 | 0.7271 | 0.8745 | 0.6938 | 6.7474 | 0.8287 | 0.9221 | 0.9328 | 0.7383 | 35.7791 |
| MFFGAN | 2021 | 0.6427 | 0.6329 | 0.8684 | 0.6367 | 5.6832 | 0.8222 | 0.7709 | 0.8890 | 0.7041 | 31.6060 |
| U2Fusion | 2022 | 0.5502 | 0.5156 | 0.8565 | 0.5447 | 5.1498 | 0.8194 | 0.6991 | 0.7830 | 0.6212 | 30.1022 |
| TITA | 2025 | 0.7041 | 0.7640 | 0.8757 | 0.6992 | 6.6627 | 0.8283 | 0.9107 | 0.9394 | 0.7660 | 34.7039 |
| MMAE | 2025 | 0.5916 | 0.6813 | 0.8628 | 0.5852 | 4.9524 | 0.8188 | 0.6730 | 0.8646 | 0.6637 | 32.6619 |
| SwinMFF | 2025 | 0.6802 | 0.6538 | 0.8728 | 0.6732 | 5.3873 | 0.8208 | 0.7323 | 0.8899 | 0.7311 | 28.7265 |
| DDBFusion | 2025 | 0.6565 | 0.6867 | 0.8634 | 0.6482 | 5.8375 | 0.8229 | 0.8015 | 0.8803 | 0.7020 | 38.0224 |
| ReFusion | 2025 | 0.6878 | 0.7429 | 0.8730 | 0.6819 | 6.5044 | 0.8272 | 0.8864 | 0.9318 | 0.7574 | 32.7130 |
| Proposed | | 0.7136 | 0.7670 | 0.8763 | 0.7087 | 7.0174 | 0.8307 | 0.9593 | 0.9596 | 0.7592 | 35.3750 |
| Dataset | Setting | | | | | | | | | | |
|---|---|---|---|---|---|---|---|---|---|---|---|
| Lytro | W/o FD | 0.7273 | 0.7206 | 0.8988 | 0.7231 | 6.8338 | 0.8287 | 0.9106 | 0.9433 | 0.7910 | 35.0874 |
| Lytro | W/ FD | 0.7307 | 0.7223 | 0.8991 | 0.7268 | 6.8351 | 0.8287 | 0.9107 | 0.9472 | 0.7966 | 35.0579 |
| MFI-WHU | W/o FD | 0.7079 | 0.7628 | 0.8757 | 0.7027 | 6.9393 | 0.8302 | 0.9487 | 0.9567 | 0.7530 | 35.3496 |
| MFI-WHU | W/ FD | 0.7136 | 0.7670 | 0.8763 | 0.7087 | 7.0174 | 0.8307 | 0.9593 | 0.9596 | 0.7592 | 35.3750 |
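The FD factor toggled in the ablation above is the fractal dimension used as a focus/activity measure. A minimal box-counting estimator for a binary image patch can illustrate the idea; this is a simplified sketch (the paper's Equation (11) may use a differential box-counting variant on gray-level patches instead).

```python
import numpy as np

def box_counting_dimension(binary, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary image by box counting:
    count occupied s-by-s boxes N(s) and fit log N(s) ~ -D log s."""
    h, w = binary.shape
    counts = []
    for s in sizes:
        # trim to a multiple of s, then tile into s-by-s blocks
        hh, ww = h - h % s, w - w % s
        blocks = binary[:hh, :ww].reshape(hh // s, s, ww // s, s)
        # a box is occupied if any pixel inside it is set
        counts.append(np.count_nonzero(blocks.any(axis=(1, 3))))
    # least-squares slope of log N(s) versus log s; D is its negation
    logs = np.log(np.asarray(sizes, dtype=float))
    logc = np.log(np.asarray(counts, dtype=float))
    slope, _ = np.polyfit(logs, logc, 1)
    return -slope
```

Sanity checks match intuition: a completely filled patch estimates D close to 2, and a single straight line of pixels estimates D close to 1; richer texture (sharper focus) tends to yield higher D, which is what makes FD useful as an activity measure.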
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2025 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Lv, M.; Jia, Z.; Li, L.; Ma, H. Fractal Dimension-Based Multi-Focus Image Fusion via AGPCNN and Consistency Verification in NSCT Domain. Fractal Fract. 2026, 10, 1. https://doi.org/10.3390/fractalfract10010001