Diagnostics · Review · Open Access

21 February 2023

A Non-Conventional Review on Multi-Modality-Based Medical Image Fusion

1. Department of CSE, Graphic Era Deemed to be University, Dehradun 248002, India
2. School of Computer Science Engineering and Technology, Bennett University, Greater Noida 201310, India
3. Center for Artificial Intelligence, Prince Mohammad Bin Fahd University, Khobar 34754, Saudi Arabia
* Author to whom correspondence should be addressed.
This article belongs to the Special Issue Recent Trends in Molecular Image-Guided Theranostic and Personalized Medicine

Abstract

Today, medical images play a crucial role in obtaining relevant medical information for clinical purposes. However, the quality of medical images must be analyzed and improved, since various factors degrade it during medical image reconstruction. To obtain the most clinically relevant information, multi-modality-based image fusion is beneficial. Numerous multi-modality-based image fusion techniques exist in the literature, each with its own assumptions, merits, and limitations. This paper critically analyzes a substantial body of non-conventional work on multi-modality-based image fusion. Researchers often need guidance in understanding multi-modality-based image fusion and in choosing an approach appropriate to their specific application. Hence, this paper briefly introduces multi-modality-based image fusion and its non-conventional methods, and it highlights the merits and downsides of multi-modality-based image fusion.

1. Introduction

There is currently a wide range of image-processing techniques available to generate optimal image quality for diagnostic purposes, and the best quality image is crucial to gaining good visual information. The fundamental strategy in image processing is to convert an analog image into a digital one and then apply signal-processing operations that take an image as input and return a processed image, or a set of derived images, as output [1,2,3,4,5].
Various kinds of medical images, such as CT, PET, and MR images, are used for different applications. The pixel is the basic element of any image; this small picture element has spatial coordinates and an intensity or color value [6,7,8,9,10]. Digital images [11,12,13,14,15,16,17] are obtained by sampling a scene in the relevant space and time. Processing operations can be applied block-wise, and the pixel-to-pixel operation on the image is the primary means of resolving issues such as overlapping pixels [18,19,20,21,22].
Image-processing operations act on the signal distribution or characteristics of an image to extract better image quality and significant information. A two-dimensional (2D) digital image can be represented as a function f(x, y), where x and y are spatial coordinates and the amplitude at any pair of coordinates is the intensity at that point. Digital signal-processing operations are performed on such digital images [23,24,25,26,27,28]. Typical operations are image enhancement, image restoration, image compression, and image segmentation [29,30,31,32,33]; of these, image enhancement concentrates on improving image quality during processing. Image fusion merges the complementary information of two or more images into a single output image and is widely used in applications related to remote sensing, medical imaging, the military, and astronomy [34,35,36,37,38,39,40]. By combining images, fusion enhances their information content, and fusion methods are critical for improving the performance of object recognition systems that combine many image sources, such as satellite, airborne, and ground-based systems, across different datasets [41,42,43,44,45]. The advantages, disadvantages, and applications of the fusion process are summarized in Table 1.
Table 1. Merits, demerits, and application of image fusion.
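As a concrete illustration of the basic idea, the minimal Python sketch below fuses two co-registered source images at the pixel level; the weighted-averaging and choose-maximum rules shown here are generic illustrative assumptions, not the specific methods surveyed later.

import numpy as np

def average_fusion(img_a: np.ndarray, img_b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Weighted pixel-wise averaging of two co-registered, equal-size 8-bit images."""
    assert img_a.shape == img_b.shape, "source images must be co-registered and equal in size"
    fused = w * img_a.astype(np.float64) + (1.0 - w) * img_b.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)

def maximum_fusion(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Choose-maximum rule: keep the brighter (more salient) pixel from either source."""
    return np.maximum(img_a, img_b)

Simple averaging tends to reduce the contrast of complementary details, which is one reason the block-wise and transform-domain rules described in Section 2 are usually preferred.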

Major Contributions

Some of the most important contributions of this non-conventional multi-modal medical image fusion survey are listed below:
  • A detailed introduction to non-conventional multi-modal medical image fusion techniques is presented. Most of the works selected for this survey are recent;
  • In addition, an analysis of non-conventional strategies for fusing many types of medical images is performed. Using multi-modal source images generated from CT, SPECT, MR-T1, and MR-T2 scans, six typical medical image fusion algorithms are compared and contrasted using five prominent objective metrics;
  • Some future research potentials for non-conventional multi-modal image fusion are proposed, while the existing difficulties in this area are highlighted.
The rest of this paper is organized as follows: Section 2 presents a brief introduction to the background of the techniques used in multi-modality image fusion. Section 3 reviews the related work on medical image fusion. A comparative analysis of the non-conventional related work is critically discussed in Section 4. The outcomes, with visual analysis and performance metrics, are discussed in Section 5. Section 6 concludes the paper with a future perspective.

2. Multi-Modality Image Fusion

Multi-modality image fusion combines images taken from different medical sources and equipment to acquire more detailed and reliable information. In recent trends, multi-modality radiography has been used in medical diagnosis and treatment, either to diagnose or to rule out disease. Medical images are classified into several categories: they can be distinguished by the human body functions or by the physical structure they depict, and functional images have a relatively low spatial resolution but can provide information about blood circulation and visceral metabolic rate. As summarized in Table 2, in MR and CT image fusion, CT images show the physical details while MR images show the functional details.
In multi-modality medical image fusion, the fusion rules are applied either in the spatial domain or in the wavelet domain. Figure 1 illustrates an image fusion method using the block-wise focus measure rule in the spatial domain: both source images are divided into blocks, a focus measure is computed for each block, and the fused image is obtained by applying the choose-maxima rule. Similarly, Figure 2 illustrates an image fusion method using the decision map rule in the wavelet domain, where fusion is performed by decomposing each image into approximation and detail parts. Minimal sketches of both schemes are given after the figures below.
Figure 1. An illustration of the image fusion method using block-wise focus measure rule in the spatial domain.
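A minimal Python sketch of this block-wise scheme is given below; the 8 × 8 block size and the variance-based focus measure are illustrative assumptions rather than the exact settings of the cited works.

import numpy as np

def block_focus_fusion(img_a: np.ndarray, img_b: np.ndarray, block: int = 8) -> np.ndarray:
    """Block-wise fusion: for each block, keep the source block with the larger focus measure."""
    assert img_a.shape == img_b.shape
    h, w = img_a.shape
    fused = np.empty_like(img_a)
    for i in range(0, h, block):
        for j in range(0, w, block):
            pa = img_a[i:i + block, j:j + block]
            pb = img_b[i:i + block, j:j + block]
            # variance serves as a simple sharpness/focus measure; choose-maxima rule
            fused[i:i + block, j:j + block] = pa if pa.var() >= pb.var() else pb
    return fused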
Figure 2. An illustration of the image fusion method using decision map rule in the wavelet domain.
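Similarly, the wavelet-domain scheme of Figure 2 can be sketched as follows using the PyWavelets package; the single-level "db1" decomposition, the averaging rule for the approximation part, and the absolute-maximum decision map for the detail parts are illustrative assumptions.

import numpy as np
import pywt

def wavelet_decision_fusion(img_a: np.ndarray, img_b: np.ndarray, wavelet: str = "db1") -> np.ndarray:
    """Single-level DWT fusion: average approximations, select details via an absolute-maximum decision map."""
    ca_a, (ch_a, cv_a, cd_a) = pywt.dwt2(img_a.astype(np.float64), wavelet)
    ca_b, (ch_b, cv_b, cd_b) = pywt.dwt2(img_b.astype(np.float64), wavelet)
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)  # decision map on detail coefficients
    fused_coeffs = (0.5 * (ca_a + ca_b), (pick(ch_a, ch_b), pick(cv_a, cv_b), pick(cd_a, cd_b)))
    return pywt.idwt2(fused_coeffs, wavelet)  # reconstruct the fused image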
Medical imaging, which provides a representation of the objects of interest, plays a crucial role in medical treatment and is an active area of research [46,47,48]. The complete spectrum of digital image processing is helpful in medical diagnosis. For good treatment, radiologists have to combine information about organs or diseases from several images, because design constraints prevent a single instrument from providing all of this information. Superior image quality under demanding imaging conditions requires high spatial and spectral information in a single image [49,50,51,52,53]. In the medical field, the significance of medical images differs from that of other images: the body organs or living tissues present in medical images can be analyzed correctly by improving the heterogeneous areas of the images. Objects obtained with identical modality and size may vary from one patient to another; they are defined through a standardized acquisition protocol in terms of shape, internal structure, and sometimes various views of the same patient at the same time [54,55]. In biological anatomy, object delineation cannot be separated from the background. Automatic image analysis in medicine should not produce spurious measurements; rather, a robust algorithm simply rejects images that cannot be handled properly. This illustrates how image fusion enhances the quality of the image: in multi-modality medical image fusion, the objective is to improve overall image quality by decreasing error and redundancy [56,57,58,59]. Clinical detection in medical imaging is used for treatment and problem assessment.
Recent research trends show that image fusion leads to better outcomes in the latest innovations in medicine, remote sensing, and the military. With high resolution, robustness, and cost-effectiveness, image fusion techniques continue to generate critical information. Extracting the crucial data remains a challenging task because of the high cost of instruments and the large amount of blurred data present in the images, so it is essential to understand the concept of image fusion: the combination of two or more different or identical images to develop a new image that contains more information than the individual source images. A central aspect of image fusion is increasing the resolution of an image from various low-resolution inputs. This objective has already been pursued in medical research; for example, coronary artery disease (CAD) arises from a lack of blood supply to the heart, and image transparency is required for this type of disease. Physicians also assess patients with brain tumors using brain images fused across several modalities. From the researchers' perspective, image fusion is both exciting and challenging. Today, image fusion plays a crucial role in image classification for various applications such as satellite imaging, medical imaging, aviation, concealed-weapon detection, multi-focus image fusion, digital cameras, battle monitoring, defense situational awareness, target tracking in CCTV surveillance, intelligence gathering, person authentication, geo-informatics, etc.

5. Experimental Results

The experimental evaluation was performed using MATLAB R2022a, with all images at a resolution of 512 × 512. The multi-modality results are shown in Figure 3, Figure 4, Figure 5, Figure 6 and Figure 7. The input source images are shown in Figure 3a,b; all images are fused using the established structures of the non-conventional methods [44,47,48,49,51,53], and Figure 3c–h depicts their results, respectively. Similarly to ref. [53], different modalities such as MR-T2 and SPECT images are also investigated. In Figure 3, the result of ref. [44] is satisfactory overall, but its edge and texture preservation is not; the contrast is well-preserved and the texture is retained in some homogeneous areas, although sharpness is missing in the heterogeneous areas. The result of ref. [47] is acceptable, yet it also falls short on edge and texture preservation; the contrast is maintained very well and the texture is conserved in homogeneous regions, but brightness is lacking in the heterogeneous areas. The outcome of ref. [49] is satisfactory but likewise weak in edge and texture preservation; the contrast of ref. [48] is preserved very well, the texture is retained in homogeneous regions (with changes elsewhere), and the heterogeneous region lacks brightness. The result of ref. [51] is satisfactory, although its edge and texture preservation does not meet expectations; the contrast is maintained throughout, the texture is preserved in consistent regions, and the heterogeneous region again suffers from a lack of brightness. Finally, the result of ref. [53] is acceptable but loses edges and textures; the contrast is maintained well, the texture is preserved in homogeneous areas, and the non-homogeneous areas are not very bright.
Figure 3. Multi-modality medical image fusion results: (a) Source image: CT; (b) Source image: MR; (c) Result of [44]; (d) Result of [47]; (e) Result of [48]; (f) Result of [49]; (g) Result of [51]; (h) Result of [53].
Figure 4. Multi-modality medical image fusion results: (a) Source image: CT; (b) Source image: MR; (c) Result of [44]; (d) Result of [47]; (e) Result of [48]; (f) Result of [49]; (g) Result of [51]; (h) Result of [53].
Figure 5. (a) Source image: MR-T2; (b) Source image: SPECT; (c) Result of [44]; (d) Result of [47]; (e) Result of [48]; (f) Result of [49]; (g) Result of [51]; (h) Result of [53].
Figure 6. (a) Source image: MR; (b) Source image: PET; (c) Result of [44]; (d) Result of [47]; (e) Result of [48]; (f) Result of [49]; (g) Result of [51]; (h) Result of [53].
Figure 7. Graphical result analysis: SSIM of the comparative methods across the different medical image datasets: CNN-FM [44], FCT-GA [47], OWM-SO [49], NSCT-CWT [51], JSM-CD [53], and JB-LGE [48].
Figure 4a,b shows the MR-T2 and SPECT source images. These two images are fused using existing structures such as [44,47,48,49,51,53], and Figure 4c–h expresses their results, respectively. A dataset with different modalities is thus also used to test the most current methods. As can be seen in Figure 4, the outcome of ref. [44] is acceptable but falls short in preserving edges and textures; the contrast distinctions remain clear and the original texture is maintained in several consistent spots, but clarity is lacking in the highly varying regions. The output of ref. [47] is passable yet does not meet expectations for edge and texture preservation; the contrast is preserved well and the texture is maintained in consistent areas, but the heterogeneous region suffers from a severe shortage of illumination. The result of ref. [49] is satisfactory, although its edge and texture preservation falls short; the contrast of ref. [48] is retained very well, the texture is maintained with care in consistent areas (with texture shifts elsewhere), and illumination is inadequate in the heterogeneous areas. The output of ref. [51] is good but does not fully preserve edges and textures; the contrast is maintained to a satisfactory degree, continuity and texture are preserved skillfully in several areas, whereas the heterogeneous zone shows uneven brightness. The output of ref. [53] is acceptable but fails to maintain edges and textures; the distinctions are well-preserved, uniformity and texture are maintained in some spots, but the non-uniform regions have low luminosity.
The latest methods were also tested on the MR-T1 and MR-T2 images shown in Figure 5a,b; Figure 5c–h shows the results of [44,47,48,49,51,53], respectively. Visual inspection indicates that the approach described in [53] is stronger in terms of edges and content definition than the existing methods. The overall visual quality of [51,53] is comparatively better, with [53] outperforming [51]. Refs. [44,47] also show good texture preservation, although artifacts are observed in some cases, and ref. [49] shows good edge preservation and smoothness in uniform and non-uniform areas. In detail, ref. [44] preserves edges and textures poorly in Figure 5, although contrast differences remain and the original texture is preserved in various places; clarity is lacking in the heterogeneous regions. The output of ref. [47] is adequate but does not preserve the edges and texture well; contrast is maintained and the texture is consistent in certain spots, but the heterogeneous zone lacks light. Ref. [49] also preserves edges and textures poorly; the contrast of ref. [48] is preserved very well and the original texture is carefully maintained in some portions (with texture changes elsewhere), but light levels are inadequate in the heterogeneous regions. The result of ref. [51] is good but fails to fully preserve the image’s edges and textures; contrast is maintained and continuity and texture are preserved in several areas, while the heterogeneous zone shows inconsistent brightness. Ref. [53] produces a passable output that loses some edges and textures; the distinctions are well-maintained, and homogeneity and texture are preserved in some areas, but the non-uniform zone has poor brightness.
Input source images are shown in Figure 6a,b, and all images are fused with the established structures of the non-conventional methods [44,47,48,49,51,53]; Figure 6c–h expresses their results, respectively. In this regard, the result of ref. [53] appears better than the other existing methods, and the result of ref. [51] is also satisfactory, whereas the results of refs. [44,47,49] are not. Figure 6 shows that while the result of ref. [44] is passable, it fails to adequately preserve edges and textures; contrast disparities persist and the original texture is preserved in a few regular places, but clarity is lacking in highly varying regions. The output of ref. [47] is serviceable yet falls short in maintaining edges and textures; the contrast is kept well and the texture is well-preserved and uniform in some areas, but the heterogeneous area is severely lacking in light. The outcome of ref. [49] is passable but does not adequately preserve the image’s edges and textures; the contrast of ref. [48] is preserved well and the original texture is maintained with meticulous attention to detail in consistent places (with changes elsewhere), but there is not enough illumination in the heterogeneous regions. Ref. [51] produces a good image but does not meet expectations for edge and texture preservation; texture and continuity are maintained skillfully in a number of key locations, whereas the heterogeneous zone shows varying luminosity. Finally, ref. [53] gives acceptable output but does not preserve the edges or textures; clear delineations between elements remain, and the surface uniformity and texture are preserved in some areas, although the brightness is very low in highly textured areas.
Visual results alone are not sufficient to evaluate the performance of fusion methods. Hence, some popular performance metrics are used to evaluate the multi-modality medical image fusion algorithms: mutual information (MI), the edge-based index Q(ABF), spatial frequency (SF), SSIM, and NIQE. The results are evaluated and compared in Table 3, Table 4 and Table 5.
Table 3. Performance evaluation of results using the Figure 1 input image dataset.
Table 4. Performance evaluation of results using the Figure 2 input image dataset.
Table 5. Performance evaluation of results using the Figure 3 input image dataset.
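For reference, the Python sketch below (using NumPy and scikit-image) shows how three of these metrics can be computed for a fused image against one source image; the 256-bin joint histogram for MI and the 8-bit data range for SSIM are illustrative assumptions, and in practice full-reference scores are computed against both source images and combined.

import numpy as np
from skimage.metrics import structural_similarity as ssim

def mutual_information(a: np.ndarray, b: np.ndarray, bins: int = 256) -> float:
    """Mutual information estimated from the joint gray-level histogram of two images."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])))

def spatial_frequency(img: np.ndarray) -> float:
    """Spatial frequency: combined energy of row-wise and column-wise intensity differences."""
    img = img.astype(np.float64)
    rf = np.sqrt(np.mean(np.diff(img, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(img, axis=0) ** 2))  # column frequency
    return float(np.hypot(rf, cf))

def fusion_scores(source: np.ndarray, fused: np.ndarray) -> dict:
    """Objective scores of a fused image with respect to one source image."""
    return {"MI": mutual_information(source, fused),
            "SF": spatial_frequency(fused),
            "SSIM": ssim(source, fused, data_range=255)}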
For a more critical analysis of performance, the results are also evaluated on a noisy image dataset, as shown in Table 6. From Table 3, Table 4, Table 5 and Table 6, it can be seen that the different methods provide better results on different performance metrics.
Table 6. Performance evaluation of results using Figure 3 input image dataset with Gaussian noise.
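A noisy test set of this kind can be generated along the following lines (a minimal Python sketch); the zero-mean additive model and the standard deviation of 10 gray levels are illustrative assumptions, since the exact noise parameters are not stated here.

import numpy as np

def add_gaussian_noise(img: np.ndarray, sigma: float = 10.0, seed: int = 0) -> np.ndarray:
    """Add zero-mean Gaussian noise to an 8-bit image and clip back to [0, 255]."""
    rng = np.random.default_rng(seed)
    noisy = img.astype(np.float64) + rng.normal(0.0, sigma, img.shape)
    return np.clip(noisy, 0, 255).astype(np.uint8)

# The noisy sources are then fused with each method under test and the same
# objective metrics (e.g., fusion_scores from the earlier sketch) are recomputed.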
Additionally, a graphical result analysis is shown in Figure 7, where the SSIM values of the comparative methods are analyzed across the different medical image datasets. In most cases, refs. [51,53] provide better results than the others.

7. Conclusions

This comparative survey presents a non-conventional, multimodality-based analysis of diagnostic image fusion. A fused image intended for human interpretation should offer high contrast, adequate pixel density, and clear edge detail, and should emphasize contrast, edge, and texture information. The amount of noise introduced and the amount of information retained in the fused image indicate how much data is carried over from the original images. The results imply that non-conventional approaches operating in the transform domain achieve better results than the various spatial-domain architectures. In addition to the visual results, the performance measurements confirm that transform-domain strategies produce better results than analogous spatial-domain schemes.

Author Contributions

Conceptualization, M.D.; methodology, P.S.; software, V.R.; validation, A.M. All authors have read and agreed to the published version of the manuscript.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, H.; Manjunath, B.S.; Mitra, S.K. Multisensor image fusion using the wavelet transform. Graph. Models Image Process. 1995, 57, 235–245. [Google Scholar] [CrossRef]
  2. Shu-Long, Z. Image fusion using wavelet transform. Int. Arch. Photogramm. Remote Sens. Spat. Inf. Sci. 2002, 34, 552–556. [Google Scholar]
  3. Atrey, P.K.; Hossain, M.A.; El Saddik, A.; Kankanhalli, M.S. Multimodal fusion for multimedia analysis: A survey. Multimed. Syst. 2010, 16, 345–379. [Google Scholar] [CrossRef]
  4. Mittal, A.; Moorthy, A.K.; Bovik, A.C. No-reference image quality assessment in the spatial domain. IEEE Trans. Image Process. 2012, 21, 4695–4708. [Google Scholar] [CrossRef] [PubMed]
  5. Singh, P.; Diwakar, M.; Cheng, X.; Shankar, A. A new wavelet-based multi-focus image fusion technique using method noise and anisotropic diffusion for real-time surveillance application. J. Real-Time Image Process. 2021, 18, 1051–1068. [Google Scholar] [CrossRef]
  6. Diwakar, M.; Tripathi, A.; Joshi, K.; Sharma, A.; Singh, P.; Memoria, M.; Kumar, N. A comparative review: Medical image fusion using SWT and DWT. Mater. Today Proc. 2020, 37 Pt 2, 3411–3416. [Google Scholar] [CrossRef]
  7. Sahu, D.K.; Parsai, M.P. Different image fusion techniques–A critical review. Int. J. Mod. Eng. Res. IJMER 2012, 2, 4298–4301. [Google Scholar]
  8. Ramandeep, R.K. Review on different aspects of image fusion for medical imaging. Int. J. Sci. Res. 2014, 3, 1887–1889. [Google Scholar]
  9. James, A.P.; Dasarathy, B.V. Medical image fusion: A survey of the state of the art. Inf. Fusion 2014, 19, 4–19. [Google Scholar] [CrossRef]
  10. Indira, K.P.; Hemamalini, R.R. Analysis on image fusion techniques for medical applications. Int. J. Adv. Res. Electr. Electron. Instrum. Eng. 2014, 3, 2278–8875. [Google Scholar]
  11. Bhatnagar, G.; Wu, Q.J.; Liu, Z. A new contrast based multimodal medical image fusion framework. Neurocomputing 2015, 157, 143–152. [Google Scholar] [CrossRef]
  12. Diwakar, M.; Singh, P.; Shankar, A.; Nayak, S.R.; Nayak, J.; Vimal, S.; Singh, R.; Sisodia, D. Directive clustering contrast-based multi-modality medical image fusion for smart healthcare system. Netw. Model. Anal. Health Inform. Bioinform. 2022, 11, 15. [Google Scholar] [CrossRef]
  13. Deshmukh, D.P.; Malviya, A.V. A Review On: Image Fusion Using Wavelet Transform. Int. J. Eng. Trends Technol. 2015, 21, 376–379. [Google Scholar] [CrossRef]
  14. Bhavana, V.; Krishnappa, H.K. Multi-modality medical image fusion using discrete wavelet transform. Procedia Comput. Sci. 2015, 70, 625–631. [Google Scholar] [CrossRef]
  15. Ma, W.; Wang, K.; Li, J.; Yang, S.X.; Li, J.; Song, L.; Li, Q. Infrared and Visible Image Fusion Technology and Application: A Review. Sensors 2023, 23, 599. [Google Scholar] [CrossRef]
  16. El-Gamal, F.E.Z.A.; Elmogy, M.; Atwan, A. Current trends in medical image registration and fusion. Egypt. Inform. J. 2016, 17, 99–124. [Google Scholar] [CrossRef]
  17. Singh, P.; Diwakar, M.; Chakraborty, A.; Jindal, M.; Tripathi, A.; Bajal, E. A non-conventional review on image fusion techniques. In Proceedings of the 2021 IEEE 8th Uttar Pradesh Section International Conference on Electrical, Electronics and Computer Engineering, UPCON 2021, Dehradun, India, 11–13 November 2021; IEEE: Piscataway, NJ, USA, 2022. [Google Scholar] [CrossRef]
  18. Tiwari, A.K.; Pachori, R.B.; Kanhangad, V.; Panigrahi, B.K. Automated diagnosis of epilepsy using key-point-based local binary pattern of EEG signals. IEEE J. Biomed. Health Inform. 2016, 21, 888–896. [Google Scholar] [CrossRef]
  19. Li, H.; Liu, X.; Yu, Z.; Zhang, Y. Performance improvement scheme of multifocus image fusion derived by difference images. Signal Process. 2016, 128, 474–493. [Google Scholar] [CrossRef]
  20. Dhaundiyal, R.; Tripathi, A.; Joshi, K.; Diwakar, M.; Singh, P. Clustering based multi-modality medical image fusion. J. Phys. Conf. Ser. 2020, 1478, 12024. [Google Scholar] [CrossRef]
  21. Singh, P.; Diwakar, M. Wavelet-based multi-focus image fusion using average method noise diffusion (AMND). Recent Adv. Comput. Sci. Commun. 2021, 14, 2436–2448. [Google Scholar] [CrossRef]
  22. Diwakar, M.; Singh, P.; Shankar, A. Multi-modal medical image fusion framework using co-occurrence filter and local extrema in NSST domain. Biomed. Signal Process. Control 2021, 68, 102788. [Google Scholar] [CrossRef]
  23. Nsengiyumva, W.; Zhong, S.; Luo, M.; Zhang, Q.; Lin, J. Critical insights into the state-of-the-art NDE data fusion techniques for the inspection of structural systems. Struct. Control. Health Monit. 2022, 29, e2857. [Google Scholar] [CrossRef]
  24. Nejati, M.; Samavi, S.; Karimi, N.; Soroushmehr, S.R.; Shirani, S.; Roosta, I.; Najarian, K. Surface area-based focus criterion for multi-focus image fusion. Inf. Fusion 2017, 36, 284–295. [Google Scholar] [CrossRef]
  25. Xiao, D.; Wang, L.; Xiang, T.; Wang, Y. Multi-focus image fusion and robust encryption algorithm based on compressive sensing. Opt. Laser Technol. 2017, 91, 212–225. [Google Scholar] [CrossRef]
  26. Luo, X.; Zhang, Z.; Zhang, C.; Wu, X. Multi-focus image fusion using HOSVD and edge intensity. J. Vis. Commun. Image Represent. 2017, 45, 46–61. [Google Scholar] [CrossRef]
  27. Qin, X.; Zheng, J.; Hu, G.; Wang, J. Multi-focus image fusion based on window empirical mode decomposition. Infrared Phys. Technol. 2017, 85, 251–260. [Google Scholar] [CrossRef]
  28. Zong, J.J.; Qiu, T.S. Medical image fusion based on sparse representation of classified image patches. Biomed. Signal Process. Control 2017, 34, 195–205. [Google Scholar] [CrossRef]
  29. Daniel, E.; Anitha, J.; Kamaleshwaran, K.K.; Rani, I. Optimum spectrum mask based medical image fusion using Gray Wolf Optimization. Biomed. Signal Process. Control 2017, 34, 36–43. [Google Scholar] [CrossRef]
  30. Diwakar, M.; Rastogi, V.; Singh, P. Multi-modality Medical Image Fusion Using Fuzzy Local Information C-Means Clustering in SWT Domain. In Proceedings of the International Conference on Futuristic Trends in Networks and Computing Technologies, Chandigarh, India, 22–23 November 2019; Springer: Singapore, 2019; pp. 557–564. [Google Scholar]
  31. Li, H.; Qiu, H.; Yu, Z.; Li, B. Multifocus image fusion via fixed window technique of multiscale images and non-local means filtering. Signal Process. 2017, 138, 71–85. [Google Scholar] [CrossRef]
  32. Singh, S.; Anand, R.S. Ripplet domain fusion approach for CT and MR medical image information. Biomed. Signal Process. Control 2018, 46, 281–292. [Google Scholar] [CrossRef]
  33. Aishwarya, N.; Thangammal, C.B. Visible and infrared image fusion using DTCWT and adaptive combined clustered dictionary. Infrared Phys. Technol. 2018, 93, 300–309. [Google Scholar] [CrossRef]
  34. Shahdoosti, H.R.; Mehrabi, A. Multimodal image fusion using sparse representation classification in tetrolet domain. Digit. Signal Process. 2018, 79, 9–22. [Google Scholar] [CrossRef]
  35. He, C.; Liu, Q.; Li, H.; Wang, H. Multimodal medical image fusion based on IHS and PCA. Procedia Eng. 2010, 7, 280–285. [Google Scholar] [CrossRef]
  36. Daneshvar, S.; Ghassemian, H. MR and PET image fusion by combining IHS and retina-inspired models. Inf. Fusion 2010, 11, 114–123. [Google Scholar] [CrossRef]
  37. Escalante-Ramírez, B. The Hermite transform as an efficient model for local image analysis: An application to medical image fusion. Comput. Electr. Eng. 2008, 34, 99–110. [Google Scholar] [CrossRef]
  38. Yang, J.; Han, F.; Zhao, D. A block advanced PCA fusion algorithm based on PET/CT. In Proceedings of the 2011 Fourth International Conference on Intelligent Computation Technology and Automation, Shenzhen, China, 28–29 March 2011; IEEE: Piscataway, NJ, USA, 2011; Volume 2, pp. 925–928. [Google Scholar]
  39. Jindal, M.; Bajal, E.; Chakraborty, A.; Singh, P.; Diwakar, M.; Kumar, N. A novel multi-focus image fusion paradigm: A hybrid approach. Mater. Today Proc. 2020, 37, 2952–2958. [Google Scholar] [CrossRef]
  40. Vivone, G. Multispectral and hyperspectral image fusion in remote sensing: A survey. Inf. Fusion 2023, 89, 405–417. [Google Scholar] [CrossRef]
  41. Singh, P.; Shree, R. A New Computationally Improved Homomorphic Despeckling Technique of SAR Images. Int. J. Adv. Res. Comput. Sci. 2017, 8, 894–898. [Google Scholar]
  42. Manchanda, M.; Sharma, R. An improved multimodal medical image fusion algorithm based on fuzzy transform. J. Vis. Commun. Image Represent. 2018, 51, 76–94. [Google Scholar] [CrossRef]
  43. Yang, Y.; Wu, J.; Huang, S.; Fang, Y.; Lin, P.; Que, Y. Multimodal medical image fusion based on fuzzy discrimination with structural patch decomposition. IEEE J. Biomed. Health Inform. 2018, 23, 1647–1660. [Google Scholar] [CrossRef]
  44. Singh, S.; Anand, R.S. Multimodal medical image fusion using hybrid layer decomposition with CNN-based feature mapping and structural clustering. IEEE Trans. Instrum. Meas. 2019, 69, 3855–3865. [Google Scholar] [CrossRef]
  45. Gambhir, D.; Manchanda, M. Waveatom transform-based multimodal medical image fusion. Signal Image Video Process. 2019, 13, 321–329. [Google Scholar] [CrossRef]
  46. Li, X.; Guo, X.; Han, P.; Wang, X.; Li, H.; Luo, T. Laplacian redecomposition for multimodal medical image fusion. IEEE Trans. Instrum. Meas. 2020, 69, 6880–6890. [Google Scholar] [CrossRef]
  47. Arif, M.; Wang, G. Fast curvelet transform through genetic algorithm for multimodal medical image fusion. Soft Comput. 2020, 24, 1815–1836. [Google Scholar] [CrossRef]
  48. Li, X.; Zhou, F.; Tan, H.; Zhang, W.; Zhao, C. Multimodal medical image fusion based on joint bilateral filter and local gradient energy. Inf. Sci. 2021, 569, 302–325. [Google Scholar] [CrossRef]
  49. Shehanaz, S.; Daniel, E.; Guntur, S.R.; Satrasupalli, S. Optimum weighted multimodal medical image fusion using particle swarm optimization. Optik 2021, 231, 166413. [Google Scholar] [CrossRef]
  50. Tang, W.; He, F.; Liu, Y.; Duan, Y. MATR: Multimodal medical image fusion via multiscale adaptive transformer. IEEE Trans. Image Process. 2022, 31, 5134–5149. [Google Scholar] [CrossRef]
  51. Alseelawi, N.; Hazim, H.T.; Salim ALRikabi, H.T. A Novel Method of Multimodal Medical Image Fusion Based on Hybrid Approach of NSCT and DTCWT. Int. J. Online Biomed. Eng. 2022, 18, 28011. [Google Scholar] [CrossRef]
  52. Li, W.; Zhang, Y.; Wang, G.; Huang, Y.; Li, R. DFENet: A dual-branch feature enhanced network integrating transformers and convolutional feature learning for multimodal medical image fusion. Biomed. Signal Process. Control 2023, 80, 104402. [Google Scholar] [CrossRef]
  53. Zhang, C.; Zhang, Z.; Feng, Z.; Yi, L. Joint sparse model with coupled dictionary for medical image fusion. Biomed. Signal Process. Control 2023, 79, 104030. [Google Scholar] [CrossRef]
  54. Zhou, T.; Li, Q.; Lu, H.; Cheng, Q.; Zhang, X. GAN review: Models and medical image fusion applications. Inf. Fusion 2023, 91, 134–148. [Google Scholar] [CrossRef]
  55. Liu, J.; Dian, R.; Li, S.; Liu, H. SGFusion: A saliency guided deep-learning framework for pixel-level image fusion. Inf. Fusion 2023, 91, 205–214. [Google Scholar] [CrossRef]
  56. Rajalingam, B.; Priya, R.; Bhavani, R.; Santhoshkumar, R. Image Fusion Techniques for Different Multimodality Medical Images Based on Various Conventional and Hybrid Algorithms for Disease Analysis. In Research Anthology on Improving Medical Imaging Techniques for Analysis and Intervention; IGI Global: Hershey, PA, USA, 2023; pp. 268–299. [Google Scholar]
  57. Wang, X.; Hua, Z.; Li, J. Multi-focus image fusion framework based on transformer and feedback mechanism. Ain Shams Eng. J. 2023, 14, 101978. [Google Scholar] [CrossRef]
  58. Xie, L.; Wisse, L.E.; Wang, J.; Ravikumar, S.; Khandelwal, P.; Glenn, T.; Luther, A.; Lim, S.; Wolk, D.A.; Yushkevich, P.A. Deep label fusion: A generalizable hybrid multi-atlas and deep convolutional neural network for medical image segmentation. Med. Image Anal. 2023, 83, 102683. [Google Scholar] [CrossRef] [PubMed]
  59. Zhan, B.; Song, E.; Liu, H.; Gong, Z.; Ma, G.; Hung, C.C. CFNet: A medical image segmentation method using the multi-view attention mechanism and adaptive fusion strategy. Biomed. Signal Process. Control 2023, 79, 104112. [Google Scholar] [CrossRef]
  60. Yuan, F.; Zhang, Z.; Fang, Z. An effective CNN and Transformer complementary network for medical image segmentation. Pattern Recognit. 2023, 136, 109228. [Google Scholar] [CrossRef]
  61. Xie, S.; Li, H.; Wang, Z.; Zhou, D.; Ding, Z.; Liu, Y. PSMFF: A progressive series-parallel modality feature filtering framework for infrared and visible image fusion. Digit. Signal Process. 2023, 134, 103881. [Google Scholar] [CrossRef]
  62. Liu, X.; Wang, R.; Huo, H.; Yang, X.; Li, J. An attention-guided and wavelet-constrained generative adversarial network for infrared and visible image fusion. Infrared Phys. Technol. 2023, 129, 104570. [Google Scholar] [CrossRef]
  63. Alshathri, S.; Hemdan, E.E.D. An efficient audio watermarking scheme with scrambled medical images for secure medical internet of things systems. Multimed. Tools Appl. 2023, 82, 1–19. [Google Scholar] [CrossRef]
  64. Vasu, G.T.; Palanisamy, P. Gradient-based multi-focus image fusion using foreground and background pattern recognition with weighted anisotropic diffusion filter. Signal Image Video Process. 2023, 17, 1–13. [Google Scholar] [CrossRef]
  65. Jaganathan, S.; Kukla, M.; Wang, J.; Shetty, K.; Maier, A. Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–7 January 2023; pp. 2788–2798. [Google Scholar]
  66. Li, H.; Qian, W.; Nie, R.; Cao, J.; Xu, D. Siamese conditional generative adversarial network for multi-focus image fusion. Appl. Intell. 2023, 53, 1–16. [Google Scholar] [CrossRef]
  67. Fletcher, P.; De Santis, M.; Ippoliti, S.; Orecchia, L.; Charlesworth, P.; Barrett, T.; Kastner, C. Vector Prostate Biopsy: A Novel Magnetic Resonance Imaging/Ultrasound Image Fusion Transperineal Biopsy Technique Using Electromagnetic Needle Tracking Under Local Anaesthesia. Eur. Urol. 2023, 83, 249–256. [Google Scholar] [CrossRef] [PubMed]
  68. AlDahoul, N.; Karim, H.A.; Momo, M.A.; Escobara, F.I.F.; Tan, M.J.T. Space Object Recognition with Stacking of CoAtNets using Fusion of RGB and Depth Images. IEEE Access 2023, 11, 5089–5109. [Google Scholar] [CrossRef]
  69. Bao, H.; Zhu, Y.; Li, Q. Hybrid-scale contextual fusion network for medical image segmentation. Comput. Biol. Med. 2023, 152, 106439. [Google Scholar] [CrossRef]
  70. Wu, P.; Jiang, L.; Hua, Z.; Li, J. Multi-focus image fusion: Transformer and shallow feature attention matters. Displays 2023, 76, 102353. [Google Scholar] [CrossRef]
  71. Wang, C.; Lv, X.; Shao, M.; Qian, Y.; Zhang, Y. A novel fuzzy hierarchical fusion attention convolution neural network for medical image super-resolution reconstruction. Inf. Sci. 2023, 622, 424–436. [Google Scholar] [CrossRef]
  72. Li, J.; Liu, J.; Zhou, S.; Zhang, Q.; Kasabov, N.K. Infrared and visible image fusion based on residual dense network and gradient loss. Infrared Phys. Technol. 2023, 128, 104486. [Google Scholar] [CrossRef]
  73. Zeng, X.; Dong, Q.; Li, Y. MG-CNFNet: A multiple grained channel normalized fusion networks for medical image deblurring. Biomed. Signal Process. Control 2023, 82, 104572. [Google Scholar] [CrossRef]
  74. Zheng, J.; Liu, H.; Feng, Y.; Xu, J.; Zhao, L. CASF-Net: Cross-attention and cross-scale fusion network for medical image segmentation. Comput. Methods Programs Biomed. 2023, 229, 107307. [Google Scholar] [CrossRef]
  75. Yin, W.; He, K.; Xu, D.; Yue, Y.; Luo, Y. Adaptive low light visual enhancement and high-significant target detection for infrared and visible image fusion. Vis. Comput. 2023, 39, 1–20. [Google Scholar] [CrossRef]
  76. Hu, X.; Jiang, J.; Liu, X.; Ma, J. ZMFF: Zero-shot multi-focus image fusion. Inf. Fusion 2023, 92, 127–138. [Google Scholar] [CrossRef]
  77. Yang, X.; Huo, H.; Wang, R.; Li, C.; Liu, X.; Li, J. DGLT-Fusion: A decoupled global–local infrared and visible image fusion transformer. Infrared Phys. Technol. 2023, 128, 104522. [Google Scholar] [CrossRef]
  78. Kaya, Y.; Gürsoy, E. A novel multi-head CNN design to identify plant diseases using the fusion of RGB images. Ecol. Inform. 2023, 73, 101998. [Google Scholar] [CrossRef]
  79. Zhou, H.; Yang, G.; Jing, X.; Zhou, Y.; Ding, J.; Wang, Y.; Wang, F.; Zhao, L. Predictive Value of Ablative Margin Assessment After Microwave Ablation for Local Tumor Progression in Medium and Large Hepatocellular Carcinoma: Computed Tomography–Computed Tomography Image Fusion Method Versus Side-by-Side Method. J. Comput. Assist. Tomogr. 2023, 47, 31–37. [Google Scholar] [CrossRef]
  80. Wu, L.; Jiang, X.; Yin, Y.; Cheng, T.C.E.; Sima, X. Multi-band remote sensing image fusion based on collaborative representation. Inf. Fusion 2023, 90, 23–35. [Google Scholar] [CrossRef]
  81. El-Shafai, W.; Mahmoud, A.A.; Ali, A.M.; El-Rabaie, E.S.M.; Taha, T.E.; El-Fishawy, A.S.; Zahran, O.; El-Samie, F.E.A. Efficient classification of different medical image multimodalities based on simple CNN architecture and augmentation algorithms. J. Opt. 2023, 51, 1–13. [Google Scholar] [CrossRef]
  82. Kaur, P.; Singh, R.K. A review on optimization techniques for medical image analysis. Concurr. Comput. Pract. Exp. 2023, 35, e7443. [Google Scholar] [CrossRef]
  83. Fournier, E.; Batteux, C.; Mostefa-Kara, M.; Valdeolmillos, E.; Maltret, A.; Cohen, S.; Aerschot, I.V.; Guirgis, L.; Hascoët, S. Cardiac tomography-echocardiography imaging fusion: A new approach to congenital heart disease. Rev. Española Cardiol. Engl. Ed. 2023, 76, 10–18. [Google Scholar] [CrossRef]
  84. Li, L.; Mazomenos, E.; Chandler, J.H.; Obstein, K.L.; Valdastri, P.; Stoyanov, D.; Vasconcelos, F. Robust endoscopic image mosaicking via fusion of multimodal estimation. Med. Image Anal. 2023, 84, 102709. [Google Scholar] [CrossRef]
  85. Xu, H.; Ma, J. EMFusion: An unsupervised enhanced medical image fusion network. Inf. Fusion 2021, 76, 177–186. [Google Scholar] [CrossRef]
  86. Zhang, G.; Nie, R.; Cao, J.; Chen, L.; Zhu, Y. FDGNet: A pair feature difference guided network for multimodal medical image fusion. Biomed. Signal Process. Control 2023, 81, 104545. [Google Scholar] [CrossRef]
  87. Liu, Y.; Wang, L.; Cheng, J.; Li, C.; Chen, X. Multi-focus image fusion: A survey of the state of the art. Inf. Fusion 2020, 64, 71–91. [Google Scholar] [CrossRef]
  88. Liu, Y.; Shi, Y.; Mu, F.; Cheng, J.; Chen, X. Glioma segmentation-oriented multi-modal mr image fusion with adversarial learning. IEEE/CAA J. Autom. Sin. 2022, 9, 1528–1531. [Google Scholar] [CrossRef]
  89. Zhang, Y.; Xiang, W.; Zhang, S.; Shen, J.; Wei, R.; Bai, X.; Zhang, L.; Zhang, Q. Local extreme map guided multi-modal brain image fusion. Front. Neurosci. 2022, 16, 1866. [Google Scholar] [CrossRef]
  90. Wang, A.; Luo, X.; Zhang, Z.; Wu, X.J. A disentangled representation based brain image fusion via group lasso penalty. Front. Neurosci. 2022, 16, 937861. [Google Scholar] [CrossRef]
  91. Naik, S.; Tech, M.; Limbachiya, T.; Kakadiya, U.; Satasiya, V. Multi Focus Image Fusion Techniques. Int. J. Recent Innov. Trends Comput. Commun. 2019, 3, 5. [Google Scholar] [CrossRef]
  92. Liu, H.; Li, S.; Zhu, J.; Deng, K.; Liu, M.; Nie, L. DDIFN: A Dual-discriminator Multi-modal Medical Image Fusion Network. ACM Trans. Multimed. Comput. Commun. Appl. 2023, 19, 3574136. [Google Scholar] [CrossRef]
  93. Qu, L.; Liu, S.; Wang, M.; Song, Z. Rethinking multi-exposure image fusion with extreme and diverse exposure levels: A robust framework based on Fourier transform and contrastive learning. Inf. Fusion 2023, 92, 389–403. [Google Scholar] [CrossRef]
  94. Ulucan, O.; Ulucan, D.; Turkan, M. Ghosting-free multi-exposure image fusion for static and dynamic scenes. Signal Process. 2023, 202, 108774. [Google Scholar] [CrossRef]
  95. Liu, Y.; Zhou, D.; Nie, R.; Hou, R.; Ding, Z.; Xia, W.; Li, M. Green fluorescent protein and phase contrast image fusion via Spectral TV filter-based decomposition. Biomed. Signal Process. Control 2023, 79, 104265. [Google Scholar] [CrossRef]
  96. Zhang, Y.; Wang, M.; Xia, X.; Sun, D.; Zhou, X.; Wang, Y.; Dai, Q.; Jin, M.; Liu, L.; Huang, G. Medical image fusion based on quasi-cross bilateral filtering. Biomed. Signal Process. Control 2023, 80, 104259. [Google Scholar] [CrossRef]
  97. Zhang, X.; Dai, X.; Zhang, X.; Jin, G. Joint principal component analysis and total variation for infrared and visible image fusion. Infrared Phys. Technol. 2023, 128, 104523. [Google Scholar] [CrossRef]
  98. Chen, J.; Ding, J.; Yu, Y.; Gong, W. THFuse: An Infrared and Visible Image Fusion Network using Transformer and Hybrid Feature Extractor. Neurocomputing 2023, 527, 71–82. [Google Scholar] [CrossRef]
  99. Kalantari, N.K.; Ramamoorthi, R. Deep high dynamic range imaging of dynamic scenes. ACM Trans. Graph. 2017, 36, 144. [Google Scholar] [CrossRef]
  100. Wang, Z.; Huang, L.; Kodama, K. Robust extension of light fields with probable 3D distribution based on iterative scene estimation from multi-focus images. Signal Process. Image Commun. 2023, 111, 116896. [Google Scholar] [CrossRef]
  101. Haribabu, M.; Guriviah, V.; Yogarajah, P. Recent Advancements in Multimodal Medical Image Fusion Techniques for Better Diagnosis: An overview. Curr. Med. Imaging Former. Curr. Med. Imaging Rev. 2022, 18. [Google Scholar] [CrossRef] [PubMed]
  102. Huang, B.; Yang, F.; Yin, M.; Mo, X.; Zhong, C. A review of multimodal medical image fusion techniques. Comput. Math. Methods Med. 2020, 2020, 8279342. [Google Scholar] [CrossRef]
