Abstract
Multimodal medical image fusion (MMIF) is the process of merging different modalities of medical images into a single output image (fused image) that carries a significant quantity of information to improve clinical applicability. It enables a better diagnosis and makes the diagnostic process easier. In medical image fusion (MIF), an intuitionistic fuzzy set (IFS) plays a role in enhancing the quality of the image, which is useful for medical diagnosis. In this article, a new approach to intuitionistic fuzzy set-based MMIF is proposed. Initially, the input medical images are fuzzified, and intuitionistic fuzzy images (IFIs) are then created. Intuitionistic fuzzy entropy plays a major role in calculating the optimal value for the three degrees, namely, membership, non-membership, and hesitation. After that, the IFIs are decomposed into small blocks, to which the fusion rule is applied. Finally, the enhanced fused image is obtained by the defuzzification process. The proposed method is tested on various medical image datasets in terms of subjective and objective analysis. The proposed algorithm provides a better-quality fused image and is superior to other existing methods such as PCA, DWTPCA, contourlet transform (CONT), DWT with fuzzy logic, Sugeno's intuitionistic fuzzy set, Chaira's intuitionistic fuzzy set, and PC-NSCT. The fused image is assessed with various performance metrics such as average pixel intensity (API), standard deviation (SD), average gradient (AG), spatial frequency (SF), modified spatial frequency (MSF), cross-correlation (CC), mutual information (MI), and fusion symmetry (FS).
1. Introduction
In past decades, image fusion has matured significantly in application fields such as medicine [], the military [,], and remote sensing []. Image fusion is a prominent application in the medical field for better analysis of human organs and tissues. In general, medical image data is available from various imaging techniques such as magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), computed tomography (CT), T1-weighted MR, T2-weighted MR, positron emission tomography (PET), and single-photon emission computed tomography (SPECT) []. Each technique has different characteristics.
Multimodal medical images are broadly characterized into two types: anatomical and functional modalities. Anatomical modalities include MRI, MRA, T1-weighted MR, T2-weighted MR, and CT. CT images represent clear bone structure with low distortion but do not distinguish physiological changes, while MRI images provide delicate tissue information with high spatial resolution. CT imaging is used to diagnose conditions such as muscle disease, vascular conditions, bone fractures, and tumors. MRI is used to diagnose conditions such as brain tumors, multiple sclerosis, lung cancer, brain hemorrhage, and dementia. Magnetic resonance angiography, or MRA, is a subset of MRI that utilizes magnetic fields and radio waves to create images of the body's arteries, helping clinicians detect blood flow abnormalities. T1-weighted MR images reveal fat, while T2-weighted MR images reveal water content.
Functional modalities are PET and SPECT. PET imaging captures the functionality of human organs with high sensitivity. PET imaging technology is used to diagnose diseases such as Alzheimer's disease, Parkinson's disease, cerebrovascular accident, and hematoma. Other application areas of PET imaging are lung and breast cancer diagnosis and cancer treatment.
SPECT imaging provides blood flow information with low spatial resolution and is used for different diagnoses, namely, brain and bone disorders and heart problems. The application areas of SPECT imaging are pelvis irradiation detection and treatment, vulvar cancer, breast cancer assessment, and head and neck cancer diagnosis [,]. However, a single medical image cannot provide all the information required for diagnosis. To overcome this, multimodal medical image fusion is necessary.
Multimodal medical image fusion is the process of merging different modalities of medical images into a single output image. Its advantages include decreased uncertainty, resilient system performance, and higher reliability, all of which contribute to more accurate diagnosis and thus improved treatment. In the literature, authors have reported various multimodality combinations. Fusion of T1- and T2-weighted MR images produces a fused image that is used to identify tumor regions []. The soft and hard tissue information from MRI and CT images, respectively, is combined by fusion into a single resultant image, resulting in better image analysis []. The T1-weighted MR and MRA [] combination provides precise lesion locations along with delicate tissue. The MRI–PET [] and MRI–SPECT [] combinations provide anatomical and functional information in a single image, which is used to better diagnose diseases and medical problems. The objective of this research article is to examine the relevance and advancement of information fusion approaches in medical imaging for the investigation of clinical aspects and better treatment.
In any fusion strategy, two important requirements should be satisfied: it should not add any artifacts or blocking effects to the resultant image; and no information should be lost throughout the fusion process.
Image fusion techniques are broadly classified into three levels [], namely, pixel-level, feature-level, and decision-level. In pixel-level fusion, image pixel values are merged directly. In feature-level fusion, salient features such as texture and shape are involved in the fusion process. In decision-level fusion, the input images are fused based on multiple algorithms with decision rules.
2. Related Works
The preeminent research issue in medical image processing is to obtain the maximum content of information by combining various modalities of medical images. Various existing techniques are covered in this literature, such as the simple average (Avg), maximum, and minimum methods. The average method provides a fused image with low contrast, while the maximum and minimum methods provide less enhanced fused images. The Brovey method [] introduces color distortions. Hybrid fusion methods, such as the combination of intensity-hue-saturation (IHS) and principal component analysis (PCA) [], provide a degraded fused image with spatial distortions. The pyramid decomposition-based method [] shows better spectral information, but the required edge information is not sufficient. The discrete cosine transform (DCT) [] and singular value decomposition (SVD) [] methods give a fused image of a more complementary nature, but do not show clear boundaries of the tumor region. Multi-resolution techniques such as the discrete wavelet transform (DWT) [] provide better localization in the time and frequency domains, but cannot give shift invariance due to down-sampling. To overcome this, the redundant wavelet transform (RWT) [] was employed; however, this technique is highly complex and cannot provide sufficient edge information. The contourlet transform (CONT) technique [] provides more edge information in a fused image but does not provide shift invariance. Shift invariance is a most desirable property and is applied in various applications of image processing, including image watermarking [], image enhancement [], image fusion [], and image deblurring []. The above-mentioned drawbacks are addressed by the non-subsampled contourlet transform (NSCT) [] and the non-subsampled shearlet transform (NSST) [,].
Hybrid combinations of fusion techniques, such as DWT with fuzzy logic [], provide a fused image with low contrast because of the high level of uncertainty and vagueness present in the fused image.
In general, medical images have poor illumination, which means low contrast and poor visibility in some parts, and this indicates uncertainty and vagueness. Visibility and enhancement are required criteria in the medical field to diagnose disease accurately. In the literature, various image enhancement techniques are reported, namely, gray-level transformation [] and histogram-based methods []. Yet these methods do not adequately improve the quality of medical images. Zadeh [] proposed a mathematical approach, the fuzzy set, in 1965. The fuzzy set approach has played a significant role by removing the vagueness present in the image; however, it does not eliminate the uncertainties. A fuzzy set does not provide reasonable results when more uncertainties are present because it considers only one uncertainty, in the form of a membership function that lies in the range 0 to 1, where zero indicates false membership and one indicates true membership. In 1986, Atanassov [] proposed a generalized version of the fuzzy set, i.e., the intuitionistic fuzzy set (IFS), which handles more uncertainties in the form of three degrees: membership, non-membership, and hesitation. The IFS technique is highly precise and flexible in handling uncertainty and ambiguity problems.
In this literature review, the research gaps and drawbacks of various medical image fusion techniques are discussed and listed in Table 1:
Table 1.
Comparison of the existing fusion methods.
The main contribution of this research article is described as follows:
- A novel intuitionistic fuzzy set is used for the fusion process, which can enhance the fused image quality and complete the fusion process successfully.
- The intuitionistic fuzzy images are created by using the optimum value, α, which can be obtained from intuitionistic fuzzy entropy.
- The intuitionistic fuzzy cross-correlation function is employed to measure the correlation between intuitionistic fuzzy images and then produce a fused image without uncertainty and vagueness.
- The proposed fusion algorithm proves that the fused image has good contrast and enhanced edges and is superior to other existing methods both visually and quantitatively.
3. Materials and Methods
The intuitionistic fuzzy set (IFS) is used to solve image processing tasks with membership and non-membership functions []. The implementation of IFS is briefly explained below, starting from a fuzzy set.
Let us consider a finite set P = {p1, p2, …, pn}.
A fuzzy set F in the finite set P is numerically represented as:

F = {(p, μF(p)) | p ∈ P}

where μF(p) indicates the membership function of p in P, which lies in [0, 1], and the non-membership function is 1 − μF(p). The IFS was introduced by Atanassov [] in 1986 and considers both the membership and non-membership functions. The representation of an intuitionistic fuzzy set (IFS) F in P, in mathematical form, is written as:

F = {(p, μF(p), νF(p)) | p ∈ P}

which holds the condition 0 ≤ μF(p) + νF(p) ≤ 1. However, due to a lack of knowledge while characterizing the membership degree, a novel parameter called the hesitation degree, πF(p), was introduced by Szmidt and Kacprzyk [] for each element p in F. This can be written as:

πF(p) = 1 − μF(p) − νF(p)

where 0 ≤ πF(p) ≤ 1.
Finally, based on the hesitation function, the IFS can be represented as:

F = {(p, μF(p), νF(p), πF(p)) | p ∈ P}
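To make the three degrees concrete, here is a minimal numerical sketch (the values are illustrative, not from the paper) that verifies Atanassov's constraint and recovers the hesitation degree:

```python
import numpy as np

# Toy intuitionistic fuzzy set over a universe of 5 elements.
# mu: membership, nu: non-membership (illustrative values only).
mu = np.array([0.8, 0.5, 0.2, 0.6, 0.1])
nu = np.array([0.1, 0.3, 0.6, 0.2, 0.7])

# Atanassov's constraint: 0 <= mu + nu <= 1 for every element.
assert np.all(mu + nu <= 1.0)

# The hesitation degree completes the three degrees to 1.
# For the first element: 1 - 0.8 - 0.1 = 0.1.
pi = 1.0 - mu - nu
```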
This article proposes a new intuitionistic fuzzy set-based medical image fusion method that supports better diagnosis. Initially, the input images are fuzzified, and intuitionistic fuzzy images are then created with the help of the optimal value, α, which is generated by intuitionistic fuzzy entropy (IFE) []. After that, the two intuitionistic fuzzy images are split into several blocks, and the intuitionistic fuzzy cross-correlation fusion rule [] is applied. Finally, the enhanced fused image is obtained without uncertainty by rearranging the blocks, followed by a defuzzification process.
3.1. Intuitionistic Fuzzy Generator
A function φ: [0, 1] → [0, 1] is called an intuitionistic fuzzy generator (IFG) [] if φ(μ) ≤ 1 − μ for all μ ∈ [0, 1], with φ(0) ≤ 1 and φ(1) = 0; it is a continuous, decreasing function, and such generators are used for the construction of IFSs. The fuzzy complement is calculated from a complement function, which is described as:

N(μ) = g⁻¹(g(1) − g(μ))

where g is an increasing function with g(0) = 0. Several authors have suggested different intuitionistic fuzzy generators using an increasing function, such as Sugeno [] and Roy Chowdhury and Wang [].
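As an illustration of this generating-function construction (assuming the standard form N(μ) = g⁻¹(g(1) − g(μ))), the sketch below recovers Zadeh's complement from g(μ) = μ and Sugeno's complement from g(μ) = ln(1 + λμ)/λ:

```python
import numpy as np

def complement_from_g(g, g_inv, mu):
    """Fuzzy complement N(mu) = g_inv(g(1) - g(mu)) built from an increasing g."""
    return g_inv(g(1.0) - g(mu))

mu = np.linspace(0.0, 1.0, 5)

# g(mu) = mu recovers Zadeh's standard complement 1 - mu.
zadeh = complement_from_g(lambda x: x, lambda y: y, mu)

# g(mu) = ln(1 + lam*mu)/lam yields the Sugeno complement (1 - mu)/(1 + lam*mu).
lam = 2.0
g = lambda x: np.log(1.0 + lam * x) / lam
g_inv = lambda y: (np.exp(lam * y) - 1.0) / lam
sugeno = complement_from_g(g, g_inv, mu)
```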
3.2. Proposed Fuzzy Complement and Intuitionistic Fuzzy Generator
In this article, a novel fuzzy complement is created using an increasing function, which is described as:
subject to the required boundary conditions on the increasing function.
The inverse of this function is:
Substituting this into Equation (6), we get:
By induction, Equation (8) becomes:
Equation (11) is a fuzzy negation and it satisfies the following axioms:
- (i) P1 (Boundary conditions): N(0) = 1 and N(1) = 0.
- (ii) P2 (Monotonicity): if μ1 ≤ μ2, then N(μ1) ≥ N(μ2).
- (iii) P3 (Involution): N is involutive, i.e., N(N(μ)) = μ.
Proof:
It can be noticed that if α = 0, then N(μ) = 1 − μ; this is equivalent to Zadeh's standard fuzzy complement. □
Not every fuzzy complement can serve as an intuitionistic fuzzy generator. A fuzzy complement N is referred to as an intuitionistic fuzzy generator if it satisfies:

N(μ) ≤ 1 − μ for all μ ∈ [0, 1], with N(0) ≤ 1 and N(1) = 0.
The proposed fuzzy complement is an intuitionistic fuzzy generator and satisfies these conditions. From Equation (11), the non-membership degree values are computed using the new intuitionistic fuzzy generator, and the new IFS (NIFS) becomes:
and the hesitation degree can be represented as:
Equation (11), the new intuitionistic fuzzy generator, is used to expand and enhance the intensity levels over a range, because some multimodal medical images are primarily dark. Varying the α value changes not only the intensity values in grayscale images but also the ratio of components in color images.
In image processing, entropy plays a significant role and is used to distinguish the texture of an image. Fuzzy entropy, introduced by Zadeh, estimates the ambiguity and fuzziness in a fuzzy set. De Luca and Termini [] introduced the first framework of non-probabilistic entropy in 1972. Many researchers [,] have proposed various entropy formulations employing IFS theory. In this article, a novel IFE function, determined as in [], is utilized to develop the proposed technique; it is described as:
where π, μ, and ν are the hesitation, membership, and non-membership degrees, respectively. The entropy (IFE) function is computed using Equation (14) for α values in [0.1, 1.0]; the optimal α is then obtained by selecting the highest entropy value using Equation (15), i.e.,
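The exhaustive search for the optimal α can be sketched as follows. The entropy form and the Sugeno-type generator below are stand-in assumptions (the paper's exact Equation (14) and its novel generator are not reproduced here); only the search procedure over [0.1, 1.0] mirrors the text:

```python
import numpy as np

def fuzzify(img):
    """Normalize gray levels to membership values in [0, 1]."""
    g = img.astype(float)
    return (g - g.min()) / (g.max() - g.min() + 1e-12)

def ife(mu, alpha):
    """Exponential-type intuitionistic fuzzy entropy (stand-in form),
    using a Sugeno-type generator for the non-membership degree."""
    nu = (1.0 - mu) / (1.0 + alpha * mu)  # stand-in generator (assumption)
    pi = 1.0 - mu - nu                    # hesitation degree
    return float(np.mean(pi * np.exp(1.0 - pi)))

def optimal_alpha(img, alphas=np.arange(0.1, 1.01, 0.1)):
    """Pick the alpha that maximizes the entropy, as in Equation (15)."""
    mu = fuzzify(img)
    return max(alphas, key=lambda a: ife(mu, a))
```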
With the known value of α, the membership values of the new intuitionistic fuzzy set (NIFS) are calculated, and finally, the new intuitionistic fuzzy image (NIFI) is represented below:
3.3. Intuitionistic Fuzzy Cross-Correlation (IFCC)
The cross-correlation of IFSs [] is a significant measure in IFS theory and has fundamental potential in various areas, such as medical diagnosis, decision-making, and recognition. The IFCC function is used to measure the correlation between two intuitionistic fuzzy images (IFIs). Let A and B be two IFSs over a finite universe of discourse; then the correlation coefficient is described as follows:
Here, the IFCC values range over [0, 1], depending on the constant value 'c'.
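Since the paper's exact coefficient (including its constant c) is defined in the cited work, the sketch below instead uses the classical Gerstenkorn–Mańko correlation coefficient between two IFSs as an assumed stand-in:

```python
import numpy as np

def ifcc(mu_a, nu_a, mu_b, nu_b):
    """Gerstenkorn-Manko style correlation coefficient between two IFSs,
    given their membership (mu) and non-membership (nu) arrays.
    Returns a value in [0, 1]; 1 means the two IFSs are fully correlated."""
    c_ab = np.sum(mu_a * mu_b + nu_a * nu_b)
    c_aa = np.sum(mu_a ** 2 + nu_a ** 2)
    c_bb = np.sum(mu_b ** 2 + nu_b ** 2)
    return float(c_ab / np.sqrt(c_aa * c_bb + 1e-12))
```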
4. Proposed Fusion Method
In this section, we present a new approach to IFS-based multimodal medical image fusion with the IFCC fusion rule. Various combinations of medical images are involved in the fusion process, such as T1- and T2-weighted MR images, T1-weighted MR and MRA images, MRI and CT images, MRI and PET images, and MR-T2 and SPECT images. The proposed method can be applied to both grayscale and color images. The fusion algorithm is arranged sequentially as shown in Figure 1 and Figure 2.
Figure 1.
Flow chart of proposed grayscale medical image fusion algorithm.
Figure 2.
Flow chart of proposed color medical image fusion algorithm.
4.1. Grayscale Image Fusion Algorithm
1. Read the registered input images.
2. Fuzzify the first input image using Equation (18):
3. Compute the optimum value α1 for the first input image using the IFE given in Equations (14) and (15).
4. With the help of the optimized value α1, calculate the fuzzified new IFI (NIFI) for the first input image using Equations (19)–(22).
The membership degree of the NIFI is created as:
The non-membership function is created as:
and finally, the hesitation degree is obtained as:
5. Similarly, for the second input image, repeat steps 2 to 4 to obtain the optimum value α2 and calculate its NIFI:
6. Decompose the two NIFIs into small blocks; the kth blocks of the two decomposed images are denoted correspondingly.
7. Compute the intuitionistic fuzzy cross-correlation fusion rule between corresponding blocks of the two images; the kth block of the fused image is obtained using minimum, average, and maximum operations:
8. Reconstruct the fused IFI by recombining the small blocks.
9. Finally, obtain the fused image in the crisp domain using the defuzzification process, given by the inverse function of Equation (18).
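The steps above can be sketched end to end as follows. The Sugeno-type generator, the fixed α values, and the correlation-thresholded average/maximum rule are stand-in assumptions for the paper's Equations (18)–(22) and its exact min/avg/max rule:

```python
import numpy as np

def fuzzify(img):
    """Step 2: map gray levels to memberships in [0, 1]; keep the range for defuzzification."""
    g = img.astype(float)
    return (g - g.min()) / (g.max() - g.min() + 1e-12), g.min(), g.max()

def sugeno_nu(mu, alpha):
    """Stand-in generator for the paper's novel IFG (assumption)."""
    return (1.0 - mu) / (1.0 + alpha * mu)

def block_corr(ma, na, mb, nb):
    """Correlation between two blocks of IFIs (Gerstenkorn-Manko style)."""
    c_ab = np.sum(ma * mb + na * nb)
    return c_ab / np.sqrt(np.sum(ma**2 + na**2) * np.sum(mb**2 + nb**2) + 1e-12)

def fuse_gray(img_a, img_b, alpha_a=0.5, alpha_b=0.5, bs=8, thr=0.9):
    # Steps 2-5: build the two NIFIs (alpha values fixed here; the paper finds them via IFE).
    mu_a, lo, hi = fuzzify(img_a)
    mu_b, _, _ = fuzzify(img_b)
    nu_a, nu_b = sugeno_nu(mu_a, alpha_a), sugeno_nu(mu_b, alpha_b)
    fused = np.zeros_like(mu_a)
    h, w = mu_a.shape
    # Steps 6-7: block decomposition and correlation-based fusion rule.
    for i in range(0, h, bs):
        for j in range(0, w, bs):
            sl = (slice(i, i + bs), slice(j, j + bs))
            r = block_corr(mu_a[sl], nu_a[sl], mu_b[sl], nu_b[sl])
            # Assumed rule: average highly correlated blocks, otherwise keep the maximum.
            fused[sl] = (mu_a[sl] + mu_b[sl]) / 2.0 if r >= thr else np.maximum(mu_a[sl], mu_b[sl])
    # Steps 8-9: reassembly is implicit; defuzzify back to the crisp gray-level range.
    return fused * (hi - lo) + lo
```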
4.2. Color Image Fusion Algorithm
The complete fusion algorithm for the combination of gray (MRI) and color images (PET/SPECT) is arranged sequentially as shown in Figure 2.
- Consider MRI and PET/SPECT as input images. The PET/SPECT image is converted into the HSV color model, yielding hue (H), saturation (S), and value (V) components.
- For the fusion process, take the MRI image and V component image, and then perform a grayscale image fusion algorithm from step 2 to step 9 as shown in Section 4.1, to get the fused component (V1).
- Finally, the colored fused image can be obtained by considering the brightness image (V1) and unchanged hue (H) and saturation (S) parts and then converting into the RGB color model.
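A minimal sketch of this HSV-based color fusion follows; `fuse_v` is a placeholder for the full grayscale fusion algorithm of Section 4.1 (here simply a pixel-wise maximum), and the stdlib `colorsys` module handles the color-space round trip:

```python
import colorsys
import numpy as np

def fuse_color(mri_gray, pet_rgb, fuse_v=max):
    """Fuse a grayscale MRI with an RGB PET/SPECT image in HSV space.
    Only the V channel is fused; hue and saturation are kept unchanged."""
    h_, w_, _ = pet_rgb.shape
    out = np.zeros_like(pet_rgb)
    for i in range(h_):
        for j in range(w_):
            r, g, b = pet_rgb[i, j] / 255.0
            h, s, v = colorsys.rgb_to_hsv(r, g, b)
            # Fuse the brightness component with the MRI; clamp to [0, 1].
            v1 = min(1.0, fuse_v(mri_gray[i, j] / 255.0, v))
            out[i, j] = [round(c * 255) for c in colorsys.hsv_to_rgb(h, s, v1)]
    return out
```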
5. Experimental Results and Discussion
This section presents a brief explanation of the effectiveness of the proposed method and a detailed comparison with various existing algorithms using performance metrics. In this paper, all input medical images are assumed to be perfectly registered, and experiments are performed on pairs of medical images of different modalities, with data collected and downloaded from metapix and the whole brain atlas [,]. Fusing two such modalities provides a composite image that is more useful for diagnosing diseases, tumors, lesion locations, etc.
In this article, we have performed a new intuitionistic fuzzy set-based image fusion, using the IFCC fusion rule, over various modalities of medical image datasets. The proposed fusion algorithm expands and enhances the intensity levels over a range because some medical images are primarily dark. Varying the α value changes not only the intensity values but also the ratio of components in color images. These enhanced medical images are fused to obtain a single image with more complementary information and better quality. A single medical image cannot provide the required information regarding a disease; as a result, MIF is required to obtain all relevant and complete information in a single resultant image.
The evaluation of the fused image can be completed with the help of subjective (visual) and objective (quantitative) analysis, respectively. The subjective analysis is performed with the visual appearance, and the objective analysis is finished with a set of performance metrics. In this paper, eight metrics are used: API [], SD [], AG [], SF [], MSF [], CC [], MI [], and FS [].
The input images and the fused image are assumed to have the same dimensions.
- ➢ API: API is used to quantify the average intensity value of the fused image, i.e., its brightness, which can be defined as:
- ➢ SD: SD is used to represent the amount of intensity variation (contrast) in an image. It is described as:
- ➢ AG: This metric is used to measure the degree of sharpness and clarity, which is represented as:
- ➢ SF: SF reflects the rate of change in the gray levels of the image and also measures image quality. For better performance, the SF value should be high. It can be calculated as follows:
- ➢ MSF: This metric is used to measure the overall activity level present in the fused image. It can be employed as follows:
- ➢ CC: This metric represents the similarity between the source and fused images. The range of CC is [0, 1]; for high similarity the CC value is 1, and it decreases as the dissimilarity increases. It is represented as follows:
- ➢ MI: The MI parameter is used to calculate the total information transferred to the fused image from the input images, given by the MI between each input image and the fused image. For better performance, the MI value should be high.
- ➢ FS: FS is introduced to measure the symmetry of the fused image with respect to the source images. If the value of FS is close to 2, both input images contribute equally to the fused image, and therefore the fused image quality will be better.
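Common formulations of several of these metrics can be sketched as follows; they follow the usual definitions in the fusion literature, and the paper's exact normalizations may differ slightly:

```python
import numpy as np

def api(f):
    """Average pixel intensity (brightness) of the fused image."""
    return float(np.mean(f))

def sd(f):
    """Standard deviation (contrast) of the fused image."""
    return float(np.std(f))

def ag(f):
    """Average gradient: mean magnitude of the local gradients (sharpness)."""
    gx, gy = np.gradient(f.astype(float))
    return float(np.mean(np.sqrt((gx**2 + gy**2) / 2.0)))

def sf(f):
    """Spatial frequency: combined row and column frequency."""
    f = f.astype(float)
    rf2 = np.mean(np.diff(f, axis=1) ** 2)  # row frequency (squared)
    cf2 = np.mean(np.diff(f, axis=0) ** 2)  # column frequency (squared)
    return float(np.sqrt(rf2 + cf2))

def cc(a, f):
    """Cross-correlation between a source image a and the fused image f."""
    a, f = a.astype(float).ravel(), f.astype(float).ravel()
    a, f = a - a.mean(), f - f.mean()
    return float(np.sum(a * f) / (np.sqrt(np.sum(a**2) * np.sum(f**2)) + 1e-12))
```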
5.1. Subjective-Type Evaluation
Subjective evaluation is carried out on the various input datasets shown in Figure 3. In this paper, five groups of datasets have been used. The group 1 input images are MR-T1–MR-T2 datasets, shown in Figure 3((p1–p4) and (q1–q4)). Group 2 input images are MR-T1 and MRA, shown in Figure 3((p5) and (q5)). Group 3 input images are MRI and CT, in Figure 3((p6–p7) and (q6–q7)), and group 4 input images are MRI and PET, in Figure 3((p8–p11) and (q8–q11)). Finally, group 5 input images are MR-T2 and SPECT datasets, shown in Figure 3((p12–p16) and (q12–q16)). The performance of the proposed fusion scheme is compared with various existing algorithms in Figure 4, namely, the PCA method, Naidu's [] method, Sanjay's [] method, the contourlet transform (CONT) method, Chaira's IFS [] method, Bala's IFS [] method, Sugeno's IFS [] method, and Zhu's [] method. The PCA-based fused images are shown in the first column, Figure 4(a1–a16); the DWTPCA-based fused images in the second column, Figure 4(b1–b16); the DWT-with-fuzzy-based fused images in the third column, Figure 4(c1–c16); the CONT-based fused images in the fourth column, Figure 4(d1–d16); Chaira's IFS-based fused images in the fifth column, Figure 4(e1–e16); Bala's IFS-based fused images in the sixth column, Figure 4(f1–f16); Sugeno's IFS-based fused images in the seventh column, Figure 4(g1–g16); and the PC-NSCT-based fused images in the eighth column, Figure 4(h1–h16). Finally, the proposed fused images are exhibited in the last column, Figure 4(i1–i16). Subjective analysis is related to human perception, and it shows that the proposed method's fused images have greater contrast, luminance, and better edge information than those of the other existing methods, with clear tumor regions as shown in Figure 4((i4), (i8), (i12), (i13), and (i16)).
Figure 3.
Medical image datasets: (p1–p4) and (q1–q4) are MR T1–MR T2 input images: (p5) and (q5) are T1 weighted MR–MRA input images; (p6,p7) and (q6,q7) are MRI–CT input images; (p8–p11) and (q8–q11) are MRI–PET input images; and (p12–p16) and (q12–q16) are the MR-T2–SPECT input images.

Figure 4.
Fused images using: (a) PCA method, (b) DWTPCA [] method, (c) DWT + fuzzy method [], (d) Contourlet transform (CONT) based method, (e) Chaira’s IFS [], (f) Bala’s IFS [], (g) Sugeno’s IFS [], (h) PC-NSCT method [], and (i) Proposed method.
The proposed fusion results show that the quality of the fused image is better than that of the other existing fusion methods. The first group of medical image datasets consists of T1- and T2-weighted MR images; fusing these two images shows soft tissue and an enhanced tumor region. The second group consists of MR-T1 and MRA images. MR-T1 images produce delicate tissue data but do not detect abnormalities, while MRA images easily detect abnormalities but, due to low spatial resolution, cannot produce the tissue information. Fusion of these images (MR-T1 and MRA) shows the complementary information with detailed lesion locations in the fused image.
The third group dataset consists of MRI and CT images, which are taken from reference []. MRI imaging produces delicate tissue data, while CT imaging gives bone information. The combination of these two images produces a quality fused image, which will be more useful for the diagnosis of disease. The fourth and fifth medical image datasets are MRI–PET and MR-T2–SPECT images. The fusion of these combinations to get more complementary information is achieved in a fused image and highlights the tumor regions, which will be helpful for medical-related problems.
5.2. Objective Evaluation
The fused image quality cannot be completely judged by subjective analysis. Therefore, objective evaluation is preferable for better analysis of fused images using various quality metrics. The results of the proposed and other existing methods are listed in Table 2, Table 3, Table 4, Table 5, Table 6, Table 7, Table 8 and Table 9. The average pixel intensity (API) values are tabulated in Table 2. It can be observed that the proposed fusion method provides the highest API values, which indicates that the fused image has good quality. The graphical representations of the API values are shown in Figure 5a. The standard deviation values are tabulated in Table 3. It can be seen that the proposed method's SD values are greater than those of the other existing techniques, which indicates that the output fused image has better texture details; this is graphically presented in Figure 5b.
Table 2.
Performance evaluation of the fusion methods using the API measure.
Table 3.
Performance evaluation of the fusion methods using SD measures.
Table 4.
Performance evaluation of the fusion methods using the AG measure.
Table 5.
Performance evaluation of the fusion methods using the SF measure.
Table 6.
Performance evaluation of the fusion methods using the MSF measure.
Table 7.
Performance evaluation of the fusion methods using the CC measure.
Table 8.
Performance evaluation of the fusion methods using the MI measure.
Table 9.
Performance evaluation of the fusion methods using the FS measure.
Figure 5.
Graphical representation of (a) API, (b) SD, (c) AG measures of proposed and other existing methods.
The average gradient (AG) values are shown in Table 4. It can be seen that the proposed method gives the highest AG values, which reveals that more complementary information is presented in a fused image, and this is presented graphically in Figure 5c.
The SF values are listed in Table 5. It can be seen that the SF of the proposed method gives superior values to those of the other methods, which indicates that texture changes and detailed differences are reflected in the fused image; this is shown graphically in Figure 6. The MSF values are listed in Table 6. The MSF values of the proposed method are greater than those of the other methods, which indicates that the fused image has more detailed information; this is observed graphically in Figure 7a.
Figure 6.
Graphical representation of SF measures of proposed and existing methods.

Figure 7.
Graphical representation of (a) MSF, (b) CC, (c) MI, and (d) FS measures of proposed and existing methods.
The CC, MI, and FS values for all datasets and existing fusion methods are listed in Table 7, Table 8 and Table 9. For the proposed fusion method, the average CC, MI, and FS values are better, while those for some datasets are moderate, which shows that the proposed fused image has more information and symmetry. The graphical representation of CC, MI, and FS is shown in Figure 7b–d.
5.3. Ranking Analysis
In this article, the proposed intuitionistic fuzzy set-based multimodal medical image fusion algorithm provides better results than the other methods on various quality metrics. Based on the objective evaluation in Section 5.2, a ranking analysis of each method was performed using the average value of each quality metric, as shown in Table 10. The best-performing fusion method was ranked 1, and the worst-performing was ranked 9.
Table 10.
Performance evaluation of the fusion methods in the ranking strategy.
5.4. Running Time
The computational efficiency of the proposed and existing medical image fusion methods, such as PCA, DWT, contourlet, DWT + fuzzy, Chaira's IFS, Bala's IFS, Sugeno's IFS, and PC-NSCT, is shown in Table 11. Compared with all other methods, the DWTPCA method takes the least execution time, 0.60 s, because image pixels are selected directly; however, its fusion performance is poor in terms of both subjectivity and objectivity. The highest execution time, 36.72 s, belongs to the PC-NSCT method, due to its decomposition levels and fusion rules. The second-highest execution time, 17.29 s, belongs to the contourlet transform method, and the third highest, 1.48 s, to the DWT + fuzzy method. The average running time of the proposed method was 1.19 s. The proposed method thus provides better performance with a relatively low execution time and less complexity than the other methods.
Table 11.
Average running time (seconds) of the proposed method with different existing methods.
6. Conclusions
In this article, a novel IFS-based medical image fusion process was proposed, which included four steps. Firstly, the registered input images were fuzzified. Secondly, intuitionistic fuzzy images were created by the optimum value, using IFE. Thirdly, a fused IFI image was obtained using the IFCC fusion rule with block processing. Fourthly, the defuzzification operation was performed for the final enhanced fused image. This method is an extension of the various existing methods, such as PCA, DWTPCA, DWT + Fuzzy, CONT, Chaira’s IFS, Bala’s IFS, Sugeno’s IFS, and PC-NSCT. These existing algorithms do not provide a quality fused image, and include various drawbacks, such as blocking artifacts, poor visibility of tumor regions, invisible blood vessels, low contrast, and vague boundaries. This proposed method overcomes the difficulties present in the existing methods and provides a better enhanced fused image without uncertainties.
The experimental results show that the proposed fusion method gives better fusion performance in terms of both subjective and objective analysis. In Figure 4(i4), the soft tissue and tumor regions are clearly enhanced, and the obtained SD (79.83) and SF (34.60) values are large in Table 3 and Table 5, respectively. In Figure 4(i5), the soft tissue and lesion structure information are reflected exactly in the fused image, and the obtained quantitative value is 75.38, as shown in Table 2. In Figure 4(i8), the anatomical and functional information are visible with high quality in the fused image, and the attained SD, AG, SF, MSF, MI, and FS values are higher (59.54, 5.80, 24.92, 51.53, 3.5689, 1.8658) in Table 3, Table 4, Table 5, Table 6, Table 8 and Table 9. In Figure 4(i16), the tumor region is clearly enhanced, and high performance metric values were attained compared to the other existing fusion methods. As previously discussed, the heart of the proposed fusion algorithm is the calculation of the intuitionistic fuzzy membership function, which is obtained via the optimum value using IFE. For better diagnosis and superior outcomes, the proposed fusion method can be extended to fuse different medical datasets based on advanced fuzzy sets, such as the neutrosophic fuzzy set and Pythagorean fuzzy set, and their fusion rules.
Author Contributions
Conceptualization, M.H.; methodology, M.H.; implementation, M.H.; writing—original draft preparation, M.H.; writing—review and editing, M.H. and V.G.; visualization, M.H.; supervision, V.G. All authors have read and agreed to the published version of the manuscript.
Funding
This research received no external funding.
Institutional Review Board Statement
Not applicable.
Informed Consent Statement
Not applicable.
Data Availability Statement
Not applicable.
Conflicts of Interest
The authors declare no conflict of interest.
References
- Azam, M.A.; Khan, K.B.; Salahuddin, S.; Rehman, E.; Khan, S.A.; Khan, M.A.; Kadry, S.; Gandomi, A.H. A Review on Multimodal Medical Image Fusion: Compendious Analysis of Medical Modalities, Multimodal Databases, Fusion Techniques and Quality Metrics. Comput. Biol. Med. 2022, 144, 105253. [Google Scholar] [CrossRef]
- Ma, J.; Liu, Y.; Jiang, J.; Wang, Z.; Xu, H.; Huo, X.; Deng, Y.; Shao, K. Infrared and Visible Image Fusion with Significant Target Enhancement. Entropy 2022, 24, 1633. [Google Scholar]
- Deveci, M.; Gokasar, I.; Pamucar, D.; Zaidan, A.A.; Wen, X.; Gupta, B.B. Evaluation of Cooperative Intelligent Transportation System Scenarios for Resilience in Transportation Using Type-2 Neutrosophic Fuzzy VIKOR. Transp. Res. Part A Policy Pract. 2023, 172, 103666. [Google Scholar] [CrossRef]
- Mary, S.R.; Pachar, S.; Srivastava, P.K.; Malik, M.; Sharma, A.; Almutiri, G.T.; Atal, Z. Deep Learning Model for the Image Fusion and Accurate Classification of Remote Sensing Images. Comput. Intell. Neurosci. 2022, 2022, 2668567. [Google Scholar] [CrossRef] [PubMed]
- James, A.P.; Dasarathy, B.V. Medical Image Fusion: A Survey of the State of the Art. Inf. Fusion 2014, 19, 4–19. [Google Scholar] [CrossRef]
- Kumar, M.; Kaur, A.; Amita. Improved Image Fusion of Colored and Grayscale Medical Images Based on Intuitionistic Fuzzy Sets. Fuzzy Inf. Eng. 2018, 10, 295–306. [Google Scholar] [CrossRef]
- Venkatesan, B.; Ragupathy, U.S.; Natarajan, I. A Review on Multimodal Medical Image Fusion towards Future Research. Multimed. Tools Appl. 2023, 82, 7361–7382. [Google Scholar] [CrossRef]
- Palanisami, D.; Mohan, N.; Ganeshkumar, L. A New Approach of Multi-Modal Medical Image Fusion Using Intuitionistic Fuzzy Set. Biomed. Signal Process. Control 2022, 77, 103762. [Google Scholar] [CrossRef]
- Prakash, O.; Park, C.M.; Khare, A.; Jeon, M.; Gwak, J. Multiscale Fusion of Multimodal Medical Images Using Lifting Scheme Based Biorthogonal Wavelet Transform. Optik 2019, 182, 995–1014. [Google Scholar] [CrossRef]
- Kumar, P.; Diwakar, M. A Novel Approach for Multimodality Medical Image Fusion over Secure Environment. Trans. Emerg. Telecommun. Technol. 2021, 32, e3985. [Google Scholar] [CrossRef]
- Dilmaghani, M.S.; Daneshvar, S.; Dousty, M. A New MRI and PET Image Fusion Algorithm Based on BEMD and IHS Methods. In Proceedings of the 2017 Iranian Conference on Electrical Engineering (ICEE), Tehran, Iran, 2–4 May 2017; pp. 118–121. [Google Scholar]
- Panigrahy, C.; Seal, A.; Mahato, N.K. MRI and SPECT Image Fusion Using a Weighted Parameter Adaptive Dual Channel PCNN. IEEE Signal Process. Lett. 2020, 27, 690–694. [Google Scholar] [CrossRef]
- Kaur, H.; Koundal, D.; Kadyan, V. Image Fusion Techniques: A Survey. Arch. Comput. Methods Eng. 2021, 28, 4425–4447. [Google Scholar] [CrossRef] [PubMed]
- Wang, Z.; Ziou, D.; Armenakis, C.; Li, D.; Li, Q. A Comparative Analysis of Image Fusion Methods. IEEE Trans. Geosci. Remote Sens. 2005, 43, 1391–1402. [Google Scholar] [CrossRef]
- He, C.; Liu, Q.; Li, H.; Wang, H. Multimodal Medical Image Fusion Based on IHS and PCA. Procedia Eng. 2010, 7, 280–285. [Google Scholar] [CrossRef]
- Li, M.; Dong, Y. Image Fusion Algorithm Based on Contrast Pyramid and Application. In Proceedings of the 2013 International Conference on Mechatronic Sciences, Electric Engineering and Computer (MEC), Shenyang, China, 20–22 December 2013; pp. 1342–1345. [Google Scholar]
- Tang, J. A Contrast Based Image Fusion Technique in the DCT Domain. Digit. Signal Process. 2004, 14, 218–226. [Google Scholar] [CrossRef]
- Liang, J.; He, Y.; Liu, D.; Zeng, X. Image Fusion Using Higher Order Singular Value Decomposition. IEEE Trans. Image Process. 2012, 21, 2898–2909. [Google Scholar] [CrossRef] [PubMed]
- Prasad, P.; Subramani, S.; Bhavana, V.; Krishnappa, H.K. Medical Image Fusion Techniques Using Discrete Wavelet Transform. In Proceedings of the 2019 3rd International Conference on Computing Methodologies and Communication (ICCMC), Erode, India, 27–29 March 2019; pp. 614–618. [Google Scholar]
- Li, X.; He, M.; Roux, M. Multifocus Image Fusion Based on Redundant Wavelet Transform. IET Image Process. 2010, 4, 283. [Google Scholar] [CrossRef]
- Khare, A.; Srivastava, R.; Singh, R. Edge Preserving Image Fusion Based on Contourlet Transform. In Proceedings of the Image and Signal Processing: 5th International Conference, ICISP 2012, Agadir, Morocco, 28–30 June 2012; Volume 7340, pp. 93–102. [Google Scholar]
- Sinhal, R.; Sharma, S.; Ansari, I.A.; Bajaj, V. Multipurpose Medical Image Watermarking for Effective Security Solutions. Multimed. Tools Appl. 2022, 81, 14045–14063. [Google Scholar] [CrossRef]
- Liu, M.; Mei, S.; Liu, P.; Gasimov, Y.; Cattani, C. A New X-Ray Medical-Image-Enhancement Method Based on Multiscale Shannon–Cosine Wavelet. Entropy 2022, 24, 1754. [Google Scholar]
- Liu, S.; Wang, M.; Yin, L.; Sun, X.; Zhang, Y.-D.; Zhao, J. Two-Scale Multimodal Medical Image Fusion Based on Structure Preservation. Front. Comput. Neurosci. 2022, 15, 133. [Google Scholar] [CrossRef]
- Chen, X.; Wan, Y.; Wang, D.; Wang, Y. Image Deblurring Based on an Improved CNN-Transformer Combination Network. Appl. Sci. 2023, 13, 311. [Google Scholar] [CrossRef]
- Ganasala, P.; Kumar, V. CT and MR Image Fusion Scheme in Nonsubsampled Contourlet Transform Domain. J. Digit. Imaging 2014, 27, 407–418. [Google Scholar] [CrossRef] [PubMed]
- Qiu, C.; Wang, Y.; Zhang, H.; Xia, S. Image Fusion of CT and MR with Sparse Representation in NSST Domain. Comput. Math. Methods Med. 2017, 2017, 9308745. [Google Scholar] [CrossRef] [PubMed]
- Liu, X.; Mei, W.; Du, H. Multi-Modality Medical Image Fusion Based on Image Decomposition Framework and Nonsubsampled Shearlet Transform. Biomed. Signal Process. Control 2018, 40, 343–350. [Google Scholar] [CrossRef]
- Sanjay, A.R.; Soundrapandiyan, R.; Karuppiah, M.; Ganapathy, R. CT and MRI Image Fusion Based on Discrete Wavelet Transform and Type-2 Fuzzy Logic. Int. J. Intell. Eng. Syst. 2017, 10, 355–362. [Google Scholar] [CrossRef]
- Cao, G.; Huang, L.; Tian, H.; Huang, X.; Wang, Y.; Zhi, R. Contrast Enhancement of Brightness-Distorted Images by Improved Adaptive Gamma Correction. Comput. Electr. Eng. 2018, 66, 569–582. [Google Scholar] [CrossRef]
- Salem, N.; Malik, H.; Shams, A. Medical Image Enhancement Based on Histogram Algorithms. Procedia Comput. Sci. 2019, 163, 300–311. [Google Scholar] [CrossRef]
- Zadeh, L.A. Fuzzy Sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
- Atanassov, K.T. Intuitionistic Fuzzy Sets. Fuzzy Sets Syst. 1986, 20, 87–96. [Google Scholar] [CrossRef]
- Güneri, B.; Deveci, M. Evaluation of Supplier Selection in the Defense Industry Using Q-Rung Orthopair Fuzzy Set Based EDAS Approach. Expert Syst. Appl. 2023, 222, 119846. [Google Scholar] [CrossRef]
- Szmidt, E.; Kacprzyk, J. Distances between Intuitionistic Fuzzy Sets. Fuzzy Sets Syst. 2000, 114, 505–518. [Google Scholar] [CrossRef]
- Chaira, T. A Novel Intuitionistic Fuzzy C Means Clustering Algorithm and Its Application to Medical Images. Appl. Soft Comput. 2011, 11, 1711–1717. [Google Scholar] [CrossRef]
- Huang, H.L.; Guo, Y. An Improved Correlation Coefficient of Intuitionistic Fuzzy Sets. J. Intell. Syst. 2019, 28, 231–243. [Google Scholar] [CrossRef]
- Bustince, H.; Kacprzyk, J.; Mohedano, V. Intuitionistic Fuzzy Generators Application to Intuitionistic Fuzzy Complementation. Fuzzy Sets Syst. 2000, 114, 485–504. [Google Scholar] [CrossRef]
- Sugeno, M. Fuzzy measures and fuzzy integrals—A survey. In Readings in Fuzzy Sets for Intelligent Systems; Elsevier: Amsterdam, The Netherlands, 1993; pp. 251–257. [Google Scholar]
- Roychowdhury, S.; Wang, B.H. Composite Generalization of Dombi Class and a New Family of T-Operators Using Additive-Product Connective Generator. Fuzzy Sets Syst. 1994, 66, 329–346. [Google Scholar] [CrossRef]
- De Luca, A.; Termini, S. A Definition of a Nonprobabilistic Entropy in the Setting of Fuzzy Sets Theory. Inf. Control 1972, 20, 301–312. [Google Scholar] [CrossRef]
- Joshi, D.; Kumar, S. Intuitionistic Fuzzy Entropy and Distance Measure Based TOPSIS Method for Multi-Criteria Decision Making. Egypt. Inform. J. 2014, 15, 97–104. [Google Scholar] [CrossRef]
- Hung, W.L.; Yang, M.S. Fuzzy Entropy on Intuitionistic Fuzzy Sets. Int. J. Intell. Syst. 2006, 21, 443–451. [Google Scholar] [CrossRef]
- Brain Image. Available online: http://www.metapix.de/examples.html (accessed on 3 February 2020).
- The Whole Brain Atlas. Available online: https://www.med.harvard.edu/aanlib/home.html (accessed on 3 February 2020).
- Bavirisetti, D.P.; Kollu, V.; Gang, X.; Dhuli, R. Fusion of MRI and CT Images Using Guided Image Filter and Image Statistics. Int. J. Imaging Syst. Technol. 2017, 27, 227–237. [Google Scholar] [CrossRef]
- Haddadpour, M.; Daneshavar, S.; Seyedarabi, H. PET and MRI Image Fusion Based on Combination of 2-D Hilbert Transform and IHS Method. Biomed. J. 2017, 40, 219–225. [Google Scholar] [CrossRef]
- Bavirisetti, D.P.; Dhuli, R. Multi-Focus Image Fusion Using Multi-Scale Image Decomposition and Saliency Detection. Ain Shams Eng. J. 2018, 9, 1103–1117. [Google Scholar] [CrossRef]
- Das, S.; Kundu, M.K. NSCT-Based Multimodal Medical Image Fusion Using Pulse-Coupled Neural Network and Modified Spatial Frequency. Med. Biol. Eng. Comput. 2012, 50, 1105–1114. [Google Scholar] [CrossRef]
- Shreyamsha Kumar, B.K. Image Fusion Based on Pixel Significance Using Cross Bilateral Filter. Signal Image Video Process. 2015, 9, 1193–1204. [Google Scholar] [CrossRef]
- Dammavalam, S.R. Quality Assessment of Pixel-Level Image Fusion Using Fuzzy Logic. Int. J. Soft Comput. 2012, 3, 11–23. [Google Scholar] [CrossRef]
- Naidu, V.P.S.; Raol, J.R. Pixel-Level Image Fusion Using Wavelets and Principal Component Analysis. Def. Sci. J. 2008, 58, 338–352. [Google Scholar] [CrossRef]
- Chaira, T. A Rank Ordered Filter for Medical Image Edge Enhancement and Detection Using Intuitionistic Fuzzy Set. Appl. Soft Comput. 2012, 12, 1259–1266. [Google Scholar] [CrossRef]
- Balasubramaniam, P.; Ananthi, V.P. Image Fusion Using Intuitionistic Fuzzy Sets. Inf. Fusion 2014, 20, 21–30. [Google Scholar] [CrossRef]
- Tirupal, T.; Mohan, B.C.; Kumar, S.S. Multimodal Medical Image Fusion Based on Sugeno’s Intuitionistic Fuzzy Sets. ETRI J. 2017, 39, 173–180. [Google Scholar] [CrossRef]
- Zhu, Z.; Zheng, M.; Qi, G.; Wang, D.; Xiang, Y. A Phase Congruency and Local Laplacian Energy Based Multi-Modality Medical Image Fusion Method in NSCT Domain. IEEE Access 2019, 7, 20811–20824. [Google Scholar] [CrossRef]
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).