An Improved Multimodal Medical Image Fusion Approach Using Intuitionistic Fuzzy Set and Intuitionistic Fuzzy Cross-Correlation

Multimodal medical image fusion (MMIF) is the process of merging different modalities of medical images into a single output image (fused image) carrying a greater amount of information to improve clinical applicability. It enables a better diagnosis and makes the diagnostic process easier. In medical image fusion (MIF), an intuitionistic fuzzy set (IFS) plays a role in enhancing the quality of the image, which is useful for medical diagnosis. In this article, a new approach to intuitionistic fuzzy set-based MMIF is proposed. Initially, the input medical images are fuzzified and then converted into intuitionistic fuzzy images (IFIs). Intuitionistic fuzzy entropy plays a major role in calculating the optimal values of the three degrees, namely membership, non-membership, and hesitation. The IFIs are then decomposed into small blocks, and the fusion rule is applied. Finally, the enhanced fused image is obtained through the defuzzification process. The proposed method is tested on various medical image datasets in terms of subjective and objective analysis. The proposed algorithm provides a better-quality fused image and is superior to other existing methods such as PCA, DWTPCA, contourlet transform (CONT), DWT with fuzzy logic, Sugeno's intuitionistic fuzzy set, Chaira's intuitionistic fuzzy set, and PC-NSCT. The fused image is assessed with various performance metrics, such as average pixel intensity (API), standard deviation (SD), average gradient (AG), spatial frequency (SF), modified spatial frequency (MSF), cross-correlation (CC), mutual information (MI), and fusion symmetry (FS).


Introduction
In past decades, image fusion has matured significantly in application fields such as medicine [1], the military [2,3], and remote sensing [4]. Image fusion is a prominent application in the medical field for better analysis of human organs and tissues. In general, medical image data are available from various imaging techniques such as magnetic resonance imaging (MRI), magnetic resonance angiography (MRA), computed tomography (CT), T1-weighted MR, T2-weighted MR, positron emission tomography (PET), and single-photon emission computed tomography (SPECT) [5]. Each technique has different characteristics.
Multimodal medical images are broadly characterized into two types: anatomical and functional modalities. Anatomical modalities include MRI, MRA, T1-weighted MR, T2-weighted MR, and CT. CT images represent a clear bone structure with low distortion but do not distinguish physiological changes, while MRI images provide delicate soft-tissue information with high spatial resolution. CT imaging is used to diagnose diseases such as muscle disease, vascular conditions, bone fractures, and tumors. MRI imaging is used to diagnose various medical conditions such as brain tumors, multiple sclerosis, lung cancer, brain hemorrhage, and dementia. Magnetic resonance angiography, or MRA, is a subset of MRI that utilizes magnetic fields and radio waves to visualize blood vessels.

Related Works
The preeminent research issue in medical image processing is to obtain the maximum content of information by combining various modalities of medical images. Various existing techniques are covered in this literature, such as the simple average (Avg), maximum, and minimum methods. The average method provides a fused image with low contrast, while the maximum and minimum methods provide less enhanced fused images. The Brovey method [14] introduces color distortions. Hybrid fusion methods such as the combination of intensity-hue-saturation (IHS) and principal component analysis (PCA) [15] provide a degraded fused image with spatial distortions. The pyramid decomposition-based method [16] shows better spectral information, but the required edge information is not sufficient. Discrete cosine transform (DCT) [17] and singular value decomposition (SVD) [18] methods give a fused image of a more complementary nature but do not show clear boundaries of the tumor region. Multi-resolution techniques, such as the discrete wavelet transform (DWT) [19], provide better localization in the time and frequency domains but cannot offer shift invariance due to down-sampling. To overcome this, the redundant wavelet transform (RWT) [20] was employed; however, it is highly complex and cannot provide sufficient edge information. The contourlet transform (CONT) technique [21] provides more edge information in a fused image but does not provide shift invariance. Shift invariance is a highly desirable property in various applications of image processing: image watermarking [22], image enhancement [23], image fusion [24], and image deblurring [25]. The above-mentioned drawbacks are addressed by the non-subsampled contourlet transform (NSCT) [26] and the non-subsampled shearlet transform (NSST) [27,28].
Hybrid combinations of fusion techniques such as DWT and fuzzy logic [29] provide a fused image with low contrast because of the higher uncertainties and vagueness, which is present in a fused image.
In general, medical images have poor illumination, i.e., low contrast and poor visibility in some parts, which indicates uncertainty and vagueness. Visibility and enhancement are required criteria in the medical field to diagnose disease accurately. In the literature, various image enhancement techniques are reported, namely gray-level transformation [30] and histogram-based methods [31]. Yet these methods do not adequately improve the quality of medical images. Zadeh [32] proposed a mathematical approach, the fuzzy set, in 1965. The fuzzy set approach has played a significant role by removing the vagueness present in an image; however, it does not eliminate the uncertainties. A fuzzy set does not provide reasonable results when more uncertainties are present because it considers only one form of uncertainty: the membership function, which lies in the range 0 to 1, where zero indicates false membership and one indicates true membership. In 1986, Atanassov [33] proposed a generalized version of the fuzzy set, the intuitionistic fuzzy set (IFS), which handles more uncertainties in the form of three degrees: membership, non-membership, and hesitation. The IFS technique is highly precise and flexible in handling uncertainty and ambiguity problems.
In this literature review, the research gaps and drawbacks of various medical image fusion techniques are discussed and listed in Table 1. The main contributions of this research article are as follows:

• A novel intuitionistic fuzzy set is used for the fusion process, which enhances the fused image quality and completes the fusion process successfully.

• The intuitionistic fuzzy images are created using the optimum value α, obtained from intuitionistic fuzzy entropy.

• The intuitionistic fuzzy cross-correlation function is employed to measure the correlation between intuitionistic fuzzy images and produce a fused image without uncertainty and vagueness.

• The proposed fusion algorithm demonstrates that the fused image has good contrast and enhanced edges and is superior to other existing methods both visually and quantitatively.

Materials and Methods
Intuitionistic fuzzy set (IFS) is used to solve the image processing tasks with membership and non-membership functions [34]. The implementation of IFS is briefly explained, starting from a fuzzy set.
Let P = {p1, p2, . . . , pn} be a finite set. A fuzzy set F in P is numerically represented as

F = {(p, µF(p)) | p ∈ P},

where µF(p) indicates the membership degree of p in P, which lies in [0, 1], and the non-membership degree is vF(p) = 1 − µF(p). The IFS was introduced by Atanassov [33] in 1986 and considers both µF(p) and vF(p). An intuitionistic fuzzy set F in P is written in mathematical form as

F = {(p, µF(p), vF(p)) | p ∈ P},

which holds the condition 0 ≤ µF(p) + vF(p) ≤ 1. However, due to the lack of knowledge in characterizing the membership degree, a further parameter, the hesitation degree πF(p), was introduced by Szmidt and Kacprzyk [35] for each element p in F:

πF(p) = 1 − µF(p) − vF(p), where 0 ≤ πF(p) ≤ 1.

Finally, based on the hesitation degree, the IFS can be represented as

F = {(p, µF(p), vF(p), πF(p)) | p ∈ P}.

This article proposes a new intuitionistic fuzzy set-based medical image fusion method that supports better diagnosis. Initially, the input images are fuzzified and converted into intuitionistic fuzzy images with the help of the optimal value α, which is generated by intuitionistic fuzzy entropy (IFE) [36]. The two intuitionistic fuzzy images are then split into several blocks, and the intuitionistic fuzzy cross-correlation fusion rule [37] is applied. Finally, the enhanced fused image is obtained, without uncertainty, by rearranging the blocks and performing a defuzzification process.
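As a concrete illustration of these definitions, the sketch below builds the three IFS degrees for an image array in Python. Note that `fuzzify`, the Sugeno-type complement, and the parameter `lam` are stand-ins for illustration; the paper's own generator (Equation (11)) is introduced later and is not reproduced here.

```python
import numpy as np

def fuzzify(img):
    """Map gray levels to membership degrees in [0, 1] (min-max normalization)."""
    img = np.asarray(img, dtype=float)
    return (img - img.min()) / (img.max() - img.min())

def sugeno_complement(mu, lam):
    """Sugeno-type fuzzy complement N(x) = (1 - x) / (1 + lam * x).
    For lam > 0 it is a valid intuitionistic fuzzy generator:
    decreasing, with N(0) = 1 and N(1) = 0."""
    return (1.0 - mu) / (1.0 + lam * mu)

def ifs_degrees(mu, lam=0.5):
    """Return (membership, non-membership, hesitation) arrays.
    Hesitation follows pi = 1 - mu - nu from the text."""
    nu = sugeno_complement(mu, lam)
    pi = 1.0 - mu - nu
    return mu, nu, pi
```

For example, a pixel with membership 0.5 and lam = 0.5 gets non-membership 0.4 and hesitation 0.1, so the three degrees still sum to 1 as required.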
Not every fuzzy complement is an intuitionistic fuzzy generator. A fuzzy complement N : [0, 1] → [0, 1] is referred to as an intuitionistic fuzzy generator if it satisfies N(0) = 1 and N(1) = 0. The proposed fuzzy complement in Equation (11) satisfies these conditions and is therefore an intuitionistic fuzzy generator. Using Equation (11), the non-membership degree values are computed with the new intuitionistic fuzzy generator, and the new IFS (NIFS) becomes

F = {(p, µF(p), N(µF(p))) | p ∈ P},

with the hesitation degree

πF(p) = 1 − µF(p) − N(µF(p)).

The new intuitionistic fuzzy generator in Equation (11) is used to expand and enhance the intensity levels over a range, because some multimodal medical images are primarily dark. Varying the α value changes the intensity values not only in grayscale images but also the ratio of components in color images.
In image processing, entropy plays a significant role and is used to distinguish the texture of an image. Fuzzy entropy, introduced by Zadeh, estimates the ambiguity and fuzziness in a fuzzy set. De Luca and Termini [41] introduced the first framework of non-probabilistic entropy in 1972, and many researchers [42,43] have since proposed various entropy structures employing IFS theory. In this article, a novel IFE function, determined as in [36], is utilized to develop the proposed technique (Equation (14)), where πF(pi), µF(pi), and vF(pi) are the hesitation, membership, and non-membership degrees, respectively. The IFE is computed using Equation (14) for α values in [0.1, 1.0] and is optimized by selecting the α that yields the highest entropy value (Equation (15)):

αopt = arg max over α of IFE(F; α).

With the known value of α, the membership values of the new intuitionistic fuzzy set (NIFS) are calculated, and finally the new intuitionistic fuzzy image (NIFI) is obtained.
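The α search itself is simple to sketch. Since the exact form of Equation (14) is not reproduced here, `entropy_fn` below is a placeholder for the paper's IFE; only the grid search over α ∈ [0.1, 1.0] follows the text.

```python
import numpy as np

def optimize_alpha(mu, entropy_fn, alphas=np.arange(0.1, 1.01, 0.1)):
    """Grid-search the optimum alpha in [0.1, 1.0] by maximizing an
    intuitionistic fuzzy entropy. entropy_fn(mu, alpha) stands in for
    the paper's Equation (14), which is not reproduced here."""
    scores = [entropy_fn(mu, a) for a in alphas]
    return alphas[int(np.argmax(scores))]
```

With a toy entropy peaked at α = 0.5, the search returns 0.5; in the actual method the maximizer depends on the image's membership values.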

Intuitionistic Fuzzy Cross-Correlation (IFCC)
The cross-correlation of IFSs [37] is a significant measure in IFS theory and has extraordinary fundamental potential in various areas, such as medical diagnosis, decision-making, and recognition. The IFCC function is used to measure the correlation between two intuitionistic fuzzy images (IFIs). Let C1, C2 ∈ IFS(P) and P = {p1, p2, . . . , pn} be a finite universe of discourse; the correlation coefficient is then computed from the membership, non-membership, and hesitation degrees of C1 and C2. The weights αg and βg and the resulting IFCC value lie in [0, 1] and depend on the constant c.
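As a hedged sketch, the classical Gerstenkorn-Mańko correlation coefficient between two IFSs can stand in for the IFCC; the paper's exact weights αg, βg and the constant c are not reproduced here, so this is an assumption, not the authors' formula.

```python
import numpy as np

def ifs_correlation(mu_a, nu_a, mu_b, nu_b):
    """Classical Gerstenkorn-Manko correlation coefficient between
    two intuitionistic fuzzy sets over the same finite universe,
    used here as a stand-in for the paper's IFCC."""
    c_ab = np.sum(mu_a * mu_b + nu_a * nu_b)
    t_a = np.sum(mu_a ** 2 + nu_a ** 2)
    t_b = np.sum(mu_b ** 2 + nu_b ** 2)
    return c_ab / np.sqrt(t_a * t_b)
```

By the Cauchy-Schwarz inequality the value lies in [0, 1] for non-negative degrees, with 1 attained when the two sets coincide.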

Proposed Fusion Method
In this section, we present a new approach to IFS-based multimodal medical image fusion with the IFCC fusion rule. Various combinations of medical images are involved in the fusion process, such as T1-T2 weighted MR images, T1-weighted MR-MRA images, MRI-CT images, MRI-PET images, and MR-T2-SPECT images. The proposed method can be applied to both grayscale and color images. The fusion algorithm is arranged sequentially as shown in Figures 1 and 2.


1. Read the registered input images I1 and I2.
2. Fuzzify the first input image I1 using Equation (18):

µF(Igh1) = (Igh1 − Imin) / (Imax − Imin),

where Igh1 is a gray-level pixel of the first input image, and Imax and Imin represent its highest and lowest gray-level values, respectively.
3. Compute the optimum value αopt1 for the first input image using the IFE given in Equations (14) and (15).
4. With the help of the optimized value αopt1, calculate the new IFI (NIFI) for the first input image, represented as IIF1, using Equations (19)-(22): the membership degree of the NIFI is created with Equation (19), the non-membership degree with Equation (20), and the hesitation degree with Equation (21).
5. Similarly, repeat steps 2 to 4 for the second input image to obtain the optimum value αopt2 and its NIFI (IIF2).
6. Decompose the two NIFIs (IIF1 and IIF2) into small i × j blocks; the k-th blocks of the two decomposed images are denoted IIF1k and IIF2k, respectively.
7. Apply the intuitionistic fuzzy cross-correlation fusion rule between the two image blocks (IIF1k and IIF2k); the k-th block of the fused image IIFk is obtained using minimum, average, and maximum operations.
8. Reconstruct the fused IFI image by recombining the small blocks.
9. Finally, obtain the fused image in the crisp domain through the defuzzification process, i.e., the inverse of Equation (18).
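The block-wise steps 6-8 can be sketched as follows. The correlation thresholds `hi`/`lo` and the mapping from correlation level to the minimum/average/maximum operations are illustrative assumptions (the paper names the three operations but its exact selection rule is not reproduced), and the correlation itself uses the classical IFS coefficient as a stand-in for the IFCC.

```python
import numpy as np

def ifs_correlation(mu_a, nu_a, mu_b, nu_b):
    """Classical IFS correlation coefficient (stand-in for the IFCC)."""
    c_ab = np.sum(mu_a * mu_b + nu_a * nu_b)
    t_a = np.sum(mu_a ** 2 + nu_a ** 2)
    t_b = np.sum(mu_b ** 2 + nu_b ** 2)
    return c_ab / np.sqrt(t_a * t_b)

def fuse_blocks(m1, m2, corr, hi=0.8, lo=0.4):
    """Fuse two membership blocks by their correlation. hi/lo are
    illustrative: highly correlated blocks are averaged, weakly
    correlated blocks keep the stronger (max) response, and
    anti-correlated blocks keep the weaker (min) one."""
    if corr >= hi:
        return (m1 + m2) / 2.0
    elif corr >= lo:
        return np.maximum(m1, m2)
    return np.minimum(m1, m2)

def blockwise_fuse(mu1, mu2, nu1, nu2, bs=8):
    """Decompose two NIFI membership images into bs x bs blocks,
    fuse each pair of blocks, and reassemble (sketch of steps 6-8)."""
    fused = np.empty_like(mu1)
    h, w = mu1.shape
    for r in range(0, h, bs):
        for c in range(0, w, bs):
            sl = (slice(r, r + bs), slice(c, c + bs))
            corr = ifs_correlation(mu1[sl].ravel(), nu1[sl].ravel(),
                                   mu2[sl].ravel(), nu2[sl].ravel())
            fused[sl] = fuse_blocks(mu1[sl], mu2[sl], corr)
    return fused
```

Defuzzification (step 9) then inverts the min-max mapping of Equation (18) to return the fused membership image to gray levels.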

Color Image Fusion Algorithm
The complete fusion algorithm for the combination of gray (MRI) and color images (PET/SPECT) is arranged sequentially as shown in Figure 2.

1. Consider MRI and PET/SPECT as the input images. The PET/SPECT image is converted into the HSV color model, with hue (H), saturation (S), and value (V) components.
2. Take the MRI image and the V-component image and perform the grayscale image fusion algorithm (steps 2 to 9 of Section 4.1) to obtain the fused component V1.
3. Finally, obtain the colored fused image by combining the fused brightness component V1 with the unchanged hue (H) and saturation (S) components and converting back to the RGB color model.
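The HSV round trip above can be sketched minimally as follows, assuming `fuse_fn` is a grayscale fusion routine (e.g., the algorithm of Section 4.1) and that all images are normalized to [0, 1]; the per-pixel `colorsys` conversion is used purely for self-containment.

```python
import colorsys
import numpy as np

def fuse_gray_with_color(gray, rgb, fuse_fn):
    """Sketch of the color fusion scheme: convert the PET/SPECT RGB
    image to HSV, fuse its V channel with the gray MRI image via
    fuse_fn, keep H and S unchanged, and convert back to RGB.
    fuse_fn maps two [0, 1] gray images to one fused [0, 1] image."""
    h, w, _ = rgb.shape
    hsv = np.empty((h, w, 3))
    out = np.empty_like(rgb, dtype=float)
    for r in range(h):
        for c in range(w):
            hsv[r, c] = colorsys.rgb_to_hsv(*rgb[r, c])
    hsv[..., 2] = fuse_fn(gray, hsv[..., 2])   # fused brightness V1
    for r in range(h):
        for c in range(w):
            out[r, c] = colorsys.hsv_to_rgb(*hsv[r, c])
    return out
```

Keeping H and S fixed preserves the functional color map of the PET/SPECT image while the fused V channel injects the MRI's anatomical detail.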

Experimental Results and Discussion
This section presents a brief explanation of the effectiveness of the proposed method and a detailed comparison with various existing algorithms using performance metrics. In this paper, all input medical images are assumed to be perfectly registered, and experiments are performed on pairs of medical images of different modalities, with the data collected and downloaded from metapix and the whole brain atlas [44,45]. The fusion of two modalities of medical image provides a composite image, which is more useful for diagnosing diseases, tumors, lesion locations, etc.
In this article, we perform the new intuitionistic fuzzy set-based image fusion over various modalities of medical image datasets of dimension 256 × 256 using the IFCC fusion rule. The proposed fusion algorithm expands and enhances the intensity levels over a range, because some medical images are primarily dark. Varying the α value changes not only the intensity values but also the ratio of components in color images. A single medical image cannot provide all the required information regarding a disease; MIF is therefore required to gather all relevant and complete information in a single resultant image. The enhanced medical images are fused to obtain a single image with more complementary information and better quality.
The fused image is evaluated using subjective (visual) and objective (quantitative) analysis. The subjective analysis is based on visual appearance, while the objective analysis is performed with a set of performance metrics. In this paper, eight metrics are used: API [46], SD [46], AG [47], SF [48], MSF [49], CC [50], MI [51], and FS [48].
Let the input images be I1(g, h) and I2(g, h) and the fused image be Fused(g, h), each of dimension G × H.

API: quantifies the average intensity (brightness) of the fused image,

API = (1/(G·H)) Σg Σh Fused(g, h).

SD: represents the amount of intensity variation (contrast) in an image,

SD = sqrt((1/(G·H)) Σg Σh (Fused(g, h) − API)²).

AG: measures the degree of sharpness and clarity as the mean magnitude of the local intensity gradients.

SF: reflects the rate of change in the gray levels of the image and also measures image quality; for better performance, the SF value should be high. It is computed as SF = sqrt(RF² + CF²), where RF and CF are the row and column frequencies (root-mean-square gray-level differences along rows and columns, respectively).

MSF: measures the overall activity level present in the fused image, extending SF with diagonal frequency components.

CC: represents the similarity between a source image and the fused image. The range of CC is [0, 1]; the value is 1 for high similarity and decreases as the dissimilarity increases.

MI: calculates the total information transferred from the input images to the fused image, MI = MI(I1; Fused) + MI(I2; Fused), where MI(I1; Fused) and MI(I2; Fused) are the mutual information between each input image and the fused image. For better performance, the MI value should be high.
FS: FS is introduced to measure the symmetry of the fused image with respect to the source images. If the value of FS is close to 2, this indicates both input images equally contribute to the fused image. Therefore, the fused image quality will be better.
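Under the standard definitions above, several of the simpler metrics can be sketched directly; MSF, MI, and FS are omitted because their exact forms in [48,49,51] are not reproduced here.

```python
import numpy as np

def api(f):
    """Average pixel intensity (mean brightness) of an image."""
    return f.mean()

def sd(f):
    """Standard deviation (contrast) of an image."""
    return f.std()

def avg_gradient(f):
    """Average gradient: mean magnitude of local intensity change."""
    gx = np.diff(f, axis=1)[:-1, :]
    gy = np.diff(f, axis=0)[:, :-1]
    return np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0))

def spatial_frequency(f):
    """SF = sqrt(RF^2 + CF^2) from row and column frequencies."""
    rf = np.sqrt(np.mean(np.diff(f, axis=1) ** 2))
    cf = np.sqrt(np.mean(np.diff(f, axis=0) ** 2))
    return np.sqrt(rf ** 2 + cf ** 2)

def cross_corr(a, f):
    """Normalized cross-correlation between a source image and the
    fused image; 1 indicates identical (mean-centered) structure."""
    a = a - a.mean()
    f = f - f.mean()
    return np.sum(a * f) / np.sqrt(np.sum(a ** 2) * np.sum(f ** 2))
```

For instance, a simple 4 × 4 gradient image has RF = 1 and CF = 4, giving SF = sqrt(17), and correlates perfectly with itself (CC = 1).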

Subjective-Type Evaluation
The subjective evaluation is carried out on the various input datasets shown in Figure 3. Five groups of datasets are used. Group 1 consists of MR-T1 and MR-T2 datasets (Figure 3(p1-p4,q1-q4)); group 2 of MR-T1 and MRA images (Figure 3(p5,q5)); group 3 of MRI and CT images (Figure 3(p6-p7,q6-q7)); group 4 of MRI and PET images (Figure 3(p8-p11,q8-q11)); and group 5 of MR-T2 and SPECT datasets (Figure 3(p12-p16,q12-q16)). The performance of the proposed fusion scheme is compared with various existing algorithms, namely the PCA method, Naidu's [52] method, Sanjay's [29] method, the contourlet transform (CONT) method, Chaira's IFS [53] method, Bala's IFS [54] method, Sugeno's IFS [55] method, and Zhu's [56] method, in Figure 4. The PCA-based fusion results are shown in the first column (Figure 4(a1-a16)), the DWTPCA-based results in the second column (Figure 4(b1-b16)), the DWT-with-fuzzy results in the third column (Figure 4(c1-c16)), the CONT-based results in the fourth column (Figure 4(d1-d16)), Chaira's IFS results in the fifth column (Figure 4(e1-e16)), Bala's IFS results in the sixth column (Figure 4(f1-f16)), Sugeno's IFS results in the seventh column (Figure 4(g1-g16)), and the PC-NSCT results in the eighth column (Figure 4(h1-h16)). Finally, the proposed fusion results are exhibited in the last column (Figure 4(i1-i16)). Subjective analysis relates to human perception; the proposed fusion method produces fused images with greater contrast, luminance, and better edge information than the existing methods, with clearly visible tumor regions in Figure 4(i4,i8,i12,i13,i16).
The proposed fusion results show that the quality of the fused image is better than that of the other existing fusion methods. Among all the groups of medical image datasets, the first group consists of T1- and T2-weighted MR images; fusing these two images shows soft tissue and an enhanced tumor region. The second group consists of MR-T1 and MRA images. MR-T1 images produce delicate tissue data but do not detect abnormalities, while MRA images easily detect abnormalities but, due to low spatial resolution, cannot produce the tissue information. Fusing these images (MR-T1 and MRA) yields complementary information with detailed lesion locations in the fused image.
The third group dataset consists of MRI and CT images, which are taken from reference [44]. MRI imaging produces delicate tissue data, while CT imaging gives bone information. The combination of these two images produces a quality fused image, which will be more useful for the diagnosis of disease. The fourth and fifth medical image datasets are MRI-PET and MR-T2-SPECT images. The fusion of these combinations to get more complementary information is achieved in a fused image and highlights the tumor regions, which will be helpful for medical-related problems.


Objective Evaluation
The fused image quality cannot be completely judged by subjective analysis. Therefore, objective evaluation is preferable for better analysis of fused images using various quality metrics. The results of the proposed method and the other existing methods are listed in Tables 2-9. The average pixel intensity (API) values are tabulated in Table 2. It can be observed that the proposed fusion method provides the highest API values, which indicates that the fused image has good quality. The graphical representation of the API values is shown in Figure 5a. The standard deviation values are tabulated in Table 3. The proposed method's SD values are greater than those of the other existing techniques, which indicates that the output fused image has better texture details; this is presented graphically in Figure 5b.
The average gradient (AG) values are shown in Table 4. It can be seen that the proposed method gives the highest AG values, which reveals that more complementary information is presented in a fused image, and this is presented graphically in Figure 5c.
The SF values are listed in Table 5. It can be seen that the SF of the proposed method is superior to that of the other methods, which indicates that texture changes and detailed differences are reflected in the fused image; this is shown graphically in Figure 6. The MSF values are listed in Table 6. The MSF values of the proposed method are greater than those of the other methods, which indicates that the fused image has more detailed information; this is observed graphically in Figure 7a.
The CC, MI, and FS values for all datasets and existing fusion methods are listed in Tables 7-9. The average CC, MI, and FS values of the proposed fusion method are better, and moderate for some datasets, which shows that the proposed fused image carries more information and symmetry. The graphical representation of CC, MI, and FS is shown in Figure 7b.

Ranking Analysis
The proposed intuitionistic fuzzy set-based multimodal medical image fusion algorithm provides better results than the other methods across the various quality metrics. Based on the objective evaluation in Section 5.2, each method is ranked by the average value of each quality metric, as shown in Table 10. The best-performing fusion method is ranked 1, and the worst-performing is ranked 9. Table 10. Performance evaluation of the fusion methods in the ranking strategy.


Conclusions
In this article, a novel IFS-based medical image fusion process was proposed, comprising four steps. First, the registered input images were fuzzified. Second, intuitionistic fuzzy images were created using the optimum value α obtained from IFE. Third, a fused IFI was obtained using the IFCC fusion rule with block processing. Fourth, the defuzzification operation was performed to produce the final enhanced fused image. The proposed method was compared with various existing methods, such as PCA, DWTPCA, DWT + Fuzzy, CONT, Chaira's IFS, Bala's IFS, Sugeno's IFS, and PC-NSCT. These existing algorithms do not provide a quality fused image and suffer from various drawbacks, such as blocking artifacts, poor visibility of tumor regions, invisible blood vessels, low contrast, and vague boundaries. The proposed method overcomes these difficulties and provides a better enhanced fused image without uncertainties.
The experimental results show that the proposed fusion method gives better fusion performance in terms of both subjective and objective analysis. In Figure 4(i4), the soft tissue and tumor regions are clearly enhanced, and the obtained SD (79.83) and SF (34.60) values are large (Tables 3 and 5, respectively). In Figure 4(i5), the soft-tissue and lesion structure information is reflected exactly in the fused image, with an obtained quantitative value of 75.38 (Table 2). In Figure 4(i8), the anatomical and functional information is visible with high quality in the fused image, and the attained SD, AG, SF, MSF, MI, and FS values are higher (59.54, 5.80, 24.92, 51.53, 3.5689, and 1.8658; Tables 3-9). In Figure 4(i16), the tumor region is clearly enhanced, with high performance metric values compared to the other existing fusion methods. As discussed, the heart of the proposed fusion algorithm is the calculation of the intuitionistic fuzzy membership function, obtained through the optimum value α using IFE. For better diagnosis and superior outcomes, the proposed fusion method can be extended to fuse different medical datasets based on advanced fuzzy sets, such as the neutrosophic fuzzy set and the Pythagorean fuzzy set, and new fusion rules.