Image Fusion Algorithm Selection Based on Fusion Validity Distribution Combination of Difference Features

Abstract: Existing image fusion models cannot reflect the demands that the diverse attributes (e.g., type and amplitude) of difference features place on algorithms, which leads to poor or even invalid fusion results. To address this problem, this paper proposes the construction and combination of fusion validity distributions of difference features based on intuition-possible sets, so as to select the algorithm with the better fusion effect for dual-mode infrared images. Firstly, the distances between the amplitudes of the difference features of the fused images and those of the source images are calculated. According to the fusion result of each algorithm, these distances are divided into three levels, which are regarded as intuition-possible sets of the fusion validity of the difference features, and a novel construction method of the fusion validity distribution based on intuition-possible sets is proposed. Secondly, in view of the multiple amplitude intervals of each difference feature, a distribution combination method based on intuition-possible set ordering is proposed. The score results of the difference features are aggregated by a fuzzy operator, and the joint drop shadows of the score results are obtained. Finally, the experimental results indicate that the proposed method can optimally select the algorithm with a relatively better effect on the fusion of the difference features according to the varied feature amplitudes.


Introduction
The fusion of dual-mode infrared images can synthesize their respective imaging advantages, which is conducive to effective storage of detection images and significantly improves the imaging quality and detection accuracy of the detection system [1,2]. It has been widely used in military surveillance, infrared countermeasures, industrial monitoring, fault detection, etc. Fusing infrared intensity and polarization images of a scene is an effective way to ensure that the complete information can be fully utilized, and the complementary advantages of the two images can improve the detection and recognition of targets [3-5]. However, due to the differences in imaging mechanisms and the diversity of detection environments and targets, the difference features of the two images of the same scene are complicated and varied. Each difference feature exhibits a different fusion performance (fusion validity) under each algorithm, so it is difficult to meet these varied demands with a static algorithm; they can only be met by selecting an appropriate algorithm according to the difference features, so as to satisfy the fusion requirements of complex infrared imaging scenes. At present, dynamically and optimally adjusting the fusion algorithm according to the different attributes of the difference features is a key technology, and a hot topic, for improving the pertinence and effectiveness of dual-mode infrared image fusion [6].
At present, for dual-mode infrared image fusion, a better fusion algorithm is typically determined through qualitative analysis of the relationship between some known difference feature types and multiple fusion algorithms. By combining the support value transform (SVT) of the brightness difference with the Top-Hat transform, Reference [7] improved the contrast of fused images. Reference [8] analyzed the fusion performance of multi-resolution transform domain methods such as DWT, SWT, CVT, CT, DTCWT and NSCT, established the correspondence between these algorithms and difference features, and achieved good fusion results. Xiang [9] considered the influence of statistical difference features in the NSCT domain on an adaptive dual-channel unit-linking PCNN, which enriched the details of fused images. Meng [10] described the relationship between the brightness difference feature and the fusion algorithm with saliency maps and points of interest, and preserved salient, bright targets in the fused image. Reference [11] studied the mapping relationship between the edge details of visible and infrared images and a fusion scheme based on compressed sensing. The above algorithms achieve a good fusion effect under certain circumstances.
The above studies all use crisp values to describe the fusion effect of difference features. Fuzzy sets can be applied to overcome the inability of crisp values to resolve the uncertain relationship between a difference feature and a fusion algorithm. An improved fuzzy set was used to fuse the low-frequency part in infrared and visible image fusion [12]. In addition, based on basic human judgment and membership functions, fuzzy logic inference was used for synchronous multi-band image fusion [13]. Therefore, fuzzy set theory, combined with multi-resolution transform domain methods, is usually used instead of crisp values to solve the image fusion problem.
For actual target detection, however, another attribute of the difference features between the two kinds of images, 'amplitude', has as great an effect on the fusion results as the attribute known as 'type'. In practice, the types and amplitudes of the difference features also change randomly, especially in a dynamic detection scene, which makes the changes of the various attributes even more complicated. Existing methods only consider the fusion effect of the single attribute, type, on algorithms; they cannot reflect the impact of different attributes (such as type and amplitude) on algorithm selection and are unable to quantify the changing state of the fusion validity of difference feature attributes, leading to poor or invalid fusion effects. Therefore, the fusion quality of dual-mode infrared images can only be improved when a targeted algorithm is optimally selected according to the different attributes of the difference features.
For the fusion of infrared polarization and intensity images, a pre-selected fusion algorithm cannot maintain a better fusion performance across different amplitudes of the difference features, so the effect of a fusion algorithm is not fixed and changes dynamically with the amplitudes of the difference features. For actual detection images, the fusion validity used to describe the degree of influence of a difference feature value is mostly predicted and estimated from the fusion results of existing, limited, similar scene images, so the measurement of fusion validity is predictive and possibilistic. Therefore, a possibility distribution is needed to describe the changing process of fusion validity [14-16]. However, although a possibility distribution can express the dynamic changes of fusion validity, it only describes two-sided properties; it cannot describe the middle state of the fusion validity of difference features [17-19], which has a great influence on fusion algorithm selection. To solve this problem, some researchers have used intuitionistic fuzzy sets to model and manage uncertainty, so that the process of computing with words is completed in the field of group decision-making. Tirupal et al. studied a multi-modal medical image fusion model based on Yager's intuitionistic fuzzy sets [20], in which the initial images are converted into complementary intuitionistic Yager fuzzy images, and a new objective function, intuitionistic fuzzy entropy, is used to obtain the parameters of the membership and non-membership functions. Reference [21] put forward a new infrared and visible image fusion method employing the non-subsampled contourlet transform (NSCT) with intuitionistic fuzzy sets, which outperformed advanced fusion methods in terms of objective assessment and visual quality.
Therefore, this paper takes advantage of intuitionistic fuzzy set and possibility theory and proposes a novel construction method of the fusion validity distribution based on intuition-possible sets to solve the algorithm selection problem. For multiple difference features, a distribution combination method based on intuition-possible set ordering is proposed. The fusion validity scores of the algorithms on multiple intervals of difference feature amplitude are calculated, and the comprehensive values of the algorithms for different feature amplitudes are obtained. On this basis, for each difference feature, the algorithm with a relatively better effect is selected. The flow chart of the method is shown in Figure 1, and the particular terms used are described in Table 1.

Table 1. Description of particular terms.

Difference feature: The difference information between the infrared polarization and intensity images.
Diverse attribute: The type and amplitude of the difference features.
Type of difference features: The brightness, edge and detail features, including gray mean, standard deviation, edge intensity and spatial frequency.
Amplitude of difference features: The absolute difference of the feature pixel intensity values of the two types of images.
Fusion validity: Measures the effective degree to which a specific fusion algorithm fuses the features of the source images into the fused image.
Fusion validity distribution: Reflects the changing process of the fusion validity of the attributes of the image difference features with respect to the algorithms.
Intuition possible sets: Realize a quantitative description of the changing process of the fusion effect of the image difference features with respect to the algorithms.
Relatively better effect: Considering both objective and subjective evaluation, the algorithm has the best fusion effect.
Distribution construction: The changing process is constructed based on the new method (intuition possible sets).
The main contributions of this paper can be highlighted as follows: (1) Intuition-possible sets are built to model the fusion validity of the attributes of image difference features. (2) A novel construction method of the fusion validity distribution based on intuition-possible sets is proposed, which can reflect the changing process of the fusion validity of the attributes of the image difference features with respect to the algorithms. (3) A distribution combination method based on intuition-possible set ordering is put forward to solve the problem of optimally selecting the algorithm with a relatively better effect on the fusion of the difference features according to the varied attribute values of the image features, which provides a basis for algorithm classification and mimicry bionic fusion.

The rest of this paper is organized as follows: Section 2 briefly analyzes the types of difference features of infrared polarization and intensity images. Section 3 determines the intuition-possible set of the fusion effect according to the distances of the amplitudes of the difference features, proposes a fusion validity distribution construction method, and puts forward a distribution combination method based on intuition-possible set ordering; experimental results and comparisons are also given and analyzed. Conclusions are presented in Section 4.

Determination of the Type of Difference Feature
The main reason for the formation of the difference features is the differences among the imaging characteristics, including the radiation difference between target and background, the atmospheric transmission difference and the imager response difference. As can be seen from the eight groups of closely registered infrared polarization and intensity experimental images (with a uniform size of 256 × 256) in Figure 2, the infrared polarization images have sharp edge and detail features but lack sufficient brightness, whereas the infrared intensity images carry significant thermal radiation luminance information but lack sufficient edge and detail features. Thus, the differences in brightness, edge features and details between the infrared polarization and intensity images are evident. The difference features selected in this paper are gray mean, standard deviation, edge intensity and spatial frequency, defined as follows:

• Gray mean: In a grayscale image, the brightness information changes continuously from dark to bright. The difference gray mean is the absolute value of the difference of the means of all pixel intensity values of the two types of images in the dual-mode infrared images, so it effectively reflects the change of the brightness difference between the images.
• Edge intensity: Edge information is the contour structural feature that human vision recognizes, and its distributions in the two types of images are very different. The difference edge intensity is the absolute value of the difference in the edge amplitude intensity of the two types of images. Among the commonly used edge extraction operators, this paper selects the Sobel operator to extract the edge amplitude intensity and characterize the change of the edge feature difference between the images.
• Standard deviation: The difference standard deviation reflects how dispersed the gray levels of the dual-mode infrared images are relative to their mean gray level. The larger the difference standard deviation, the more dispersed the gray level distribution, the greater the contrast between the two kinds of images and the more information is available, that is, the better the fusion effect.
• Spatial frequency: The difference spatial frequency reflects the sharpness of the changes of pixel gray values in the dual-mode infrared images; it effectively represents image texture information and reflects the image's ability to describe the contrast of small details. The greater the difference spatial frequency, the clearer the fused image.

The above features describe the image information well, so this paper adopts these four difference features, labelled T1, T2, T3 and T4.
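As a concrete illustration, the four features above can be sketched as follows with NumPy and SciPy; the difference feature amplitude is then the absolute difference of each feature between the two source images. The exact normalization used in the paper may differ.

```python
import numpy as np
from scipy.ndimage import sobel

def gray_mean(img):
    """Mean of all pixel intensity values."""
    return float(img.mean())

def std_dev(img):
    """Standard deviation of the gray levels."""
    return float(img.std())

def edge_intensity(img):
    """Mean Sobel gradient magnitude as the edge amplitude intensity."""
    g = img.astype(float)
    gx, gy = sobel(g, axis=1), sobel(g, axis=0)
    return float(np.sqrt(gx ** 2 + gy ** 2).mean())

def spatial_frequency(img):
    """Root of row- and column-frequency energy of gray-value changes."""
    g = img.astype(float)
    rf = np.sqrt(np.mean(np.diff(g, axis=1) ** 2))  # row frequency
    cf = np.sqrt(np.mean(np.diff(g, axis=0) ** 2))  # column frequency
    return float(np.sqrt(rf ** 2 + cf ** 2))

def difference_features(img_p, img_i):
    """Amplitudes T1..T4: absolute feature differences between the images."""
    feats = (gray_mean, edge_intensity, std_dev, spatial_frequency)
    return [abs(f(img_p) - f(img_i)) for f in feats]
```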

Intuition Possible Sets
Let X be the universe of discourse and x a primary variable taking values in X. If there are two mappings

π^1_A: X → [0, 1], x ↦ π^1_A(x), (1)
π^2_A: X → [0, 1], x ↦ π^2_A(x), (2)

meeting the condition

0 ≤ π^1_A(x) + π^2_A(x) ≤ 1, for all x ∈ X, (3)

then A is called an intuition possible set in the universe of discourse X, and it can be expressed as Equation (4):

A = {<x, π^1_A(x), π^2_A(x)> | x ∈ X}. (4)

Here, π^1_A and π^2_A are the possibility distribution and non-possibility distribution of A, respectively, and π^1_A(x) and π^2_A(x) represent the possibility degree and non-possibility degree of x. For discrete x_j (j = 1, 2, ..., n), we abbreviate π^1_A(x_j) and π^2_A(x_j) as π^1_j and π^2_j, so that <π^1_j, π^2_j> is the ordered pair of possibility degree and non-possibility degree of x_j.
For infrared polarization and intensity image fusion, X represents the different amplitude values of a difference feature, and the intuition possible set A is the set of high fusion effect of the difference feature with respect to the algorithms. Then π^1_A(x) represents the possibility degree of high fusion validity of the algorithms when the difference feature amplitude equals x, and π^2_A(x) is the corresponding non-possibility degree of high fusion validity, that is, the possibility degree of low fusion validity at amplitude x. The hesitancy degree π^3_A(x) = 1 − π^1_A(x) − π^2_A(x) is the possibility degree of medium fusion validity at amplitude x.
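The ordered pair <π^1, π^2> and its hesitancy degree can be modelled with a small helper type. This is a sketch following the standard intuitionistic convention π^3 = 1 − π^1 − π^2, not code from the paper.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class IntuitionPossible:
    """Ordered pair <pi1, pi2> of possibility / non-possibility degrees."""
    pi1: float  # possibility degree of high fusion validity
    pi2: float  # non-possibility degree (possibility of low fusion validity)

    def __post_init__(self):
        # Defining condition of an intuition possible set (Eq. (3)).
        assert 0.0 <= self.pi1 <= 1.0 and 0.0 <= self.pi2 <= 1.0
        assert self.pi1 + self.pi2 <= 1.0

    @property
    def pi3(self):
        """Hesitancy degree: possibility of medium fusion validity."""
        return 1.0 - self.pi1 - self.pi2
```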
We first divide the fused image and the source images into a series of non-overlapping 16 × 16 image blocks. Then the distances of the amplitudes of the difference features between the fused image and the source images are calculated by the distance similarity of Equation (5),
where D_X is the n-dimensional vector of the distances of the amplitudes of the difference feature over all image blocks, X^i_f is the mean of the amplitudes of the difference feature for the i-th image block of the fused image, and X^i_P and X^i_I are, respectively, the means of the amplitudes of the difference feature for the i-th image block of the infrared polarization and infrared intensity images. X can be the difference feature T1, T2, T3 or T4.
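The block-wise computation can be sketched as follows. Since Equation (5) is not reproduced here, the per-block distance is taken, as one plausible reading, to be the smaller absolute difference between the fused block's feature amplitude and those of the two source blocks, normalized to [0, 1] as in the scatter plots; the `feature` argument is any per-block feature function (e.g., the mean).

```python
import numpy as np

def blocks(img, size=16):
    """Split an image into non-overlapping size x size blocks."""
    h, w = img.shape
    return [img[r:r + size, c:c + size]
            for r in range(0, h - size + 1, size)
            for c in range(0, w - size + 1, size)]

def feature_distances(fused, img_p, img_i, feature, size=16):
    """Per-block distance of a feature amplitude between the fused image and
    the closer of the two source images (an assumed form of Eq. (5))."""
    d = []
    for bf, bp, bi in zip(blocks(fused, size),
                          blocks(img_p, size), blocks(img_i, size)):
        xf, xp, xi = feature(bf), feature(bp), feature(bi)
        d.append(min(abs(xf - xp), abs(xf - xi)))
    d = np.asarray(d)
    # Normalize for plotting against the feature amplitude, as in Figure 3.
    return d / d.max() if d.max() > 0 else d
```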
A scatter plot of the four types of difference features can be obtained as shown in Figure 3. Here, the amplitudes of the difference features are assigned to the abscissa axis, and the normalized distances of the amplitudes of the difference features between a fused image (e.g., that of A5) and the source images are given on the vertical axis.

Distribution Construction
For infrared polarization and intensity images, better fusion results (that is, higher fusion validity) of the fused images are shown as follows: (i) the brightness feature value of the fused image is as close as possible to that of the infrared intensity image; (ii) the edge and texture feature values of the fused image are as close as possible to those of the infrared polarization image. Algorithm 1 summarizes the procedure of the distribution construction step by step in pseudo-code format.

Algorithm 1: Distribution construction
Input: Infrared polarization image, infrared intensity image and fused image
Output: Fusion validity distribution
Step 1: Calculate the amplitudes of the difference features // T1, T2, T3 and T4
Step 2: Calculate the distances of the amplitudes of the difference features using Equation (5)
Step 3: Build the intuition possible sets on the fusion effect
Step 4: Construct the fusion validity distribution
  a. Determine the number of image blocks N_Xk
  b. Initialize n^1_Xk, n^2_Xk and n^3_Xk
  c. Update n^1_Xk, n^2_Xk and n^3_Xk using Equations (6)-(8)

The distances of the amplitudes of the difference features can be divided into three levels. A small distance indicates high fusion validity and a large distance indicates low fusion validity; when the distance falls in the middle level, we cannot determine the fusion validity to be high or low under this circumstance, i.e., it is in the medium state.
The amplitudes of the difference features in Figure 3 can be divided into K intervals, and the total number N_Xk of image blocks included in each amplitude interval X_k (k = 1, 2, ..., K) can be obtained. Meanwhile, the numbers of image blocks n^1_Xk, n^2_Xk and n^3_Xk included in the three distance levels can be obtained for each amplitude interval. The fusion validity of the intuition possible set is then formulated as

π^1_k = n^1_Xk / N_Xk, π^2_k = n^2_Xk / N_Xk, π^3_k = n^3_Xk / N_Xk, (6)-(8)

where π^1_k, π^2_k and π^3_k are the possibility degrees of high, low and medium fusion validity on interval X_k, respectively. The fusion validity distribution of the intuition possible set can then be obtained as shown in Figure 4. Similarly, we can get the fusion validity distribution of each difference feature for A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11 and A12. Taking difference feature T1 as an example, Figure 5 shows the fusion validity distribution of T1 for the different algorithms.
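Assuming Equations (6)-(8) normalize the three block counts by the interval total, as the surrounding text suggests, Step 4 of Algorithm 1 can be sketched as:

```python
def validity_distribution(counts):
    """counts: one (n1, n2, n3) tuple per amplitude interval, where n1, n2, n3
    are the block counts in the high / low / medium distance levels.
    Returns one (pi1, pi2) pair per interval; pi3 follows as 1 - pi1 - pi2.
    This is a plausible reading of Eqs. (6)-(8), not the paper's exact code."""
    dist = []
    for n1, n2, n3 in counts:
        n = n1 + n2 + n3  # total blocks N_Xk in this amplitude interval
        dist.append((n1 / n, n2 / n) if n else (0.0, 0.0))
    return dist
```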
Let <π^1_ik, π^2_ik> denote the intuition possible set of algorithm A_i (i = 1, 2, ..., m) on the k-th amplitude interval (k = 1, 2, ..., K) of difference feature T1. The total evaluation value V_k of the amplitude of difference feature T1 for algorithm A_i is then expressed through these pairs; obviously, each V_i (i = 1, 2, ..., m) is also an intuition-possible set. Let A = <π^1_ik, π^2_ik> be an intuition-possible set; the score function and accuracy function of A are defined as [42,43]

s(A) = π^1_ik − π^2_ik, m(A) = π^1_ik + π^2_ik.

For two intuition possible sets A_i and A_j [44-46]:
1. If s(A_i) < s(A_j), then A_j is superior to A_i, denoted by A_i < A_j;
2. If s(A_i) = s(A_j) and m(A_i) < m(A_j), then A_j is superior to A_i, denoted by A_i < A_j;
3. If s(A_i) = s(A_j) and m(A_i) = m(A_j), then A_i is equal to A_j.

We can calculate the score results from the 1st to the 10th amplitude interval of difference feature T1 for each fusion algorithm A_i, as shown in Table 2; Table 3 gives the score results from the 11th to the 20th interval. According to Tables 2 and 3, for each amplitude interval of difference feature T1, the nearer the score of an algorithm approximates 1, the better the fusion effect of this algorithm; the nearer the score approximates −1, the worse the fusion effect. Likewise, we can compute the scores on each amplitude interval of the other three classes of difference features based on intuition possible set ordering. The advantage of such an approach is that the mapping between the amplitude of a difference feature and the fusion algorithms is known clearly: when the amplitude of the difference feature changes, the fusion validity of each algorithm also changes accordingly. The total evaluation values of the amplitudes of the difference features for each algorithm A_i are presented in Table 4 and depicted pictorially in Figure 6. According to Table 4 and Figure 6, with the intuition-possible set ordering method proposed in this study, for difference feature T1 the A12 algorithm outperforms the other algorithms, while for difference features T2, T3 and T4 the A6 algorithm has obvious advantages in the fidelity of salient information (including contrast, edge features and texture) and human visual effect.
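The score and accuracy functions and the resulting ranking can be sketched as follows, with each intuition possible set represented as a (π^1, π^2) tuple; the tie-breaking follows the ordering rules stated above.

```python
def score(a):
    """Score function s(A) = pi1 - pi2, ranging over [-1, 1]."""
    return a[0] - a[1]

def accuracy(a):
    """Accuracy function m(A) = pi1 + pi2."""
    return a[0] + a[1]

def superior(ai, aj):
    """True if aj is superior to ai: compare scores first,
    break ties with accuracy."""
    if score(ai) != score(aj):
        return score(ai) < score(aj)
    return accuracy(ai) < accuracy(aj)

def best_algorithm(values):
    """values: {algorithm_name: (pi1, pi2)} -> name ranked highest."""
    return max(values, key=lambda k: (score(values[k]), accuracy(values[k])))
```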
The disjunctive fuzzy operator is applied to the score results of the difference features to select the optimal fusion algorithm with the largest score. The aggregation of the score results of the four difference features of the source images is presented in Figure 7, and the joint drop shadows of this aggregation are shown in Figure 8. By counting the number of times each fusion algorithm appears in the joint drop shadow graph and the corresponding score values, the proportion f_i of fusion algorithm A_i in the area and its average score value are calculated. Because both values are related to the performance of algorithm A_i, the fusion index E_i is constructed, as shown in Equation (11). The fusion indexes of the different fusion algorithms are summed over all the difference feature amplitudes, and the fusion algorithm corresponding to the highest value is the optimal fusion algorithm.
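A minimal sketch of the aggregation step, assuming the probabilistic sum as the disjunctive fuzzy operator (the paper does not name a specific operator here) and assuming the scores have been rescaled from [−1, 1] to [0, 1] beforehand:

```python
from functools import reduce

def disjunctive(a, b):
    """Probabilistic sum, a common disjunctive fuzzy operator (assumed)."""
    return a + b - a * b

def aggregate_scores(per_feature_scores):
    """per_feature_scores: {alg: [score_T1, ..., score_T4]} with scores
    already mapped into [0, 1]. Returns the aggregated score per algorithm."""
    return {alg: reduce(disjunctive, s) for alg, s in per_feature_scores.items()}

def select_algorithm(per_feature_scores):
    """Pick the algorithm with the largest aggregated score."""
    agg = aggregate_scores(per_feature_scores)
    return max(agg, key=agg.get)
```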

Experimental Results and Comparisons
To test the proposed fusion validity distribution construction and combination method, we select two groups of infrared polarization and intensity images of different scenes (shown in Figure 9) for experimental verification. Figures 10 and 11 show the fused images produced by the above fusion algorithms for each group of verification images, and the fusion index E_i of each algorithm for each group is shown in Tables 5 and 6.

Table 5. E_i of each fusion algorithm for the first group. Table 6. E_i of each fusion algorithm for the second group.

We adopt both subjective and objective assessment to analyse the fused results of the algorithm selected by the proposed fusion validity distribution construction and combination method. It can be observed from Figure 10 that the fusion results of the A1, A6 and A12 algorithms retain the brightness of the infrared intensity image, while the other fused images are dark as a whole. Under the premise of preserving high brightness and contrast in the fused image, algorithm A6 has better texture detail and edge contour preservation (particularly in the box region of Figure 10), as well as better visual effect and resolution. For the box region of Figure 11, where the source images differ greatly, algorithms A5 and A6 have better visual brightness, contrast, texture detail and edge contour preservation than the other algorithms; in particular, the A5 algorithm preserves the texture details completely. The other fused images are dark, leading to a poor human visual effect.

It can be observed from Tables 5 and 6 and Figure 12 that, for the first group, in the five combination forms (T1 and T2, T1 and T3, T1 and T4, T2 and T3, T2 and T4), the fusion indexes of fusion algorithm A6 are higher than those of the other algorithms, and A6 is clearly much larger than the other algorithms in the final calculation, so it is concluded that A6 is the optimal fusion algorithm for the first group. In the same way, A5 is the optimal fusion algorithm for the second group. To prove the effectiveness and advantages of the proposed method, a comparative analysis was made on the first group of images against the existing fusion validity based on fuzzy theory, from which we obtain the fusion validity scatter plots of the four difference features shown in Figure 13.
The difference feature information and fuzzy operators in the above cases are used to rank the fusion algorithms. The ranking order of the fusion algorithms obtained with the fusion validity based on fuzzy theory is slightly different from the result obtained in this paper, but the best algorithm of the two methods is the same, namely A6, which shows that the method proposed in this paper is effective.
To further demonstrate the correctness of the ranking order, we compare the values of eight objective metrics X_j (j = 1, ..., 8): information entropy, Q_0, Q_w, Q_E, VIFF, SSIM, mutual information and average gradient. The fusion results of the 12 fusion algorithms on the two groups of source images are evaluated, and we employ the grade score R_i (as shown in Formula (12)) to take all evaluation indexes into account: according to the ranking of each fusion algorithm on each metric value, the optimal fusion algorithm is selected and compared with the fusion algorithm selected by the proposed method. The results are shown in Table 7. The following can be observed from the data in Table 7: the R of algorithm A6 in the first group is significantly smaller than that of the other fusion algorithms, which indicates that A6 is the optimal fusion algorithm of the first group, with the strongest overall performance. With regard to the suboptimal algorithm A1, its X_2 and X_3 values are the highest, its X_1 value is the second highest, and its other values rank near the front, so it is reasonable that A1 is the suboptimal algorithm. Therefore, the proposed method based on intuition possible sets can be used to select the best fusion algorithm for dual-mode infrared images.
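One plausible reading of the grade score R_i in Formula (12), which is not reproduced in full here, sums each algorithm's rank over the eight metrics (rank 1 = best), so that a smaller R_i indicates stronger overall performance:

```python
def grade_scores(metric_values):
    """metric_values: {alg: [X1, ..., X8]} objective metric values, where
    higher values are better. Returns {alg: R_i}, the sum of each algorithm's
    per-metric ranks (rank 1 = best); smaller R_i = stronger overall.
    This is an assumed form of Formula (12), for illustration only."""
    algs = list(metric_values)
    n_metrics = len(next(iter(metric_values.values())))
    R = {a: 0 for a in algs}
    for j in range(n_metrics):
        # Rank all algorithms on metric j, best (largest) value first.
        ranked = sorted(algs, key=lambda a: metric_values[a][j], reverse=True)
        for rank, a in enumerate(ranked, start=1):
            R[a] += rank
    return R
```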
In the second group, the R of the A5 algorithm is significantly better than that of the other fusion algorithms, which is consistent with the experimental results in Tables 5 and 6 and verifies the correctness and effectiveness of the method in this paper. The algorithms with better fusion results under subjective and objective analysis are exactly those selected by the proposed fusion validity distribution construction and combination method, which shows that selecting the best fusion algorithm based on intuition possible sets is feasible and effective. According to the multiple attributes (e.g., type and amplitude) of the difference features, the proposed method can select a relatively better fusion algorithm in a targeted way.

Conclusions
This paper proposes a novel fusion validity distribution construction method based on intuition-possible sets. We consider the dynamic changes of the difference feature attributes (including type and amplitude) when selecting an algorithm and establish the mappings between the attributes of the difference features and the fusion algorithms. The proposed distribution realizes a quantitative description of the changing process of the fusion effect of the image difference features with respect to the algorithms.
For the multiple amplitude intervals of the difference features, this paper puts forward a distribution combination method based on intuition possible set ordering. By calculating the score function and accuracy function of the intuition possible sets on fusion validity, we can select the fusion algorithm with a relatively better result. This provides a new way towards algorithm classification and mimicry bionic fusion.
Future research is formulated with the following questions in mind: (1) In this article, four difference features are used to select the best of twelve fusion algorithms. In further research, recently published image fusion methods should also be considered when choosing the algorithm with the better fusion effect. (2) Although the method proposed in this paper has great advantages in selecting the optimal fusion algorithm, there is still some room for improvement. This paper utilizes a single fuzzy operator to aggregate the difference feature score results; it would be very interesting to apply fuzzy weighted averaging operators to the combination of the fusion validity distributions of the difference features, such as the fuzzy ordered weighted averaging operator and the Pythagorean fuzzy averaging and geometric averaging operators.

Institutional Review Board Statement:
We choose to exclude this statement since the study did not involve humans or animals.

Informed Consent Statement:
We all decide to submit this manuscript for publication.