Article

Infrared and Visual Image Fusion through Fuzzy Measure and Alternating Operators

Xiangzhi Bai 1,2
1 Image Processing Center, Beijing University of Aeronautics and Astronautics, Beijing 100191, China
2 State Key Laboratory of Virtual Reality Technology and Systems, Beihang University, Beijing 100191, China
Sensors 2015, 15(7), 17149-17167; https://doi.org/10.3390/s150717149
Submission received: 28 May 2015 / Revised: 29 June 2015 / Accepted: 9 July 2015 / Published: 15 July 2015
(This article belongs to the Section Physical Sensors)

Abstract

The crucial problem in infrared and visual image fusion is how to effectively extract the image features, including the image regions and details, and combine these features into the final fusion result to produce a clear fused image. To obtain an effective fusion result with clear image details, an algorithm for infrared and visual image fusion through a fuzzy measure and alternating operators is proposed in this paper. Firstly, the alternating operators constructed using the opening and closing based toggle operator are analyzed. Secondly, two types of the constructed alternating operators are used to extract the multi-scale features of the original infrared and visual images for fusion. Thirdly, the extracted multi-scale features are combined through a fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion features are incorporated into the original infrared and visual images using a contrast enlargement strategy. All the experimental results indicate that the proposed algorithm is effective for infrared and visual image fusion.

1. Introduction

Different imaging sensors produce images with different strengths [1,2,3,4,5,6,7,8,9]. Infrared imaging sensors capture important target regions that cannot be observed by visual imaging sensors, while visual images contain rich details that infrared images cannot provide. An effective and useful way to produce an image with both important regions and rich details is to fuse the infrared and visual images.
The image regions in infrared images and the rich details in visual images constitute the spatial information. Infrared and visual image fusion should effectively combine these spatial features to produce a clear fusion result with rich details. The crucial issue of infrared and visual image fusion is therefore how to effectively extract the image features, including the image regions and details; combining these features into the final fusion result would produce a clear fused image. To achieve this purpose, many algorithms have been proposed [10,11,12,13,14,15,16,17,18,19,20,21,22,23,24]. A direct averaging algorithm is simple and easy to implement [10,11], but image details may be heavily smoothed, so it cannot produce a clear fusion image with rich details. Wavelet, curvelet and shearlet transforms [12,13,14,15,16] extract image features through a pyramid decomposition of the original infrared and visual images; however, some useful image information may be lost, which may produce unclear fusion results. Segmentation-based algorithms [17,18,19] are also used for image fusion, but the effective segmentation results on which the fusion quality depends cannot be obtained easily. Independent component analysis, principal component analysis and Laplacian pyramid-based algorithms [20,21,22] extract the main information of the original infrared and visual images to produce the fusion image, but again some image information is lost, which may result in an unclear fusion image. Neural networks and other intelligent tools [23,24] have also been tried for image fusion; however, most of them are mainly used for multi-focus image fusion.
Mathematical morphology has been an important theory in the field of image analysis [25,26,27,28,29,30,31,32] and is also used for infrared and visual image fusion [4,5,10,25]. Pyramid decomposition strategies based on morphological operators are useful for image fusion [26,27]; although a clear fusion image may be produced, some image details may still be smoothed and artifacts may be introduced, which would affect further analysis of the fusion result. Top-hat transforms have been used or improved for the fusion of infrared and visual images [5,10,29,32], but some image details of the original images may not be well preserved in the final fusion image. A toggle operator using opening and closing as primitives was also used for infrared and visual image fusion [4]; it could preserve image details in the final fusion image, but some details are still smoothed. In all, most existing algorithms may not perform well in producing a clear fusion result with rich details.
Morphological alternating filters [26,27], the classical alternating operators, are defined by alternately applying the morphological opening and closing operators [26,27,30,31], so both bright and dark image features can be identified. However, because they smooth useful image information, the classical alternating filters may fail to identify some useful image features or may introduce noise into the resulting image, which would affect the performance of infrared and visual image fusion. Since alternating operators are effective morphological operators, a new way of constructing alternating operators with more effective feature extraction performance has been proposed [33]. The alternating operators constructed from the opening and closing based toggle operator could effectively extract the spatial features, including the image regions and details; these features could be used for fusion, which may produce a clear fusion result with rich details. Moreover, combining the multi-scale features is an important step in a morphological operator-based algorithm. The fuzzy measure used in this paper, the linear index of fuzziness [34,35,36,37,38,39], is defined based on the spatial information of images and could be used to quantify the importance of the multi-scale spatial features. Then, using this fuzzy measure, the important multi-scale spatial features could be effectively combined.
Based on the analysis above, an effective algorithm for infrared and visual image fusion using the fuzzy measure and the constructed alternating operators is presented in this paper. Firstly, based on an analysis of the alternating operators constructed from opening and closing based toggle operators, two types of alternating operators are used to extract the multi-scale fusion features. Secondly, the extracted multi-scale fusion features are combined through the fuzzy measure-based weight strategy to form the final fusion features. Finally, the final fusion image is produced by adjusting the contrast of the final fusion features. All the experimental results indicate that, because the alternating operators could effectively extract the features for fusion and the fuzzy measure could effectively combine them, the proposed algorithm performs effectively for infrared and visual image fusion.

2. Mathematical Morphology

2.1. Basic Morphological Operators

Morphological operators are useful tools for different applications [25,26,27,28,29,30,31]. They are usually defined on two functions: the original image f(x, y) and the structuring element B(u, v), where (x, y) and (u, v) are the pixel coordinates of f and B, respectively. The two basic morphological operators, dilation ($\oplus$) and erosion ($\ominus$), are defined using f and B as follows:
$$f \oplus B = \max_{u,v} \big( f(x-u, y-v) + B(u,v) \big)$$
$$f \ominus B = \min_{u,v} \big( f(x+u, y+v) - B(u,v) \big)$$
Two important morphological operators, opening and closing (denoted by $f \circ B$ and $f \bullet B$), are defined by composing the morphological dilation and erosion as follows:
$$f \circ B = (f \ominus B) \oplus B$$
$$f \bullet B = (f \oplus B) \ominus B$$
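As an illustration, a minimal Python sketch of these four operators is given below, assuming a flat (all-zero) square structuring element of side k, for which dilation and erosion reduce to moving-window maximum and minimum; the function names are illustrative rather than taken from the paper.

```python
import numpy as np
from scipy import ndimage

def dilate(f: np.ndarray, k: int) -> np.ndarray:
    # f ⊕ B: moving-window maximum for a flat k x k structuring element
    return ndimage.grey_dilation(f, size=(k, k))

def erode(f: np.ndarray, k: int) -> np.ndarray:
    # f ⊖ B: moving-window minimum
    return ndimage.grey_erosion(f, size=(k, k))

def opening(f: np.ndarray, k: int) -> np.ndarray:
    # f ∘ B = (f ⊖ B) ⊕ B: removes bright structures smaller than B
    return dilate(erode(f, k), k)

def closing(f: np.ndarray, k: int) -> np.ndarray:
    # f ∙ B = (f ⊕ B) ⊖ B: fills dark structures smaller than B
    return erode(dilate(f, k), k)
```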

2.2. Toggle Operator

Toggle operators are defined based on the results of morphological operators following different pre-defined rules. One toggle operator defined based on the opening and closing operator is as follows [32]:
$$TO_B(f)(x,y) = \begin{cases} f \circ B(x,y), & \text{if } f(x,y) - f \circ B(x,y) > f \bullet B(x,y) - f(x,y) \\ f \bullet B(x,y), & \text{if } f(x,y) - f \circ B(x,y) < f \bullet B(x,y) - f(x,y) \\ f(x,y), & \text{otherwise} \end{cases}$$
Opening and closing smooth the bright and dark image features, respectively, which changes the gray values of these features. This definition of the toggle operator indicates that the features smoothed by opening or closing with the larger gray-value change are retained in the toggle operator result. These retained image features usually represent the important features in the images [4,32].
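A direct translation of this rule, reusing the sketch above and assuming float-valued images, could look as follows:

```python
def toggle(f: np.ndarray, k: int) -> np.ndarray:
    # Opening/closing based toggle operator TO_B: at each pixel, keep the
    # primitive (opening or closing) whose gray-value change is larger.
    fo, fc = opening(f, k), closing(f, k)
    d_open = f - fo    # change caused by opening (>= 0 for flat SEs)
    d_close = fc - f   # change caused by closing (>= 0 for flat SEs)
    out = f.copy()
    out[d_open > d_close] = fo[d_open > d_close]
    out[d_close > d_open] = fc[d_close > d_open]
    return out         # pixels with equal changes keep f itself
```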

3. Alternating Operator by Opening and Closing Based Toggle Operator

3.1. Basic Operator

Because of the smoothing by opening, the bright image features identified by TO would have smaller gray values than the corresponding pixels of the original image. Thus, the bright image features identified by TO could be obtained as follows [4,32]:
$$IFB_B(f)(x,y) = \max \big( f(x,y) - TO_B(f)(x,y),\ 0 \big)$$
IFB contains the bright image features and has properties similar to those of the morphological opening operator [32]. Similarly, the dark image features identified by TO could be obtained as follows [4,32]:
$$IFD_B(f)(x,y) = \max \big( TO_B(f)(x,y) - f(x,y),\ 0 \big)$$
IFD contains the dark image features and has properties similar to those of the morphological closing operator [32].
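In the same illustrative sketch, and again assuming float-valued images, the two residues are one-liners:

```python
def ifb(f: np.ndarray, k: int) -> np.ndarray:
    # Bright features: positive where the toggle kept the (lower) opening
    return np.maximum(f - toggle(f, k), 0)

def ifd(f: np.ndarray, k: int) -> np.ndarray:
    # Dark features: positive where the toggle kept the (higher) closing
    return np.maximum(toggle(f, k) - f, 0)
```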

3.2. Multi-Scale Extension

Multi-scale structuring elements could be used by morphological operators to extract multi-scale image features. Let B1, …, Bn be a sequence of multi-scale structuring elements, where Bi represents the structuring element at scale i, 1 ≤ i ≤ n. Using the structuring element Bi at scale i, the multi-scale expression of the toggle operator is as follows [4,32]:
$$TO_{B_i}(f)(x,y) = \begin{cases} f \circ B_i(x,y), & \text{if } f(x,y) - f \circ B_i(x,y) > f \bullet B_i(x,y) - f(x,y) \\ f \bullet B_i(x,y), & \text{if } f(x,y) - f \circ B_i(x,y) < f \bullet B_i(x,y) - f(x,y) \\ f(x,y), & \text{otherwise} \end{cases}$$
By using the multi-scale toggle operator $TO_{B_i}$, the multi-scale expressions of IFB and IFD are as follows [4,32]:
$$IFB_{B_i}(f)(x,y) = \max \big( f(x,y) - TO_{B_i}(f)(x,y),\ 0 \big)$$
$$IFD_{B_i}(f)(x,y) = \max \big( TO_{B_i}(f)(x,y) - f(x,y),\ 0 \big)$$
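Since the sketches above already take the structuring-element side k as a parameter, their multi-scale versions amount to evaluating them over a sequence of sizes. The size schedule below (side 2i + 1 at scale i) is an assumption made for illustration; the paper only fixes a flat square shape and lets the size grow with the scale:

```python
def multiscale_ifb_ifd(f: np.ndarray, n: int = 3):
    # IFB and IFD feature stacks for scales i = 1..n, with an assumed
    # flat square structuring element of side 2*i + 1 at scale i.
    sizes = [2 * i + 1 for i in range(1, n + 1)]
    return [ifb(f, k) for k in sizes], [ifd(f, k) for k in sizes]
```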

3.3. Alternating Operators

IFB and IFD have properties similar to the morphological opening and closing operators, respectively. Using a strategy similar to the construction of alternating filters by alternately applying opening and closing, alternating operators based on alternately applying IFB and IFD could be defined as follows [33]:
$$AO_i^1(f) = IFD_{B_i}\big(IFB_{B_i}(f)\big)$$
$$AO_i^2(f) = IFB_{B_i}\big(IFD_{B_i}(f)\big)$$
$$AO_i^3(f) = IFB_{B_i}\big(IFD_{B_i}(IFB_{B_i}(f))\big)$$
$$AO_i^4(f) = IFD_{B_i}\big(IFB_{B_i}(IFD_{B_i}(f))\big)$$
Because IFB and IFD smooth the bright and dark image features, the constructed alternating operators sequentially smooth the bright and dark image features at different scales, which indicates that the constructed alternating operators could be used to identify the image features at different scales. This would be useful for different image analysis applications.
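Continuing the sketch, the four compositions are direct function compositions, with the scale i entering through the assumed structuring-element size k:

```python
def ao1(f, k): return ifd(ifb(f, k), k)          # AO_i^1 = IFD(IFB(f))
def ao2(f, k): return ifb(ifd(f, k), k)          # AO_i^2 = IFB(IFD(f))
def ao3(f, k): return ifb(ifd(ifb(f, k), k), k)  # AO_i^3 = IFB(IFD(IFB(f)))
def ao4(f, k): return ifd(ifb(ifd(f, k), k), k)  # AO_i^4 = IFD(IFB(IFD(f)))
```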

4. Infrared and Visual Image Fusion

4.1. Multi-Scale Fusion Feature Extraction

The two types of alternating operators $AO_i^1$ and $AO_i^2$ alternately apply the morphological opening and closing operators, so both could smooth the important bright and dark image features. In infrared and visual images, the effective features are bright or dark features. This means that these two types of alternating operators could be used to extract both the bright and dark features, which would be helpful for infrared and visual image fusion.
Because $AO_i^1$ and $AO_i^2$ could both smooth the bright and dark image features, the gray values of these features differ from their gray values in the original image. Thus, extracting image features by comparing the gray values of the morphological operator result with the original infrared or visual image [33] would also be effective for extracting the features for infrared and visual image fusion.
Let f and g represent the original infrared and visual images for fusion. For the infrared image f, bright image features with large gray values are progressively smoothed by the alternating operator $AO_i^1$ as the scale number increases. Thus, by using the first type of alternating operator $AO_i^1$, the bright features of the original infrared image identified at scale i could be expressed as follows:
$$BF_{AO_i^1}(f) = \max \big( f(x,y) - [AO_i^1(f)](x,y),\ 0 \big)$$
Also, by using the second type of alternating operator $AO_i^2$, the bright features of the original infrared image identified at scale i could be expressed as follows:
$$BF_{AO_i^2}(f) = \max \big( f(x,y) - [AO_i^2(f)](x,y),\ 0 \big)$$
The bright features of the original infrared image f extracted by the two types of alternating operators could be calculated as the combination of $BF_{AO_i^1}(f)$ and $BF_{AO_i^2}(f)$ as follows:
$$BF_{AO_i}(f) = \big[ BF_{AO_i^1}(f) + BF_{AO_i^2}(f) \big] / 2$$
In the same way, the bright features of the original visual image g extracted by the two types of alternating operators could be calculated as follows:
$$BF_{AO_i}(g) = \big[ BF_{AO_i^1}(g) + BF_{AO_i^2}(g) \big] / 2$$
where:
$$BF_{AO_i^1}(g) = \max \big( g(x,y) - [AO_i^1(g)](x,y),\ 0 \big)$$
$$BF_{AO_i^2}(g) = \max \big( g(x,y) - [AO_i^2(g)](x,y),\ 0 \big)$$
$BF_{AO_i}(f)$ and $BF_{AO_i}(g)$ represent the bright features of the original infrared and visual images, respectively, extracted using the two alternating operators $AO_i^1$ and $AO_i^2$. To produce the fusion image, the bright features of the original infrared and visual images should be combined.
Morphological operators mainly operate on the gray values of images, so the pixel-wise comparison of gray values [4,5,6,10,25,27,32,33,38] has been an effective way of combining image features. In this paper, this strategy is adopted for fusing the bright features of the original infrared and visual images extracted by the two types of alternating operators $AO_i^1$ and $AO_i^2$ as follows:
$$BF_{AO_i}(f,g)(x,y) = \begin{cases} [BF_{AO_i}(f)](x,y), & \text{if } [BF_{AO_i}(f)](x,y) > [BF_{AO_i}(g)](x,y) \\ [BF_{AO_i}(g)](x,y), & \text{otherwise} \end{cases}$$
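In the running sketch, this per-scale bright-feature fusion reduces to an element-wise maximum (the helper name is illustrative):

```python
def bright_fusion(f: np.ndarray, g: np.ndarray, k: int) -> np.ndarray:
    # BF_AO_i(f, g): average the responses of the two alternating
    # operators per image, then keep the larger response per pixel.
    bf_f = 0.5 * (np.maximum(f - ao1(f, k), 0) + np.maximum(f - ao2(f, k), 0))
    bf_g = 0.5 * (np.maximum(g - ao1(g, k), 0) + np.maximum(g - ao2(g, k), 0))
    return np.maximum(bf_f, bf_g)
```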
Similarly, for the infrared image f, dark features with small gray values are progressively raised by the alternating operator $AO_i^1$ as the scale number increases. Thus, by using the first type of alternating operator $AO_i^1$, the dark features of the original infrared image identified at scale i could be expressed as follows:
$$DF_{AO_i^1}(f) = \max \big( [AO_i^1(f)](x,y) - f(x,y),\ 0 \big)$$
Also, by using the second type of alternating operator $AO_i^2$, the dark features of the original infrared image identified at scale i could be expressed as follows:
$$DF_{AO_i^2}(f) = \max \big( [AO_i^2(f)](x,y) - f(x,y),\ 0 \big)$$
The dark features of the original infrared image f extracted by the two types of alternating operators could be calculated as the combination of $DF_{AO_i^1}(f)$ and $DF_{AO_i^2}(f)$ as follows:
$$DF_{AO_i}(f) = \big[ DF_{AO_i^1}(f) + DF_{AO_i^2}(f) \big] / 2$$
In the same way, the dark features of the original visual image g extracted by the two types of alternating operators could be calculated as follows:
$$DF_{AO_i}(g) = \big[ DF_{AO_i^1}(g) + DF_{AO_i^2}(g) \big] / 2$$
where:
$$DF_{AO_i^1}(g) = \max \big( [AO_i^1(g)](x,y) - g(x,y),\ 0 \big)$$
$$DF_{AO_i^2}(g) = \max \big( [AO_i^2(g)](x,y) - g(x,y),\ 0 \big)$$
Thus, based on $DF_{AO_i}(f)$ and $DF_{AO_i}(g)$, the dark fusion features of the original infrared and visual images extracted by the two types of alternating operators $AO_i^1$ and $AO_i^2$ are as follows:
$$DF_{AO_i}(f,g)(x,y) = \begin{cases} [DF_{AO_i}(f)](x,y), & \text{if } [DF_{AO_i}(f)](x,y) > [DF_{AO_i}(g)](x,y) \\ [DF_{AO_i}(g)](x,y), & \text{otherwise} \end{cases}$$
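The dark counterpart mirrors the bright one, with the subtractions reversed:

```python
def dark_fusion(f: np.ndarray, g: np.ndarray, k: int) -> np.ndarray:
    # DF_AO_i(f, g): same scheme as bright_fusion, using AO(f) - f residues.
    df_f = 0.5 * (np.maximum(ao1(f, k) - f, 0) + np.maximum(ao2(f, k) - f, 0))
    df_g = 0.5 * (np.maximum(ao1(g, k) - g, 0) + np.maximum(ao2(g, k) - g, 0))
    return np.maximum(df_f, df_g)
```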

4.2. Fuzzy Measure Based Final Fusion Feature Calculation

The bright fusion features $BF_{AO_i}(f,g)$ contain the fusion features corresponding to the ith scale. These multi-scale bright fusion features should be combined to form the final fusion features.
These extracted fusion features carry the crucial information for infrared and visual image fusion, namely the important spatial information of the original images. Therefore, the bright fusion features $BF_{AO_i}(f,g)$ at a scale containing more spatial information should be combined into the final fusion feature image with a larger weight.
Fuzzy theory [35,36,39] has been used effectively for image analysis applications. An image I of size M × N could be treated as a fuzzy set by normalizing its gray values as follows:
$$\mu(x,y) = I(x,y) / I_{\max}$$
where $\mu(x,y)$ represents the fuzzy value of the pixel (x, y) in image I and $I_{\max}$ represents the maximum gray value of I. Based on the fuzzy value $\mu(x,y)$, one fuzzy measure, the linear index of fuzziness (denoted by $\gamma$) [34,37,38], could be calculated as follows:
$$\gamma(I) = \frac{2}{M \times N} \sum_{x=1}^{M} \sum_{y=1}^{N} \min \{ p_{xy},\ (1 - p_{xy}) \}$$
where:
$$p_{xy} = \sin \left[ \frac{\pi}{2} \times \big( 1 - \mu(x,y) \big) \right]$$
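A direct sketch of this measure follows; the small guard against an all-zero feature image is an added assumption:

```python
def fuzziness(img: np.ndarray) -> float:
    # Linear index of fuzziness: gamma(I) = 2/(M*N) * sum(min(p, 1 - p))
    mu = img / max(float(img.max()), 1e-12)   # membership values in [0, 1]
    p = np.sin(np.pi / 2.0 * (1.0 - mu))
    return 2.0 / img.size * float(np.minimum(p, 1.0 - p).sum())
```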
This measure, $\gamma$, uses the fuzzy membership values to quantify the spatial information contained in an image. Thus, $\gamma$ could be used to construct the weight values for calculating the final fusion features.
The weight value of the bright fusion features at each scale i could be calculated as follows:
$$wf_i = \gamma \big[ BF_{AO_i}(f,g) \big] \Big/ \sum_i \gamma \big[ BF_{AO_i}(f,g) \big]$$
where $wf_i$ represents the weight value of the bright fusion features at scale i.
By using the weight values $wf_i$, the final bright fusion features could be calculated as follows:
$$FBF_{AO}(f,g) = \sum_i wf_i \times BF_{AO_i}(f,g)$$
$FBF_{AO}(f,g)$ represents the final bright fusion features calculated from the multi-scale bright features by using the fuzzy measure $\gamma$. This calculation indicates that the bright features with more spatial information contribute with a larger weight to the final bright fusion features. Therefore, $FBF_{AO}(f,g)$ would contain more spatial information, which could produce an effective fusion image with clear regions and rich details. This would produce an effective fusion result of the original infrared and visual images. Also, the final dark fusion features could be calculated as follows:
$$FDF_{AO}(f,g) = \sum_i df_i \times DF_{AO_i}(f,g)$$
where:
$$df_i = \gamma \big[ DF_{AO_i}(f,g) \big] \Big/ \sum_i \gamma \big[ DF_{AO_i}(f,g) \big]$$
Here $df_i$ represents the weight value of the dark fusion features at scale i, and $FDF_{AO}(f,g)$ represents the final dark fusion features calculated from the multi-scale dark features by using the fuzzy measure $\gamma$.
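In the sketch, one weighting routine serves both the bright and dark feature stacks:

```python
def combine_scales(stack: list) -> np.ndarray:
    # Fuzzy-measure weighted sum over scales: weights are the normalized
    # gamma values, so more "informative" scales contribute more.
    gammas = np.array([fuzziness(F) for F in stack])
    weights = gammas / max(float(gammas.sum()), 1e-12)  # guard: an assumption
    return sum(w * F for w, F in zip(weights, stack))
```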

4.3. Infrared and Visual Image Fusion

$FBF_{AO}(f,g)$ and $FDF_{AO}(f,g)$ are the final bright and dark fusion features. One direct but effective way of producing the fusion image from the extracted bright and dark fusion features is the contrast adjustment strategy [4,5,6,10,25,27,32,33,38], which could be recognized as a special type of morphological contrast operator. In this paper, we also use this strategy to import the final features into the original infrared and visual images to produce the final fusion image as follows:
$$F = B \times w_1 + FBF_{AO}(f,g) \times w_2 - FDF_{AO}(f,g) \times w_3$$
where B is the base image, which contains the basic information of the original infrared and visual images; usually, B could be calculated as the mean of the original infrared and visual images [4,5,6,10]. F is the final fusion image, and $w_1$, $w_2$ and $w_3$ are the weights used to adjust the contrast of the final fusion image.
In this expression, the bright image features are added to, and the dark image features subtracted from, the base image, which not only combines the image features of the original images into the final fusion image but also further enhances them. Therefore, the proposed algorithm would be effective for infrared and visual image fusion.
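Putting the pieces of the sketch together, with the structuring-element size schedule and the final clipping to [0, 255] as added assumptions:

```python
def fuse(f: np.ndarray, g: np.ndarray, n: int = 3,
         w1: float = 1.0, w2: float = 2.0, w3: float = 2.0) -> np.ndarray:
    # F = w1 * B + w2 * FBF_AO(f, g) - w3 * FDF_AO(f, g)
    f = f.astype(np.float64)
    g = g.astype(np.float64)
    sizes = [2 * i + 1 for i in range(1, n + 1)]          # assumed schedule
    fbf = combine_scales([bright_fusion(f, g, k) for k in sizes])
    fdf = combine_scales([dark_fusion(f, g, k) for k in sizes])
    base = 0.5 * (f + g)                                  # mean base image B
    return np.clip(w1 * base + w2 * fbf - w3 * fdf, 0.0, 255.0)
```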

4.4. Parameter Analysis

The structuring elements, the scale number n, and $w_1$, $w_2$ and $w_3$ are the main parameters of the proposed algorithm. Because a flat structuring element is simple and easy to implement, it is used in this paper; its size at each scale is set according to the scale number. The shape of the structuring element is square, which is a simple, effective and widely used shape in mathematical morphology [4,5,6,10]. Because image details usually exist at the low scales [4,5,6,10], there is no need to use many scales; usually 3-5 scales are enough. In this paper, we use n = 3 scales.
$w_1$, $w_2$ and $w_3$ are positive values used to adjust the contrast of the final fusion image and could take values in the interval [0, 5]. To obtain an effective fusion image with good contrast, $w_2$ and $w_3$ should be large; to keep the basic information of the original infrared and visual images, $w_1$ should be close to 1. For simplicity, we use $w_1$ = 1.0 and $w_2$ = $w_3$ = 2.0 in this paper. Experimental results on different types of infrared and visual images verified that the proposed algorithm using these parameters was effective.
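With these defaults, a hypothetical end-to-end call on two pre-registered grayscale images of equal size could look like this (file names are placeholders, not from the paper):

```python
import numpy as np
import imageio.v3 as iio

ir = iio.imread("uncamp_ir.png")     # placeholder path, grayscale uint8
vis = iio.imread("uncamp_vis.png")   # placeholder path, grayscale uint8
fused = fuse(ir, vis, n=3, w1=1.0, w2=2.0, w3=2.0)
iio.imwrite("uncamp_fused.png", fused.astype(np.uint8))
```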

5. Experimental Results

5.1. Visual Comparisons

To show the effective performance of the proposed algorithm for infrared and visual image fusion, experiments comparing it with the multi-scale top-hat transform-based algorithm (MSTHT) [10], the multi-scale shift-invariant discrete wavelet transform-based algorithm (SIDWT) [15,25], the multi-scale Laplacian pyramid-based algorithm (LP) [22,25], the multi-scale top-hat selection transform-based algorithm (MSTHST) [25], the multi-scale center-surround top-hat transform-based algorithm (MSNTHT) [6] and the multi-scale toggle operator-based algorithm (MSTOOC) [4] are performed. SIDWT and LP are multi-scale theory-based algorithms, which could perform effectively for infrared and visual image fusion. MSTHT, MSTHST, MSNTHT and MSTOOC are multi-scale morphological theory-based algorithms, which could be used effectively for infrared and visual image fusion. The proposed algorithm is a multi-scale theory-based algorithm using morphological operators and is effective for infrared and visual image fusion. Therefore, MSTHT, SIDWT, LP, MSTHST, MSNTHT and MSTOOC are appropriate algorithms for the comparison.
The data sets used are standard data sets for infrared and visual image fusion, which could be downloaded from www.imagefusion.org. The sizes of these images range from 360 × 270 to 512 × 512, and the images were obtained under different environments. For example, the UNcamp images contain natural and building backgrounds with a salient person target region; the Dune images contain a wild background and a salient person target region; the Navi images were obtained from sensors mounted on a helicopter. Using data sets obtained from different environments is a reasonable way to verify the effectiveness of the fusion algorithms.
Some examples are shown below. In these examples, (a) is the original infrared image; (b) is the original visual image; (c) is the fusion result of MSTHT; (d) is the fusion result of SIDWT; (e) is the fusion result of LP; (f) is the fusion result of MSTHST; (g) is the fusion result of MSNTHT; (h) is the fusion result of MSTOOC; (i) is the fusion result of the proposed algorithm.
Figure 1 is an example of infrared and visual image fusion on the UNcamp images. Infrared and visual image fusion should effectively combine the image regions and details of the original images into the final fusion image; the fusion image should therefore be clear and contain rich image details, which is useful for further image analysis. Because some details are still smoothed, the results of MSTHT, MSTHST and MSNTHT are not clear, and the details in the results of SIDWT and LP are also unclear, even worse than in the result of MSTHST. The result of MSTOOC contains more details than the results of MSTHT, SIDWT, LP, MSTHST and MSNTHT and is clearer. However, the result of the proposed algorithm is the clearest and contains the richest image details. Therefore, the proposed algorithm performs better for infrared and visual image fusion than the other algorithms.
Figure 1. An example on UNcamp images. (a) Original infrared image; (b) Original visual image; (c) Result of MSTHT; (d) Result of SIDWT; (e) Result of LP; (f) Result of MSTHST; (g) Result of MSNTHT; (h) Result of MSTOOC; (i) Result of the proposed algorithm.
Figure 2 is an example of infrared and visual image fusion on the Dune images. The original images are not clear. Although MSTHT, SIDWT and LP combine the original infrared and visual images, some details are still smoothed, which results in a not very clear image. The result of MSNTHT is clearer than that of MSTHT and has good contrast, but the details are still not very clear. The results of MSTHST and MSTOOC are good and their details are clear; however, compared with the result of the proposed algorithm, their details are not very clear. In particular, the details in the result of the proposed algorithm are very rich and the result is clearer than those of the other algorithms. These observations indicate the better performance of the proposed algorithm.
Figure 2. An example on Dune images. (a) Original infrared image; (b) Original visual image; (c) Result of MSTHT; (d) Result of SIDWT; (e) Result of LP; (f) Result of MSTHST; (g) Result of MSNTHT; (h) Result of MSTOOC; (i) Result of the proposed algorithm.
Figure 3 is an example of infrared and visual image fusion on the Navi images. The details in the original images are not clear, so it is important to produce a clear fusion result with rich details. The details of the result of MSTHT are not clear, so its fusion result is unclear. The result of MSNTHT is better than that of MSTHT, but the image details are still not clear. The details of the results of SIDWT, LP and MSTHST are clearer than those of MSTHT and MSNTHT, and the details in the result of MSTOOC are clearer still. Moreover, among these algorithms, the result of the proposed algorithm is the clearest and its details are very rich, which indicates its effective performance for infrared and visual image fusion. This would be very useful for further image analysis.
Figure 3. An example on Navi images. (a) Original infrared image; (b) Original visual image; (c) Result of MSTHT; (d) Result of SIDWT; (e) Result of LP; (f) Result of MSTHST; (g) Result of MSNTHT; (h) Result of MSTOOC; (i) Result of the proposed algorithm.
Figure 1, Figure 2 and Figure 3 indicate that the results of the proposed method contain the richest details. The fusion result of the proposed algorithm contains the spatial information of both the original infrared and visual images. The proposed algorithm not only combines these features, but also enhances them. A high-pass filter may enhance the features, but could not perform the fusion.
Although MSNTHT combines the features of the original images, the details are not very rich. Moreover, the result of MSNTHT is either close to one of the original images (Figure 1 and Figure 3) or very different in contrast from the original images (Figure 2), which means MSNTHT may not preserve the information of the original images well in the final fusion image.
Infrared and visual images obtained under different environments are used to verify the effective performance of the proposed algorithm. The results show that the proposed algorithm could effectively combine the image details and regions of the original images into the final fusion image, providing a clear fusion result with rich image details.

5.2. Quantitative Comparisons

To show the effective performance of the proposed algorithm through a quantitative comparison, widely used measures, including entropy [6,40,41], spatial frequency [42], mean gradient [25,43] and Q measure [25,44], are adopted in this paper.
Entropy is a widely used measure to quantify the information content of an image. The fusion result of infrared and visual images contains the information of both original images. Thus, the entropy could be used as one measure to quantify the performance of the fusion algorithms. A large value of the entropy means the corresponding fusion result contains rich information, which indicates a good performance of the corresponding algorithm for infrared and visual image fusion.
Spatial frequency is defined based on the contained spatial information in an image. Fusion of infrared and visual images would combine the image regions and details of the original images into the final fused image. The fused image has clear details and should contain more spatial information. Therefore, using spatial frequency as a quantitative measure is appropriate. A large value of the spatial frequency indicates a good performance of the corresponding algorithm for infrared and visual image fusion.
Mean gradient is calculated based on the spatial gradient information. Infrared and visual image fusion should effectively combine the spatial information and produce a clear fusion result, so the mean gradient would be also an appropriate measure in the quantitative comparison. A large mean gradient value indicates a good performance of the corresponding algorithm for fusion.
The Q measure has been widely used to quantify the quality of an image. An effective algorithm for infrared and visual image fusion should produce a fusion image with good quality, so the Q measure is also an appropriate measure for the quantitative comparison. Again, a large Q measure value indicates good fusion performance.
Infrared and visual images obtained under different environments are processed by different algorithms. The mean value of the entropy, spatial frequency, mean gradient and Q measure values of all the fusion results related to each algorithm is shown in Figure 4, Figure 5, Figure 6 and Figure 7, respectively.
Figure 4 shows that the entropy value of the proposed algorithm is larger than that of the other algorithms. This means the fusion result of the proposed algorithm contains more information than those of the other algorithms, which would provide a more effective fusion result for further image analysis. Thus, the proposed algorithm performs better than the other algorithms.
Figure 5 shows that the spatial frequency value of the proposed algorithm is the largest, which verifies that the proposed algorithm based on the constructed alternating operator could give clear fusion results with rich image details.
Also, in Figure 6, the mean gradient value of the proposed algorithm is the largest. This means the proposed algorithm combines the regions and details of the original infrared and visual images, producing effective and clear fusion results.
In Figure 7, although the Q measure value of the proposed algorithm is not large compared with the other algorithms, it does not differ greatly from their values, which indicates that the quality of the fusion images of the proposed algorithm is also good. More importantly, the values of the proposed algorithm in Figure 4, Figure 5 and Figure 6 are considerably larger than the values of the other algorithms. Thus, overall, the proposed algorithm fuses infrared and visual images effectively, owing to the effective feature extraction by the constructed alternating operators and the fusion of the multi-scale features using the fuzzy measure.
Figure 4. Quantitative comparison using the entropy measure.
Figure 5. Quantitative comparison using the spatial frequency measure.
Figure 6. Quantitative comparison using the mean gradient measure.
Figure 7. Quantitative comparison using the Q measure.
To quantitatively compare the processing times, all the algorithms are run on images of size 360 × 270 using a computer equipped with an Intel Pentium 4 2.6 GHz CPU and 512 MB of memory. The mean processing time of each algorithm is listed in Table 1. The processing times of MSTHT, SIDWT and LP are shorter than those of the other algorithms, because the pyramid-based multi-scale calculation in SIDWT and LP is fast. Also, because the morphological operators in MSTHT are simple, its processing time is shorter than those of MSTHST, MSNTHT, MSTOOC and the proposed algorithm. In particular, as the calculation of the center-surround top-hat transform used in MSNTHT is time-consuming, its processing time is the longest. Because the alternating operators in the proposed algorithm are more complicated, its processing time is longer than those of MSTHT, SIDWT and LP, but shorter than those of MSTHST, MSNTHT and MSTOOC. More importantly, the visual and quantitative comparisons verified that the proposed algorithm fuses infrared and visual images more effectively than the other algorithms; therefore, the proposed algorithm performs effectively overall.
Table 1. Processing time comparison (s).

MSTHT    SIDWT    LP       MSTHST   MSNTHT   MSTOOC   Proposed Algorithm
0.923    0.733    0.082    3.874    25.450   8.814    1.828

6. Conclusions

Extracting the features of the original infrared and visual images to form a clear fusion image is a crucial task. This paper proposes an effective algorithm for infrared and visual image fusion based on a fuzzy measure and the alternating operators constructed from opening and closing based toggle operators. The extraction of the multi-scale features of the original infrared and visual images for fusion by using two types of the constructed alternating operators is discussed in detail. The extracted multi-scale features are then combined through the fuzzy measure-based weight strategy to form the final fusion features. In the end, the effective fusion result is produced by importing the final fusion features into the original infrared and visual images using the contrast enlargement strategy.
Because the toggle operator using opening and closing as primitives could identify the important features in the original infrared and visual images, the constructed alternating operators could extract the features for infrared and visual image fusion well; using two types of alternating operators for feature extraction further strengthens the performance of the proposed algorithm. Moreover, the strategy of combining the multi-scale features through the fuzzy measure could produce final fusion features with rich spatial information, which is useful for preserving the details and important regions of the original infrared and visual images in the final fused image. All of these properties, together with the experimental results, indicate that the proposed algorithm is effective for infrared and visual image fusion and may also be applied to other image analysis applications.

Acknowledgments

The author thanks the anonymous reviewers and editor for their very constructive comments. This work is partly supported by the National Natural Science Foundation of China (Grant No. 61271023), the Program for New Century Excellent Talents in Universities (NCET-13-0020), funding from the State Key Laboratory of Virtual Reality Technology and Systems, Beihang University (VR-2014-ZZ-06) and the Fundamental Research Funds for the Central Universities (YWF-15-YHXY-022, YWF-14-YHXY-029, YWF-13-T-RSC-028).

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Lieber, C.; Urayama, S.; Rahim, N.; Tu, R.; Saroufeem, R.; Reubner, B. Multimodal near infrared spectral imaging as an exploratory tool for dysplastic esophageal lesion identification. Opt. Express 2006, 14, 2211–2219. [Google Scholar] [CrossRef] [PubMed]
  2. Leviner, M.; Maltz, M. A new multi-spectral feature level image fusion method for human interpretation. Infrared Phys. Technol. 2009, 52, 79–88. [Google Scholar] [CrossRef]
  3. Bai, X.; Zhou, F.; Xue, B. Infrared dim small target enhancement using toggle contrast operator. Infrared Phys. Technol. 2012, 55, 177–182. [Google Scholar] [CrossRef]
  4. Bai, X.; Zhang, Y. Detail preserved fusion of infrared and visual images by using opening and closing based toggle operator. Opt. Laser Technol. 2014, 63, 105–113. [Google Scholar] [CrossRef]
  5. Bai, X. Morphological infrared image enhancement based on multi-scale sequential toggle operator using opening and closing as primitives. Infrared Phys. Technol. 2015, 68, 143–151. [Google Scholar] [CrossRef]
  6. Bai, X.; Zhou, F.; Xue, B. Fusion of infrared and visual images through region extraction by using multi scale center-surround top-hat transform. Opt. Express 2011, 19, 8444–8457. [Google Scholar] [CrossRef] [PubMed]
  7. Jung, H.; Park, S. Multi-sensor fusion of Landsat 8 thermal infrared (TIR) and panchromatic (PAN) images. Sensors 2014, 14, 24425–24440. [Google Scholar] [CrossRef] [PubMed]
  8. Rehman, N.; Ehsan, S.; Abdullah, S.; Akhtar, M.; Mandic, D.; McDonald-Maier, K. Multi-scale pixel-based image fusion using multivariate empirical mode decomposition. Sensors 2015, 15, 10923–10947. [Google Scholar] [CrossRef] [PubMed]
  9. Zhao, J.; Zhou, Q.; Chen, Y.; Feng, H.; Xu, Z.; Li, Q. Fusion of visible and infrared images using saliency analysis and detail preserving based image decomposition. Infrared Phys. Technol. 2013, 56, 93–99. [Google Scholar] [CrossRef]
  10. Mukhopadhyay, S.; Chanda, B. Fusion of 2D grayscale images using multiscale morphology. Pattern Recognit. 2001, 34, 1939–1949. [Google Scholar] [CrossRef]
  11. Xydeas, C.; Petrović, V. Objective image fusion performance measure. Electron. Lett. 2000, 36, 306–309. [Google Scholar] [CrossRef]
  12. Pajares, G.; Cruz, J. A wavelet-based image fusion tutorial. Pattern Recognit. 2004, 37, 1855–1872. [Google Scholar] [CrossRef]
  13. Amolins, K.; Zhang, Y.; Dare, P. Wavelet based image fusion techniques—An introduction, review and comparison. ISPRS J. Photogramm. Remote Sens. 2007, 62, 249–263. [Google Scholar] [CrossRef]
  14. Nencini, F.; Garzelli, A.; Baronti, S.; Alparone, L. Remote sensing image fusion using the curvelet transform. Inf. Fusion 2007, 8, 143–156. [Google Scholar] [CrossRef]
  15. Ioannidou, S.; Karathanassi, V. Investigation of the dual-tree complex and shift-invariant discrete wavelet transforms on Quickbird image fusion. IEEE Geosci. Remote Sens. Lett. 2007, 4, 166–170. [Google Scholar] [CrossRef]
  16. Sun, W.; Hu, S.; Liu, S.; Sun, Y. Infrared and Visible Image Fusion Based on Object Extraction and Adaptive Pulse Coupled Neural Network via Non-Subsampled Shearlet Transform. In Proceedings of International Conference on Signal Processing, Hangzhou, China, 19–23 October 2014; pp. 946–951.
  17. Yang, Y.; Tong, S.; Huang, S.; Lin, P. Dual-tree complex wavelet transform and image block residual-based multi-focus image fusion in visual sensor networks. Sensors 2014, 14, 22408–22430. [Google Scholar] [CrossRef] [PubMed]
  18. Li, S.; Yang, B. Multifocus image fusion using region segmentation and spatial frequency. Image Vis. Comput. 2008, 26, 971–979. [Google Scholar] [CrossRef]
  19. Toet, A.; Hogervorst, M.; Nikolov, S.; Lewis, J.; Dixon, T.; Bull, D.; Canagarajah, C. Towards cognitive image fusion. Inf. Fusion 2010, 11, 95–113. [Google Scholar] [CrossRef]
  20. Mitianoudis, N.; Stathaki, T. Pixel-based and region-based image fusion schemes using ICA bases. Inf. Fusion 2007, 8, 131–142. [Google Scholar] [CrossRef]
  21. Cvejic, N.; Bull, D.; Canagarajah, N. Region-based multimodal image fusion using ICA bases. IEEE Sens. J. 2007, 7, 743–751. [Google Scholar] [CrossRef]
  22. Bulanona, D.; Burks, T.; Alchanatis, V. Image fusion of visible and thermal images for fruit detection. Biosyst. Eng. 2009, 103, 12–22. [Google Scholar] [CrossRef]
  23. Wang, Z.; Ma, Y.; Gu, J. Multi-focus image fusion using PCNN. Pattern Recognit. 2010, 43, 2003–2016. [Google Scholar] [CrossRef]
  24. Huang, W.; Jing, Z. Multi-focus image fusion using pulse coupled neural network. Pattern Recognit. Lett. 2007, 28, 1123–1132. [Google Scholar] [CrossRef]
  25. Bai, X.; Chen, X.; Zhou, F.; Liu, Z.; Xue, B. Multiscale top-hat selection transform based infrared and visual image fusion with emphasis on extracting regions of interest. Infrared Phys. Technol. 2013, 60, 81–93. [Google Scholar] [CrossRef]
  26. Serra, J. Image Analysis and Mathematical Morphology; Academic Press: New York, NY, USA, 1982. [Google Scholar]
  27. Soille, P. Morphological Image Analysis-Principle and Applications; Springer: Berlin, Germany, 2003. [Google Scholar]
  28. Bai, X.; Sun, C.; Zhou, F. Splitting touching cells based on concave points and ellipse fitting. Pattern Recognit. 2009, 42, 2434–2446. [Google Scholar] [CrossRef]
  29. Bai, X.; Zhou, F. Analysis of new top-hat transformation and the application for infrared dim small target detection. Pattern Recognit. 2010, 43, 2145–2156. [Google Scholar] [CrossRef]
  30. Matheron, G. Random Sets and Integral Geometry; Wiley: New York, NY, USA, 1975. [Google Scholar]
  31. Sternberg, S. Grayscale morphology. Comput. Vis. Graph. Image Underst. 1986, 35, 333–355. [Google Scholar] [CrossRef]
  32. Bai, X. Morphological enhancement of microscopy mineral image using opening and closing based toggle operator. J. Microsc. 2014, 253, 12–23. [Google Scholar] [CrossRef] [PubMed]
  33. Bai, X.; Zhang, Y. Enhancement of microscopy mineral images through constructing alternating operators using opening and closing based toggle operator. J. Opt. 2014, 16, 125407. [Google Scholar] [CrossRef]
  34. Lai, R.; Yang, Y.; Wang, B.; Zhou, H. A quantitative measure based infrared image enhancement algorithm using plateau histogram. Opt. Commun. 2010, 283, 4283–4288. [Google Scholar] [CrossRef]
  35. Zadeh, L. Fuzzy sets. Inf. Control 1965, 8, 338–353. [Google Scholar] [CrossRef]
  36. Ramathilagam, S.; Pandiyarajan, R.; Sathya, A.; Devi, R.; Kannan, S. Modified fuzzy c-means algorithm for segmentation of T1–T2-weighted brain MRI. J. Comput. Appl. Math. 2011, 235, 1578–1586. [Google Scholar] [CrossRef]
  37. Kaufmann, A. Introduction to the Theory of Fuzzy; Academic Press: New York, NY, USA, 1975. [Google Scholar]
  38. Bai, X.; Zhou, F.; Xue, B. Infrared image enhancement through contrast enhancement by using multi scale new top-hat transform. Infrared Phys. Technol. 2011, 54, 61–69. [Google Scholar] [CrossRef]
  39. Bai, X.; Chen, Z.; Zhang, Y.; Liu, Z.; Lu, Y. Spatial Information Based FCM for Infrared Ship Target Segmentation. In Proceedings of the International Conference on Image Processing, Paris, France, 27–30 October 2014; pp. 5127–5131.
  40. Roberts, J.; Aardt, J.; Ahmed, F. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J. Appl. Remote Sens. 2008, 2. [Google Scholar] [CrossRef]
  41. Chen, Y.; Xue, Z.; Blum, R. Theoretical analysis of an information-based quality measure for image fusion. Inf. Fusion 2008, 9, 161–175. [Google Scholar] [CrossRef]
  42. Aslantas, V.; Kurban, R. A comparison of criterion functions for fusion of multi-focus noisy images. Opt. Commun. 2009, 282, 3231–3242. [Google Scholar] [CrossRef]
  43. Pradham, P.; Younan, N.; King, R. Concepts of Image Fusion in Remote Sensing Applications. In Image Fusion: Algorithms and Applications; Stathaki, T., Ed.; Academic Press: London, UK, 2008; pp. 391–428. [Google Scholar]
  44. Piella, G.; Heijmans, H. A New Quality Metric for Image Fusion. In Proceedings of International Conference on Image Processing, Barcelona, Spain, 14–18 September 2003; pp. 173–176.
