Sensors 2012, 12(5), 5872-5887; doi:10.3390/s120505872

Article
Improved Image Fusion Method Based on NSCT and Accelerated NMF
Juan Wang 1, Siyu Lai 2,* and Mingdong Li 1
1 College of Computer Science, China West Normal University, 1 Shida Road, Nanchong 637002, China; E-Mails: wjuan0712@126.com (J.W.); mdong_li@163.com (M.L.)
2 Department of Medical Imaging, North Sichuan Medical College, 234 Fu Jiang Road, Nanchong 637000, China
* Author to whom correspondence should be addressed; E-Mail: lsy_791211@126.com.
Received: 5 March 2012; in revised form: 24 April 2012 / Accepted: 25 April 2012 / Published: 7 May 2012

Abstract: In order to improve algorithm efficiency and performance, a technique for image fusion based on the Non-subsampled Contourlet Transform (NSCT) domain and an Accelerated Non-negative Matrix Factorization (ANMF)-based algorithm is proposed in this paper. Firstly, the registered source images are decomposed at multiple scales and in multiple directions using the NSCT method. Then, the ANMF algorithm is executed on the low-frequency sub-images to obtain the low-pass coefficients. The low-frequency fused image can be generated faster because the update rules for W and H are optimized and fewer iterations are needed. In addition, the Neighborhood Homogeneous Measurement (NHM) rule is applied to the high-frequency part to obtain the band-pass coefficients. Finally, the ultimate fused image is obtained by integrating all sub-images with the inverse NSCT. Simulation experiments show that our method indeed improves performance when compared to the PCA, NSCT-based, NMF-based and weighted NMF-based algorithms.
Keywords: image fusion; non-subsampled contourlet transform; nonnegative matrix factorization; neighborhood homogeneous measurement

1. Introduction

Image fusion is an effective technology that synthesizes data from multiple sources and reduces uncertainty, benefiting both human and machine vision. In the past decades, it has been adopted in a variety of fields, including automatic target recognition, computer vision, remote sensing, robotics, complex intelligent manufacturing, medical image processing, and military applications. Reference [1] proposed a framework for the field of image fusion: the fusion process is performed at different levels of information representation, sorted in ascending order of abstraction as the pixel, feature, and decision levels. Of these, pixel-level fusion has been the most broadly studied and applied because it is the foundation of the other two levels.

Pixel-level image fusion comprises two branches: the spatial domain and the frequency domain. Classic frequency-domain algorithms include Intensity Hue Saturation (IHS) [2], Principal Component Analysis (PCA) [3], pyramid [4,5], wavelet [6,7], wavelet packet [8], Dual Tree Complex Wavelet Transform (DT-CWT) [9,10], curvelet [11,12], contourlet [13,14], and the Non-subsampled Contourlet Transform (NSCT) [15].

Until recently, multi-resolution decomposition based algorithms have been widely used in the multi-source image fusion field, and they effectively overcome spectrum distortion. Wavelet transformation provides excellent time-frequency analytical features and has been a focus of multi-source image fusion. The Non-Subsampled Wavelet Transform (NSWT) is built from the tensor product of two one-dimensional wavelets, solving the lack of shift invariance that traditional wavelets suffer from. Lacking anisotropy, however, NSWT fails to sparsely represent direction-distinguished textures and edges. In 2002, Do and Vetterli proposed the flexible contourlet transform, which can efficiently capture the geometric structure of images owing to its multi-resolution, locality and directionality properties [13], but spectrum aliasing occurs because of the unfavorable smoothness of its basis functions. Cunha et al. put forward the NSCT method [15] in 2006; it remedies the limitations of the contourlet and is a fully shift-invariant, multi-scale and multi-directional transform [16].

Non-Negative Matrix Factorization (NMF) is a relatively new matrix analysis method [17] presented by Lee and Seung in 1999 and proven to converge to a local minimum in 2000 [18]. It has been successfully adopted in a variety of applications, including image analysis [19,20], text clustering [21], speech processing [22], pattern recognition [23–25], and so on. Unfortunately, some NMF-based methods are time-consuming. In order to reduce the time cost, an improved NMF algorithm is introduced in this paper. Our improved NMF algorithm is applied to fuse the low-frequency information in the NSCT domain, while the fusion of high-frequency details is realized by adopting the Neighborhood Homogeneous Measurement (NHM) technique used in reference [26]. The experimental results demonstrate that the proposed fusion method can effectively extract useful information from the source images and inject it into the final fused image, which has better visual effects, and that running the algorithm takes less CPU time than the algorithms proposed in [27] and [18].

The remainder of this paper is organized as follows: Section 2 introduces NSCT. Section 3 briefly discusses how NMF is constructed and how we improve it. Section 4 presents the whole framework of the fusion algorithm. Section 5 shows experimental results for image fusion using the proposed technique, along with discussion and comparisons with other typical methods. Finally, the last section concludes the paper and discusses future work.

2. Non-Subsampled Contourlet Transform (NSCT)

NSCT is proposed on the grounds of the contourlet concept [13], discarding the sampling steps during the image decomposition and reconstruction stages. Furthermore, NSCT offers shift invariance, multi-resolution and multi-directionality for image representation by iteratively applying non-subsampled filter banks.

The structure of NSCT consists of two parts, as shown in Figure 1(a): the Non-Subsampled Pyramid (NSP) and the Non-Subsampled Directional Filter Banks (NSDFB) [15]. NSP, a multi-scale decomposition structure, is a dual-channel non-subsampled filter bank developed from the à trous algorithm; it contains no subsampling. Figure 1(b) shows the framework of NSP: for each decomposition at the next level, the filter H(z) is first upsampled with the sampling matrix D = [2, 0; 0, 2]. Then the low-frequency component derived from the previous level is decomposed iteratively, just as its predecessor was. As a result, a tree-like structure enabling multi-scale decomposition is obtained. NSDFB is constructed from the fan-out DFB presented by Bamberger and Smith [28]. It includes neither upsampling nor subsampling of the image, but instead upsamples the relevant filters of the DFB with D = [1, 1; 1, −1], as illustrated in Figure 1(c). If we conduct $l$ levels of directional decomposition on a sub-image produced by NSP at a certain scale, then $2^{l}$ band-pass sub-images of the same size as the original are obtained. Thus, one low-pass sub-image and $\sum_{j=1}^{L} 2^{l_j}$ band-pass directional sub-images are generated by carrying out L levels of NSCT decomposition, where $l_j$ is the number of directional decomposition levels at scale j.
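As a concrete illustration of the pyramid stage, the following Python/NumPy sketch implements one possible NSP-style decomposition. The B-spline low-pass kernel, the function names, and the use of simple difference images as band-pass outputs are illustrative assumptions (the directional NSDFB stage is omitted entirely); note that each level keeps the full image size, as the text describes.

```python
import numpy as np
from scipy.ndimage import convolve

def atrous_upsample(h, level):
    """Upsample a 2-D filter by inserting 2**level - 1 zeros between taps,
    i.e. apply the sampling matrix D = [[2, 0], [0, 2]] `level` times."""
    step = 2 ** level
    up = np.zeros(((h.shape[0] - 1) * step + 1, (h.shape[1] - 1) * step + 1))
    up[::step, ::step] = h
    return up

def nsp_decompose(img, levels):
    """Non-subsampled pyramid sketch: every level yields one band-pass
    (detail) image of the same size; the final low-pass is returned too."""
    lowpass = np.asarray(img, dtype=float)
    bands = []
    # assumed B-spline low-pass kernel (not prescribed by the paper)
    h = np.outer([1, 4, 6, 4, 1], [1, 4, 6, 4, 1]) / 256.0
    for lev in range(levels):
        smooth = convolve(lowpass, atrous_upsample(h, lev), mode='nearest')
        bands.append(lowpass - smooth)   # band-pass sub-image, same size
        lowpass = smooth                 # feed the low-pass to the next level
    return lowpass, bands
```

Because the detail images telescope, the final low-pass plus all band-pass images reconstructs the input exactly, mirroring the perfect-reconstruction property of the non-subsampled structure.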

3. Improved Nonnegative Matrix Factorization

3.1. Nonnegative Matrix Factorization (NMF)

NMF is a recently developed matrix analysis algorithm [17,18] which not only describes low-dimensional intrinsic structures in high-dimensional space, but also achieves a linear representation of the original sample data by imposing non-negativity constraints on its bases and coefficients. All components are non-negative after decomposition (i.e., a purely additive description), and a non-linear dimension reduction is realized. NMF is defined as follows:

Conduct N observations of an M-dimensional stochastic vector v and record the data as $v_j$, j = 1, 2, …, N; let $V = [V_{\cdot 1}, V_{\cdot 2}, \ldots, V_{\cdot N}]$, where $V_{\cdot j} = v_j$. NMF is required to find a non-negative M × L basis matrix $W = [W_{\cdot 1}, W_{\cdot 2}, \ldots, W_{\cdot L}]$ and an L × N coefficient matrix $H = [H_{\cdot 1}, H_{\cdot 2}, \ldots, H_{\cdot N}]$, so that $V \approx WH$ [17]. The equation can also be written in the more intuitive form $V_{\cdot j} \approx \sum_{i=1}^{L} W_{\cdot i} H_{ij}$, where L should be chosen to satisfy (M + N)L < MN.
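The constraint (M + N)L < MN simply states that the factorization must be compressive: W and H together must store fewer entries than V. A tiny helper (a hypothetical name, not from the paper) makes the check explicit:

```python
def max_compressive_rank(M, N):
    """Largest factorization rank L for which storing the M x L matrix W
    and the L x N matrix H takes fewer numbers than the M x N matrix V,
    i.e. the largest L with (M + N) * L < M * N."""
    return (M * N - 1) // (M + N)
```

For a 100 × 100 data matrix, for instance, any rank up to 49 is compressive, while L = 50 would need exactly as many entries as V itself.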

For the purpose of finding appropriate factors W and H, the two commonly used objective functions are [18]:

$$E(V \| WH) = \left\| V - WH \right\|_F^2 = \sum_{i=1}^{M} \sum_{j=1}^{N} \left( V_{ij} - (WH)_{ij} \right)^2 \qquad (1)$$

$$D(V \| WH) = \sum_{i=1}^{M} \sum_{j=1}^{N} \left( V_{ij} \log \frac{V_{ij}}{(WH)_{ij}} - V_{ij} + (WH)_{ij} \right) \qquad (2)$$

With respect to Equations (1) and (2), ∀i, a, j we require $W_{ia} > 0$ and $H_{aj} > 0$, where a is an integer. $\|\cdot\|_F$ is the Frobenius norm; Equation (1) is called the Euclidean distance, while Equation (2) is referred to as the K-L divergence. Note that finding an approximate solution to $V \approx WH$ is equivalent to optimizing the two objective functions above.
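Both objective functions translate directly into code. The sketch below assumes NumPy and adds a small epsilon inside the logarithm as a numerical safeguard; the epsilon is an implementation convenience, not part of Equation (2).

```python
import numpy as np

def euclid_distance(V, W, H):
    """Equation (1): squared Frobenius norm ||V - WH||_F^2."""
    R = V - W @ H
    return float(np.sum(R * R))

def kl_divergence(V, W, H, eps=1e-12):
    """Equation (2): generalized K-L divergence D(V || WH).
    eps guards the logarithm against zero entries."""
    WH = W @ H
    return float(np.sum(V * np.log((V + eps) / (WH + eps)) - V + WH))
```

Both functions attain their minimum value of 0 exactly when V = WH, which is why minimizing either one is equivalent to solving the approximation problem.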

3.2. Accelerated Nonnegative Matrix Factorization (ANMF)

Roughly speaking, the NMF algorithm has a high time complexity that limits the overall performance of any algorithm built on it, so introducing improved iteration rules to optimize NMF is crucial for promoting efficiency. From an optimization point of view, NMF is a minimization problem with a non-negativity constraint. To date, a wide range of decomposition algorithms have been investigated under non-negativity constraints, such as multiplicative iteration rules, alternating non-negative least squares, the gradient method and the projected gradient method [29], among which the projected gradient approach is capable of reducing the per-iteration time complexity enough to make NMF practical under mass-data conditions. In addition, these works are distinguished by meaningful physical significance, effective sparsity, enhanced classification accuracy and striking reductions in running time. We propose a modified version of projected-gradient NMF that greatly reduces the complexity of the iterations; the main idea of the algorithm is given below.

As we know, the Lee-Seung algorithm alternately updates H and W, fixing one while updating the other, by taking a step in a certain weighted negative-gradient direction, namely:

$$H_{ij} \leftarrow H_{ij} - \eta_{ij} \left[ \nabla_H f \right]_{ij} = H_{ij} + \eta_{ij} \left( W^T A - W^T W H \right)_{ij} \qquad (3)$$
$$W_{ij} \leftarrow W_{ij} - \varsigma_{ij} \left[ \nabla_W f \right]_{ij} = W_{ij} + \varsigma_{ij} \left( A H^T - W H H^T \right)_{ij} \qquad (4)$$
where $\eta_{ij}$ and $\varsigma_{ij}$ are individual weights for the corresponding gradient elements, expressed as follows:
$$\eta_{ij} = \frac{H_{ij}}{\left( W^T W H \right)_{ij}}, \qquad \varsigma_{ij} = \frac{W_{ij}}{\left( W H H^T \right)_{ij}} \qquad (5)$$
and then the updating formulas are:
$$H_{ij} \leftarrow H_{ij} \frac{\left( W^T A \right)_{ij}}{\left( W^T W H \right)_{ij}}, \qquad W_{ij} \leftarrow W_{ij} \frac{\left( A H^T \right)_{ij}}{\left( W H H^T \right)_{ij}} \qquad (6)$$
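The multiplicative form of these updates can be sketched in a few lines of NumPy. The random initialization scheme and the small epsilon stabilizing the denominators are illustrative choices, not prescribed by [18]; each sweep applies the H update and then the W update element-wise.

```python
import numpy as np

def nmf_multiplicative(A, L, iters=200, seed=0, eps=1e-12):
    """Lee-Seung multiplicative updates: H <- H * (W^T A)/(W^T W H),
    W <- W * (A H^T)/(W H H^T). Monotonically non-increasing in
    ||A - WH||_F^2; eps avoids division by zero."""
    rng = np.random.default_rng(seed)
    M, N = A.shape
    W = rng.random((M, L)) + 0.1   # positive random initialization
    H = rng.random((L, N)) + 0.1
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)
        W *= (A @ H.T) / (W @ H @ H.T + eps)
    return W, H
```

Because every factor in the update is non-negative, W and H stay non-negative throughout, which is what makes the decomposition purely additive.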

We notice that the optimal H for a fixed W can be obtained, column by column, by independently solving:

$$\min_{H e_j} \frac{1}{2} \left\| A e_j - W H e_j \right\|_2^2 \quad \text{s.t.} \; H e_j \ge 0 \qquad (7)$$
where ej is the jth column of the n × n identity matrix. Similarly, we can also acquire the optimal W relative to a fixed H by solving, row by row:
$$\min_{W^T e_i} \frac{1}{2} \left\| A^T e_i - H^T W^T e_i \right\|_2^2 \quad \text{s.t.} \; W^T e_i \ge 0 \qquad (8)$$
where ei is the ith column of the m × m identity matrix. Actually, both Equations (7) and (8) can be cast into a common form:
$$\min_{x} \frac{1}{2} \left\| A x - b \right\|_2^2 \quad \text{s.t.} \; x \ge 0 \qquad (9)$$
where A ≥ 0 and b ≥ 0. As the variables and given data are all nonnegative, the problem is therefore named the Totally Nonnegative Least Squares (TNNLS) issue.

We propose to revise the algorithm of [17] by applying the update rule with step-length α from [27] to the successive updates of the objective functions of the two TNNLS problems in Equations (7) and (8). This yields a modified form of the Lee-Seung algorithm that successively updates the matrix H column by column and W row by row, with individual step-lengths α and β for each column of H and each row of W, respectively. The update rules are written as:

$$H_{ij} \leftarrow H_{ij} + \alpha_j \eta_{ij} \left( W^T A - W^T W H \right)_{ij} \qquad (10)$$
$$W_{ij} \leftarrow W_{ij} + \beta_i \varsigma_{ij} \left( A H^T - W H H^T \right)_{ij} \qquad (11)$$
where $\eta_{ij}$ and $\varsigma_{ij}$ are set as described in [27], and $\alpha_j$ (j = 1, 2, …, n) and $\beta_i$ (i = 1, 2, …, m) are step-length parameters computed as follows. Let x > 0, $q = A^T(b - Ax)$ and $p = [x ./ (A^T A x)] \circ q$, where "./" denotes component-wise division and "○" denotes component-wise multiplication. Then, introducing a parameter τ ∈ (0, 1):
$$\alpha = \min \left( \frac{p^T q}{p^T A^T A p}, \; \tau \max \left\{ \hat{\alpha} : x + \hat{\alpha} p \ge 0 \right\} \right)$$

We can easily obtain the step-length formula for $\alpha_j$ or $\beta_i$ if (A, b, x) is replaced by $(W, Ae_j, He_j)$ or $(H^T, A^Te_i, W^Te_i)$, respectively. It is necessary to point out that q is the negative gradient of the objective function, and the search direction p is a diagonally scaled negative-gradient direction. The step-length α or β is either the minimizer of the objective function along the search direction or a τ-fraction of the step to the boundary of the nonnegative quadrant.

We learn from [27] that both quantities, $p^T q / p^T A^T A p$ and $\max\{\hat{\alpha} : x + \hat{\alpha} p \ge 0\}$, are greater than 1 in the definition of the step length α; thereby, we make $\alpha_j \ge 1$ and $\beta_i \ge 1$ by choosing τ sufficiently close to 1. In our experiments we choose τ = 0.99, which practically guarantees that α and β are always greater than 1.
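A sketch of the step-length computation under these definitions is given below. The function name is a hypothetical label, and the treatment of the boundary step as infinite when the search direction has no negative component is an assumed convention; everything else follows the formulas for q, p and α directly.

```python
import numpy as np

def tnnls_step_length(A, b, x, tau=0.99):
    """Step length for one TNNLS update: the minimum of the exact
    line-search minimizer p^T q / (p^T A^T A p) and a tau-fraction of the
    largest step keeping x + alpha*p nonnegative."""
    q = A.T @ (b - A @ x)                   # negative gradient
    p = (x / (A.T @ (A @ x))) * q           # './' and 'o': component-wise ops
    alpha_min = (p @ q) / (p @ (A.T @ (A @ p)))   # exact 1-D minimizer
    neg = p < 0
    # largest feasible step; infinite when no component of p is negative
    alpha_bd = np.min(-x[neg] / p[neg]) if np.any(neg) else np.inf
    return min(alpha_min, tau * alpha_bd)
```

Since p is a descent direction ($p^T q > 0$ whenever x > 0 and q ≠ 0) and the returned step never exceeds the exact line-search minimizer, the objective is non-increasing while the iterate stays in the nonnegative quadrant.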

Obviously, when α = 1 and β = 1, the update Equations (10) and (11) reduce to Equations (3) and (4). In our algorithm, the step-length parameters are allowed to be greater than 1. This indicates that, for any given (W, H), we obtain at least the same or a greater decrease in the objective function than the algorithm in [27]. Hence, we call the proposed algorithm the Accelerated NMF (ANMF). Moreover, the experiments in Section 5.5 demonstrate that the ANMF algorithm is indeed superior to that algorithm, generating better test results, especially when the number of iterations is not too large.

4. The ANMF and NSCT Combined Algorithm

4.1. The Selection of Fusion Rules

As we know, the approximation of an image belongs to the low-frequency part, while the high-frequency counterpart exhibits detailed features such as edges and texture. In this paper, the NSCT method is utilized to separate the high- and low-frequency components of the source images, and then the two parts are processed with different fusion rules according to their features. As a result, the fused image is more complementary, reliable, clear and comprehensible.

By and large, the low-pass sub-band coefficients approximate the original image at low resolution; they generally represent the image contour, but high-frequency details such as edges and region contours are not contained. We therefore use the ANMF algorithm to determine the low-pass sub-band coefficients, which include the holistic features of the two source images. The band-pass directional sub-band coefficients embody particular information such as edges, lines, and region boundaries; their main function is to retain as many spatial details as possible. In this paper, an NHM-based local self-adaptive fusion method is adopted in the band-pass directional sub-band coefficient acquisition phase: the identical degree of corresponding neighborhoods is calculated to select the band-pass coefficient fusion rule (i.e., regional energy or global weighting).

4.2. The Course of Image Fusion

Given two source images A and B of the same size, both registered, let F be the fused image. The fusion process is shown in Figure 2 and the steps are given as follows:

  • Adopt NSCT to implement the multi-scale and multi-direction decompositions of source images A and B, obtaining the sub-band coefficients $\{ C_{i_0}^A(m,n), C_{i,l}^A(m,n) \}$ and $\{ C_{i_0}^B(m,n), C_{i,l}^B(m,n) \}$.

  • Construct the matrix V from the low-pass sub-band coefficients $C_{i_0}^A(m,n)$ and $C_{i_0}^B(m,n)$:

    $$V = [v^A, v^B] = \begin{bmatrix} v_1^A & v_2^A & \cdots & v_n^A \\ v_1^B & v_2^B & \cdots & v_n^B \end{bmatrix}$$
    where $v^A$ and $v^B$ are vectors consisting of the pixels of A and B, respectively, arranged row by row, and n is the number of pixels of a source image. We perform the ANMF algorithm described in Section 3.2 on V, from which W, which is in fact the set of low-pass sub-band coefficients of the fused image F, is separated. We set the maximum iteration number to 1,000 with τ = 0.99.

    The fusion rule NHM is applied to the band-pass directional sub-band coefficients $C_{i,l}^A(m,n)$ and $C_{i,l}^B(m,n)$ of source images A and B. The NHM is calculated as:

    $$NHM_{i,l}(m,n) = \frac{2 \sum_{(k,j) \in N_{i,l}(m,n)} \left| C_{i,l}^A(k,j) \right| \left| C_{i,l}^B(k,j) \right|}{E_{i,l}^A(m,n) + E_{i,l}^B(m,n)}$$
    where $E_{i,l}(m,n)$ is the neighborhood energy at resolution $2^l$ in direction i, and $N_{i,l}(m,n)$ is the 3 × 3 neighborhood centered at point (m, n). In fact, NHM quantifies the identical degree of the corresponding neighborhoods of the two images: the higher the identical degree, the greater the NHM value. Because $0 \le NHM_{i,l}(m,n) \le 1$, we define a threshold T, generally with 0.5 < T < 1. As the quality of the fused image is partly influenced by T (see Table 1), we take two factors into consideration (i.e., when T = 0.75 the SD (Standard Deviation) and AG (Average Gradient) are best), so the threshold is set to T = 0.75. The fusion rule for the band-pass directional sub-band coefficients is expressed as:

    when $NHM_{i,l}(m,n) < T$:

    $$C_{i,l}^F(m,n) = \begin{cases} C_{i,l}^A(m,n), & \text{if } E_{i,l}^A(m,n) \ge E_{i,l}^B(m,n) \\ C_{i,l}^B(m,n), & \text{if } E_{i,l}^A(m,n) < E_{i,l}^B(m,n) \end{cases}$$

    when $NHM_{i,l}(m,n) \ge T$:

    $$C_{i,l}^F(m,n) = NHM_{i,l}(m,n) \max\left( C_{i,l}^A(m,n), C_{i,l}^B(m,n) \right) + \left( 1 - NHM_{i,l}(m,n) \right) \min\left( C_{i,l}^A(m,n), C_{i,l}^B(m,n) \right)$$

  • Perform the inverse NSCT transform on the fusion coefficients of F obtained in the previous steps to get the ultimate fused image F.
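The NHM-based band-pass fusion rule can be sketched for a single sub-band as follows, assuming NumPy/SciPy. The box-filter approximation of the 3 × 3 neighborhood sums, the small denominator guard, and the function name are illustrative choices rather than the paper's exact implementation.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def nhm_fuse(Ca, Cb, T=0.75):
    """Fuse one pair of band-pass sub-bands with the NHM rule.
    Ea, Eb: 3x3 neighborhood energies; nhm in [0, 1] measures how alike
    the corresponding neighborhoods of the two sub-bands are."""
    Ea = uniform_filter(Ca * Ca, size=3) * 9.0          # sum over 3x3 window
    Eb = uniform_filter(Cb * Cb, size=3) * 9.0
    cross = uniform_filter(np.abs(Ca) * np.abs(Cb), size=3) * 9.0
    nhm = 2.0 * cross / (Ea + Eb + 1e-12)               # guard zero energy
    # low similarity: keep the coefficient with larger neighborhood energy
    fused = np.where(Ea >= Eb, Ca, Cb)
    # high similarity: NHM-weighted combination of max and min coefficients
    hi = nhm >= T
    cmax, cmin = np.maximum(Ca, Cb), np.minimum(Ca, Cb)
    fused[hi] = nhm[hi] * cmax[hi] + (1.0 - nhm[hi]) * cmin[hi]
    return fused
```

By the inequality $2|a||b| \le a^2 + b^2$ applied term-by-term, nhm always lies in [0, 1]; identical inputs give nhm ≈ 1 everywhere, so the rule returns the input unchanged.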

5. Experiments and Analysis

5.1. Experimental Conditions and Quantified Evaluation Indexes

To verify the effectiveness of the proposed algorithm, three groups of images are used on the MATLAB 7.1 platform in this section. All source images are registered and have 256 gray levels. We compare our method with five typical algorithms: the NSCT-based method (M1), the NMF-based method (classic NMF, M2), the weighted NMF-based method (M3), PCA, and wavelet.

It may be possible to evaluate image fusion subjectively, but subjective evaluation is easily affected by observer bias and psychological and mental states. Consequently, it is necessary to establish a set of objective criteria for quantitative evaluation. In this paper, we select Information Entropy (IE), Standard Deviation (SD), Average Gradient (AG), Peak Signal to Noise Ratio (PSNR), the Q index [30], Mutual Information (MI), and the Expanded Spectral Angle Mapper (ESAM) [31] as our evaluation metrics. IE is one of the most important evaluation indexes; its value directly reflects the amount of information in the image: the larger the IE, the more information is contained in the fused image. SD indicates the degree of deviation of the pixel gray values from the average of the fused image; in a sense, the fusion effect is in direct proportion to the SD. AG expresses the definition (sharpness) of the fused image, which improves as the AG value increases. PSNR is the ratio between the maximum possible power of a signal and the power of corrupting noise; the larger the PSNR, the better the image. MI measures the mutual dependence of two random variables, so a better fusion effect yields a bigger MI. The Q index measures the amount of edge information "transferred" from the source images to the fused one, estimating the performance of the fusion algorithm; a larger Q value means better performance. ESAM is an especially informative metric for measuring how close the pixel values of two images are, and we take the AE (average ESAM) as an overall quality index measuring the difference between the two source images and the fused one; the higher the AE, the less similar the two images. The AE is computed using a sliding-window approach; in this work, windows of size 16 × 16, 32 × 32, and 64 × 64 are used.
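Three of these metrics (IE, SD, AG) admit short reference implementations. The sketch below follows common textbook definitions, which may differ in small details (e.g., gradient discretization) from the exact formulas used in the paper.

```python
import numpy as np

def information_entropy(img):
    """IE: Shannon entropy (bits) of the 256-bin gray-level histogram."""
    hist = np.bincount(img.astype(np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]                       # 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def standard_deviation(img):
    """SD: deviation of the gray values from the image mean."""
    return float(np.std(img.astype(float)))

def average_gradient(img):
    """AG: mean magnitude of the horizontal/vertical gray-level
    differences, a common proxy for image definition (sharpness)."""
    g = img.astype(float)
    dx = np.diff(g, axis=1)[:-1, :]    # crop to a common (H-1, W-1) grid
    dy = np.diff(g, axis=0)[:, :-1]
    return float(np.mean(np.sqrt((dx * dx + dy * dy) / 2.0)))
```

All three are zero for a constant image and grow with image content, matching the "larger is better" reading used in the tables.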

5.2. Multi-Focus Image Fusion

A pair of "Balloon" images is chosen as the source images; both are 200 by 160 in size. As can be seen from Figure 3(a), the left side of the image is in focus while the other side is out of focus; the opposite holds in Figure 3(b). Six approaches, M1–M3, PCA, wavelet (bi97), and our method, are applied to test the fusion performance. Figure 3(c–h) shows the simulated results.

From an intuitive point of view, the M1 method produces poor intensity, which makes Figure 3(c) somewhat dim. In contrast, the other five algorithms perform better in this respect, but artifacts can be found in the middle right of Figure 3(e). Compared with the M2 and M3 methods, although the definition of the bottom-left region produced by our method is slightly lower than that of the two algorithms, the holistic presentation is superior to both. As for PCA and wavelet, visual effects similar to Figure 3(h) are obtained, except that the middle bottom balloon in Figure 3(f) is slightly blurred. The statistical results in Tables 2 and 3 further verify these visual impressions.

Table 2 illustrates that the proposed method has advantages over most of the other algorithms, since all the criteria, in both the protection of image details and the fusion of image information, are superior to those of M1–M3 and PCA. Specifically, the indexes IE, SD, AG and PSNR of our method exceed those of M1, M2, M3 and PCA by 3.1%, 1.3%, 1.5% and 0.8% (for IE), 6.0%, 2.7%, 2.1% and 1.1% (for SD), 0.6%, 3.3%, 0.6% and 0.3% (for AG), and 5.3%, 2.6%, 1.8% and 1.3% (for PSNR), respectively. These four basic indices indicate that our method provides a better visual effect. As for the index Q, the value of 0.9844 achieved by our method indicates the best fusion performance compared with the former four algorithms. In MI, our method is also the best, being superior to M1 by 19%; the latter is in effect the worst. As for wavelet, four of its six metrics are slightly inferior to ours, and the other two are inferior as well. From Table 3, it can be found that our method has the lowest AE (AEaF and AEbF denote the similarity between source image (a) and the fused one, and between (b) and the fused one, respectively), followed by wavelet, M3, PCA and M2, with M1 having the highest AE. Therefore, in terms of transferring details, the performances of our method, wavelet, M3, PCA, M2 and M1 decrease in that order.

5.3. Medical Image Fusion

Figure 4(a,b) are medical CT and MRI images whose sizes are 256 by 256. Six different methods, including our proposed one, are adopted to evaluate the fusion performance, and the simulated results are shown in Figure 4(c–h).

From Figure 4, the images based on methods M2 and M3 are not fused well enough, because the information in the MRI source image is not fully represented. Although the external contour produced by M1 is clear, the overall effect is poor, as confirmed by the low brightness of the image and some undesirable artifacts observed on both sides of the cheek. In contrast, PCA, wavelet and our method not only produce distinct outlines and rationally control the brightness level, but also preserve and enhance detailed image information well. The related evaluations are recorded in Tables 4 and 5.

As revealed in Table 4, the proposed method is nearly the best, given that the metrics IE, SD, AG and PSNR of Figure 4(h) are all greater than those of the former four algorithms (percentages are not listed). The IE value of M1 is the lowest, which accords precisely with the image. Our method possesses an AG index of 29.209, which implies that the image is clearer than those produced by the other approaches. In PSNR and SD, our method performs well, being second only to the wavelet approach, while the SD of M1 beats those of M2 and M3. As to the Q index and MI, our method takes second place in MI and first place in Q, which indicates that the details and edges of the source images are well inherited; these details and edges are extremely important for medical diagnosis. As in experiment 1, our method achieves the lowest values in both AEaF and AEbF, and those of wavelet, M3, PCA, M2 and M1 follow in ascending order.

5.4. Visible and Infrared Image Fusion

A group of registered visible and infrared images with a size of 360 by 240 showing a person walking in front of a house are labeled as Figure 5(a,b).

Of these, Figure 5(a) has a clear background, but infrared thermal sources cannot be detected; conversely, Figure 5(b) highlights the person and the house but renders the other surroundings weakly. Effective fusion is achieved by all six methods. After concrete analysis of the six fused images, we draw the following conclusions. The image based on method M1 is the worst in overall effect, with in particular a dark area around the person, which is partly caused by the significant differences between the two source images. Method M2 produces smoother details than M1; as a case in point, the road on the right side of the image and the grass on the other side can easily be recognized thanks to the enhanced intensity. Comparable effects are achieved by M3, PCA, wavelet and our method, displayed in Figure 5(e–h), in which we can easily distinguish most parts of the scene, except that the lighting beside the house in Figure 5(e) can hardly be observed. It is difficult to judge the performance of the latter four methods by visual observation alone without the concrete data provided in Table 6.

Insofar as IE, AG and PSNR are concerned, the proposed technique is evidently better than the former four; specifically, the values of our method exceed theirs by 1.6%, 4.9% and 0.7%, while the SD is slightly smaller than that of M3. For the index Q, the optimal value is obtained by the wavelet approach, while that of M1 holds last place. As for MI, our method again ranks first in Table 6. Analogous effects are observed in Table 7: the statistics show that the similarities between the visible-light image, the infrared image and the fused image generated by our method are the best, in that both AEaF and AEbF are the smallest.

5.5. Numerical Experiment on ANMF

In this section, we compare the performance of ANMF with that of the algorithms presented in [27] and [18] in order to demonstrate its advantages. The algorithms are implemented in MATLAB and applied to the Equinox face database [32]. The contrast experiments are conducted four times, where p is as described in Section 3.2 and n denotes the number of images chosen from the face database. The Y axis of Figure 6 represents the number of iterations completed by the three algorithms and the X axis is the elapsed time. We choose one group of these experiments, with p = 100 and n = 1,000, and demonstrate the results in Figure 6: the algorithm in [18] is first run for a given number of iterations and the elapsed time is recorded; then the algorithm in [27] and our algorithm are each run until the time consumed equals that of the former. We note that our algorithm offers improvements at all given time points; however, the relative improvement of our method over the other two algorithms goes down as the number of iterations increases. Specifically, the performance of our method improves by about 36.8%, 26.4%, 15.7%, 12.6% and 7.5% over the algorithm in [27], and by 37.9%, 29.6%, 19.4%, 17.8% and 12.6% over the algorithm in [18], at each of the five time points. In other words, our method converges faster, especially at the early stages, but the advantage tends to decline, which implies that this attribute is mainly useful for real-time applications without very large-scale data sets.

5.6. Discussion

Image fusion with different modalities and numerical tests were conducted in our experiments. The above four experiments indicate that the proposed method has notable superiority in image fusion performance over the four other techniques examined (see Sections 5.2–5.4) and better iteration efficiency (see Section 5.5). We observed that the images based on the wavelet and our proposed methods enjoy the best visual effect, followed by PCA, M3 and M2, with M1 the worst. In addition to visual inspection, quantitative analysis was conducted to verify the validity of our algorithm from the angles of information amount, statistical features, gradient, signal-to-noise ratio, edge preservation, information theory and structural similarity. The values of these metrics prove that the experiments achieve the desired objective.

6. Conclusions

In this paper, we have presented a technique for image fusion based on the NSCT and ANMF models. The accelerated NMF method modifies the traditional update rules for W and H, achieving a better effect through matrix decomposition theory. Current NMF-based approaches usually need more iterations to converge than the proposed method; the same or better results can be attained by our technique with fewer iterations. The simulation results show that the proposed algorithm not only reduces computational complexity, but also achieves better or equal performance compared with the other techniques mentioned, from both the visual and statistical standpoints. Further optimization of our method, in order to apply it to large-scale data sets, will be our next step.

Acknowledgments

This work is supported by the Sichuan Provincial Department of Education (11ZB034). The authors also gratefully acknowledge the helpful comments and suggestions of the reviewers, which have improved the presentation.

References

  1. Abidi, M.A.; Gonzalez, R.C. Data Fusion in Robotics and Machine Intelligence; Academic Press: San Diego, CA, USA, 1992.
  2. Schetselaar, E.M. Fusion by the IHS transform: Should we use cylindrical or spherical coordinates. Int. J. Remote Sens. 1998, 19, 759–765.
  3. Lallier, E. Real-Time Pixel-Level Image Fusion through Adaptive Weight Averaging. Technical Report; Royal Military College of Canada: Kingston, ON, Canada, 1999.
  4. Burt, P.J.; Adelson, E.H. Merging images through pattern decomposition. Proc. SPIE Appl. Digit. Image Process. 1985, 575, 173–181.
  5. Toet, A. Multi-scale contrast enhancement with application to image fusion. Opt. Eng. 1992, 31, 1026–1031.
  6. Mallat, S.G. A theory for multi-resolution signal decomposition: The wavelet representation. IEEE Trans. Pattern Anal. Mach. Intell. 1989, 11, 674–693.
  7. Koren, I.; Laine, A.; Taylor, F. Image fusion using steerable dyadic wavelet transform. Int. Conf. Image Process. 1995, 3, 232–235.
  8. Cao, W.; Li, B.C.; Zhang, Y.A. A remote sensing image fusion method based on PCA transform and wavelet packet transform. Int. Conf. Neural Netw. Signal Process. 2003, 2, 976–981.
  9. Hill, P.R.; Bull, D.R.; Canagarajah, C.N. Image fusion using a new framework for complex wavelet transforms. Int. Conf. Image Process. 2005, 2, 1338–1341.
  10. Ioannidou, S.; Karathanassi, V. Investigation of the dual-tree complex and shift-invariant discrete wavelet transforms on Quickbird image fusion. IEEE Geosci. Remote Sens. Lett. 2007, 4, 166–170.
  11. Nencini, F.; Garzelli, A.; Baronti, S. Remote sensing image fusion using the curvelet transform. Inf. Fusion 2007, 8, 143–156.
  12. Choi, M.; Kim, R.Y.; Nam, M.R. Fusion of multi-spectral and panchromatic satellite images using the curvelet transform. IEEE Geosci. Remote Sens. Lett. 2005, 2, 136–140.
  13. Do, M.N.; Vetterli, M. The contourlet transform: An efficient directional multi-resolution image representation. IEEE Trans. Image Process. 2005, 14, 2091–2106.
  14. Song, H.; Yu, S.; Song, L. Fusion of multi-spectral and panchromatic satellite images based on contourlet transform and local average gradient. Opt. Eng. 2007, 46, 1–3.
  15. Cunha, A.L.; Zhou, J.P.; Do, M.N. The non-subsampled contourlet transform: Theory, design and applications. IEEE Trans. Image Process. 2006, 15, 3089–3101.
  16. Qu, X.B.; Yan, J.W.; Yang, G.D. Multi-focus image fusion method of sharp frequency localized contourlet transform domain based on sum-modified-laplacian. Opt. Precision Eng. 2009, 17, 1203–1211.
  17. Lee, D.D.; Seung, H.S. Learning the parts of objects by non-negative matrix factorization. Nature 1999, 401, 788–791.
  18. Lee, D.D.; Seung, H.S. Algorithms for non-negative matrix factorization. Adv. Neural Inf. Process. Syst. 2001, 13, 556–562.
  19. Buchsbaum, G.; Bloch, O. Color categories revealed by non-negative matrix factorization of Munsell color spectra. Vis. Res. 2002, 42, 559–563.
  20. Miao, Q.G.; Wang, B.S. A novel algorithm of multi-sensor image fusion using non-negative matrix factorization. J. Comput.-Aided Des. Comput. Graph. 2005, 17, 2029–2032.
  21. Pauca, V.; Shahnaz, F.; Berry, M. Text Mining Using Non-Negative Matrix Factorizations. Proceedings of the 4th SIAM International Conference on Data Mining, Lake Buena Vista, FL, USA, 22–24 April 2004; pp. 22–24.
  22. Novak, M.; Mammone, R. Use of Non-Negative Matrix Factorization for Language Model Adaptation in a Lecture Transcription Task. Proceedings of the IEEE International Conference on Acoustics, Speech and Signal Processing, Salt Lake City, UT, USA, 7–11 May 2001; pp. 541–544.
  23. Guillamet, D.; Bressan, M.; Vitria, J. A Weighted Non-Negative Matrix Factorization for Local Representations. Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; pp. 942–947.
  24. Feng, T.; Li, S.Z.; Shum, H.Y. Local Non-Negative Matrix Factorization as a Visual Representation. Proceedings of the 2nd International Conference on Development and Learning, Cambridge, MA, USA, 12–15 June 2002; pp. 1–6.
  25. Liu, W.X.; Yuan, K.H.; Ye, D.T. Reducing microarray data via non-negative matrix factorization for visualization and clustering analysis. J. Biomed. Inf. 2008, 41, 602–606.
  26. Feng, P.; Wei, B.; Jin, W. A fusion algorithm of digital neutron radiation image and digital X-ray image with contourlet transform. Int. Conf. Opt. Instrum. Technol. 2008, 7156, 1–6.
  27. Merritt, M.; Zhang, Y. An interior-point gradient method for large-scale totally nonnegative least squares problems. J. Optim. Theory Appl. 2005, 126, 191–202.
  28. Bamberger, R.H.; Smith, M.T. A filter bank for the directional decomposition of images: Theory and design. IEEE Trans. Signal Process. 1992, 40, 882–893.
  29. Li, L.; Zhang, Y.J. A survey on algorithms of non-negative matrix factorization. Acta Electron. Sinica 2008, 36, 737–747.
  30. Anjali, M.; Bhirud, S.G. Objective criterion for performance evaluation of image fusion techniques. Int. J. Comput. Appl. 2010, 1, 57–60.
  31. Chen, S.H.; Su, H.B.; Zhang, R.H. The tradeoff analysis for remote sensing image fusion using expanded spectral angle mapper. Sensors 2008, 8, 520–528.
  32. EQUINOX Corporation. Equinox Face Database. Available online: http://www.equinoxsensors.com/products/HID.html (accessed on 25 April 2012).
Figure 1. Diagram of NSCT, NSP and NSDFB. (a) NSCT filter bands; (b) Three-level NSP; (c) Decomposition of NSDFB.
Figure 2. Flowchart of the fusion algorithm.
Figure 3. Multi-focus source images and fusion results. (a) Left-focused image; (b) Right-focused image; (c) Fused image based on M1; (d) Fused image based on M2; (e) Fused image based on M3; (f) Fused image based on PCA; (g) Fused image based on wavelet; (h) Fused image based on our method.
Figure 4. Medical source images and fusion results. (a) CT image; (b) MRI image; (c) Fused image based on M1; (d) Fused image based on M2; (e) Fused image based on M3; (f) Fused image based on PCA; (g) Fused image based on wavelet; (h) Fused image based on our method.
Figure 5. Visible and infrared source images and fusion results. (a) Visible band image; (b) Infrared band image; (c) Fused image based on M1; (d) Fused image based on M2; (e) Fused image based on M3; (f) Fused image based on PCA; (g) Fused image based on wavelet; (h) Fused image based on our method.
Figure 6. Numerical comparison between three algorithms.
Table 1. The tradeoff selection for T.

| T    | SD     | AG     | T    | SD     | AG     |
|------|--------|--------|------|--------|--------|
| 0.55 | 30.478 | 8.3784 | 0.75 | 30.539 | 8.5109 |
| 0.60 | 30.664 | 8.4322 | 0.80 | 30.541 | 8.4376 |
| 0.65 | 30.412 | 8.4509 | 0.90 | 30.629 | 8.4415 |
| 0.70 | 30.456 | 8.5322 | 0.95 | 30.376 | 8.2018 |
Table 2. Comparison of the fusion methods for multi-focus images.

| Metric    | M1     | M2     | M3     | PCA    | Wavelet | Proposed method |
|-----------|--------|--------|--------|--------|---------|-----------------|
| IE        | 7.3276 | 7.4594 | 7.4486 | 7.4937 | 7.5982  | 7.5608          |
| SD        | 28.705 | 29.728 | 29.934 | 30.206 | 31.127  | 30.539          |
| AG        | 8.4581 | 8.2395 | 8.4595 | 8.4853 | 8.5014  | 8.5109          |
| PSNR (dB) | 35.236 | 36.246 | 36.539 | 36.746 | 37.533  | 37.224          |
| Q Index   | 0.9579 | 0.9723 | 0.9706 | 0.9812 | 0.9901  | 0.9844          |
| MI        | 3.4132 | 3.5268 | 3.9801 | 4.0538 | 4.1257  | 4.2578          |
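The IE, AG and PSNR figures reported above are standard objective measures. As a minimal illustrative sketch (NumPy, assuming 8-bit grayscale inputs; the function names are ours, not the paper's):

```python
import numpy as np

def information_entropy(img):
    """Shannon entropy (IE) of the 8-bit grey-level histogram, in bits."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # ignore empty bins (0 * log 0 = 0)
    return float(-np.sum(p * np.log2(p)))

def average_gradient(img):
    """Average gradient (AG): mean magnitude of local intensity differences."""
    img = img.astype(np.float64)
    gx = np.diff(img, axis=1)[:-1, :]  # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]  # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2)))

def psnr(ref, fused):
    """Peak signal-to-noise ratio in dB against a reference image."""
    mse = np.mean((ref.astype(np.float64) - fused.astype(np.float64)) ** 2)
    return float(10 * np.log10(255.0 ** 2 / mse))
```

Higher IE, AG and PSNR all indicate a more informative, sharper fused result, which is how the table columns should be read.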
Table 3. ESAM values between multi-focus and fused images.

| Metric | Block size | M1    | M2    | PCA   | M3    | Wavelet | Proposed method |
|--------|-----------|-------|-------|-------|-------|---------|-----------------|
| AEaF   | 16 × 16   | 20.37 | 19.96 | 19.89 | 19.82 | 19.27   | 18.96           |
| AEaF   | 32 × 32   | 19.85 | 19.32 | 19.29 | 19.24 | 18.95   | 18.42           |
| AEaF   | 64 × 64   | 19.06 | 18.62 | 18.53 | 18.42 | 18.13   | 17.95           |
| AEbF   | 16 × 16   | 20.08 | 19.43 | 19.38 | 19.35 | 18.87   | 18.54           |
| AEbF   | 32 × 32   | 19.62 | 18.88 | 18.81 | 18.76 | 18.11   | 17.96           |
| AEbF   | 64 × 64   | 18.98 | 18.27 | 18.15 | 18.03 | 17.66   | 17.38           |
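The AE values are expanded spectral angle mapper (ESAM) measures [31], averaged over blocks of the stated size between each source image and the fused result. The paper's exact expansion is not reproduced here; the sketch below computes the classical block-wise spectral angle on which ESAM builds (NumPy; the block partitioning and function name are illustrative assumptions):

```python
import numpy as np

def block_sam_deg(src, fused, n):
    """Average spectral angle (degrees) between corresponding n-by-n blocks
    of a source image and the fused image, treating each block as a vector."""
    src = src.astype(np.float64)
    fused = fused.astype(np.float64)
    h, w = src.shape
    angles = []
    for i in range(0, h - h % n, n):
        for j in range(0, w - w % n, n):
            a = src[i:i + n, j:j + n].ravel()
            b = fused[i:i + n, j:j + n].ravel()
            denom = np.linalg.norm(a) * np.linalg.norm(b)
            if denom == 0:
                continue                       # skip all-zero blocks
            cos = np.clip(np.dot(a, b) / denom, -1.0, 1.0)
            angles.append(np.degrees(np.arccos(cos)))
    return float(np.mean(angles))
```

A smaller average angle means the fused image preserves the source's spectral character more faithfully, which is why the proposed method's lower AE values in the tables indicate an improvement.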
Table 4. Comparison of the fusion methods for medical images.

| Metric    | M1     | M2     | M3     | PCA    | Wavelet | Proposed method |
|-----------|--------|--------|--------|--------|---------|-----------------|
| IE        | 5.4466 | 5.7628 | 5.7519 | 5.8875 | 6.1022  | 6.0641          |
| SD        | 29.207 | 27.768 | 27.883 | 28.549 | 31.836  | 31.628          |
| AG        | 20.361 | 26.583 | 25.194 | 27.358 | 28.573  | 29.209          |
| PSNR (dB) | 36.842 | 37.238 | 37.428 | 37.853 | 38.737  | 38.458          |
| Q Index   | 0.9607 | 0.9695 | 0.9714 | 0.9821 | 0.9874  | 0.9835          |
| MI        | 4.0528 | 4.3726 | 4.3942 | 4.5522 | 4.8736  | 5.0837          |
Table 5. ESAM values between CT, MRI and fused images.

| Metric | Block size | M1    | M2    | PCA   | M3    | Wavelet | Proposed method |
|--------|-----------|-------|-------|-------|-------|---------|-----------------|
| AEaF   | 16 × 16   | 18.45 | 18.09 | 17.83 | 17.64 | 17.33   | 17.04           |
| AEaF   | 32 × 32   | 18.13 | 17.67 | 17.32 | 17.08 | 16.79   | 16.58           |
| AEaF   | 64 × 64   | 17.74 | 17.22 | 16.95 | 16.82 | 16.57   | 16.12           |
| AEbF   | 16 × 16   | 18.39 | 18.12 | 17.79 | 17.53 | 17.38   | 17.11           |
| AEbF   | 32 × 32   | 18.08 | 17.74 | 17.21 | 17.09 | 16.91   | 16.62           |
| AEbF   | 64 × 64   | 17.76 | 17.36 | 17.05 | 16.85 | 16.34   | 16.17           |
Table 6. Comparison of the fusion methods for visible and infrared images.

| Metric    | M1     | M2     | M3     | PCA    | Wavelet | Proposed method |
|-----------|--------|--------|--------|--------|---------|-----------------|
| IE        | 6.2103 | 6.3278 | 6.6812 | 6.7216 | 6.8051  | 6.7962          |
| SD        | 23.876 | 22.638 | 25.041 | 24.865 | 25.137  | 25.029          |
| AG        | 3.2746 | 3.0833 | 3.3695 | 3.4276 | 3.5234  | 3.5428          |
| PSNR (dB) | 37.093 | 38.267 | 38.727 | 38.971 | 39.765  | 39.021          |
| Q Index   | 0.9761 | 0.9784 | 0.9812 | 0.9836 | 0.9956  | 0.9903          |
| MI        | 3.8257 | 4.2619 | 4.3128 | 4.5595 | 4.6392  | 4.7156          |
Table 7. ESAM values between visible, infrared and fused images.

| Metric | Block size | M1    | M2    | PCA   | M3    | Wavelet | Proposed method |
|--------|-----------|-------|-------|-------|-------|---------|-----------------|
| AEaF   | 16 × 16   | 22.53 | 22.17 | 21.88 | 21.69 | 21.14   | 21.03           |
| AEaF   | 32 × 32   | 22.14 | 21.84 | 21.65 | 21.13 | 20.82   | 20.56           |
| AEaF   | 64 × 64   | 21.75 | 21.36 | 20.83 | 20.52 | 20.06   | 19.94           |
| AEbF   | 16 × 16   | 22.44 | 22.13 | 21.76 | 21.38 | 21.03   | 20.87           |
| AEbF   | 32 × 32   | 22.08 | 21.22 | 20.93 | 20.69 | 20.47   | 20.15           |
| AEbF   | 64 × 64   | 21.69 | 20.87 | 20.55 | 20.07 | 19.89   | 19.68           |