Article

Multi-Sensor Image Fusion Using Optimized Support Vector Machine and Multiscale Weighted Principal Component Analysis

by Shanshan Huang, Yikun Yang, Xin Jin, Ya Zhang, Qian Jiang and Shaowen Yao
1 National Pilot School of Software, Yunnan University, Kunming 650091, China
2 Engineering Research Center of Cyberspace, Yunnan University, Kunming 650091, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(9), 1531; https://doi.org/10.3390/electronics9091531
Submission received: 7 August 2020 / Revised: 11 September 2020 / Accepted: 16 September 2020 / Published: 18 September 2020
(This article belongs to the Special Issue Pattern Recognition and Applications)

Abstract

Multi-sensor image fusion is used to combine the complementary information of source images from multiple sensors. Recently, conventional image fusion schemes based on signal processing techniques have been studied extensively, and machine learning-based techniques have been introduced into image fusion because of their prominent advantages. In this work, a new multi-sensor image fusion method based on the support vector machine and principal component analysis is proposed. First, the key features of the source images are extracted by combining the sliding window technique with five effective evaluation indicators. Second, a trained support vector machine model is used to extract the focused and unfocused regions of the source images according to the extracted image features, and a fusion decision is thereby obtained for each source image. Then, a consistency verification operation is used to remove isolated singular points from the decisions of the trained classifier. Finally, a novel method based on principal component analysis and a multi-scale sliding window is proposed to handle the disputed areas in the fusion decision pair. Experiments are performed to verify the performance of the new combined method.

1. Introduction

Multi-sensor image fusion is a synthesis technique that can fuse source images from multiple sensors into a high-quality image with comprehensive information [1,2,3]. The technique is widely used in visual sensor networks, for applications such as military defense, security monitoring, and image inpainting. In digital photography, it is difficult for a single-lens reflex camera to take an image that presents all objects in focus [4,5]. To obtain all-in-focus images, multisource images of the same scene with different focuses are fused into one single image, which is called multi-focus image fusion [6]. Most of the existing multi-focus image fusion methods can be classified into two strategies: signal processing-based fusion methods (such as transform domain methods, spatial domain methods, and hybrid methods) and machine learning-based fusion methods (such as artificial neural networks, fuzzy systems, and support vector machines).
Generally, transform domain-based fusion methods include three stages: first, the source images are transformed to obtain the decomposed sub-band coefficients of each image; then, a certain fusion rule is applied to integrate the corresponding sub-band coefficients and obtain the fused coefficients; finally, the fused coefficients are converted into the fused image by the inverse transformation [7,8,9]. The classical signal processing-based fusion methods include principal component analysis (PCA) [10], discrete wavelet transform (DWT) [11], transforms based on nonsubsampled operations (such as the nonsubsampled shearlet transform, the nonsubsampled contourlet transform, and the stationary wavelet transform) [12], multi-resolution singular value decomposition (MSVD) [13], discrete cosine harmonic wavelet transform (DCHWT) [14], and so on. However, conventional image fusion methods may produce unpredictable errors between the forward and inverse transforms, and these errors may cause image distortion and artifacts.
With the development of neural networks, researchers have been devoted to introducing deep learning into image fusion, especially multi-focus image fusion, which can be modeled as a pixel classification task [15,16,17,18,19]. In recent years, image fusion methods based on deep learning models have emerged and shown great development potential in some situations [20,21]. Liu et al. [15], in 2017, applied a deep convolutional neural network (DCNN) to multi-focus image fusion. This method regarded image fusion as a binary classification problem, but it was still a spatial domain fusion method that may suffer from block effects. To solve this problem, Mustafa et al. [22] proposed a multi-focus image fusion method that combined the feature extraction, fusion, and reconstruction tasks into a complete unsupervised end-to-end model. With the development of generative adversarial networks (GANs), they have also shown great capacity in the field of image fusion. Guo et al. [23] proposed a multi-focus image fusion method based on conditional generative adversarial networks (cGANs), which achieved good image fusion performance. However, image fusion methods based on deep learning also have some limitations; for example, a large number of samples, substantial computational resources, and considerable time are needed to train a good model, and many hyper-parameters must be adjusted manually [24]. Considering the tradeoff between computational cost and fusion performance, shallow machine learning methods also have some advantages in image fusion because they require limited computing resources and fewer training samples. The support vector machine (SVM), which can be regarded as a classical shallow learning model with a hidden layer, is normally trained with some extracted features to distinguish the focused and unfocused regions that are employed for generating fusion decisions [18,19]. Because shallow machine learning models lack feature extraction capability, it is necessary to employ a given feature extraction method to present the image features (such as texture, structure, and edges), which has great significance for the improvement of image fusion performance.
In this work, a novel multi-focus image fusion method based on SVM, multiscale PCA, and a feature extraction method is introduced. The method first uses the sliding window technique to extract the detailed features of the different source images. Then, the focused and unfocused areas of the source images are extracted by a pre-trained SVM. In the fusion stage, the fusion decisions of the different source images are combined with a set of logic operations, and then consistency verification (CV) is carried out to optimize the decisions. At last, a new pixel-weighted image fusion scheme based on multi-scale PCA is designed to process the disputed decisions at the same positions of different source images. The contributions of this work are summarized as follows.
  • This work designs a regional feature extraction method based on five image fusion evaluation metrics and the extracted regional features are then employed as the input of an SVM model to produce pixel fusion decisions. This design can avoid inputting the complete image into SVM.
  • An SVM-based spatial image focus detection method is introduced to distinguish the focused and unfocused regions for integrating different source images, and the new method requires only a few training samples to identify the focused and unfocused areas.
  • A multi-scale weighted image fusion method based on PCA is proposed to handle the disputed regions that come from the same position of the decision masks of different source images. The proposed multi-scale image fusion method based on PCA has better performance compared to the conventional PCA methods.
The remaining sections of the paper are presented as follows. In Section 2, the basic theories of the SVM and PCA-based image fusion method are briefly reviewed. In Section 3, the proposed image fusion method is reported. The experimental results and analysis are described in Section 4. Section 5 concludes this work.

2. Related Work

The related work and basic theories of the multi-focus image fusion method based on SVM and PCA are briefly reviewed in this section.

2.1. Multi-Focus Image Fusion

Multi-focus image fusion can fuse multiple images with different focuses to obtain a fully focused image. In 2016, a multi-focus image fusion method based on SVM and a hybrid wavelet was proposed by Yu et al. [19]. In this method, multi-focus image fusion was regarded as a binary classification problem: focus and non-focus. However, this method introduced some noise when obtaining the fused image. In 2018, Siddique et al. [25] proposed an image fusion method based on color-principal component analysis (C-PCA), which was divided into three stages: first, color PCA and enhanced color properties were used to generate the intermediate images; second, the salient features of an image were extracted by the Laplacian of Gaussian; third, the spatial frequency was used as the focus measurement to obtain the final fused image. In 2020, Tyagi et al. [26] proposed a hybrid and parallel processing fusion technique for multi-focus images based on the stationary wavelet transform (SWT) and principal component analysis (PCA). Recently, more and more researchers have carried out research on multi-focus image fusion methods based on deep learning. In 2018, Tang et al. [20] proposed a pixel-wise CNN (p-CNN) that can recognize the focused and defocused pixels in source images from their neighborhood information for multi-focus image fusion. More recently, end-to-end modeling of multi-focus image fusion based on U-shaped networks was proposed by Li et al. [27]. However, multi-focus image fusion based on deep learning usually consumes a lot of computing resources and time, which is a limitation of these methods. To avoid this problem, a shallow machine learning approach is applied in the proposed method.

2.2. SVM Model and Its Application in Image Fusion

SVM is a generalized linear classifier with a supervised learning style, and its decision boundary is obtained by the maximum-margin hyperplane learned according to the samples [28]. In this work, the multi-focus image fusion problem is handled as a classification task, thus SVM can be employed for the pixel-level image fusion task. The theory of SVM can be defined by:
$$\min_{\omega,b}\ \frac{\|\omega\|^2}{2} \quad \text{s.t.}\quad y_i\left(\omega^{T}x_i + b\right) \ge 1,\ i = 1, 2, \ldots, m$$
where $\omega = (\omega_1; \omega_2; \ldots; \omega_d)$ is the normal vector that determines the direction of the hyperplane; $b$ represents the displacement term and determines the distance between the hyperplane and the origin.
Because the samples are often not linearly separable, a kernel function can be employed to map the sample features from a low-dimensional space into a high-dimensional space, so that the samples become separable in the high-dimensional space. Therefore, the radial basis function (RBF) kernel is employed to address this problem, which is defined as:
$$\kappa(x_i, x_j) = \exp\!\left(-\frac{\|x_i - x_j\|^2}{2\sigma^2}\right)$$
In practical problems, it is very difficult to find a proper kernel function that makes the samples completely separable in the feature space, and even when such a separation is found, it is hard to tell whether it is simply the result of overfitting. Thus, a soft margin is introduced, which sacrifices the correct classification of a few samples in exchange for a larger classification margin. The basic SVM model with a soft margin is defined by:
$$\min_{\omega,b,\xi_i}\ \frac{\|\omega\|^2}{2} + C\sum_{i=1}^{m}\xi_i \quad \text{s.t.}\quad y_i\left(\omega^{T}x_i + b\right) \ge 1 - \xi_i,\ \xi_i \ge 0,\ i = 1, 2, \ldots, m$$
where $\xi_i$ is the slack variable, which is used to record the wrongly classified samples; the constant $C \ge 0$ is called the penalty parameter, which controls the tolerance for misclassified samples.
In this work, particle swarm optimization (PSO) is employed to obtain the optimized settings of the SVM automatically; the optimized parameters are the penalty parameter C and the RBF kernel parameter g [29]. PSO is widely used for parameter optimization problems. In the solution space, each particle of PSO describes a candidate solution for the given problem. The best solution found so far by an individual particle is called its locally optimal solution (personal best), and the best solution found by the whole swarm is called the globally optimal solution. Each particle iteratively adjusts its trajectory according to the local and global optimal solutions. As a result, PSO can find a set of optimized parameters for the SVM instead of relying on repeated manual trials.
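To make this parameter search concrete, the following is a minimal sketch of a PSO loop that tunes (C, g) for an RBF-SVM. It substitutes scikit-learn's SVC for the LIBSVM package used in this work, and the swarm size, inertia and acceleration coefficients, iteration count, and search ranges are illustrative assumptions rather than the authors' settings.

```python
# Minimal PSO search for the RBF-SVM parameters (C, g); a sketch, not the authors' code.
# Assumptions: scikit-learn's SVC stands in for LIBSVM; swarm size, coefficients,
# iteration count, and search ranges are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def fitness(params, X, y):
    C, g = params
    clf = SVC(C=C, gamma=g, kernel="rbf")
    return cross_val_score(clf, X, y, cv=3).mean()        # classification accuracy as fitness

def pso_svm(X, y, n_particles=10, n_iter=20, bounds=((1e-3, 1e3), (1e-4, 1e2))):
    rng = np.random.default_rng(0)
    lo, hi = np.array([b[0] for b in bounds]), np.array([b[1] for b in bounds])
    pos = rng.uniform(lo, hi, size=(n_particles, 2))      # each particle is a (C, g) pair
    vel = np.zeros_like(pos)
    pbest, pbest_fit = pos.copy(), np.array([fitness(p, X, y) for p in pos])
    best = pbest_fit.argmax()
    gbest, gbest_fit = pbest[best].copy(), pbest_fit[best]   # global best solution
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, 1))
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        fit = np.array([fitness(p, X, y) for p in pos])
        better = fit > pbest_fit                          # update personal bests
        pbest[better], pbest_fit[better] = pos[better], fit[better]
        if pbest_fit.max() > gbest_fit:                   # update the swarm's global best
            best = pbest_fit.argmax()
            gbest, gbest_fit = pbest[best].copy(), pbest_fit[best]
    return gbest                                          # optimized (C, g)
```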

2.3. PCA-Based Image Fusion

PCA is a popular dimensionality reduction method that can maintain the key features of the input variable, such as an image. In PCA-based image fusion methods, the principal components of two different source images are employed to obtain a global fusion weight [10]. However, the global fusion weight calculated by classical PCA-based image fusion cannot effectively present the detailed features of the source images. In [30], the authors described a hierarchical PCA image fusion method that takes window-based image information into consideration to obtain regional weights; however, it only considers a single-scale image feature, which is not enough to obtain good fusion performance. The processes of the conventional PCA-based image fusion method are described in Algorithm 1.
Algorithm 1 PCA-based image fusion
0: Input: two source images im_1, im_2.
1: Decentralize the input image pixels.
2: Convert each image into a column vector and constitute a new matrix M.
3: Calculate the covariance matrix COV of the matrix M.
4: Produce a diagonal matrix D of eigenvalues and a full matrix V whose columns are the corresponding eigenvectors.
5: Obtain the fusion weights, where the calculation process is defined as follows:
6:   if (D(1,1) > D(2,2))
7:     a = V(:,1) ./ sum(V(:,1))
8:   else
9:     a = V(:,2) ./ sum(V(:,2))
10: Fuse the two source images, where the calculation process is defined as follows:
11:   F = a(1) × im_1 + a(2) × im_2
12: Output: fused image F.
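As a reference for Algorithm 1, the following is a minimal NumPy sketch of conventional PCA-based fusion; the variable names mirror the algorithm, while the implementation details (grayscale inputs of equal size, eigen-decomposition via numpy.linalg.eigh) are our own assumptions.

```python
# A sketch of Algorithm 1 (conventional PCA-based image fusion); assumes two
# grayscale source images of identical size.
import numpy as np

def pca_fusion(im1, im2):
    # Steps 1-2: decentralize the pixels and stack the two images as row vectors.
    v1, v2 = im1.astype(float).ravel(), im2.astype(float).ravel()
    M = np.stack([v1 - v1.mean(), v2 - v2.mean()])
    # Steps 3-4: covariance matrix and its eigen-decomposition.
    COV = np.cov(M)                        # 2 x 2 covariance matrix
    D, V = np.linalg.eigh(COV)             # eigenvalues (ascending) and eigenvectors (columns)
    # Steps 5-9: the eigenvector of the largest eigenvalue gives the global weights.
    v = V[:, np.argmax(D)]
    a = v / v.sum()
    # Steps 10-11: weighted fusion of the two source images.
    return a[0] * im1.astype(float) + a[1] * im2.astype(float)
```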

3. The Proposed Image Fusion Method

The scheme of our proposed image fusion algorithm is shown in Figure 1. According to the proposed scheme, the processing of the proposed image fusion method can be divided into four steps: (1) the detailed features of the focused and unfocused regions in the source images are extracted using a given sliding window, which is marked by the red box; (2) an SVM is trained with the extracted features and labels, and then two decision masks are produced by the pre-trained SVM model, which is marked by the blue box; (3) the undisputed decisions of the given source image pair are first extracted, and then the pixels corresponding to the undisputed decisions are fused to obtain F1, which is marked by the yellow box; (4) the disputed decisions of the given source image pair are extracted, and then the pixels in the disputed decisions are fused with the proposed multiscale weighted PCA (MWPCA) to obtain F2, which is roughly marked by the green box. Finally, the fused image is obtained by a logic operation on F1 and F2.

3.1. Our Proposed Image Fusion Method Based on Pixel Classification

This sub-section introduces our proposed multi-focus image fusion method based on pixel classification, which includes image feature extraction, RBF-SVM training, and parameter settings.

3.1.1. Feature Extraction

According to Figure 1, the two source images are traversed by a sliding window that extracts five features. These features, which represent the regional degree of focus around a pixel, constitute the feature vectors that are input into the SVM model. Given a pixel im(i, j), this method employs an n × n window to calculate the regional features of its surrounding pixels. Moreover, pixels beyond the image perimeter are needed to represent the regional features of the boundary pixels, thus a mirroring method is used to expand the boundary area according to the defined window. The size of the expanded area is:
$$s = (n - 1)/2$$
where s represents the size of the expanded area and n is the sliding window size; the step size is set to 1 so that all pixels of the source images are traversed.
To present the regional features of the source image and achieve dimensionality reduction, five important image fusion metrics were selected based on our repeated trials. These five metrics are employed to present the detailed features of a given image within the sliding window. When the window slides to a pixel position, the five metrics are calculated to form a feature vector, and the input features of the SVM are formed when the sliding window has traversed all pixels of the source images. The metrics are the standard deviation (STD), spatial frequency (SF), average gradient (AG), energy of image gradient (EIG), and sum-modified Laplacian (SML) [19,31]. In this subsection, im(i, j) represents the value of pixel (i, j), and M and N are the source image sizes.
STD can be employed to analyze the statistical distribution and contrast information of a given image, which is presented as follows:
$$STD = \sqrt{\frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left(im(i,j)-\mu\right)^2}$$
where μ is the mean value (MV) and defined by:
$$\mu = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} im(i,j)$$
SF presents the spatial activity of a given image, which is described as follows:
$$SF = \sqrt{RF^2 + CF^2}$$
$$RF = \sqrt{\frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=2}^{N}\left[im(i,j)-im(i,j-1)\right]^2}$$
$$CF = \sqrt{\frac{1}{M \times N}\sum_{j=1}^{N}\sum_{i=2}^{M}\left[im(i,j)-im(i-1,j)\right]^2}$$
where R F presents the row frequency of a given image, C F presents the column frequency.
AG evaluates the sharpness of a given image by different directions to show the details and texture information of the image, which is shown as follows:
$$AG = \frac{1}{M \times N}\sum_{i=1}^{M-1}\sum_{j=1}^{N-1}\sqrt{\frac{\left(im(i,j)-im(i+1,j)\right)^2+\left(im(i,j)-im(i,j+1)\right)^2}{2}}$$
EIG can present the gradient information of an image by considering the features between the adjacent pixels, which is shown as follows:
$$D(im) = \sum_{i}\sum_{j}\left(\left|im(i+1,j)-im(i,j)\right|^2+\left|im(i,j+1)-im(i,j)\right|^2\right)$$
SML is an improved version of the basic definition of energy of Laplacian to present the gradient information of an image, which is shown as follows:
$$\nabla^2_{ML}\, im(i,j) = \left|2\,im(i,j) - im(i-\beta,j) - im(i+\beta,j)\right| + \left|2\,im(i,j) - im(i,j-\beta) - im(i,j+\beta)\right|$$
where β is set as 1 to adjust the variation of features in a given image, and SML is shown as follows:
$$SML = \sum_{x=i-N}^{i+N}\sum_{y=j-N}^{j+N} \nabla^2_{ML}\, im(x,y) \quad \text{for}\ \nabla^2_{ML}\, im(x,y) \ge T$$
where T is the discrimination threshold value and N is the window size of the SML.
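For illustration, a sketch of computing the five regional features inside one window, and of traversing a mirror-padded source image with it, is given below. The SML parameters (beta and the threshold T) and the per-pixel Python loop are illustrative assumptions rather than the authors' implementation.

```python
# A sketch of the five-feature regional descriptor (STD, SF, AG, EIG, SML) used as SVM input.
# beta and the SML threshold T are illustrative assumptions.
import numpy as np

def window_features(w, beta=1, T=0.0):
    w = w.astype(float)
    M, N = w.shape
    std = w.std()                                                     # STD
    rf = np.sqrt(np.sum((w[:, 1:] - w[:, :-1]) ** 2) / (M * N))       # row frequency
    cf = np.sqrt(np.sum((w[1:, :] - w[:-1, :]) ** 2) / (M * N))       # column frequency
    sf = np.hypot(rf, cf)                                             # SF
    dx = w[1:, :-1] - w[:-1, :-1]
    dy = w[:-1, 1:] - w[:-1, :-1]
    ag = np.sum(np.sqrt((dx ** 2 + dy ** 2) / 2)) / (M * N)           # AG
    eig = np.sum((w[1:, :] - w[:-1, :]) ** 2) + np.sum((w[:, 1:] - w[:, :-1]) ** 2)  # EIG
    c = w[beta:-beta, beta:-beta]                                     # modified Laplacian
    ml = (np.abs(2 * c - w[:-2 * beta, beta:-beta] - w[2 * beta:, beta:-beta])
          + np.abs(2 * c - w[beta:-beta, :-2 * beta] - w[beta:-beta, 2 * beta:]))
    sml = ml[ml >= T].sum()                                           # SML
    return np.array([std, sf, ag, eig, sml])

def extract_features(im, n=9):
    s = (n - 1) // 2                                    # boundary expansion size, s = (n - 1)/2
    padded = np.pad(im, s, mode="symmetric")            # mirror the boundary pixels
    feats = [window_features(padded[i:i + n, j:j + n])
             for i in range(im.shape[0]) for j in range(im.shape[1])]
    return np.array(feats)                              # one 5-dim feature vector per pixel
```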

3.1.2. SVM Model Training and Fusion Decision Mask

For SVM training, we first cut blocks from the focused and unfocused areas of multi-focus images. The feature extraction method is then used to build a training dataset, in which 0 represents unfocused and 1 represents focused. The PSO method is utilized to find the optimized parameters of the SVM: the C and g that yield the best accuracy are selected as the optimized parameters. Thus, a classification model is trained.
The trained SVM model is utilized to judge the focused and unfocused regions in a given source image, which is processed in the same way as the training set. A pair of given source images (im_1, im_2) is processed in the following steps:
  • Traverse a given source image using sliding windows to get a set of pixel vectors.
  • Calculate five indicators in each sliding window to obtain the regional feature of the central pixel in the given source image.
  • The trained SVM model is used to label each pixel as "1" or "0", which determines whether each pixel in the given source image belongs to the focused or the unfocused area.
  • The decision results are reconstituted into the image fusion masks.
The focused and unfocused regions of a source image pair should be complementary, which means the fusion decisions of the corresponding pixels of the two source images should also be complementary. However, the fusion decisions obtained by the SVM model may not be perfectly complementary because some decisions are incorrect. Figure 2 presents two groups of fusion decisions obtained for the source image pairs "head" and "wine bottle". We can find that some fusion decisions from different source images are disputed; therefore, we cannot decide which pixel should be fused into the final image. For example, some disputed fusion decisions are marked by the red arrows in Figure 2.
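A minimal sketch of this mask-generation step is given below; it assumes the extract_features routine sketched in Section 3.1.1 and a pre-trained classifier with a scikit-learn-style predict method (both are our assumptions).

```python
# A sketch of decision-mask generation: the pre-trained RBF-SVM labels every pixel from its
# regional feature vector, and the labels are reshaped into a binary mask (1 = focused).
import numpy as np

def decision_mask(im, svm_model, n=9):
    feats = extract_features(im, n=n)      # one 5-dim feature vector per pixel (Section 3.1.1)
    labels = svm_model.predict(feats)      # 1 = focused, 0 = unfocused
    return labels.reshape(im.shape).astype(np.uint8)

# Usage sketch: the masks of a source pair may disagree at some positions, which
# produces the disputed areas handled by MWPCA in Section 3.2.
# M1, M2 = decision_mask(im1, svm), decision_mask(im2, svm)
```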

3.2. Our Proposed Multiscale Image Fusion Method Based on PCA

In this work, multiscale weighted PCA (MWPCA) is proposed to handle the fusion masks generated by the SVM model. The local features of the source images are regarded as a key factor in multi-focus image fusion. Thus, a novel image fusion method based on PCA with a joint sliding window is employed to fuse the source images, in which the fusion weight of each pixel in the disputed area is calculated [32]. Since each size of the sliding window only reflects the regional features at a single scale, windows with different sizes are simultaneously combined with PCA to obtain the corresponding fusion weights. Thus, the regional features of the source images can be represented at multiple scales. To enhance the fusion results, MWPCA handles the disputed area by considering the regional features of the source images. MWPCA is also an integrated fusion method, and the scheme of MWPCA is shown in Figure 3.
The fused images of our proposed MWPCA are obviously better than those of conventional PCA-based methods, and the experiments are shown in Section 4. The processes of MWPCA are shown in Algorithm 2.
Algorithm 2 MWPCA
0: Input: source images im_1, im_2.
1: Define a group of sliding window pairs with different sizes D_n = {imw_{n,1}, imw_{n,2}}, where imw_{n,1} and imw_{n,2} are the windows of size n applied to im_1 and im_2.
2: Get a defined sliding window.
3:   Traverse the two source images with the defined sliding window.
4:     Put the outputs of the window pair D_n into PCA to calculate the eigenvectors.
5:     Select the eigenvector V_n corresponding to the largest eigenvalue.
6:     Generate the weighted vector α_n, which is calculated as follows:
7:       α_{n1} = V_{n1} / sum(V_n),  α_{n2} = V_{n2} / sum(V_n)
         where α_{n1} and V_{n1} are the first values in the vectors α_n and V_n; α_{n2} and V_{n2} are the second values in the vectors α_n and V_n.
8:     Output the weighted vector α_n, and record α_{n1} and α_{n2}.
9:     The weights are combined with the corresponding pixel values of the images im_1 and im_2, as follows:
10:      Y_n = α_{n1} × im_1(i, j) + α_{n2} × im_2(i, j)
11:  Repeat steps 3–10 until the sliding window pair has traversed all pixels of the source images im_1 and im_2, and finally obtain a weighted image Y_n.
12: Repeat steps 2–11 for the different sizes of sliding windows to obtain n weighted images, and calculate the second fused image as:
13:   F_2 = (1/n) × Y_1 + (1/n) × Y_2 + … + (1/n) × Y_n
14: Output: fused image F_2.
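The following is a minimal sketch of Algorithm 2: for every pixel and every window scale, local PCA weights are computed from the two patches, and the per-scale weighted images are averaged. The window sizes and the plain per-pixel Python loops are illustrative assumptions.

```python
# A sketch of MWPCA (Algorithm 2); window_sizes and the per-pixel loops are illustrative.
import numpy as np

def local_pca_weights(p1, p2):
    M = np.stack([p1.ravel() - p1.mean(), p2.ravel() - p2.mean()])
    D, V = np.linalg.eigh(np.cov(M))
    v = V[:, np.argmax(D)]                    # eigenvector of the largest eigenvalue
    return v / v.sum()                        # (alpha_n1, alpha_n2)

def mwpca(im1, im2, window_sizes=(3, 5, 7, 9)):
    im1, im2 = im1.astype(float), im2.astype(float)
    H, W = im1.shape
    F2 = np.zeros_like(im1)
    for n in window_sizes:                    # one weighted image Y_n per scale
        s = (n - 1) // 2
        p1 = np.pad(im1, s, mode="symmetric")
        p2 = np.pad(im2, s, mode="symmetric")
        Y = np.zeros_like(im1)
        for i in range(H):
            for j in range(W):
                a = local_pca_weights(p1[i:i + n, j:j + n], p2[i:i + n, j:j + n])
                Y[i, j] = a[0] * im1[i, j] + a[1] * im2[i, j]
        F2 += Y / len(window_sizes)           # F2 = (Y_1 + ... + Y_n) / n
    return F2
```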

3.3. Our Proposed Multi-Focus Image Fusion Strategy

According to the proposed scheme shown in Figure 1, this sub-section introduces the proposed multi-focus image fusion strategy, which consists of three steps. First, the undisputed fusion decisions are directly integrated from the results obtained by the SVM. Second, the disputed decisions of a given source image pair are extracted, and then the pixels corresponding to the disputed decisions are fused with the proposed MWPCA. Finally, the fused results obtained from the above two stages are synthesized by a logic operation. Figure 1 shows the image fusion strategy. In the first stage, consistency verification (CV) [33] is employed to remove single singular decisions and correct the misclassifications of the trained SVM, thus an optimized mask is produced. M_1 and M_2 represent a pair of optimized image fusion decisions, and the integrated results of M_1 and M_2 are denoted as M_3 and M_4, respectively. The size of the decision masks is represented as (x, y). The process is shown in Algorithm 3.
Algorithm 3 Fusion Strategy
0: Input: M_1 and M_2.
1: Output: M_3 and M_4.
2: function JUDGEMENT(M_1, M_2)
3:   for i = 1 → x do
4:     for j = 1 → y do
5:       if M_1(i, j) && (1 − M_2(i, j)) = 1
6:         M_3(i, j) = 1
7:       else
8:         M_3(i, j) = 0
9:       if M_2(i, j) && (1 − M_1(i, j)) = 1
10:        M_4(i, j) = 1
11:      else
12:        M_4(i, j) = 0
13:    end for
14:  end for
15:  return M_3, M_4
16: end function
Then, the given multi-focus image pair is fused by the corresponding masks as follows:
$$F_1(i,j) = M_3(i,j) \times im_1(i,j) + M_4(i,j) \times im_2(i,j)$$
where F 1 is the preliminary fused image.
In the second stage, the disputed fusion decisions are identified by a logical "XOR" operation, which is introduced as follows:
$$M_5(i,j) = M_3(i,j) \oplus M_4(i,j)$$
where $\oplus$ denotes the logical "XOR" operation, and M_5 marks the fusion decisions: a value of 0 in M_5 indicates a disputed pixel.
To process the disputed areas indicated by M_5, the two source images are input into MWPCA to obtain the secondary fused image F_2. The fusion decision M_5 and the fused image F_2 are used to produce the tertiary fused image F_3, which is the complement of F_1:
$$F_3(i,j) = \left(1 - M_5(i,j)\right) \times F_2(i,j)$$
Finally, F 1 and F 3 are integrated to get the fused image F :
$$F = F_1 + F_3$$
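Putting the three stages together, the following sketch combines the optimized masks M1 and M2 with the MWPCA result as in the formulas above; it assumes binary masks after consistency verification and the mwpca routine sketched in Section 3.2.

```python
# A sketch of the overall fusion strategy (Section 3.3); assumes binary masks M1, M2
# after consistency verification and the mwpca() sketch from Section 3.2.
import numpy as np

def fuse(im1, im2, M1, M2):
    im1, im2 = im1.astype(float), im2.astype(float)
    M1, M2 = M1.astype(bool), M2.astype(bool)
    M3 = M1 & ~M2                  # undisputed: focused only in im1
    M4 = M2 & ~M1                  # undisputed: focused only in im2
    F1 = M3 * im1 + M4 * im2       # first-stage fusion of the undisputed pixels
    M5 = M3 ^ M4                   # 1 = undisputed, 0 = disputed
    F2 = mwpca(im1, im2)           # second-stage fusion by multiscale weighted PCA
    F3 = (~M5) * F2                # keep F2 only in the disputed areas
    return F1 + F3                 # final fused image F
```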

4. Experimental Results and Analysis

This section first presents two experiments to verify the validity of the proposed MWPCA. Conventional PCA [10,34] and a single-scale PCA-based weighting (SWPCA) are compared with our proposed MWPCA. To further verify the effectiveness of the proposed image fusion method, some popular image fusion algorithms are also compared with our proposed model using six widely used image metrics. In the feature extraction stage, the sliding window size is set to 9 × 9. For SVM model training, the LIBSVM package provided by Professor Chih-Jen Lin from National Taiwan University is used to train and test the model. The parameters of the SVM are optimized by PSO as g = 400 and c = 0.005. After repeated experiments, MWPCA with four scales was found suitable for the proposed method. The experimental images are six pairs of popular multi-focus images, which are shown in Figure 4. The evaluation metrics are: the edge-based similarity measure (QAB/F), mutual information (MI), STD, SF, feature mutual information (FMI), and AG.
The comparison methods are: DWT [35], gradient pyramid (GP) [36], MSVD [11], convolutional sparse representation (CSR) [37], fsd pyramid (FSD) [34], discrete cosine harmonic wavelet transform (DCHWT) [14], multi-scale guided image and video fusion (MGFF) [38], multi-exposure and multi-focus image fusion in gradient domain (MMGD) [39], stationary wavelet transform (SWT) [40], image fusion method with Laplacian pyramid transform and pulse coupled neural networks (LPPCNN) [15], image fusion method with fourth order partial differential equations (FPED) [17], image fusion method with boosted random walks-based algorithm (BRWIF) [16]. The proposed image fusion method is denoted by SVM-MWPCA.
Figure 5, Figure 6, Figure 7, Figure 8, Figure 9 and Figure 10 display the source images and the fused images of the different image fusion methods. The experiments show that some previous methods cannot fuse the source images effectively. In Figure 5, DWT, GP, and MSVD cannot fuse the detailed features of the source images, thus the fused images are distorted to some extent. We can clearly see that a good fused image is not obtained by the FPED method, especially at the junction of the focused and unfocused regions. In Figure 6, the fused images of DWT, MSVD, FSD, DCHWT, MMGD, and FPED have obvious distortion. In particular, the fused image obtained by the FPED method has a serious loss of detail. The images fused by our proposed fusion method are superior to those of the other methods in terms of edges, details, and textures, and our fused images are the most similar to the source images. The enlarged images confirm these observations. In Figure 7, the fused images of GP and MSVD have obvious distortion, and the results are worse than those of the other methods. In Figure 8, we found that, apart from the FPED method, it is difficult to judge the differences among the fused images of the different methods by the human eye. In Figure 9, the fused images obtained by GP, MSVD, CSR, FSD, DCHWT, and MMGD cannot effectively represent the details of the source images, especially the clear and fuzzy edges. In Figure 10, the differences among the fused images cannot be recognized very well by the human eye, thus several evaluation metrics are employed to verify the performance of the different methods. In general, our proposed image fusion method produces better visual effects compared with those of the other comparison methods.
From the experimental data in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6, we can find that the proposed MWPCA has the largest values of QAB/F and MI for the source images "head", "office", "boat", "wine bottle", and "bread" when compared with the conventional PCA and SWPCA methods. For the source image "flora", the MWPCA method has the best value of QAB/F. QAB/F and MI are the two most crucial evaluation metrics in image fusion. MWPCA also has the largest values for almost all of the remaining evaluation metrics. The fused images obtained by MWPCA have much better clarity than those of conventional PCA methods. Thus, the fused images obtained by the proposed MWPCA have better visual effects and superior objective indicators.
The comparison of the evaluation indexes of the different image fusion methods is provided in Table 1, Table 2, Table 3, Table 4, Table 5 and Table 6. Generally, four decimal places are used in the field of image fusion because some indicator values are very close. Among the above evaluation metrics, QAB/F and MI are the most important parameters for evaluating the quality of the fused image. The QAB/F metric indicates how much edge information from the source images is retained, and the MI metric indicates how much source image information is transferred to the fused image. The other metrics serve as auxiliary indicators. The higher the value of an evaluation metric, the higher the quality of the fused image. Table 1 shows that the QAB/F and MI values of the proposed method are the largest for "head". Table 2 shows that the QAB/F and MI values of the proposed method are the largest for "office". Table 3 shows that the QAB/F value of the proposed method is the second largest for "boat", only 0.0062 below the maximum. Table 4 shows that our proposed image fusion method obtains the best QAB/F value for the source image pair "flora". Table 5 shows that the proposed image fusion method obtains the best values for the source image pair "wine bottle" in QAB/F, MI, and SF. Table 6 shows that the proposed method obtains the best values for the source image pair "bread" in QAB/F, MI, STD, and AG. According to these experiments, we can find that our proposed image fusion method almost always has the best values of QAB/F and MI. Among the other indicators, the metric values fluctuate due to their calculation methods. The STD, SF, and AG are independent of the source images and only depend on the fused images; therefore, STD, SF, and AG are not always effective for analyzing the fused images. However, the values of STD, SF, and AG of the proposed method are better than those of most of the other methods. To sum up, our proposed image fusion method has better performance compared with the other comparison methods.

5. Conclusions

This work proposes a novel multi-focus image fusion method based on SVM and an improved multi-scale PCA-based pixel weighting method. Moreover, logic operations are employed to optimize the fusion decisions. The experimental results reveal that the fused images obtained by our proposed method are superior to those of the other comparison fusion methods. The regional feature extraction method used here can present the important information of the focused and unfocused regions in the source images, and the proposed image fusion method can compensate for the misclassifications of the SVM. Moreover, our newly proposed multiscale PCA-based image fusion is used to handle the disputed regions to overcome the weaknesses of conventional PCA methods, and the experiments confirmed the performance of the new PCA-based method. Our future research will be aimed at exploring new local feature extraction methods. Moreover, advanced machine learning methods are also expected to be applied to image fusion.

Author Contributions

Conceptualization, Y.Y. and S.H.; methodology, Y.Y., S.H. and X.J.; software, Y.Y. and Y.Z.; validation, S.H., Y.Z., X.J. and Q.J.; original draft preparation, Y.Y. and S.H.; writing—review and editing, Q.J., S.Y. and S.H.; project administration, X.J.; funding acquisition, S.Y. and Q.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was funded by the National Natural Science Foundation of China (No. 61863036), the China Postdoctoral Science Foundation (2020T130564, 2019M653507), and the Postdoctoral Science Foundation of Yunnan Province, China.

Acknowledgments

The authors thank ICNC-FSKD & ICHSA 2019 for their encouragement.

Conflicts of Interest

The authors declare no conflict of interest.

Abbreviations

exp(·)   exponential function based on e
‖·‖   second-order norm
sum(·)   sum function
im(i, j)   any pixel value in an image
n × n   size of the sliding window
s   size of the expanding area
μ   mean value (MV)
D_n   the sliding window pairs
V_n   eigenvector
α_n   weighted vector
Y_n   weighted image
F_n   n-th fused image
×   matrix dot product
⊕   logical "XOR"
&&   logical "AND"
M_n   fusion decision (n = 1, 2, …, 5)
STD   standard deviation
SF   spatial frequency
AG   average gradient
EIG   energy of image gradient
SML   sum-modified Laplacian
RF   row frequency
CF   column frequency
QAB/F   edge-based similarity measure
MI   mutual information
FMI   feature mutual information

References

  1. Liu, Y.; Liu, S.P.; Wang, Z.F. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164.
  2. Jin, X.; Hou, J.; Nie, R.; Yao, S.; Zhou, D.; Jiang, Q.; He, K. A lightweight scheme for multi-focus image fusion. Multimed. Tools Appl. 2018, 77, 23501–23527.
  3. Nirmalraj, S.; Nagarajan, G. An adaptive fusion of infrared and visible image based on learning of sparse fuzzy cognitive maps on compressive sensing. J. Ambient Intell. Humaniz. Comput. 2020.
  4. Aymaz, S.; Kose, C.; Aymaz, S. Multi-focus image fusion for different datasets with super-resolution using gradient-based new fusion rule. Multimed. Tools Appl. 2020, 79, 13311–13350.
  5. Amin-Naji, M.; Aghagolzadeh, A.; Ezoji, M. CNNs hard voting for multi-focus image fusion. J. Ambient Intell. Humaniz. Comput. 2020, 11, 1749–1769.
  6. Vakaimalar, E.; Mala, K.; Babu, S.R. Multifocus image fusion scheme based on discrete cosine transform and spatial frequency. Multimed. Tools Appl. 2019, 78, 17573–17587.
  7. Jin, X.; Chen, G.; Hou, J. Multimodal sensor medical image fusion based on nonsubsampled shearlet transform and S-PCNNs in HSV space. Signal Process. 2018, 153, 379–395.
  8. Kinoshita, Y.; Kiya, H. Scene Segmentation-Based Luminance Adjustment for Multi-Exposure Image Fusion. IEEE Trans. Image Process. 2019, 28, 4101–4116.
  9. Liu, S.; Chen, J.; Rahardja, S. A New Multi-Focus Image Fusion Algorithm and Its Efficient Implementation. IEEE Trans. Circuits Syst. Video Technol. 2019, in press.
  10. María, G.A.; José, L.S. Fusion of multispectral and panchromatic images using improved HIS and PCA mergers based on wavelet decomposition. IEEE Trans. Geosci. Remote Sens. 2004, 42, 1291–1299.
  11. Ebenezer, D. Optimum Wavelet-Based Homomorphic Medical Image Fusion Using Hybrid Genetic-Grey Wolf Optimization Algorithm. IEEE Sens. J. 2018, 18, 6804–6811.
  12. Zhaobin, W.; Yide, M.; Jason, G. Multi-focus image fusion using PCNN. Pattern Recognit. 2010, 43, 2003–2016.
  13. Naidu, V.P.S. Image fusion technique using multi-resolution singular value decomposition. Def. Sci. J. 2011, 61, 479–484.
  14. Kumar, B.K.S. Multifocus and multispectral image fusion based on pixel significance using discrete cosine harmonic wavelet transform. Signal Image Video Process. 2013, 7, 1125–1143.
  15. Liu, Y.; Chen, X.; Peng, H.; Wang, Z.F. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 2017, 26, 191–207.
  16. Ma, J.; Zhou, Z.; Wang, B.; Miao, L.; Zong, H. Multi-focus image fusion using boosted random walks-based algorithm with two-scale focus maps. Neurocomputing 2019, 335, 9–20.
  17. Bavirisetti, D.P. Multi-sensor image fusion based on fourth order partial differential equations. In Proceedings of the 20th International Conference on Information Fusion (Fusion), Xi'an, China, 10–13 July 2017; IEEE: New York, NY, USA, 2017.
  18. Atencio-Ortiz, P.; Sánchez-Torres, G.; Branch-Bedoya, J.W. Evaluating supervised learning approaches for spatial-domain multi-focus image fusion. Dyna 2017, 84, 137–146.
  19. Yu, B.T.; Jia, B.; Lu, D. Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion. Neurocomputing 2016, 182, 1–9.
  20. Tang, H.; Xiao, B.; Li, W.S.; Wang, G.Y. Pixel convolutional neural network for multi-focus image fusion. Inf. Sci. 2018, 433, 125–141.
  21. Zhao, W.D.; Wang, D.; Lu, H.C. Multi-Focus Image Fusion with a Natural Enhancement via a Joint Multi-Level Deeply Supervised Convolutional Neural Network. IEEE Trans. Circuits Syst. Video Technol. 2019, 29, 1102–1115.
  22. Mustafa, H.T.; Yang, J.; Zareapoor, M. Multi-scale convolutional neural network for multi-focus image fusion. Image Vis. Comput. 2019, 85, 26–35.
  23. Guo, X.; Nie, R.; Cao, J.; Zhou, D.; Mei, L.; He, K. FuseGAN: Learning to Fuse Multi-Focus Image via Conditional Generative Adversarial Network. IEEE Trans. Multimed. 2019, 21, 1982–1996.
  24. Guo, X.P.; Nie, R.C.; Cao, J.D. Fully Convolutional Network-Based Multifocus Image Fusion. Neural Comput. 2018, 30, 1775–1800.
  25. Siddique, A.; Xiao, B.; Li, W.; Nawaz, Q.; Hamid, I. Multi-focus Image Fusion Using Block-Wise Color-Principal Component Analysis. In Proceedings of the 2018 3rd IEEE International Conference on Image, Vision and Computing, Chongqing, China, 27–29 June 2018.
  26. Tyagi, T.; Gupta, P.; Singh, P. A Hybrid Multi-focus Image Fusion Technique using SWT and PCA. In Proceedings of the 2020 10th International Conference on Cloud Computing, Data Science & Engineering (Confluence), Noida, India, 29–31 January 2020.
  27. Li, H.; Nie, R.; Cao, J.; Guo, X.; Zhou, D.; He, K. Multi-focus Image Fusion using U-shaped Networks with a Hybrid Objective. IEEE Sens. J. 2019, 19, 9755–9765.
  28. Cherkassky, V. The nature of statistical learning theory. IEEE Trans. Neural Netw. 1997, 8, 1564.
  29. Lin, S.; Ying, K.; Chen, S.; Lee, Z. Particle swarm optimization for parameter determination and feature selection of support vector machines. Expert Syst. Appl. 2008, 35, 1817–1824.
  30. Patil, U.; Mudengudi, U. Image fusion using hierarchical PCA. In Proceedings of the 2011 International Conference on Image Information Processing (ICIIP 2011), Shimla, India, 3–5 November 2011.
  31. Eskicioglu, A.M.; Fisher, P.S. Image quality measures and their performance. IEEE Trans. Commun. 1995, 43, 2959–2965.
  32. Yang, Y.K.; Jiang, Q.; Yao, S.W.; Xue, G.; Wu, L.W.; Jin, X. A Spatial Fusion Scheme of Multi-focus Image Combining SVM-Based Classification and PCA-Based Weight. In Proceedings of the 2019 15th International Conference on Natural Computation, Fuzzy Systems and Knowledge Discovery (ICNC-FSKD 2019), Kunming, China, 20–22 July 2019.
  33. Jiang, Q.; Jin, X.; Lee, S.J.; Yao, S. A Novel multi-focus image fusion method based on stationary wavelet transform and local features of fuzzy sets. IEEE Access 2017, 5, 20286–20302.
  34. Oliver, R. Pixel-Level Image Fusion and the Image Fusion Toolbox. Available online: http://www.metapix/toolbox.htm (accessed on 30 December 1999).
  35. Li, H.; Manjunath, B.S.; Mitra, S.K. Multisensor image fusion using the wavelet transform. In Proceedings of the 1st International Conference on Image Processing, Austin, TX, USA, 6 August 2002; pp. 235–245.
  36. Petrovic, V.S.; Xydeas, C.S. Gradient-Based Multiresolution Image Fusion; IEEE Press: Piscataway, NJ, USA, 2004; pp. 228–237.
  37. Liu, Y.; Chen, X.; Ward, R.K. Image fusion with convolutional sparse representation. IEEE Signal Process. Lett. 2016, 23, 1882–1886.
  38. Bavirisetti, D.P.; Xiao, G.; Zhao, J.; Dhuli, R.; Liu, G. Multi-scale Guided Image and Video Fusion: A Fast and Efficient Approach. Circuits Syst. Signal Process. 2019, 38, 5576–5605.
  39. Paul, S.; Sevcenco, I.S.; Agathoklis, P. Multi-exposure and Multi-focus Image Fusion in Gradient Domain. J. Circuits Syst. Comput. 2016, 25, 1650123.
  40. Liu, K.; Guo, L.; Hui-Hui, L.I. Image fusion algorithm using stationary wavelet transform. Comput. Eng. Appl. 2007, 43, 59–61.
Figure 1. The proposed scheme in this work.
Figure 2. The decision masks and disputable areas.
Figure 3. The diagram of the multiscale pixel weighted fusion method based on principal component analysis (PCA).
Figure 4. Experimental image sets (a) source image set named “head”, (b) source image set named “office”, (c) source image set named “boat”, (d) source image set named “medicine bottle”, (e) source image set named “wine bottle”, (f) source image set named “bread”.
Figure 5. Fused images obtained by different methods in the image pair "head". (a) principal component analysis (PCA); (b) single-scale PCA-based weight (SWPCA); (c) multiscale weighted PCA (MWPCA); (d) discrete wavelet transform (DWT); (e) gradient pyramid (GP); (f) multi-resolution singular value decomposition (MSVD); (g) convolutional sparse representation (CSR); (h) fsd pyramid (FSD); (i) discrete cosine harmonic wavelet transform (DCHWT); (j) multi-scale guided image and video fusion (MGFF); (k) multi-exposure and multi-focus image fusion in gradient domain (MMGD); (l) stationary wavelet transform (SWT); (m) Laplacian pyramid transform and pulse coupled neural networks (LPPCNN); (n) boosted random walks-based algorithm (BRWIF); (o) fourth order partial differential equations (FPED); (p) support vector machine (SVM)-MWPCA.
Figure 6. Fused images obtained by different methods in the image pair “office”. (a) PCA; (b) SWPCA; (c) MWPCA; (d) DWT; (e) GP; (f) MSVD; (g) CSR; (h) FSD; (i) DCHWT; (j) MGFF; (k) MMGD; (l) SWT; (m) LPPCNN; (n) BRWIF; (o) FPED; (p) SVM-MWPCA.
Figure 7. Fused images obtained by different methods in the image pair “boat”. (a) PCA; (b) SWPCA; (c) MWPCA; (d) DWT; (e) GP; (f) MSVD; (g) CSR; (h) FSD; (i) DCHWT; (j) MGFF; (k) MMGD; (l) SWT; (m) LPPCNN; (n) BRWIF; (o) FPED; (p) SVM-MWPCA.
Figure 8. Fused images obtained by different methods in the image pair “flora”. (a) PCA; (b) SWPCA; (c) MWPCA; (d) DWT; (e) GP; (f) MSVD; (g) CSR; (h) FSD; (i) DCHWT; (j) MGFF; (k) MMGD; (l) SWT; (m) LPPCNN; (n) BRWIF; (o) FPED; (p) SVM-MWPCA.
Figure 9. Fused images obtained by different methods in the image pair “wine bottle”. (a) PCA; (b) SWPCA; (c) MWPCA; (d) DWT; (e) GP; (f) MSVD; (g) CSR; (h) FSD; (i) DCHWT; (j) MGFF; (k) MMGD; (l) SWT; (m) LPPCNN; (n) BRWIF; (o) FPED; (p) SVM-MWPCA.
Figure 10. Fused images obtained by different methods in the image pair “bread”. (a) PCA; (b) SWPCA; (c) MWPCA; (d) DWT; (e) GP; (f) MSVD; (g) CSR; (h) FSD; (i) DCHWT; (j) MGFF; (k) MMGD; (l) SWT; (m) LPPCNN; (n) BRWIF; (o) FPED; (p) SVM-MWPCA.
Table 1. Evaluation indexes of the fused images in source image pair “head”.
Method | QAB/F | MI | STD | SF | FMI | AG
PCA | 0.4706 | 4.8861 | 56.6482 | 17.6389 | 0.7880 | 9.2269
SWPCA | 0.7556 | 5.3798 | 60.0055 | 27.8297 | 0.7912 | 14.3047
MWPCA | 0.7681 | 5.4800 | 60.2857 | 28.7283 | 0.7910 | 14.7699
DWT | 0.7037 | 5.5991 | 58.9731 | 34.3882 | 0.7847 | 17.9566
GP | 0.7240 | 4.7167 | 57.1455 | 32.0316 | 0.7820 | 16.7318
MSVD | 0.5839 | 4.6095 | 58.1864 | 31.5158 | 0.7759 | 16.4240
CSR | 0.7763 | 5.0208 | 59.3009 | 31.4408 | 0.7867 | 16.0126
FSD | 0.7230 | 4.7195 | 57.1382 | 32.0335 | 0.7812 | 16.7339
DCHWT | 0.7413 | 5.3390 | 61.9662 | 33.3022 | 0.7863 | 17.3192
MGFF | 0.7697 | 4.7094 | 67.4080 | 32.2792 | 0.7857 | 16.5804
MMGD | 0.7707 | 3.5486 | 58.3318 | 42.6904 | 0.7830 | 23.0448
SWT | 0.7716 | 4.9537 | 59.8963 | 34.3623 | 0.7870 | 17.9258
LPPCNN | 0.7945 | 6.1172 | 60.3058 | 34.4343 | 0.7885 | 17.9337
BRWIF | 0.7985 | 7.4392 | 62.9315 | 33.5246 | 0.7894 | 17.2717
FPED | 0.4409 | 4.5751 | 57.2435 | 23.4089 | 0.7419 | 11.7576
SVM-MWPCA | 0.8039 | 8.0117 | 63.2394 | 34.2334 | 0.7871 | 17.7114
Table 2. Evaluation indexes of the fused images in source image pair “office”.
Method | QAB/F | MI | STD | SF | FMI | AG
PCA | 0.5238 | 5.2868 | 59.3659 | 14.2466 | 0.8891 | 5.6931
SWPCA | 0.6406 | 5.0307 | 62.1808 | 18.2687 | 0.8892 | 6.9143
MWPCA | 0.6485 | 5.0476 | 62.5232 | 18.7837 | 0.8896 | 7.0665
DWT | 0.6142 | 4.9734 | 64.0152 | 24.7496 | 0.8857 | 9.3293
GP | 0.6261 | 4.7114 | 58.2125 | 20.0419 | 0.8839 | 7.5131
MSVD | 0.4490 | 4.7764 | 59.6761 | 19.0134 | 0.8768 | 6.9902
CSR | 0.6688 | 5.0989 | 63.3177 | 22.6544 | 0.8896 | 8.0580
FSD | 0.6247 | 4.6958 | 58.1493 | 20.0650 | 0.8841 | 7.5303
DCHWT | 0.6446 | 5.0586 | 66.0015 | 23.9874 | 0.8873 | 8.9441
MGFF | 0.6415 | 4.7869 | 71.1440 | 22.9285 | 0.8860 | 8.2877
MMGD | 0.6519 | 3.8418 | 58.3996 | 18.4592 | 0.8800 | 7.9927
SWT | 0.6685 | 5.0893 | 65.7365 | 25.1233 | 0.8897 | 9.2708
LPPCNN | 0.6993 | 6.1530 | 66.3773 | 25.3597 | 0.8907 | 9.3527
BRWIF | 0.7030 | 6.8735 | 67.8244 | 24.9213 | 0.8898 | 9.0464
FPED | 0.2969 | 3.6333 | 61.9545 | 29.8877 | 0.7831 | 14.6584
SVM-MWPCA | 0.7123 | 7.3404 | 68.3761 | 25.5976 | 0.8899 | 9.3341
Table 3. Evaluation indexes of the fused images in source image pair “boat”.
Method | QAB/F | MI | STD | SF | FMI | AG
PCA | 0.5431 | 5.1153 | 46.6333 | 12.5158 | 0.8620 | 5.7731
SWPCA | 0.6742 | 5.2069 | 47.5695 | 15.2652 | 0.8662 | 7.0489
MWPCA | 0.6825 | 5.2338 | 47.6727 | 15.5633 | 0.8666 | 7.1840
DWT | 0.6845 | 5.0368 | 50.1138 | 20.7918 | 0.8639 | 9.7672
GP | 0.6830 | 4.7941 | 45.1592 | 16.7996 | 0.8640 | 7.8353
MSVD | 0.5900 | 4.8608 | 47.3814 | 17.2383 | 0.8566 | 7.9434
CSR | 0.7227 | 5.3809 | 48.8383 | 18.7100 | 0.8655 | 8.4201
FSD | 0.6792 | 4.7847 | 45.1333 | 16.8615 | 0.8635 | 7.8745
DCHWT | 0.6994 | 5.3117 | 49.1159 | 19.1425 | 0.8638 | 8.8718
MGFF | 0.6656 | 5.2320 | 50.1745 | 17.4931 | 0.8602 | 7.8530
MMGD | 0.6999 | 3.1949 | 53.4840 | 24.3752 | 0.8585 | 12.0170
SWT | 0.7229 | 5.6105 | 50.3197 | 20.8033 | 0.8665 | 9.6473
LPPCNN | 0.7481 | 6.6954 | 50.2184 | 20.8252 | 0.8682 | 9.6278
BRWIF | 0.7484 | 7.4085 | 50.3348 | 20.6533 | 0.8678 | 9.4998
FPED | 0.5229 | 4.6411 | 47.1768 | 15.7169 | 0.8354 | 7.6312
SVM-MWPCA | 0.7419 | 7.0596 | 50.2332 | 20.3844 | 0.8678 | 9.3424
Table 4. Evaluation indexes of the fused images in source image pair “flora”.
Method | QAB/F | MI | STD | SF | FMI | AG
PCA | 0.5819 | 6.0339 | 53.5386 | 10.5907 | 0.9281 | 3.3497
SWPCA | 0.7612 | 6.2699 | 54.8591 | 17.4445 | 0.9313 | 4.7632
MWPCA | 0.7653 | 6.2761 | 54.8703 | 17.7690 | 0.9311 | 4.8457
DWT | 0.7345 | 6.7702 | 56.9598 | 20.1160 | 0.9267 | 5.8923
GP | 0.7231 | 5.8700 | 55.2217 | 17.3743 | 0.9261 | 4.9874
MSVD | 0.4997 | 5.5913 | 53.7290 | 13.8294 | 0.9138 | 3.9497
CSR | 0.7348 | 5.6427 | 55.1162 | 18.6646 | 0.9292 | 5.0100
FSD | 0.7218 | 5.8701 | 55.2136 | 20.3445 | 0.9238 | 6.6982
DCHWT | 0.7621 | 6.4891 | 55.1291 | 20.0433 | 0.9299 | 5.6974
MGFF | 0.6954 | 5.1374 | 59.4223 | 16.2773 | 0.9219 | 5.2609
MMGD | 0.7753 | 6.4414 | 57.2308 | 20.1240 | 0.9237 | 6.6162
SWT | 0.5705 | 8.1594 | 55.3159 | 20.2009 | 0.9296 | 5.8092
LPPCNN | 0.7957 | 7.2606 | 66.2374 | 26.3672 | 0.8952 | 10.8548
BRWIF | 0.7563 | 9.5549 | 55.0321 | 20.5896 | 0.9261 | 5.5549
FPED | 0.5738 | 5.7000 | 53.5262 | 10.9073 | 0.9274 | 3.4974
SVM-MWPCA | 0.8078 | 8.5955 | 55.2287 | 20.8230 | 0.9294 | 5.8873
Table 5. Evaluation indexes of the fused images in source image pair “wine bottle”.
Method | QAB/F | MI | STD | SF | FMI | AG
PCA | 0.4771 | 5.3686 | 61.3207 | 14.2079 | 0.8909 | 5.8415
SWPCA | 0.7393 | 5.7836 | 63.6277 | 20.9928 | 0.8937 | 8.5928
MWPCA | 0.7445 | 5.8380 | 63.7936 | 21.4030 | 0.8936 | 8.7532
DWT | 0.7410 | 5.5586 | 66.2019 | 26.2938 | 0.8938 | 10.8986
GP | 0.7139 | 5.2029 | 61.2615 | 20.6140 | 0.8884 | 8.5945
MSVD | 0.3893 | 4.9802 | 61.4478 | 19.6539 | 0.8781 | 8.2799
CSR | 0.7587 | 5.6724 | 64.4768 | 21.0188 | 0.9062 | 7.5273
FSD | 0.7125 | 5.1914 | 61.2383 | 20.6312 | 0.8884 | 8.6113
DCHWT | 0.7444 | 5.7979 | 65.4574 | 25.0666 | 0.8938 | 10.2544
MGFF | 0.6941 | 4.9002 | 68.3972 | 23.0008 | 0.8828 | 9.2334
MMGD | 0.7421 | 3.9215 | 54.7576 | 22.4013 | 0.8757 | 10.0876
SWT | 0.7698 | 5.8674 | 66.2809 | 26.2314 | 0.8936 | 10.9360
LPPCNN | 0.7957 | 7.2606 | 66.2374 | 26.3672 | 0.8950 | 10.8548
BRWIF | 0.7974 | 8.1721 | 66.5152 | 26.3142 | 0.8950 | 10.7810
FPED | 0.4536 | 5.2547 | 61.3751 | 14.8585 | 0.8840 | 6.8271
SVM-MWPCA | 0.7990 | 8.2273 | 66.5312 | 26.3726 | 0.8951 | 10.8262
Table 6. Evaluation indexes of the fused images in source image pair “bread”.
Method | QAB/F | MI | STD | SF | FMI | AG
PCA | 0.5523 | 6.8742 | 71.0656 | 12.5951 | 0.9027 | 4.8738
SWPCA | 0.6783 | 6.9589 | 71.6105 | 14.9799 | 0.9054 | 5.7183
MWPCA | 0.6867 | 6.9877 | 71.6875 | 15.3606 | 0.9055 | 5.8529
DWT | 0.6968 | 6.8028 | 72.6304 | 22.8224 | 0.9034 | 8.7466
GP | 0.6914 | 6.3610 | 70.1465 | 20.1881 | 0.9026 | 7.7378
MSVD | 0.4527 | 6.4728 | 71.1557 | 15.9845 | 0.8867 | 6.2920
CSR | 0.7249 | 7.0324 | 72.3162 | 21.0188 | 0.9062 | 7.5273
FSD | 0.6869 | 6.3925 | 70.0854 | 18.7237 | 0.9037 | 7.0463
DCHWT | 0.7210 | 7.3458 | 72.5087 | 20.4694 | 0.9102 | 7.5167
MGFF | 0.6876 | 6.6399 | 73.5398 | 18.1895 | 0.9050 | 6.5866
MMGD | 0.7094 | 4.1615 | 66.1882 | 20.2428 | 0.8941 | 8.4643
SWT | 0.7424 | 7.2949 | 72.9396 | 21.4359 | 0.9083 | 8.0236
LPPCNN | 0.7544 | 7.9824 | 73.0340 | 21.7214 | 0.9084 | 8.1477
BRWIF | 0.7627 | 8.9794 | 73.6095 | 23.0642 | 0.9071 | 8.7368
FPED | 0.5816 | 6.8390 | 71.1643 | 14.3646 | 0.8889 | 5.7833
SVM-MWPCA | 0.7634 | 8.9921 | 73.5932 | 23.0513 | 0.9070 | 8.7515
