Article

Gaussian of Differences: A Simple and Efficient General Image Fusion Method

Department of Computer Engineering, Abdullah Gul University, 38080 Kayseri, Turkey
Entropy 2023, 25(8), 1215; https://doi.org/10.3390/e25081215
Submission received: 3 July 2023 / Revised: 8 August 2023 / Accepted: 14 August 2023 / Published: 15 August 2023
(This article belongs to the Section Signal and Data Analysis)

Abstract

The separate analysis of images obtained from a single source using different camera settings or spectral bands, whether from one sensor or several, is quite difficult. To solve this problem, a single image containing all of the distinctive pieces of information in the source images is generally created by combining them, a process called image fusion. In this paper, a simple and efficient pixel-based image fusion method is proposed that weights the edge information associated with each pixel of all of the source images, together with that of its neighbors, using a Gaussian filter whose coefficients decrease with distance. The proposed method, Gaussian of differences (GD), was evaluated using multi-modal medical images, multi-sensor visible and infrared images, multi-focus images, and multi-exposure images, and was compared to existing state-of-the-art fusion methods by utilizing objective fusion quality metrics. The parameters of the GD method are further tuned by employing the pattern search (PS) algorithm, resulting in an adaptive optimization strategy. Extensive experiments illustrated that the proposed GD fusion method ranked better on average than the others in terms of objective quality metrics and CPU time consumption.

1. Introduction

The objective of image fusion is to merge the complementary information derived from multiple source images into a unified image [1,2,3,4]. In multi-modal medical image fusion, two or more images from different imaging modalities are combined [5]. Magnetic resonance (MR) and computed tomography (CT) are two different medical imaging modalities that have complementary strengths and weaknesses. CT images have high spatial resolution, which makes bones more visible, while MR images have high contrast resolution, which reveals soft tissues such as organs [6]. Visible and infrared image fusion is a computational technique that combines information from infrared and visible-spectrum images to improve the visibility of objects and enhance image contrast, especially for enhanced night vision, remote sensing, and pan-sharpening [7,8,9,10,11,12]. Multi-exposure image fusion involves the integration of multiple images, each captured at a different exposure level, to generate a high-dynamic-range (HDR) image. HDR images retain details in both the dark and bright regions, which enhances image quality, increases visual fidelity, and improves image analysis in computer vision tasks [13,14]. Multi-focus image fusion is employed to merge multiple images exhibiting distinct focus levels into a single composite image [15,16,17,18,19]. This results in improved overall sharpness, extended depth of field, and enhanced visual perception [20]. These benefits enable more accurate analysis and interpretation of the fused image in computer vision applications.

1.1. Related Work

Image fusion methods in the literature can be broadly divided into two categories: pixel domain and transform domain [21]. Pixel-domain (or spatial-domain) techniques combine the source images directly using their gray-level or color pixel values. The best-known example of this technique is the arithmetic averaging of the source images. Arithmetic averaging can be used to combine both multi-sensor and multi-focus images, but the biggest disadvantage of this method is that it reduces image contrast [22] (see the sketch after this paragraph). The basic idea of multi-scale, transform-based image fusion methods is to apply a multi-resolution decomposition to each source image, combine the decomposition results with various rules to create a unified representation, and finally apply an inverse multi-resolution transform [23]. Well-known examples of these approaches include principal component analysis (PCA), discrete wavelet transform (DWT), Laplacian pyramid (LP), and other pyramid-based transformations [24]. In recent years, several image fusion algorithms based on machine learning and deep learning approaches have been proposed [3,25,26,27,28]. These methods are robust and demonstrate superior performance. However, the training phase requires powerful, high-performance computing systems and plenty of input training data. Moreover, running the trained models can be too time-consuming for real-time applications [29].
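As a point of reference for the pixel-domain category, the following is a minimal sketch of arithmetic-average fusion, the baseline mentioned above, assuming co-registered grayscale sources; it is an illustrative example only, not one of the methods compared in the experiments.

```python
import numpy as np

def average_fusion(images):
    """Naive pixel-domain fusion: the arithmetic mean of co-registered sources.

    Every source contributes equally at every pixel, so in-focus and
    out-of-focus (or visible and IR) values are mixed indiscriminately,
    which is why the fused result tends to lose contrast.
    """
    stack = np.stack([img.astype(np.float64) for img in images], axis=0)
    return stack.mean(axis=0)
```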
Pixel level, feature level, and decision level are the three levels at which image fusion can take place. Pixel-level fusion directly integrates the original data from the source images to produce a fused image that is more informative for both computer processing and human visual perception. Compared to other fusion approaches, this approach strives to improve the visual quality and computational efficiency of the fused image. Li et al. proposed a pixel-based method that calculates the pixel visibility for each pixel in the source images [30]. Yang and Li proposed a multi-focus image fusion method based on spatial frequency and morphological operators [31]. Typically, in pixel-level image fusion, the weights are determined based on the activity level of various pixels [32]. In related studies, neural networks [33] and support vector machines [34] are employed to select the pixels with the most significant activity, using wavelet coefficients as the input features. Ludusan and Lavialle proposed a variational pixel-based method for image fusion based on error estimation theory and partial differential equations to mitigate image noise [35]. In [36], a technique for multi-exposure image fusion is introduced that involves two primary stages: first, image features, including local contrast, brightness, and color dissimilarity, are computed to generate weight maps that are further improved using recursive filtering; subsequently, the fused image is formed by combining the source images with a weighted sum based on these refined weight maps. Besides the many pixel-level methods available, region-based spatial methods that use blocks [37] or adaptive regions [38,39] have also been proposed and shown to outperform existing methods.
Within the framework of anisotropic diffusion filter (ADF)-based image fusion algorithms, weight map layers are formed via image smoothing, which employs an edge protection method. These weight map layers undergo subsequent processing prior to the application of the fusion rule, culminating in the final output [40]. Kumar introduced the cross bilateral filter (CBF) method, which takes into account both the gray-level similarity and the geometric closeness of neighboring pixels without smoothing the edges. The source images are combined according to a weighted average, using weights calculated from the detail images extracted from the source images by the CBF method [41]. The fourth-order partial differential equation (FPDE) method first applies differential equations to each source image to obtain approximate images. Then, PCA is used to obtain optimum weights for the detail images, which are then combined to obtain the final detail image. The final approximation is derived by averaging the set of approximate images. Subsequently, the fused image is computed by merging the final approximation with the detail images [42]. The context enhancement (GFCE)-based method preserves the details in the visible input image and the background scene. Thus, it can successfully transfer important IR information to the composite image [43]. The gradient transfer fusion (GTF) method, which is based on gradient transfer and total variation (TV) minimization, tries to maintain appearance information and thermal radiation simultaneously [44]. The hybrid multi-scale decomposition (HMSD) method decomposes the source images into texture details and large-scale edge features using a combination of bilateral and Gaussian filters. This decomposition makes it possible to better capture important fine-scale IR spectral features and to separate fine texture details from large edges [45]. The infrared feature extraction and visual information preservation (IFEVIP) method provides a simple, fast, yet effective fusion of infrared and visual images. Firstly, the reconstruction of the infrared background is accomplished by leveraging quadtree decomposition and Bézier interpolation. Subsequently, the extraction of bright infrared features is performed by subtracting the reconstructed background from the infrared image, followed by a refinement process that reduces redundant background information [46]. The multi-resolution singular value decomposition (MSVD) method is an image fusion technique based on a process that resembles the wavelet transform: the signal is filtered independently using low-pass and high-pass finite impulse response (FIR) filters, and the output of each filter is decimated by a factor of two to achieve the first level of decomposition [47]. The VSMWLS approach, designed to enhance the transfer of significant visual details while minimizing the inclusion of irrelevant infrared (IR) details or noise in the merged image, is a multi-scale fusion technique that incorporates visual saliency maps (VSM) and weighted least squares (WLS) optimization [48]. Liu et al. proposed approaches based on deep convolutional neural networks (CNN) for both infrared–visible image fusion [49] and multi-focus image fusion [50].
They successfully addressed the crucial issues of activity level measurement and weight assignment in image fusion by using a Siamese convolutional network to construct a weight map by integrating pixel activity information from two source images [49]. On the other hand, because focus estimation and image fusion are two distinct problems, traditional image fusion techniques sometimes struggle to perform satisfactorily. Liu et al. suggest a deep learning method that avoids the requirement for separate focus estimation by learning a direct mapping between source images and a focus map [50].

1.2. Contributions of This Study and Advantages of the Proposed Method

To overcome the limitations of the existing image fusion methods, a simple and efficient general image fusion technique named Gaussian of differences (GD) is proposed. The unique aspects of the proposed GD image fusion method can be listed as follows:
  • The proposed algorithm does not use any transformations and works directly in the pixel domain. Also, it is based on basic image convolution and linear weighting, which makes it simple and efficient. It can be implemented on real-time systems and is suitable for parallel processing.
  • The method enhances the high-frequency components of each input image using simple first-order derivative edge detection. It then uses a Gaussian filter to weight the contributions of neighboring pixels to the center pixel, with the weight decreasing with distance.
  • The proposed GD method has only two control parameters: the size of the filter and the standard deviation of the distribution. In addition to using predefined parameters, an optimal parameter-selection scheme based on the pattern search (PS) algorithm is also proposed to investigate the adaptability of the GD method.
  • The method is a general-purpose image fusion algorithm that can be used in a variety of applications, including multi-modal medical image fusion, infrared and visible image fusion for enhanced night vision or remote sensing, multi-focus image fusion for extending the depth of field, and multi-exposure image fusion for high dynamic range imaging.
  • It can combine single-band (gray-level), color (RGB), multi-spectral, and hyperspectral images due to its generalized structure.
The rest of this paper is organized as follows: the proposed GD fusion method is briefly introduced, illustrated, and demonstrated in Section 2. Section 3 outlines extensive experiments with 48 pairs of test images (in total) belonging to four different image fusion applications. Finally, Section 4 concludes the paper.

2. Proposed Method

Speed and performance are crucial features of imaging systems. Therefore, one of the primary factors considered in designing the proposed image fusion method was keeping the computational complexity low. Another significant concern was the generation of a single composite image that incorporates meaningful information from images captured at multiple or diverse wavelengths [51]. The resulting combined image should be suitable for both human interaction and computer vision applications [52].
Many of the existing fusion methods in the literature employ multi-resolution transforms such as DWT, LP, and discrete cosine transform (DCT) to mitigate the impact of image misalignments [53]. However, these transformations increase the computational complexity of the methods. Edge information, which typically contains high-frequency components, plays a crucial role in determining the importance of pixels in an image.
In the method proposed in this paper, the gradients of each source image are first computed from first-order derivative information. These gradients are then evaluated together with those of neighboring pixels. Finally, the contribution of each pixel of the different input images to the corresponding pixel of the final fused image is determined by linear weighting. The block diagram of the proposed GD image fusion method is presented in Figure 1.
The steps of the proposed GD fusion method can be summarized as follows:
  1. Edge information is generally related to the information content of an image. The first-order derivative (difference of adjacent pixels) of an image simply emphasizes the edges. The column and row differences of each input image are calculated:
CD_k(i,j) = \left( I_k(i,j) - I_k(i,j+1) \right)^2, \quad RD_k(i,j) = \left( I_k(i,j) - I_k(i+1,j) \right)^2   (1)
where i and j are row and column indexes, CD and RD indicate the column and row differences, respectively, and k is the input image index. In Figure 2, a face image in the visible spectrum is given as I1 and an infrared image of the same scene is given as I2. The column and row differences of the input images are also visualized.
  2. Column and row differences emphasize the edges along the vertical and horizontal axes, respectively. To combine them into a single representation (D), the Euclidean distance is used, and a feature related to the edge content of each pixel is calculated (visualized in Figure 3):
D_k(i,j) = \sqrt{ CD_k(i,j) + RD_k(i,j) }   (2)
  3. Linear weighting is a well-known approach used to determine the information transfer from each input image to the output fused image. To determine the contribution of the neighbors of each pixel in every input image to the information content of that pixel, the differences are filtered (i.e., weighted) using a 2D Gaussian filter, and the Gaussian of differences (GD) is obtained, as visualized in Figure 4. This representation is used to calculate the weighting factor of each pixel:
GD_k(i,j) = \sum_{p=-s}^{s} \sum_{r=-s}^{s} w(p,r) \cdot D_k(i+p, j+r)   (3)
where s is the window size and w is a 2D Gaussian filter with a standard deviation of \sigma:
w(x,y) = \frac{1}{2\pi\sigma^2} \, e^{-(x^2+y^2)/(2\sigma^2)}   (4)
  4. Weighting factors (fw) are determined for the pixels of each input image in proportion to their GD values, as visualized in Figure 5. Consequently, the sum of the weighting coefficients of a specific pixel is always equal to one, regardless of how many input images exist:
fw_k(i,j) = \frac{GD_k(i,j)}{\sum_{k} GD_k(i,j)}   (5)
  5. The fused image (F), as demonstrated in Figure 6, is created by linearly weighting the input images with these weighting factors. Suppose there are two input images in an application and, for a specific pixel, the fws are 0.4 and 0.6, respectively. The fusion result for that pixel is then the sum of 40% of the first input image's pixel value I_1(i,j) and 60% of the second input image's value I_2(i,j):
F(i,j) = \sum_{k} fw_k(i,j) \cdot I_k(i,j)   (6)
In the proposed GD fusion method, before the contribution of each pixel to the fused image is calculated, a Gaussian filter (7 × 7 for s = 3) is placed over the edge information around each pixel, as shown in Figure 7. The pixel of interest at the center is weighted with the highest coefficient, w(0,0), of the Gaussian kernel, and the neighbors are weighted with smaller coefficients as they move away from the center, owing to the nature of the Gaussian kernel.
The fusion results are promising, as shown in the visual steps of the proposed GD method. In Step 1, the column and row differences are calculated, and the edge content, which exhibits the high-frequency components of the input images, is obtained, as shown in Figure 2. In Step 2, the row and column differences are combined with the help of the Euclidean distance, and the results for the sample images are given in Figure 3. In the third step of the method, the edge information, obtained using the differences of each pixel, is convolved with the Gaussian kernel with s = 10 in order to include the contribution of the neighbors of the relevant pixel. The GDs obtained are shown in Figure 4. In Step 4, weighting factors are obtained using GDs and visualized in Figure 5 using the jet coloring map. Here, the red color indicates that the numerical value of the weighting factor for the relevant pixel is one, which is the highest ratio, and the blue color indicates that the lowest value is zero. When the weighting factor matrix (fw1) of the visible image is examined, the outer edges of the lips, nose, and eyes are enhanced. On the other hand, when the weighting factor matrix (fw2) of the near-infrared image is examined, details such as the iris and nostrils seem to have higher factors. The fused image (F), obtained in the fifth step of the method with the weighted average using the weighting factors, is given in Figure 6. When the final fused image is examined, it can be seen that the details that are present in the visible image but not in the infrared image, and vice versa, are combined into a single composite image.
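To make Steps 1–5 concrete, a minimal NumPy sketch of the GD pipeline is given below. It assumes co-registered, equally sized grayscale inputs; the function names, boundary handling, and the flat-region fallback are illustrative choices, and the released MATLAB implementation may differ in such details.

```python
import numpy as np
from scipy.ndimage import convolve

def gaussian_kernel(s, sigma):
    """2D Gaussian kernel w of size (2s + 1) x (2s + 1), as in Equation (4)."""
    y, x = np.mgrid[-s:s + 1, -s:s + 1]
    return np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) / (2.0 * np.pi * sigma**2)

def gd_fusion(images, s=10, sigma=None):
    """Gaussian of differences (GD) fusion of co-registered grayscale images."""
    if sigma is None:
        sigma = s / 3.0                          # default used for GD5/GD10/GD15
    imgs = [img.astype(np.float64) for img in images]
    w = gaussian_kernel(s, sigma)

    gds = []
    for I in imgs:
        # Step 1: squared column and row differences (first-order derivatives)
        cd = np.zeros_like(I)
        rd = np.zeros_like(I)
        cd[:, :-1] = (I[:, :-1] - I[:, 1:]) ** 2
        rd[:-1, :] = (I[:-1, :] - I[1:, :]) ** 2
        # Step 2: combine the two directions via the Euclidean distance
        d = np.sqrt(cd + rd)
        # Step 3: Gaussian of differences - neighbours weighted by distance
        gds.append(convolve(d, w, mode='nearest'))

    # Step 4: per-pixel weighting factors that sum to one over the inputs
    gd_stack = np.stack(gds, axis=0)
    total = gd_stack.sum(axis=0, keepdims=True)
    fw = np.where(total > 0, gd_stack / np.where(total > 0, total, 1.0),
                  1.0 / len(imgs))               # equal weights in flat regions

    # Step 5: linear weighting of the source images
    return np.sum(fw * np.stack(imgs, axis=0), axis=0)
```

For color (RGB) or multi-spectral inputs, one option consistent with the method's generalized structure is to compute D and the weighting factors per channel (or on a luminance channel) and apply them to every band; the exact handling in the published code may differ.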

Optimization of GD Parameters

A Gaussian filter is defined by two parameters, as given in Equation (4): the size of the filter (s) and the standard deviation of the Gaussian distribution σ . Using predefined values for s and σ may not be suitable for all images. Therefore, an optimal approach to determine the best parameter set for any input image is proposed in this section.
A block diagram of the proposed optimal scheme is illustrated in Figure 8. As can be seen in the figure, pattern search (PS) is chosen as the optimizer due to its simplicity and robustness. Also, PS is a well-known, derivative-free algorithm that does not require a gradient [55]. The steps of the proposed Gaussian of differences with pattern search (GDPS) method can be summarized as follows:
  1. Define the maximum iteration number of PS and set the initial values of the GD parameters.
  2. Evaluate the candidate solution and calculate its fitness value (the overall quality of the fused image):
    a. Apply all steps of the proposed GD fusion method explained in the previous section (Equations (1)–(6)).
    b. Calculate the fused image quality using an image quality metric (see Section 3.3):
fitness = Q\left( F(s, \sigma) \right)   (7)
where Q is the image quality metric to be maximized, F is the fused image, s is the size of the Gaussian filter, and σ is the standard deviation of the Gaussian distribution.
  3. Apply the operators of PS to find a better GD parameter solution that maximizes the fused image quality.
  4. Repeat Steps 2 and 3 until the maximum iteration number or a predefined stopping condition is reached.
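As an illustration of the GDPS loop, the sketch below wraps the gd_fusion routine sketched earlier in a minimal compass-style pattern search over (s, σ). The quality_metric placeholder stands for whichever fitness function is chosen (Qabf, Qcb, or Qcv), and the initial point, step sizes, and bounds are assumptions for demonstration, not the paper's settings.

```python
import numpy as np

def pattern_search(fitness, x0, steps, bounds, max_iter=50, tol=1e-3):
    """Minimal derivative-free compass/pattern search (maximization).

    fitness : callable mapping a parameter vector to a scalar quality value
    x0      : initial parameter vector, e.g. [s, sigma]
    steps   : initial step size per parameter
    bounds  : list of (low, high) pairs, one per parameter
    """
    x = np.asarray(x0, dtype=float)
    steps = np.asarray(steps, dtype=float)
    best = fitness(x)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):                  # poll +/- along each axis
            for direction in (+1.0, -1.0):
                cand = x.copy()
                cand[i] = np.clip(cand[i] + direction * steps[i], *bounds[i])
                value = fitness(cand)
                if value > best:
                    x, best, improved = cand, value, True
        if not improved:
            steps /= 2.0                         # contract the mesh
            if np.all(steps < tol):
                break
    return x, best

# Hypothetical use with the gd_fusion sketch and a placeholder metric:
# fit = lambda p: quality_metric(gd_fusion(images, s=int(round(p[0])), sigma=p[1]))
# (s_opt, sigma_opt), q = pattern_search(fit, x0=[10.0, 3.3], steps=[4.0, 1.0],
#                                        bounds=[(2, 30), (0.5, 10.0)])
```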

3. Experimental Results

For this section, a comprehensive series of experiments was conducted to assess the performance of the proposed GD method. As explained in Section 2, the GD method has only two control parameters: the size of the Gaussian kernel (s) and the standard deviation of the Gaussian distribution ( σ ). In the experiments, two types of cases were evaluated:
  • First, predefined parameter sets for GD were used: s values of 5, 10, and 15, named GD5, GD10, and GD15, respectively, were evaluated. In this case, the second parameter σ was set according to the filter size as σ = s/3.
  • Second, the parameters of GD were adaptively determined by using the pattern search optimization algorithm to maximize the image quality. Unreported intensive experiments have shown that using Qabf, Qcb, and Qcv as fitness functions generates the best results. Therefore, the versions of this case were named GDPSQABF, GDPSQCB, and GDPSQCV, respectively.

3.1. Image Dataset

To validate the performance of the proposed GD method, four different types of image fusion cases were selected: multi-modal medical images [56], multi-sensor infrared and visible images [45], multi-focus images [57], and multi-exposure images [58]. The specifications of the images used in the experiments are summarized in Table 1.
The multi-modal medical image dataset had eight pairs of images, which are shown in Figure 9. The multi-sensor infrared and visible image dataset had 14 pairs of images, which are shown in Figure 10. The multi-focus dataset had 20 pairs of images, which are shown in Figure 11. Finally, the multi-exposure image dataset had six pairs of images, which are shown in Figure 12.

3.2. Experimental Setup

The environmental features of the experiments are summarized in Table 2. Since there is no training phase in the proposed method, a standard workstation could be sufficient. In the experiments, the MATLAB library developed by Zhang et al., published openly on GitHub, was used [59].
The configuration parameters of the fusion methods used in the experiments for comparison are summarized in Table 3. For the comparison methods, the default parameters of the original authors were used. For the proposed GD method, the parameters were determined by trial and error. Therefore, six different cases of the proposed GD method were included in the experiments (GD5, GD10, GD15, GDPSQABF, GDPSQCB, and GDPSQCV) to emphasize the stability and adaptability of our method.
The experiments were conducted on 48 pairs of images. However, due to lack of space, only eight image pairs were selected to be visualized and compared in detail in the following sections. To investigate all results, please see the Supplementary Materials section at the end of the paper.

3.3. Objective Quality Metrics

In addition to the visual analysis of the fusion results, objective quality metrics were utilized to compare the proposed method with the other methods quantitatively [60]. The evaluation of a fused image by visual inspection includes steps such as assessing the clarity and sharpness of the output image and identifying the amount of information transferred from the input images to the fused image. Visual evaluation is a very helpful way of comparing performance; however, visual interpretation is highly subjective. In order to make a fair comparison, the following image quality criteria were used in the experiments:
Entropy (EN) is a metric used to measure the information content of an image [61]:
EN(I_f) = -\sum_{x=0}^{L} h_{I_f}(x) \log h_{I_f}(x)
where L is the number of gray levels and h_{I_f}(x) is the normalized histogram of the fused image.
Mutual information (MI) is a numerical metric that measures the interdependence of two variables. It is used to measure the amount of information shared by two images. The MI for two discrete random variables U and V is defined by [62]:
MI(U, V) = \sum_{v \in V} \sum_{u \in U} p(u, v) \log \frac{p(u, v)}{p(u)\, p(v)}
where p(u, v) is the joint probability density function of U and V, and p(u) and p(v) are the marginal probability density functions of U and V, respectively.
The peak signal-to-noise ratio (PSNR) represents the logarithmic decibel-scale ratio between the maximum potential power of a signal and the power of the noise that distorts it. A high PSNR value indicates high image quality. L is the maximum gray-level value and is taken as 255 [63]:
PSNR(R, F) = 10 \log_{10} \left( \frac{L^2}{\frac{1}{M \times N} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( R(i,j) - F(i,j) \right)^2} \right)
where R is the reference image and F is the fused image.
Edge-based similarity (Qabf) is obtained by weighting the normalized edge information of both source images [64]:
Qabf = \frac{\sum_{n=1}^{N} \sum_{m=1}^{M} \left( Q^{AF}(n,m)\, w^{A}(n,m) + Q^{BF}(n,m)\, w^{B}(n,m) \right)}{\sum_{i=1}^{N} \sum_{j=1}^{M} \left( w^{A}(i,j) + w^{B}(i,j) \right)}
The structural similarity index measure (SSIM) is a metric that quantifies how much of the structure of the input image is preserved in the fused image [65]:
SSIM(x, y) = \frac{\left( 2\mu_x \mu_y + c_1 \right)\left( 2\sigma_{xy} + c_2 \right)}{\left( \mu_x^2 + \mu_y^2 + c_1 \right)\left( \sigma_x^2 + \sigma_y^2 + c_2 \right)}
The Chen–Blum metric (Qcb) is a referenceless image quality metric inspired by human perception [66]. The Qcb value is obtained by calculating the average value of the global quality map:
Q_{cb}(x, y) = \lambda_A(x, y)\, Q_{AF}(x, y) + \lambda_B(x, y)\, Q_{BF}(x, y)
Cross entropy (CE) serves as a metric to assess the congruity of the information content between the input images and the fused image. Reference and fused images including the same information will have a low CE value [67]:
CE(I_1, I_2; I_f) = \frac{CE(I_1, I_f) + CE(I_2, I_f)}{2}
Root mean square error (RMSE) is a measure of accuracy that quantifies the difference between two images (e.g., estimates from different estimators of the same variable) and is desired to be as low as possible [63]:
RMSE = \sqrt{ \frac{1}{MN} \sum_{i=1}^{M} \sum_{j=1}^{N} \left( I_a(i,j) - I_b(i,j) \right)^2 }
The Chen–Varshney metric (Qcv) is a quality metric for image fusion based on regional information and inspired by human perception [68]. The lower the Qcv, the better the fusion result:
Q_{cv} = \frac{\sum_{i=1}^{N} \sum_{l=1}^{L} \lambda\!\left( X_i^{W_l} \right) D\!\left( X_i^{W_l}, X_F^{W_l} \right)}{\sum_{i=1}^{N} \sum_{l=1}^{L} \lambda\!\left( X_i^{W_l} \right)}
where X_1, X_2, …, X_N are the input images evaluated over local windows W_l, and X_F is the fused image.
For the EN, MI, PSNR, Qabf, SSIM, and Qcb metrics, higher values indicate better results. And for CE, RMSE, and Qcv, lower values indicate good performance. In the following tables, the best result is colored in green, second-best result is colored in dark red, and the third-best result is indicated by a blue color.
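For concreteness, a minimal NumPy sketch of three of the simpler measures (EN, MI, and RMSE) is given below, assuming 8-bit grayscale arrays and 256-bin histograms; the remaining metrics (Qabf, SSIM, Qcb, Qcv, CE) follow the cited definitions and are not reproduced here.

```python
import numpy as np

def entropy(img, levels=256):
    """EN: Shannon entropy of the normalized gray-level histogram (bits)."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    p = hist / hist.sum()
    p = p[p > 0]                                  # ignore empty bins
    return -np.sum(p * np.log2(p))

def mutual_information(u, v, levels=256):
    """MI: information shared by two images, via their joint histogram."""
    joint, _, _ = np.histogram2d(u.ravel(), v.ravel(), bins=levels,
                                 range=[[0, levels], [0, levels]])
    p_uv = joint / joint.sum()
    p_u = p_uv.sum(axis=1, keepdims=True)         # marginal of u
    p_v = p_uv.sum(axis=0, keepdims=True)         # marginal of v
    nz = p_uv > 0
    return np.sum(p_uv[nz] * np.log2(p_uv[nz] / (p_u @ p_v)[nz]))

def rmse(ref, fused):
    """RMSE between a reference image and the fused image."""
    diff = ref.astype(np.float64) - fused.astype(np.float64)
    return np.sqrt(np.mean(diff ** 2))
```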

3.4. Medical Image Fusion

For this sub-section, medical images M#2 and M#5, shown in Figure 9, were selected from the eight candidates in the dataset and tested. The visual fusion results of image set M#2 are given in Figure 13. Input Image A is a computed tomography (CT) slice image of the human brain, and Image B is a magnetic resonance (MR) image of the same section. In an ideal case, the bright bone features shown in the CT image and the tissue features shown in the MR image should be included in the fused image. As can be seen from the visual results, the GFCE image has obvious noise in the background. The FPDE and MSVD images lack contrast. The IFEVIP and VSMWLS images resemble mostly Input A (CT) and ignore Input B (MR). As a result, the ADF, CBF, GTF, HMSD, and proposed GD methods show better visual performance than the others.
In Table 4, the numerical results of the quality metrics of the comparison methods for M#2 are given. As can be seen in the table, the VSMWLS, proposed GD15, and proposed GDPSQCV methods show better performance according to the numerical metrics. On the other hand, GFCE, ADF, and IFEVIP show the worst performance compared to the others.
The results of the image set M#5 are given in Figure 14. As can be seen from the results, ADF, FPDE, GFCE and MSVD show poor visual performance. On the other hand, the CBF, VSMWLS, and proposed GD methods show better visual performance than other techniques.
In Table 5, the numerical results of the quality metrics of the comparison methods for M#5 are given. As can be seen in Table 5, the CNN, proposed GD10, and proposed GDPSQCV methods show better performance according to the numerical results. On the other hand, MSVD, FPDE, and GFCE show the worst performance compared to the others.

3.5. Infrared and Visible Image Fusion

Infrared images acquired at wavelengths of 750 nm–1 mm reveal the thermal radiation of objects in a scene. On the other hand, RGB color images are captured at 400 nm–750 nm wavelengths, a range called the visible spectrum. For this sub-section, infrared and visible images IV#4 and IV#5, shown in Figure 10, were selected from the 14 candidates in the dataset and tested. The visual fusion results of image set IV#4 are given in Figure 15. Input Image A is an infrared image of a scene that depicts three people, with a gun being held by the person on the right. Image B is a visible image of the same scene. Ideally, both thermal and visible features should be included in the fused image. As can be seen from the visual results, the contrast of the GFCE image is saturated. The result of the GTF method is blurry and includes very few features from the visible input image. The result of the MSVD method has low contrast. On the other hand, the CBF, ADF, VSMWLS, CNN, and proposed GDPS methods show better performance than the others.
From Table 6, it can be seen that CBF, VSMWLS, and the proposed GD15 and GDPSQCB methods show better performance according to the objective metrics. On the other hand, GFCE, GTF, and MSVD show the worst performance compared to the other methods.
The results of image set IV#5 are given in Figure 16. As can be seen from the results, CBF, GTF, and all of the GD methods except GDPSQCB show poor visual performance. On the other hand, the HMSD and MSVD methods show better visual performance than the other techniques.
In Table 7, the quantitative fusion results are given. As can be seen, HMSD, MSVD, FPDE, and GDPSQABF show better performance according to the objective metrics. On the other hand, GFCE, GTF, and the proposed GD5, GD10, GD15, and GDPSQCV methods show the worst performance compared to the other methods.

3.6. Multi-Focus Image Fusion

Images captured using a single lens of scenes containing objects at different distances have blurry regions. To extend the depth of field, images with different focal lengths are fused.
For this sub-section, multi-focus images F#11 and F#15, shown in Figure 11, were selected from the 20 candidates in the dataset and tested. In Figure 17, the fusion results of test image F#11 are given. In Input Image A, the near objects (hand and camera) are in focus, while in Input Image B, the far object (globe) is in focus. An everywhere-in-focus image is desired, which the fused image provides.
The visual results show that the contrasts of the GFCE and IFEVIP images are saturated. The GTF result is blurry (hand and camera). The MSVD, ADF, and FPDE results are also not sharp (globe). On the other hand, the CBF, HMSD, VSMWLS, CNN, and proposed GDPS methods show better performance than the others.
In Table 8, the numerical results of the quality metrics of the comparison methods for F#11 are given. As can be seen in the table, CBF, CNN, and the proposed GD15, GD10, GDPSQCV, and GDPSQCB methods show better performance according to the numerical results. On the other hand, GFCE, IFEVIP, and MSVD show the worst performance compared to the others.
The results of image set F#15 are given in Figure 18. As can be seen from the results, IFEVIP and GFCE show very poor visual performance. The results of MSVD and GTF contain blurry regions. On the other hand, CBF, VSMWLS, HMSD, ADF, CNN, and the proposed GDPSQCB methods show better visual performance than the other techniques.
From Table 9, it can be seen that GTF, CBF, CNN, and the proposed GDPSQCB, GD15, and GD10 methods show better performance according to the objective metrics. On the other hand, GFCE, IFEVIP, and MSVD show the worst performance compared to other methods.

3.7. Multi-Exposure Image Fusion

In the last case, the image fusion algorithms were compared with regard to their use on multi-exposure images selected from the six candidates in the dataset (images E#5 and E#6 of Figure 12). For the first example, the visual results of image E#5 are given in Figure 19. In Input Image A, the inside of the oven is visible, and the remaining objects are saturated. However, in Input Image B, the background details have good contrast. Multi-exposure image fusion helps create a high-dynamic-range image in which all regions have balanced contrast. As can be seen from the results, the CBF, HMSD, VSMWLS, CNN, and proposed GD methods exhibit good visual performance. In contrast, the IFEVIP, GFCE, and GTF methods show poorer visual performance than the other techniques.
In Table 10, the numerical results of the quality metrics of the comparison methods are given for image set E#5. As can be seen in the table, ADF, FPDE, and the proposed GD15 and GDPSQCV methods show better performance according to the numerical results. On the other hand, GFCE, IFEVIP, and GTF show the worst performance compared to the others.
The results of image set E#6 are given in Figure 20. As can be seen from the results, CBF, GTF, and GD5 show poor visual performance. On the other hand, GFCE, VSMWLS, HMSD, ADF, CNN, and the proposed GDPSQCV method show better visual performance than the other techniques.
In Table 11, the quantitative results of the comparison methods are given for image set E#6. As can be seen in the table, ADF, FPDE, and GDPSQCV show better performance according to the numerical results. On the other hand, GFCE, IFEVIP, and GTF show the worst performance compared to the others.

3.8. Overall Comparison

To evaluate the numerical results more easily, the average rankings of the methods with regard to all of the quality metrics were calculated for all 48 images used in the experiments. The best method was ranked first and the worst sixteenth according to the quality metric value of each method, as there are sixteen methods in total. Each fusion application type is given in a separate table.
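A sketch of how such an average ranking can be computed is shown below; the shape of the scores array and the split into higher-is-better and lower-is-better metrics mirror Section 3.3, but the tie-handling ('average' ranks) is an assumption, as the paper does not specify it.

```python
import numpy as np
from scipy.stats import rankdata

def average_rankings(scores, higher_better):
    """Rank the methods per metric and image, then average the ranks.

    scores        : array of shape (n_images, n_metrics, n_methods)
    higher_better : booleans of length n_metrics (True for EN, MI, PSNR, ...)
    Returns the mean rank per method (1 = best, n_methods = worst).
    """
    n_images, n_metrics, _ = scores.shape
    ranks = np.zeros_like(scores, dtype=float)
    for i in range(n_images):
        for m in range(n_metrics):
            s = scores[i, m]
            # rankdata ranks ascending, so negate metrics where higher is better
            ranks[i, m] = rankdata(-s if higher_better[m] else s, method='average')
    return ranks.mean(axis=(0, 1))
```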
Table 12 shows the ranking of each method for the fusion of multi-modal medical images, including M#1 to M#8. At the bottom of the table, the average ranking of each method compared to all of the images for medical image fusion is indicated. As can be seen in Table 12, overall better results in average ranking were obtained with GD10, GD15, and GDPSQCB, whose average ranking was around sixth. GFCE and MSVD were the two worst methods with an average ranking of ~12th.
Table 13 shows the ranking of each method for the fusion of infrared and visible images, including IV#1 to IV#14. As can be seen in Table 13, overall better average rankings were obtained with HMSD, GDPSQCV, GDPSQABF, and CNN, whose average ranking was around seventh. GTF was the worst method, with an average ranking of ~11th.
The rankings of each method for the fusion of multi-focus images, including F#1 to F#20, are given in Table 14. As can be seen from the results, overall better average rankings were obtained with GD15, GDPSQCV, GD10, CBF, and CNN, whose average ranking was around sixth. GFCE and IFEVIP were the worst methods, with average rankings of ~14th.
The rankings of each method for the fusion of multi-exposure images, including E#1 to E#6, are given in Table 15. As can be seen from the results, overall better average rankings were obtained with GDPSQCV, GDPSQABF, and ADF, whose average ranking was around fifth. GFCE was the worst method, with an average ranking of ~13th.
The global average rankings and average CPU time consumptions of the methods for all 48 images are given in Table 16. As can be seen from the table, the proposed GD methods take the first three best rankings. The methods can be ordered from best to worst as GDPSQCV, GD15, GDPSQABF, GDPSQCB, GD10, HMSD, CNN, VSMWLS, ADF, FPDE, GD5, CBF, MSVD, GTF, IFEVIP, and GFCE. Table 16 also shows the global average CPU time consumptions of the methods in seconds. The execution time of an image processing method is directly affected by its complexity and the capacity of the CPU it runs on [69]; the lower the CPU time, the faster the method. According to the numerical results, IFEVIP, GD5, and GD10 are the fastest methods compared to the others.

4. Conclusions

In this paper, a general image fusion method based on the GD, linear weighting, and PS optimization is proposed. The main advantages of the proposed GD method can be summarized as follows:
  • It is based on basic image convolution and linear weighting. Thus, the main algorithm is very simple and can be implemented on embedded systems and PCs and easily parallelized on multiple CPU or GPU cores.
  • It is a pixel-based image fusion method that does not utilize an image transform. Moreover, it does not require a training phase. Therefore, the proposed method is very fast compared to state-of-the-art fusion methods.
  • The method relies on transferring information from each input image by enhancing the high-frequency components using simple, first-order derivative edge detection. Neighboring pixels also contribute to the center pixel’s weighting through a Gaussian filter, with weights that decrease with distance.
  • The method has only two control parameters. In this paper, we define predefined parameter sets and explore their performance. In addition, a simple optimization scheme that determines the control parameters adaptively is also proposed and compared.
  • It can be used in any kind of image fusion application, such as multi-modal medical image fusion, infrared and visible image fusion for enhanced night vision, multi-focus image fusion for extending the depth of field, and multi-exposure image fusion for high-dynamic-range imaging.
  • It can fuse more than two input images with the help of its generalized structure. Therefore, it can be used in future studies to fuse multi-spectral and hyperspectral images with 10–200 input images corresponding to different wavelengths in the visible and non-visible spectrum.
The proposed GD method, with its six different versions, was compared with 10 state-of-the-art image fusion methods using qualitative and quantitative evaluation. In total, 48 pairs of test images were used in the experiments; however, only two pairs of test images were detailed and visualized for each of the four different types of image fusion. The fusion results of all images in the dataset can be found in the Supplementary Materials. In addition to visual subjective evaluations, nine objective quality metrics were utilized to compare the proposed GD method with the other fusion methods.
Extensive experiments have shown that the proposed GDPSQCV method attained an average rank of 6.44 among the 16 methods when considering all quality metrics and all test images, which is the best ranking of all of the methods. Moreover, the average CPU time consumption of GD15, which is the second best in the overall ranking, is about 0.20 s, only 0.05 s slower than IFEVIP (the fastest method in the experiments). However, it must be noted that IFEVIP's average ranking is 11.41. In addition, the proposed GD15 is ~115× faster than the CNN method in terms of average CPU time consumption for the fusion of 48 image pairs on an Intel i7 CPU clocked at 4 GHz without parallel programming. Increasing the Gaussian filter size increases the success of the proposed method; namely, GD15 obtained better results than GD10, and GD10 obtained better results than GD5. However, unreported experiments showed that increasing the filter size further causes undesirable visual effects in the fused image. The optimal versions of GD perform better than their non-adaptive counterparts GD5, GD10, and GD15; however, the CPU computing times of the GDPS versions are much higher.
The main limitation of the proposed method is that it does not guarantee the best result in every particular application. However, it is capable of serving as a general fusion scheme and gives better results on average for any kind of fusion application. In future studies, the optimization algorithm and the fitness function to be optimized may be improved. Meta-heuristic algorithms are very promising, and multi-objective versions could improve the overall performance by optimizing two or more quality metrics together. In addition, GPU computing techniques may be utilized to speed up the optimization process. As a result, although it may not achieve the overall best result in all tests, the proposed GD method can be used as a simple and effective general image fusion method.

Supplementary Materials

The following supporting information can be downloaded at https://github.com/rifatkurban/GDfusion: fused images and numerical results of the input image pairs in the dataset.

Funding

This research received no external funding.

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Publicly available datasets were analyzed in this study. These data and MATLAB codes of the proposed GD method will be released at: https://github.com/rifatkurban/GDfusion.

Conflicts of Interest

The author declares no conflict of interest.

References

  1. Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178. [Google Scholar] [CrossRef]
  2. Tang, L.; Yuan, J.; Ma, J. Image fusion in the loop of high-level vision tasks: A semantic-aware real-time infrared and visible image fusion network. Inf. Fusion 2022, 82, 28–42. [Google Scholar] [CrossRef]
  3. Zhang, H.; Xu, H.; Tian, X.; Jiang, J.; Ma, J. Image fusion meets deep learning: A survey and perspective. Inf. Fusion 2021, 76, 323–336. [Google Scholar] [CrossRef]
  4. Civicioglu, P.; Besdok, E. Contrast stretching based pansharpening by using weighted differential evolution algorithm. Expert Syst. Appl. 2022, 208, 118144. [Google Scholar] [CrossRef]
  5. James, A.P.; Dasarathy, B.V. Medical image fusion: A survey of the state of the art. Inf. Fusion 2014, 19, 4–19. [Google Scholar] [CrossRef] [Green Version]
  6. Li, Y.; Zhao, J.; Lv, Z.; Li, J. Medical image fusion method by deep learning. Int. J. Cogn. Comput. Eng. 2021, 2, 21–29. [Google Scholar] [CrossRef]
  7. Lu, Q.; Han, Z.; Hu, L.; Tian, F. An Infrared and Visible Image Fusion Algorithm Method Based on a Dual Bilateral Least Squares Hybrid Filter. Electronics 2023, 12, 2292. [Google Scholar] [CrossRef]
  8. Ma, J.; Liang, P.; Yu, W.; Chen, C.; Guo, X.; Wu, J.; Jiang, J. Infrared and visible image fusion via detail preserving adversarial learning. Inf. Fusion 2020, 54, 85–98. [Google Scholar] [CrossRef]
  9. Li, L.; Ma, H. Saliency-Guided Nonsubsampled Shearlet Transform for Multisource Remote Sensing Image Fusion. Sensors 2021, 21, 1756. [Google Scholar] [CrossRef]
  10. Jinju, J.; Santhi, N.; Ramar, K.; Sathya Bama, B. Spatial frequency discrete wavelet transform image fusion technique for remote sensing applications. Eng. Sci. Technol. Int. J. 2019, 22, 715–726. [Google Scholar] [CrossRef]
  11. Wang, L.; Hu, Z.M.; Kong, Q.; Qi, Q.; Liao, Q. Infrared and Visible Image Fusion via Attention-Based Adaptive Feature Fusion. Entropy 2023, 25, 407. [Google Scholar] [CrossRef] [PubMed]
  12. Ayas, S.; Gormus, E.T.; Ekinci, M. An Efficient Pan Sharpening via Texture Based Dictionary Learning and Sparse Representation. IEEE J. Sel. Top. Appl. Earth Obs. Remote Sens. 2018, 11, 2448–2460. [Google Scholar] [CrossRef]
  13. Xu, H.; Ma, J.; Jiang, J.; Guo, X.; Ling, H. U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 502–518. [Google Scholar] [CrossRef] [PubMed]
  14. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Qi, G.; Mazur, N. A Novel Fast Single Image Dehazing Algorithm Based on Artificial Multiexposure Image Fusion. IEEE Trans. Instrum. Meas. 2021, 70, 1–23. [Google Scholar] [CrossRef]
  15. Aslantaş, V.; Kurban, R.; Toprak, A.; Bendes, E. An interactive web based toolkit for multi focus image fusion. J. Web Eng. 2015, 14, 117–135. [Google Scholar]
  16. Li, J.; Guo, X.; Lu, G.; Zhang, B.; Xu, Y.; Wu, F.; Zhang, D. DRPL: Deep Regression Pair Learning for Multi-Focus Image Fusion. IEEE Trans. Image Process. 2020, 29, 4816–4831. [Google Scholar] [CrossRef]
  17. Liu, Y.; Wang, L.; Cheng, J.; Li, C.; Chen, X. Multi-focus image fusion: A Survey of the state of the art. Inf. Fusion 2020, 64, 71–91. [Google Scholar]
  18. Skuka, F.; Toprak, A.N.; Karaboga, D. Extending the depth of field of imaging systems using depth sensing camera. Signal Image Video Process. 2023, 17, 323–331. [Google Scholar] [CrossRef]
  19. Wei, B.; Feng, X.; Wang, K.; Gao, B. The Multi-Focus-Image-Fusion Method Based on Convolutional Neural Network and Sparse Representation. Entropy 2021, 23, 827. [Google Scholar] [CrossRef]
  20. Çıtıl, F.; Kurban, R.; Durmuş, A.; Karaköse, E. Fusion of Multi-Focus Images using Jellyfish Search Optimizer. Eur. J. Sci. Technol. 2022, 14, 147–155. [Google Scholar]
  21. Zhang, Y.; Liu, Y.; Sun, P.; Yan, H.; Zhao, X.; Zhang, L. IFCNN: A general image fusion framework based on convolutional neural network. Inf. Fusion 2020, 54, 99–118. [Google Scholar] [CrossRef]
  22. Aslantas, V.; Kurban, R. Fusion of multi-focus images using differential evolution algorithm. Expert Syst. Appl. 2010, 37, 8861–8870. [Google Scholar] [CrossRef]
  23. Cheng, H.; Zhang, D.; Zhu, J.; Yu, H.; Chu, J. Underwater Target Detection Utilizing Polarization Image Fusion Algorithm Based on Unsupervised Learning and Attention Mechanism. Sensors 2023, 23, 5594. [Google Scholar] [CrossRef]
  24. Kurban, T. Region based multi-spectral fusion method for remote sensing images using differential search algorithm and IHS transform. Expert Syst. Appl. 2022, 189, 116135. [Google Scholar] [CrossRef]
  25. Diwakar, M.; Tripathi, A.; Joshi, K.; Memoria, M.; Singh, P.; kumar, N. Latest trends on heart disease prediction using machine learning and image fusion. Mater. Today: Proc. 2021, 37, 3213–3218. [Google Scholar] [CrossRef]
  26. Belgiu, M.; Stein, A. Spatiotemporal Image Fusion in Remote Sensing. Remote Sens. 2019, 11, 818. [Google Scholar] [CrossRef] [Green Version]
  27. Vivone, G. Multispectral and hyperspectral image fusion in remote sensing: A survey. Inf. Fusion 2023, 89, 405–417. [Google Scholar] [CrossRef]
  28. Kaur, M.; Singh, D. Fusion of medical images using deep belief networks. Clust. Comput. 2020, 23, 1439–1453. [Google Scholar] [CrossRef]
  29. Piao, J.; Chen, Y.; Shin, H. A New Deep Learning Based Multi-Spectral Image Fusion Method. Entropy 2019, 21, 570. [Google Scholar] [CrossRef]
  30. Zhenhua, L.; Zhongliang, J.; Gang, L.; Shaoyuan, S.; Henry, L. Pixel visibility based multifocus image fusion. In Proceedings of the International Conference on Neural Networks and Signal Processing, Nanjing, China, 14–17 December 2003; Volume 2, pp. 1050–1053. [Google Scholar]
  31. Yang, B.; Li, S. Multi-focus image fusion based on spatial frequency and morphological operators. Chin. Opt. Lett. 2007, 5, 452–453. [Google Scholar]
  32. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2017, 33, 100–112. [Google Scholar] [CrossRef]
  33. Li, S.; Kwok, J.T.; Wang, Y. Multifocus image fusion using artificial neural networks. Pattern Recognit. Lett. 2002, 23, 985–997. [Google Scholar] [CrossRef]
  34. Li, S.; Kwok, J.T.Y.; Tsang, I.W.H.; Wang, Y. Fusing images with different focuses using support vector machines. IEEE Trans. Neural Netw. 2004, 15, 1555–1561. [Google Scholar] [CrossRef] [Green Version]
  35. Ludusan, C.; Lavialle, O. Multifocus image fusion and denoising: A variational approach. Pattern Recognit. Lett. 2012, 33, 1388–1396. [Google Scholar] [CrossRef]
  36. Li, S.; Kang, X. Fast multi-exposure image fusion with median filter and recursive filter. IEEE Trans. Consum. Electron. 2012, 58, 626–632. [Google Scholar] [CrossRef] [Green Version]
  37. Banharnsakun, A. Multi-focus image fusion using best-so-far ABC strategies. Neural Comput. Appl. 2019, 31, 2025–2040. [Google Scholar] [CrossRef]
  38. Aslantas, V.; Bendes, E.; Kurban, R.; Toprak, A.N. New optimised region-based multi-scale image fusion method for thermal and visible images. IET Image Process. 2014, 8, 289–299. [Google Scholar] [CrossRef]
  39. Li, S.; Yang, B. Multifocus image fusion using region segmentation and spatial frequency. Image Vis. Comput. 2008, 26, 971–979. [Google Scholar] [CrossRef]
  40. Bavirisetti, D.P.; Dhuli, R. Fusion of Infrared and Visible Sensor Images Based on Anisotropic Diffusion and Karhunen-Loeve Transform. IEEE Sens. J. 2016, 16, 203–209. [Google Scholar] [CrossRef]
  41. Shreyamsha Kumar, B.K. Image fusion based on pixel significance using cross bilateral filter. Signal Image Video Process. 2015, 9, 1193–1204. [Google Scholar] [CrossRef]
  42. Bavirisetti, D.P.; Xiao, G.; Liu, G. Multi-sensor image fusion based on fourth order partial differential equations. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017; pp. 1–9. [Google Scholar]
  43. Zhou, Z.; Dong, M.; Xie, X.; Gao, Z. Fusion of infrared and visible images for night-vision context enhancement. Appl. Opt. 2016, 55, 6480–6490. [Google Scholar] [CrossRef]
  44. Ma, J.; Chen, C.; Li, C.; Huang, J. Infrared and visible image fusion via gradient transfer and total variation minimization. Inf. Fusion 2016, 31, 100–109. [Google Scholar] [CrossRef]
  45. Zhou, Z.; Wang, B.; Li, S.; Dong, M. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters. Inf. Fusion 2016, 30, 15–26. [Google Scholar] [CrossRef]
  46. Zhang, Y.; Zhang, L.; Bai, X.; Zhang, L. Infrared and visual image fusion through infrared feature extraction and visual information preservation. Infrared Phys. Technol. 2017, 83, 227–237. [Google Scholar] [CrossRef]
  47. Naidu, V.P.S. Image fusion technique using multi-resolution singular value decomposition. Def. Sci. J. 2011, 61, 479–484. [Google Scholar] [CrossRef] [Green Version]
  48. Ma, J.; Zhou, Z.; Wang, B.; Zong, H. Infrared and visible image fusion based on visual saliency map and weighted least square optimization. Infrared Phys. Technol. 2017, 82, 8–17. [Google Scholar] [CrossRef]
  49. Liu, Y.; Chen, X.; Cheng, J.; Peng, H.; Wang, Z. Infrared and visible image fusion with convolutional neural networks. Int. J. Wavelets Multiresolution Inf. Process. 2018, 16, 1850018. [Google Scholar] [CrossRef]
  50. Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 2017, 36, 191–207. [Google Scholar] [CrossRef]
  51. Zhao, Z.; Su, S.; Wei, J.; Tong, X.; Gao, W. Lightweight Infrared and Visible Image Fusion via Adaptive DenseNet with Knowledge Distillation. Electronics 2023, 12, 2773. [Google Scholar] [CrossRef]
  52. Jie, Y.; Li, X.; Wang, M.; Tan, H. Multi-Focus Image Fusion for Full-Field Optical Angiography. Entropy 2023, 25, 951. [Google Scholar] [CrossRef]
  53. Hao, S.; Li, J.; Ma, X.; Sun, S.; Tian, Z.; Cao, L. MGFCTFuse: A Novel Fusion Approach for Infrared and Visible Images. Electronics 2023, 12, 2740. [Google Scholar] [CrossRef]
  54. Kang, D.; Han, H.; Jain, A.K.; Lee, S.-W. Nighttime face recognition at large standoff: Cross-distance and cross-spectral matching. Pattern Recognit. 2014, 47, 3750–3766. [Google Scholar] [CrossRef]
  55. Dolan, E.D.; Lewis, R.M.; Torczon, V. On the Local Convergence of Pattern Search. SIAM J. Optim. 2003, 14, 567–583. [Google Scholar] [CrossRef]
  56. Liu, Y.; Chen, X.; Cheng, J.; Peng, H. A medical image fusion method based on convolutional neural networks. In Proceedings of the 2017 20th International Conference on Information Fusion (Fusion), Xi’an, China, 10–13 July 2017; IEEE: Piscataway, NJ, USA, 2017; pp. 1–7. [Google Scholar]
  57. Nejati, M.; Samavi, S.; Shirani, S. Multi-focus image fusion using dictionary-based sparse representation. Inf. Fusion 2015, 25, 72–84. [Google Scholar] [CrossRef]
  58. Liu, Y.; Wang, Z. Dense SIFT for ghost-free multi-exposure fusion. J. Vis. Commun. Image Represent. 2015, 31, 208–224. [Google Scholar] [CrossRef]
  59. Zhang, X.; Ye, P.; Xiao, G. VIFB: A Visible and Infrared Image Fusion Benchmark. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Seattle, WA, USA, 14–19 June 2020; pp. 468–478. [Google Scholar]
  60. Varga, D. No-Reference Image Quality Assessment Using the Statistics of Global and Local Image Features. Electronics 2023, 12, 1615. [Google Scholar] [CrossRef]
  61. Roberts, J.W.; Van Aardt, J.A.; Ahmed, F.B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J. Appl. Remote Sens. 2008, 2, 023522. [Google Scholar]
  62. Qu, G.; Zhang, D.; Yan, P. Information measure for performance of image fusion. Electron. Lett. 2002, 38, 1. [Google Scholar] [CrossRef] [Green Version]
  63. Jagalingam, P.; Hegde, A.V. A review of quality metrics for fused image. Aquat. Procedia 2015, 4, 133–142. [Google Scholar] [CrossRef]
  64. Xydeas, C.S.; Petrovic, V.S. Objective pixel-level image fusion performance measure. In Proceedings of the Sensor Fusion: Architectures, Algorithms, and Applications IV, Orlando, FL, USA, 3 April 2000; pp. 89–98. [Google Scholar]
  65. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Process. 2004, 13, 600–612. [Google Scholar]
  66. Chen, Y.; Blum, R.S. A new automated quality assessment algorithm for image fusion. Image Vis. Comput. 2009, 27, 1421–1432. [Google Scholar] [CrossRef]
  67. Bulanon, D.; Burks, T.; Alchanatis, V. Image fusion of visible and thermal images for fruit detection. Biosyst. Eng. 2009, 103, 12–22. [Google Scholar] [CrossRef]
  68. Chen, H.; Varshney, P.K. A human perception inspired quality metric for image fusion based on regional information. Inf. Fusion 2007, 8, 193–207. [Google Scholar] [CrossRef]
  69. Kilickaya, F.; Okdem, S. Performance Analysis of Image Processing Techniques for Memory Usage and CPU Execution Time. In Proceedings of the International Conference on Engineering Technologies (ICENTE’21), Konya, Turkey, 18–20 November 2021; pp. 126–129. [Google Scholar]
Figure 1. Proposed general image fusion method based on pixel-based linear weighting using the Gaussian of differences (GD).
Figure 1. Proposed general image fusion method based on pixel-based linear weighting using the Gaussian of differences (GD).
Entropy 25 01215 g001
Figure 2. Sample input images (I1 and I2) [54] and their column and row differences.
Figure 2. Sample input images (I1 and I2) [54] and their column and row differences.
Entropy 25 01215 g002
Figure 3. Combined difference images (D) of the input images.
Figure 3. Combined difference images (D) of the input images.
Entropy 25 01215 g003
Figure 4. Gaussian of differences (GD) of the input images.
Figure 4. Gaussian of differences (GD) of the input images.
Entropy 25 01215 g004
Figure 5. Weighting factors (fw) for the input images.
Figure 5. Weighting factors (fw) for the input images.
Entropy 25 01215 g005
Figure 6. Fused image (F).
Figure 6. Fused image (F).
Entropy 25 01215 g006
Figure 7. Gaussian kernel (w) for s = 3 and σ = 1 .
Figure 7. Gaussian kernel (w) for s = 3 and σ = 1 .
Entropy 25 01215 g007
Figure 8. Optimization of the parameters of the proposed GD fusion method.
Figure 8. Optimization of the parameters of the proposed GD fusion method.
Entropy 25 01215 g008
Figure 9. Multi-modal medical images used in the experiments.
Figure 9. Multi-modal medical images used in the experiments.
Entropy 25 01215 g009
Figure 10. Multi-sensor infrared and visible images used in the experiments.
Figure 10. Multi-sensor infrared and visible images used in the experiments.
Entropy 25 01215 g010
Figure 11. Multi-focus images used in the experiments.
Figure 11. Multi-focus images used in the experiments.
Entropy 25 01215 g011
Figure 12. Multi-exposure images used in the experiments.
Figure 12. Multi-exposure images used in the experiments.
Entropy 25 01215 g012
Figure 13. Medical image set M#2 (Images A and B) and the fused images obtained by the compared methods.
Figure 14. Medical image set M#5 (Images A and B) and the fused images obtained by the compared methods.
Figure 15. Infrared and visible image set IV#4 (Images A and B) and the fused images obtained by the compared methods.
Figure 16. Infrared and visible image set IV#5 (Images A and B) and the fused images obtained by the compared methods.
Figure 17. Multi-focus image set F#11 (Images A and B) and the fused images obtained by the compared methods.
Figure 18. Multi-focus image set F#15 (Images A and B) and the fused images obtained by the compared methods.
Figure 19. Multi-exposure image set E#5 (Images A and B) and the fused images obtained by the compared methods.
Figure 20. Multi-exposure image set E#6 (Images A and B) and the fused images obtained by the compared methods.
Table 1. Specifications of the image dataset used in the experiments.
Application | Images in Dataset | Image Type | Resolution
Multi-modal medical | 8 | Graylevel TIF | 256 × 256
Multi-sensor infrared and visible | 14 | Graylevel PNG | 360 × 270, 430 × 340, 512 × 512, 632 × 496
Multi-focus | 20 | RGB JPG | 520 × 520
Multi-exposure | 6 | RGB JPG | 340 × 230, 230 × 340, 752 × 500
Total | 48 | - | -
Table 2. Specifications of the implemented environment for experiments.
Environmental Feature | Description
Operating system | Windows 10 Pro
CPU | Intel i7-4790K @ 4 GHz
GPU | Nvidia GeForce GTX 760
RAM | 16 GB
Programming language | MATLAB 2023a
Table 3. Configuration parameters of the fusion methods used in the experiments.
Fusion Method | Configuration Parameters
ADF | num_iter = 10, delta_t = 0.15, kappa = 30, option = 1
CBF | cov_wsize = 5, sigmas = 1.8, sigmar = 25, ksize = 11
FPDE | n = 15, dt = 0.9, k = 4
GFCE | nLevel = 4, sigma = 2, k = 2, r0 = 2, eps0 = 0.1, l = 2
GTF | adapt_epsR = 1, epsR_cutoff = 0.01, adapt_epsF = 1, epsF_cutoff = 0.05, pcgtol_ini = 1 × 10−4, loops = 5, pcgtol_ini = 1 × 10−2, adaptPCGtol = 1
HMSD | nLevel = 4, lambda = 30, sigma = 2.0, sigma_r = 0.05, k = 2
IFEVIP | QuadNormDim = 512, QuadMinDim = 32, GaussScale = 9, MaxRatio = 0.001, StdRatio = 0.8
MSVD | -
VSMWLS | sigma_s = 2, sigma_r = 0.05
CNN | type = siamese network, weights_b1_1 = 9 ∗ 64, weights_b1_2 = 64 ∗ 9 ∗ 128, weights_b1_3 = 128 ∗ 9 ∗ 256, weights_output = 512 ∗ 64 ∗ 2
Proposed GD5 | s = 5, σ = 1.6
Proposed GD10 | s = 10, σ = 3.3
Proposed GD15 | s = 15, σ = 5
Proposed GDPSQABF | optimizer = pattern search, algorithm = classic, init_sol = [10; 3.3], lb = [5; 1], ub = [80; 100], max_iter = 20, fit_fun = −1 ∗ Qabf
Proposed GDPSQCB | optimizer = pattern search, algorithm = classic, init_sol = [10; 3.3], lb = [5; 1], ub = [80; 100], max_iter = 20, fit_fun = −1 ∗ Qcb
Proposed GDPSQCV | optimizer = pattern search, algorithm = classic, init_sol = [10; 3.3], lb = [5; 1], ub = [80; 100], max_iter = 20, fit_fun = Qcv
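The three GDPS rows above tune the GD parameters s and σ with MATLAB's pattern search, maximizing Qabf or Qcb (by minimizing their negatives) or minimizing Qcv. The fragment below is a minimal sketch of the GDPSQABF setting only; gd_fuse and qabf are hypothetical stand-ins for the GD fusion routine and the Qabf metric (they are not built-in functions), and I1, I2 denote the two source images.

```matlab
% Pattern-search tuning of [s; sigma] mirroring the GDPSQABF row of Table 3 (sketch only).
% gd_fuse and qabf are hypothetical helpers for the GD fusion and the Qabf metric.
fitFun = @(x) -qabf(gd_fuse(I1, I2, round(x(1)), x(2)), I1, I2);   % maximize Qabf

x0 = [10; 3.3];                 % initial solution
lb = [5; 1];                    % lower bounds on s and sigma
ub = [80; 100];                 % upper bounds on s and sigma
opts = optimoptions('patternsearch', 'Algorithm', 'classic', 'MaxIterations', 20);

xBest = patternsearch(fitFun, x0, [], [], [], [], lb, ub, [], opts);
F = gd_fuse(I1, I2, round(xBest(1)), xBest(2));   % fuse with the tuned parameters
```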
Table 4. Quality metric scores of the compared methods for medical image set M#2.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 4.783 | 2.308 | 59.298 | 0.467 | 1.498 | 0.363 | 1.281 | 0.076 | 858.898
CBF | 5.015 | 2.494 | 58.979 | 0.531 | 1.496 | 0.407 | 1.198 | 0.082 | 858.355
FPDE | 4.836 | 2.339 | 59.397 | 0.433 | 1.505 | 0.348 | 1.190 | 0.075 | 840.941
GFCE | 7.615 | 2.190 | 53.849 | 0.474 | 0.463 | 0.389 | 4.502 | 0.268 | 1643.875
GTF | 4.813 | 2.248 | 58.770 | 0.574 | 1.486 | 0.637 | 0.831 | 0.086 | 1154.964
HMSD | 4.831 | 2.286 | 58.628 | 0.550 | 1.488 | 0.442 | 0.852 | 0.089 | 999.258
IFEVIP | 5.153 | 2.457 | 57.528 | 0.484 | 1.495 | 0.365 | 1.352 | 0.115 | 1242.540
MSVD | 4.823 | 2.368 | 57.327 | 0.471 | 0.690 | 0.201 | 5.933 | 0.120 | 813.834
VSMWLS | 5.024 | 2.352 | 59.033 | 0.529 | 1.530 | 0.469 | 0.667 | 0.081 | 964.498
CNN | 4.932 | 2.337 | 58.484 | 0.554 | 1.505 | 0.603 | 0.705 | 0.092 | 1016.499
GD5 | 4.901 | 2.478 | 59.209 | 0.506 | 1.519 | 0.389 | 1.247 | 0.078 | 805.711
GD10 | 4.854 | 2.463 | 59.300 | 0.479 | 1.522 | 0.393 | 1.220 | 0.076 | 780.445
GD15 | 4.819 | 2.452 | 59.342 | 0.464 | 1.522 | 0.391 | 1.208 | 0.076 | 773.312
GDPSQABF | 4.934 | 2.471 | 59.145 | 0.516 | 1.514 | 0.386 | 1.236 | 0.079 | 841.810
GDPSQCB | 4.862 | 2.472 | 59.280 | 0.485 | 1.521 | 0.390 | 1.240 | 0.077 | 785.127
GDPSQCV | 4.796 | 2.443 | 59.369 | 0.456 | 1.524 | 0.392 | 1.145 | 0.075 | 781.420
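Tables 4–11 report nine quality metrics per method, following the cited metric definitions. As a rough illustration only, three of the simpler scores can be approximated with built-in MATLAB functions as sketched below; the SSIM values above 1 in the tables suggest that the two source-wise scores are summed rather than averaged, which is the convention assumed here, and the paper's exact formulas may differ. The file names are placeholders.

```matlab
% Rough illustration of three of the reported scores for a fused image F and
% sources A and B (sketch; the paper follows the cited metric definitions).
A = im2double(imread('source_a.tif'));   % placeholder file names
B = im2double(imread('source_b.tif'));
F = im2double(imread('fused.tif'));

EN_F   = entropy(F);                     % Shannon entropy of the fused image
PSNR_F = (psnr(F, A) + psnr(F, B)) / 2;  % PSNR averaged over the two sources (assumed convention)
SSIM_F = ssim(F, A) + ssim(F, B);        % source-wise SSIM summed (assumed convention)
fprintf('EN = %.3f, PSNR = %.3f dB, SSIM = %.3f\n', EN_F, PSNR_F, SSIM_F);
```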
Table 5. Quality metric scores of the compared methods for medical image set M#5.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 5.975 | 2.288 | 56.459 | 0.408 | 1.170 | 0.503 | 0.329 | 0.147 | 845.674
CBF | 5.962 | 2.571 | 56.592 | 0.512 | 1.289 | 0.511 | 0.293 | 0.143 | 523.914
FPDE | 6.408 | 2.122 | 56.007 | 0.305 | 1.019 | 0.475 | 0.404 | 0.163 | 896.311
GFCE | 7.311 | 2.382 | 54.953 | 0.447 | 0.964 | 0.492 | 2.920 | 0.208 | 751.841
GTF | 6.006 | 2.386 | 55.932 | 0.404 | 1.275 | 0.431 | 0.287 | 0.166 | 1677.168
HMSD | 6.400 | 2.435 | 56.376 | 0.513 | 1.324 | 0.519 | 0.564 | 0.150 | 549.287
IFEVIP | 6.348 | 2.554 | 55.266 | 0.528 | 1.338 | 0.508 | 0.798 | 0.193 | 628.882
MSVD | 5.752 | 2.405 | 56.837 | 0.404 | 1.183 | 0.386 | 3.935 | 0.135 | 694.471
VSMWLS | 6.170 | 2.659 | 56.588 | 0.512 | 1.355 | 0.524 | 0.344 | 0.143 | 495.160
CNN | 6.913 | 2.585 | 56.001 | 0.571 | 1.277 | 0.533 | 1.427 | 0.163 | 449.774
GD5 | 5.822 | 2.557 | 56.906 | 0.473 | 1.352 | 0.467 | 0.329 | 0.133 | 500.209
GD10 | 5.791 | 2.574 | 56.966 | 0.453 | 1.371 | 0.479 | 0.338 | 0.131 | 454.096
GD15 | 5.776 | 2.562 | 56.997 | 0.436 | 1.374 | 0.470 | 0.340 | 0.130 | 454.362
GDPSQABF | 5.820 | 2.553 | 56.901 | 0.474 | 1.349 | 0.483 | 0.327 | 0.133 | 511.725
GDPSQCB | 5.796 | 2.564 | 56.954 | 0.457 | 1.367 | 0.481 | 0.331 | 0.131 | 455.932
GDPSQCV | 5.782 | 2.568 | 56.983 | 0.444 | 1.373 | 0.474 | 0.339 | 0.130 | 452.181
Table 6. Quality metric scores of the compared methods for infrared and visible image set IV#4.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 6.132 | 1.468 | 60.947 | 0.470 | 1.045 | 0.434 | 0.790 | 0.052 | 118.387
CBF | 6.730 | 1.919 | 59.715 | 0.632 | 1.102 | 0.473 | 0.782 | 0.069 | 211.646
FPDE | 6.159 | 1.325 | 60.930 | 0.481 | 1.027 | 0.434 | 0.740 | 0.052 | 119.226
GFCE | 7.644 | 1.231 | 55.427 | 0.391 | 0.465 | 0.393 | 2.065 | 0.186 | 646.551
GTF | 6.161 | 1.016 | 60.122 | 0.311 | 0.863 | 0.310 | 0.575 | 0.063 | 160.482
HMSD | 6.070 | 1.485 | 60.356 | 0.579 | 0.998 | 0.462 | 0.391 | 0.060 | 165.976
IFEVIP | 6.869 | 2.143 | 59.503 | 0.670 | 1.129 | 0.469 | 0.891 | 0.073 | 206.815
MSVD | 6.024 | 1.578 | 60.779 | 0.309 | 0.944 | 0.339 | 5.285 | 0.054 | 154.207
VSMWLS | 6.297 | 1.403 | 60.621 | 0.617 | 1.072 | 0.437 | 0.494 | 0.056 | 145.077
CNN | 5.735 | 1.350 | 60.244 | 0.562 | 0.956 | 0.424 | 0.282 | 0.061 | 178.710
GD5 | 6.672 | 1.791 | 60.006 | 0.628 | 1.135 | 0.469 | 0.812 | 0.065 | 152.836
GD10 | 6.670 | 1.761 | 60.027 | 0.632 | 1.148 | 0.470 | 0.798 | 0.065 | 146.837
GD15 | 6.665 | 1.723 | 60.052 | 0.629 | 1.151 | 0.469 | 0.787 | 0.064 | 135.276
GDPSQABF | 6.671 | 1.769 | 60.023 | 0.632 | 1.147 | 0.469 | 0.802 | 0.065 | 148.808
GDPSQCB | 6.672 | 1.763 | 60.029 | 0.630 | 1.146 | 0.470 | 0.796 | 0.065 | 145.011
GDPSQCV | 6.495 | 1.479 | 60.470 | 0.564 | 1.127 | 0.463 | 0.763 | 0.058 | 78.475
Table 7. Quality metric scores of the compared methods for infrared and visible image set IV#5.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 5.981 | 2.091 | 58.438 | 0.588 | 1.422 | 0.415 | 3.677 | 0.093 | 649.629
CBF | 6.896 | 2.822 | 57.178 | 0.600 | 1.227 | 0.492 | 2.157 | 0.125 | 639.983
FPDE | 5.972 | 2.149 | 58.439 | 0.559 | 1.422 | 0.418 | 3.275 | 0.093 | 625.309
GFCE | 7.230 | 2.124 | 57.075 | 0.558 | 1.327 | 0.375 | 3.706 | 0.128 | 80.675
GTF | 5.520 | 1.997 | 58.210 | 0.183 | 1.380 | 0.323 | 2.877 | 0.098 | 2764.969
HMSD | 6.722 | 2.092 | 58.250 | 0.613 | 1.412 | 0.368 | 1.318 | 0.097 | 237.620
IFEVIP | 6.409 | 3.898 | 57.700 | 0.551 | 1.362 | 0.366 | 0.918 | 0.110 | 246.528
MSVD | 6.870 | 2.735 | 58.210 | 0.625 | 1.334 | 0.459 | 3.007 | 0.098 | 609.813
VSMWLS | 6.129 | 1.800 | 58.400 | 0.647 | 1.408 | 0.403 | 5.896 | 0.094 | 667.889
CNN | 6.781 | 2.059 | 57.686 | 0.743 | 1.345 | 0.411 | 4.080 | 0.111 | 256.092
GD5 | 6.738 | 2.334 | 57.497 | 0.567 | 1.285 | 0.497 | 3.769 | 0.116 | 524.444
GD10 | 6.693 | 2.349 | 57.528 | 0.625 | 1.351 | 0.519 | 3.983 | 0.115 | 503.300
GD15 | 6.677 | 2.303 | 57.559 | 0.652 | 1.370 | 0.541 | 1.177 | 0.114 | 496.323
GDPSQABF | 6.647 | 2.169 | 57.616 | 0.658 | 1.383 | 0.543 | 1.307 | 0.113 | 492.907
GDPSQCB | 6.395 | 1.502 | 57.954 | 0.618 | 1.421 | 0.535 | 1.852 | 0.104 | 729.452
GDPSQCV | 6.677 | 2.312 | 57.546 | 0.630 | 1.360 | 0.536 | 4.001 | 0.114 | 487.171
Table 8. Quality metric scores of the compared methods for multi-focus image set F#11.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 7.669 | 4.513 | 63.818 | 0.610 | 1.654 | 0.643 | 0.017 | 0.027 | 101.099
CBF | 7.681 | 5.319 | 63.383 | 0.752 | 1.647 | 0.758 | 0.019 | 0.030 | 20.292
FPDE | 7.661 | 4.401 | 63.914 | 0.570 | 1.663 | 0.622 | 0.021 | 0.026 | 91.789
GFCE | 6.962 | 2.861 | 58.590 | 0.600 | 1.419 | 0.527 | 0.543 | 0.090 | 130.958
GTF | 7.670 | 4.585 | 63.464 | 0.708 | 1.637 | 0.660 | 0.019 | 0.029 | 65.129
HMSD | 7.650 | 4.999 | 63.173 | 0.738 | 1.642 | 0.742 | 0.020 | 0.031 | 15.926
IFEVIP | 7.019 | 2.661 | 59.720 | 0.449 | 1.500 | 0.505 | 0.361 | 0.069 | 321.098
MSVD | 7.669 | 4.149 | 63.421 | 0.427 | 1.633 | 0.616 | 0.020 | 0.030 | 94.007
VSMWLS | 7.666 | 4.424 | 63.498 | 0.674 | 1.655 | 0.664 | 0.015 | 0.029 | 39.009
CNN | 7.668 | 5.404 | 63.106 | 0.757 | 1.635 | 0.769 | 0.030 | 0.032 | 14.200
GD5 | 7.688 | 4.754 | 63.655 | 0.724 | 1.665 | 0.710 | 0.023 | 0.028 | 32.352
GD10 | 7.685 | 4.747 | 63.696 | 0.723 | 1.667 | 0.712 | 0.022 | 0.028 | 27.732
GD15 | 7.684 | 4.745 | 63.714 | 0.722 | 1.667 | 0.713 | 0.022 | 0.028 | 26.936
GDPSQABF | 7.688 | 4.756 | 63.651 | 0.725 | 1.665 | 0.709 | 0.023 | 0.028 | 33.095
GDPSQCB | 7.685 | 4.747 | 63.696 | 0.722 | 1.667 | 0.712 | 0.022 | 0.028 | 27.694
GDPSQCV | 7.683 | 4.709 | 63.781 | 0.714 | 1.669 | 0.707 | 0.022 | 0.027 | 26.191
Table 9. Quality metric scores of the compared methods for multi-focus image set F#15.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 7.611 | 5.753 | 68.864 | 0.748 | 1.856 | 0.755 | 0.009 | 0.008 | 3.640
CBF | 7.628 | 6.445 | 68.394 | 0.805 | 1.840 | 0.815 | 0.011 | 0.009 | 3.873
FPDE | 7.614 | 5.617 | 68.806 | 0.744 | 1.854 | 0.725 | 0.013 | 0.009 | 3.734
GFCE | 7.636 | 3.140 | 57.958 | 0.610 | 1.396 | 0.625 | 0.971 | 0.105 | 94.969
GTF | 7.623 | 6.540 | 69.036 | 0.791 | 1.837 | 0.786 | 0.011 | 0.008 | 5.307
HMSD | 7.628 | 5.958 | 68.060 | 0.789 | 1.836 | 0.779 | 0.012 | 0.010 | 4.031
IFEVIP | 7.632 | 3.663 | 60.891 | 0.627 | 1.674 | 0.616 | 0.321 | 0.053 | 158.223
MSVD | 7.579 | 4.972 | 66.507 | 0.520 | 1.784 | 0.711 | 0.010 | 0.015 | 6.843
VSMWLS | 7.626 | 5.828 | 68.217 | 0.787 | 1.838 | 0.751 | 0.012 | 0.010 | 3.528
CNN | 7.626 | 6.829 | 68.088 | 0.811 | 1.837 | 0.829 | 0.011 | 0.010 | 3.618
GD5 | 7.624 | 5.941 | 68.613 | 0.789 | 1.847 | 0.784 | 0.010 | 0.009 | 3.195
GD10 | 7.623 | 5.940 | 68.629 | 0.787 | 1.848 | 0.786 | 0.010 | 0.009 | 3.211
GD15 | 7.623 | 5.937 | 68.636 | 0.787 | 1.848 | 0.787 | 0.010 | 0.009 | 3.225
GDPSQABF | 7.624 | 5.938 | 68.609 | 0.789 | 1.847 | 0.783 | 0.010 | 0.009 | 3.194
GDPSQCB | 7.624 | 5.940 | 68.617 | 0.789 | 1.847 | 0.785 | 0.010 | 0.009 | 3.194
GDPSQCV | 7.624 | 5.939 | 68.613 | 0.789 | 1.847 | 0.784 | 0.010 | 0.009 | 3.207
Table 10. Quality metric scores of the compared methods for multi-exposure image set E#5.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 6.530 | 3.440 | 58.730 | 0.700 | 1.719 | 0.578 | 0.544 | 0.087 | 69.401
CBF | 6.704 | 3.064 | 58.370 | 0.674 | 1.641 | 0.593 | 0.537 | 0.095 | 99.078
FPDE | 6.498 | 3.433 | 58.732 | 0.697 | 1.720 | 0.576 | 0.547 | 0.087 | 69.466
GFCE | 5.133 | 2.764 | 57.641 | 0.569 | 1.607 | 0.469 | 1.615 | 0.112 | 165.118
GTF | 6.027 | 2.950 | 58.222 | 0.638 | 1.670 | 0.509 | 0.592 | 0.098 | 112.981
HMSD | 6.683 | 3.317 | 58.387 | 0.703 | 1.656 | 0.675 | 0.669 | 0.094 | 98.335
IFEVIP | 5.534 | 2.471 | 57.822 | 0.551 | 1.601 | 0.477 | 0.993 | 0.108 | 188.610
MSVD | 6.524 | 3.329 | 58.690 | 0.691 | 1.701 | 0.582 | 0.555 | 0.088 | 70.008
VSMWLS | 6.541 | 3.278 | 58.663 | 0.703 | 1.700 | 0.607 | 0.593 | 0.089 | 74.676
CNN | 6.539 | 2.893 | 58.400 | 0.702 | 1.690 | 0.618 | 1.241 | 0.094 | 92.188
GD5 | 6.676 | 3.342 | 58.618 | 0.713 | 1.693 | 0.600 | 0.532 | 0.089 | 76.043
GD10 | 6.665 | 3.334 | 58.636 | 0.716 | 1.699 | 0.617 | 0.536 | 0.089 | 73.182
GD15 | 6.655 | 3.328 | 58.647 | 0.716 | 1.703 | 0.622 | 0.539 | 0.089 | 72.057
GDPSQABF | 6.643 | 3.349 | 58.676 | 0.714 | 1.708 | 0.617 | 0.547 | 0.088 | 71.193
GDPSQCB | 6.655 | 3.316 | 58.647 | 0.715 | 1.702 | 0.624 | 0.539 | 0.089 | 72.086
GDPSQCV | 6.606 | 3.439 | 58.716 | 0.707 | 1.716 | 0.608 | 0.533 | 0.087 | 68.797
Table 11. Quality metric scores of the compared methods for multi-exposure image set E#6.
Method | EN | MI | PSNR | Qabf | SSIM | Qcb | CER | MSE | Qcv
ADF | 6.382 | 3.912 | 57.541 | 0.660 | 1.510 | 0.520 | 0.792 | 0.115 | 88.447
CBF | 6.674 | 3.308 | 56.844 | 0.680 | 1.377 | 0.550 | 0.881 | 0.135 | 168.641
FPDE | 6.381 | 3.904 | 57.541 | 0.659 | 1.509 | 0.523 | 0.868 | 0.115 | 88.193
GFCE | 6.749 | 2.497 | 54.457 | 0.644 | 1.123 | 0.510 | 3.677 | 0.233 | 241.984
GTF | 5.664 | 3.065 | 57.035 | 0.594 | 1.431 | 0.555 | 0.609 | 0.129 | 201.843
HMSD | 6.661 | 3.289 | 57.130 | 0.691 | 1.461 | 0.521 | 1.065 | 0.126 | 132.652
IFEVIP | 6.100 | 3.716 | 57.112 | 0.619 | 1.458 | 0.468 | 1.409 | 0.126 | 126.541
MSVD | 6.385 | 3.829 | 57.518 | 0.637 | 1.498 | 0.521 | 0.800 | 0.115 | 89.599
VSMWLS | 6.469 | 3.650 | 57.467 | 0.669 | 1.477 | 0.540 | 0.899 | 0.117 | 88.157
CNN | 6.372 | 3.141 | 57.094 | 0.704 | 1.449 | 0.538 | 1.912 | 0.127 | 125.477
GD5 | 6.597 | 3.442 | 57.200 | 0.709 | 1.452 | 0.550 | 0.902 | 0.124 | 119.147
GD10 | 6.608 | 3.489 | 57.232 | 0.716 | 1.475 | 0.567 | 0.924 | 0.123 | 114.830
GD15 | 6.613 | 3.492 | 57.263 | 0.716 | 1.485 | 0.570 | 0.829 | 0.122 | 111.714
GDPSQABF | 6.616 | 3.486 | 57.294 | 0.715 | 1.490 | 0.566 | 0.851 | 0.121 | 108.474
GDPSQCB | 6.619 | 3.488 | 57.277 | 0.717 | 1.488 | 0.570 | 0.837 | 0.122 | 110.358
GDPSQCV | 6.487 | 3.619 | 57.466 | 0.687 | 1.510 | 0.536 | 0.903 | 0.117 | 95.170
Table 12. Average rankings of the methods with regard to their quality metrics for multi-modal medical images.
Multi-Modal Medical Images | ADF | CBF | FPDE | GFCE | GTF | HMSD | IFEVIP | MSVD | VSMWLS | CNN | GD5 | GD10 | GD15 | GDPSQABF | GDPSQCB | GDPSQCV
Img. M#1 Rank. | 8.78 | 7.00 | 7.78 | 12.89 | 11.44 | 6.78 | 9.00 | 10.44 | 7.22 | 8.78 | 8.00 | 7.11 | 7.78 | 8.00 | 6.78 | 8.22
Img. M#2 Rank. | 11.00 | 6.78 | 8.33 | 13.00 | 9.33 | 9.44 | 11.00 | 12.89 | 5.78 | 8.00 | 7.11 | 5.89 | 6.67 | 7.56 | 6.67 | 6.56
Img. M#3 Rank. | 6.78 | 9.56 | 7.56 | 11.56 | 12.22 | 7.56 | 7.33 | 15.22 | 8.22 | 6.56 | 8.89 | 6.44 | 7.00 | 6.78 | 6.56 | 7.78
Img. M#4 Rank. | 7.67 | 6.89 | 8.67 | 13.78 | 11.67 | 5.00 | 8.22 | 13.33 | 9.67 | 7.89 | 8.67 | 6.89 | 6.89 | 7.33 | 6.67 | 6.78
Img. M#5 Rank. | 10.56 | 6.56 | 12.33 | 12.00 | 11.78 | 8.22 | 9.22 | 12.67 | 5.78 | 6.44 | 7.33 | 6.00 | 7.00 | 7.22 | 6.44 | 6.44
Img. M#6 Rank. | 15.22 | 8.22 | 10.11 | 11.56 | 10.11 | 6.00 | 5.44 | 9.78 | 5.56 | 8.11 | 8.11 | 7.56 | 6.89 | 7.44 | 8.78 | 7.11
Img. M#7 Rank. | 9.56 | 7.00 | 9.33 | 13.56 | 12.44 | 8.44 | 9.89 | 12.56 | 5.67 | 7.56 | 7.33 | 6.44 | 5.67 | 7.44 | 6.33 | 6.78
Img. M#8 Rank. | 7.33 | 10.22 | 9.67 | 13.56 | 12.22 | 6.11 | 7.33 | 13.67 | 6.11 | 7.89 | 9.56 | 7.11 | 5.89 | 7.11 | 5.67 | 6.56
Avg. Ranking | 9.61 | 7.78 | 9.22 | 12.74 | 11.40 | 7.19 | 8.43 | 12.57 | 6.75 | 7.65 | 8.13 | 6.68 | 6.72 | 7.36 | 6.74 | 7.03
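The per-image rankings in Tables 12–15 average, for each image set, the rank of each method across the nine quality metrics. The fragment below is a minimal sketch of one plausible way to compute such an average rank; it assumes that rank 1 is best, that EN, MI, PSNR, Qabf, SSIM and Qcb are maximized while CER, MSE and Qcv are minimized, and that ties receive averaged ranks, none of which is stated explicitly here.

```matlab
% Average ranking of 16 methods over 9 metrics for one image set (sketch only).
scores = rand(16, 9);                 % placeholder; in practice the rows of Tables 4-11
direction = [1 1 1 1 1 1 -1 -1 -1];   % +1: larger is better, -1: smaller is better (assumed)

ranks = zeros(size(scores));
for m = 1:size(scores, 2)
    % tiedrank gives rank 1 to the smallest value, so negate higher-is-better metrics.
    ranks(:, m) = tiedrank(-direction(m) * scores(:, m));
end
avgRank = mean(ranks, 2);             % one average rank per method, as in Tables 12-16
```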
Table 13. Average rankings of the methods with regard to their quality metrics for infrared and visible images.
Infrared and Visible Images | ADF | CBF | FPDE | GFCE | GTF | HMSD | IFEVIP | MSVD | VSMWLS | CNN | GD5 | GD10 | GD15 | GDPSQABF | GDPSQCB | GDPSQCV
Img. IV#1 Rank. | 5.67 | 11.33 | 7.11 | 11.11 | 10.00 | 5.56 | 10.33 | 10.11 | 7.22 | 5.67 | 10.33 | 8.67 | 8.44 | 9.11 | 7.44 | 7.89
Img. IV#2 Rank. | 8.00 | 11.22 | 8.33 | 10.22 | 10.56 | 5.78 | 10.11 | 8.78 | 7.33 | 6.22 | 10.89 | 9.33 | 7.89 | 7.22 | 7.33 | 6.78
Img. IV#3 Rank. | 6.11 | 11.67 | 7.11 | 10.44 | 10.89 | 5.33 | 11.00 | 10.00 | 6.44 | 5.67 | 10.11 | 8.56 | 8.00 | 9.00 | 8.11 | 7.56
Img. IV#4 Rank. | 7.89 | 7.33 | 8.11 | 13.67 | 11.56 | 8.78 | 8.33 | 11.11 | 7.33 | 10.44 | 8.22 | 6.78 | 6.22 | 7.33 | 6.11 | 6.78
Img. IV#5 Rank. | 8.33 | 9.44 | 7.78 | 10.67 | 11.33 | 6.44 | 7.89 | 7.11 | 9.44 | 8.67 | 10.22 | 8.89 | 6.67 | 6.22 | 8.56 | 8.33
Img. IV#6 Rank. | 8.78 | 9.89 | 7.22 | 8.22 | 9.78 | 6.33 | 9.33 | 9.00 | 7.56 | 9.78 | 10.00 | 8.56 | 6.89 | 6.67 | 11.22 | 6.78
Img. IV#7 Rank. | 7.11 | 11.56 | 7.44 | 9.00 | 12.22 | 5.22 | 9.22 | 8.67 | 9.89 | 5.00 | 11.56 | 10.00 | 8.56 | 6.00 | 9.11 | 5.44
Img. IV#8 Rank. | 8.67 | 10.56 | 10.78 | 10.89 | 10.78 | 6.22 | 8.44 | 10.11 | 6.78 | 6.33 | 10.00 | 9.00 | 8.00 | 6.67 | 6.56 | 6.22
Img. IV#9 Rank. | 7.89 | 10.11 | 7.11 | 11.22 | 15.00 | 7.33 | 10.67 | 7.89 | 5.78 | 6.44 | 10.56 | 7.67 | 7.00 | 7.00 | 6.67 | 7.67
Img. IV#10 Rank. | 7.56 | 11.22 | 8.33 | 8.78 | 9.33 | 8.56 | 9.22 | 11.78 | 7.00 | 7.56 | 11.22 | 9.56 | 8.00 | 5.33 | 6.22 | 6.33
Img. IV#11 Rank. | 8.11 | 11.33 | 8.11 | 12.78 | 7.56 | 9.67 | 6.78 | 6.67 | 5.33 | 6.78 | 10.44 | 9.78 | 8.33 | 6.11 | 11.33 | 6.89
Img. IV#12 Rank. | 10.44 | 8.33 | 10.11 | 12.00 | 12.11 | 7.11 | 10.11 | 9.22 | 9.89 | 7.22 | 9.44 | 6.89 | 6.00 | 6.56 | 5.56 | 5.00
Img. IV#13 Rank. | 7.78 | 12.33 | 7.89 | 7.89 | 13.00 | 7.11 | 5.89 | 9.00 | 6.44 | 6.44 | 10.11 | 8.78 | 7.89 | 7.56 | 10.89 | 7.00
Img. IV#14 Rank. | 8.11 | 12.11 | 8.11 | 7.89 | 11.22 | 5.67 | 5.33 | 9.89 | 6.22 | 7.22 | 10.11 | 8.89 | 8.67 | 7.89 | 10.67 | 8.00
Avg. Ranking | 7.89 | 10.60 | 8.11 | 10.34 | 11.10 | 6.79 | 8.76 | 9.24 | 7.33 | 7.10 | 10.23 | 8.67 | 7.61 | 7.05 | 8.27 | 6.91
Table 14. Average rankings of the methods with regard to their quality metrics for multi-focus images.
Multi-Focus Images | ADF | CBF | FPDE | GFCE | GTF | HMSD | IFEVIP | MSVD | VSMWLS | CNN | GD5 | GD10 | GD15 | GDPSQABF | GDPSQCB | GDPSQCV
Img. F#1 Rank. | 9.44 | 5.67 | 8.89 | 11.67 | 9.89 | 6.89 | 14.11 | 12.89 | 8.11 | 6.22 | 7.56 | 6.89 | 7.44 | 6.67 | 6.89 | 6.78
Img. F#2 Rank. | 7.78 | 6.89 | 9.78 | 13.56 | 9.33 | 7.44 | 13.89 | 9.56 | 7.89 | 6.78 | 7.11 | 7.22 | 6.44 | 7.00 | 7.56 | 7.78
Img. F#3 Rank. | 9.44 | 6.56 | 9.78 | 13.78 | 10.89 | 7.00 | 13.44 | 9.56 | 7.89 | 7.89 | 7.67 | 6.67 | 5.89 | 7.00 | 6.33 | 6.22
Img. F#4 Rank. | 8.11 | 5.78 | 9.22 | 15.44 | 11.78 | 10.11 | 13.67 | 10.56 | 7.44 | 7.11 | 7.11 | 5.67 | 5.56 | 6.33 | 6.11 | 6.00
Img. F#5 Rank. | 7.11 | 6.33 | 10.00 | 14.89 | 9.67 | 7.67 | 15.89 | 9.56 | 9.33 | 7.00 | 6.44 | 5.78 | 6.22 | 7.44 | 6.67 | 6.00
Img. F#6 Rank. | 7.67 | 6.89 | 9.78 | 13.78 | 9.56 | 8.00 | 13.44 | 9.00 | 8.67 | 8.11 | 7.44 | 6.78 | 6.44 | 7.22 | 6.56 | 6.67
Img. F#7 Rank. | 9.11 | 6.78 | 9.44 | 15.22 | 10.56 | 7.44 | 14.22 | 8.67 | 7.33 | 7.00 | 7.56 | 6.22 | 5.67 | 8.44 | 7.11 | 5.22
Img. F#8 Rank. | 8.89 | 6.22 | 9.44 | 14.00 | 7.33 | 9.56 | 13.78 | 12.44 | 8.44 | 6.33 | 6.67 | 7.22 | 5.89 | 6.89 | 6.89 | 6.00
Img. F#9 Rank. | 9.00 | 6.78 | 9.89 | 12.67 | 9.89 | 7.11 | 13.44 | 8.89 | 9.33 | 8.22 | 8.00 | 6.78 | 6.33 | 7.00 | 6.11 | 6.56
Img. F#10 Rank. | 8.11 | 6.78 | 9.22 | 14.22 | 10.11 | 6.89 | 13.44 | 13.22 | 6.67 | 6.89 | 7.22 | 6.67 | 6.00 | 7.56 | 6.44 | 6.56
Img. F#11 Rank. | 8.22 | 6.00 | 9.00 | 15.33 | 9.44 | 7.44 | 15.33 | 12.00 | 9.11 | 7.78 | 6.56 | 5.78 | 5.67 | 6.67 | 5.89 | 5.78
Img. F#12 Rank. | 8.00 | 6.11 | 8.67 | 15.33 | 9.78 | 8.33 | 13.78 | 10.22 | 8.78 | 7.89 | 7.89 | 6.11 | 5.44 | 7.56 | 6.33 | 5.78
Img. F#13 Rank. | 8.67 | 7.56 | 9.89 | 13.67 | 9.00 | 8.11 | 13.78 | 10.22 | 9.22 | 8.22 | 7.56 | 6.11 | 5.56 | 6.00 | 6.78 | 5.67
Img. F#14 Rank. | 9.00 | 6.22 | 9.67 | 15.33 | 8.78 | 6.89 | 14.67 | 10.44 | 9.11 | 7.56 | 7.22 | 6.33 | 6.33 | 6.56 | 6.11 | 5.78
Img. F#15 Rank. | 7.22 | 6.78 | 9.44 | 14.00 | 6.56 | 9.44 | 13.67 | 13.22 | 10.00 | 6.67 | 6.56 | 6.33 | 6.22 | 7.11 | 5.78 | 7.00
Img. F#16 Rank. | 7.22 | 7.89 | 8.33 | 15.22 | 9.67 | 9.89 | 15.44 | 8.44 | 7.44 | 9.44 | 7.11 | 5.44 | 5.78 | 6.67 | 6.11 | 5.89
Img. F#17 Rank. | 9.11 | 6.67 | 9.56 | 14.44 | 9.89 | 7.22 | 15.56 | 9.56 | 9.22 | 6.78 | 7.78 | 5.67 | 6.22 | 6.44 | 5.78 | 6.11
Img. F#18 Rank. | 7.78 | 7.78 | 9.56 | 14.33 | 9.33 | 10.11 | 13.56 | 8.22 | 7.33 | 7.11 | 6.67 | 6.67 | 6.56 | 8.11 | 6.56 | 6.33
Img. F#19 Rank. | 8.67 | 6.22 | 9.67 | 14.11 | 9.89 | 7.11 | 15.44 | 10.78 | 9.11 | 7.22 | 7.67 | 6.22 | 5.00 | 6.56 | 5.56 | 6.78
Img. F#20 Rank. | 8.89 | 5.67 | 10.22 | 15.33 | 10.22 | 7.11 | 13.78 | 8.67 | 7.67 | 6.67 | 7.67 | 6.33 | 6.67 | 7.11 | 6.67 | 7.33
Avg. Ranking | 8.37 | 6.58 | 9.47 | 14.32 | 9.58 | 7.99 | 14.22 | 10.31 | 8.40 | 7.34 | 7.27 | 6.34 | 6.07 | 7.02 | 6.41 | 6.31
Table 15. Average rankings of the methods with regard to their quality metrics for multi-exposure images.
Multi-Exposure Images | ADF | CBF | FPDE | GFCE | GTF | HMSD | IFEVIP | MSVD | VSMWLS | CNN | GD5 | GD10 | GD15 | GDPSQABF | GDPSQCB | GDPSQCV
Img. E#1 Rank. | 6.11 | 14.00 | 5.78 | 13.78 | 12.44 | 7.44 | 10.56 | 8.00 | 7.22 | 7.89 | 11.11 | 8.56 | 6.78 | 4.78 | 6.33 | 5.22
Img. E#2 Rank. | 5.33 | 10.44 | 5.44 | 13.22 | 11.56 | 10.78 | 15.89 | 7.67 | 6.56 | 9.56 | 8.67 | 7.67 | 6.89 | 5.33 | 6.33 | 4.67
Img. E#3 Rank. | 6.33 | 13.22 | 6.78 | 11.11 | 10.44 | 8.33 | 9.89 | 6.00 | 7.22 | 10.33 | 11.67 | 8.89 | 8.11 | 5.56 | 7.33 | 4.78
Img. E#4 Rank. | 5.00 | 13.67 | 5.11 | 13.56 | 11.22 | 6.11 | 9.11 | 7.11 | 9.00 | 7.56 | 12.33 | 9.89 | 8.33 | 6.33 | 6.67 | 5.00
Img. E#5 Rank. | 5.44 | 10.33 | 6.00 | 15.56 | 13.33 | 9.00 | 15.33 | 7.89 | 8.56 | 10.67 | 7.00 | 6.00 | 5.33 | 5.44 | 6.22 | 3.89
Img. E#6 Rank. | 5.44 | 10.78 | 5.78 | 13.89 | 12.22 | 10.44 | 12.33 | 6.33 | 6.56 | 12.00 | 9.11 | 7.67 | 5.67 | 5.78 | 5.33 | 6.67
Avg. Ranking | 5.61 | 12.07 | 5.82 | 13.52 | 11.87 | 8.68 | 12.19 | 7.17 | 7.52 | 9.67 | 9.98 | 8.11 | 6.85 | 5.54 | 6.37 | 5.04
Table 16. Global average rankings of the methods with regard to their quality metrics and their average CPU times (s) for all images.
All Images | ADF | CBF | FPDE | GFCE | GTF | HMSD | IFEVIP | MSVD | VSMWLS | CNN | GD5 | GD10 | GD15 | GDPSQABF | GDPSQCB | GDPSQCV
Avg. Ranking | 8.09 | 8.64 | 8.58 | 12.79 | 10.61 | 7.59 | 11.41 | 9.98 | 7.71 | 7.62 | 8.62 | 7.30 | 6.72 | 6.90 | 7.00 | 6.44
Avg. CPU Time (s) | 0.56 | 14.08 | 1.76 | 1.46 | 5.63 | 6.28 | 0.15 | 0.57 | 2.30 | 22.99 | 0.16 | 0.18 | 0.20 | 19.65 | 15.40 | 21.72
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
