Article

Multi-Focus Image Fusion Method Based on Multi-Scale Decomposition of Information Complementary

1 College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
2 College of Computer and Information Science, Chongqing Normal University, Chongqing 401331, China
3 College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
* Author to whom correspondence should be addressed.
Entropy 2021, 23(10), 1362; https://doi.org/10.3390/e23101362
Submission received: 10 August 2021 / Revised: 12 October 2021 / Accepted: 14 October 2021 / Published: 19 October 2021
(This article belongs to the Special Issue Advances in Image Fusion)

Abstract

Multi-focus image fusion is an important method used to combine the focused parts from several source multi-focus images into a single full-focus image. The key to multi-focus image fusion is how to accurately detect the focused regions, especially when the source images captured by cameras exhibit anisotropic blur and unregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. Firstly, the method uses two structurally complementary groups of large-scale and small-scale decomposition schemes to perform a two-scale, double-layer singular value decomposition of each image and obtain low-frequency and high-frequency components. Then, the low-frequency components are fused by a rule that integrates the local image energy with the edge energy, and the high-frequency components are fused by the parameter-adaptive pulse-coupled neural network (PA-PCNN) model, where different detailed features are selected as the external stimulus input of the PA-PCNN according to the feature information contained in each decomposition layer of the high-frequency components. Finally, based on the structurally complementary two-scale decomposition of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained; by refining the initial decision maps, the final fusion decision map is obtained to complete the image fusion. In addition, the proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that the proposed method can more accurately distinguish the focused and non-focused areas for both pre-registered and unregistered images, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.

1. Introduction

Due to the limited depth of field of optical lenses, the images obtained by a camera contain both focused and defocused parts. Focused parts are sharper in the image, while defocused parts appear blurry. In order to obtain full-focus images, a common solution is to use multi-focus image fusion technology to combine the focused parts of different images of the same scene. The combined full-focus image is globally sharp and rich in detail, and is more suitable for visual perception and computer processing. As an important branch of image fusion, multi-focus image fusion can be studied on three different levels, i.e., the pixel level, feature level, and decision level [1]. Compared with the other two levels, pixel-level image fusion can maximally preserve the original information in the source images, giving it an edge in accuracy and robustness. Accordingly, it has become a common fusion approach for multi-focus images. The method proposed in this paper performs pixel-level multi-focus image fusion.
Multi-scale decomposition (MSD) is a technique commonly applied in pixel-level multi-focus image fusion, and it has proven to be a very useful image analysis tool. MSD-based fusion methods can extract image feature information on different scales for image fusion. The mechanism of an MSD fusion method is as follows. Firstly, the source images are decomposed into multi-scale spaces by MSD, where one approximate component contains the contours and several detail components store the salient features. Then, the decomposed coefficients of all scale spaces are fused, following the designed fusion strategies. Finally, the inverse multi-scale decomposition is applied to reconstruct the final fused image. Undoubtedly, the choice of the multi-scale decomposition method and the design of the fusion strategy are two important factors in image fusion.
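To make the decompose-fuse-reconstruct mechanism described above concrete, the following minimal sketch illustrates a generic MSD fusion pipeline. It deliberately uses a toy two-scale split (a box-filtered base layer plus a detail residual) and simple average/max-absolute rules as stand-ins; it is not the MSVD scheme proposed later in this paper, and all function names are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def two_scale_decompose(img, size=31):
    """Toy two-scale decomposition: a box-filtered base layer plus the detail residual."""
    base = uniform_filter(img.astype(float), size)
    return base, img - base

def msd_fuse(img_a, img_b):
    """Generic MSD fusion pipeline: decompose both inputs, fuse each scale with
    its own rule, then reconstruct by inverting the decomposition (here, a sum)."""
    base_a, det_a = two_scale_decompose(img_a)
    base_b, det_b = two_scale_decompose(img_b)
    fused_base = 0.5 * (base_a + base_b)                                 # average rule
    fused_det = np.where(np.abs(det_a) >= np.abs(det_b), det_a, det_b)   # max-abs rule
    return fused_base + fused_det
```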
The selection of the multi-scale decomposition method needs to factor in the following aspects. The first is feature extraction performance: one advantage of MSD-based methods is their ability to separate spatially overlapping features across scales. The second is the efficiency of the decomposition algorithm, since in practical applications the execution efficiency is a key indicator. The last is generalization, i.e., the ability to handle various types of images, including those with anisotropic blur and unregistration.
Since the 1980s and 1990s, various multi-scale decomposition methods have been applied to multi-focus image fusion [2], mainly including the Laplacian pyramid (LP) [3], the gradient pyramid (GP) [3], the discrete wavelet transform (DWT) [4], and so on. Although DWT improves computational efficiency compared with LP and GP, it lacks shift invariance and direction selectivity, which undermines feature extraction. To address these problems, the dual-tree complex wavelet transform (DTCWT) [5] was proposed; it has shift invariance and direction selectivity, and has been successfully applied to multi-focus image fusion [6]. Compared with the pyramid and wavelet transforms, the multi-scale geometric analysis (MGA) methods [7,8,9] better reflect the inherent geometric structure of the image and perform better in feature extraction, but their computation is more complex and time-consuming.
In recent years, scholars have proposed new and efficient multi-scale decomposition methods, which show good performance in multi-focus image fusion. Typical methods include the following. Li et al. [10] proposed a two-scale decomposition method for multi-focus fusion with the guided filtering technique: through simple average filtering, each source image is decomposed into a base layer with large-scale variations and a detail layer containing small-scale details. The method is superior to many traditional MSD-based methods in terms of fusion performance and computational efficiency. Xiao et al. [11] used the multi-scale Hessian matrix to decompose the source images into small-scale feature components and large-scale background components and effectively remove the pseudo-edges introduced by image blurring and unregistration; the method shows good feature extraction and generalization capabilities. Zhang et al. [12] proposed a multi-scale decomposition scheme that changes the size of the structural elements and extracts the morphological gradient information of the image on different scales to achieve multi-focus image fusion; the method shows the best execution efficiency. Naidu [13] and Wan et al. [14] proposed combining multi-scale analysis with singular value decomposition for multi-focus image fusion. These methods achieve stability and orthogonality equivalent to those of SVD, and since no convolution operations are required, the fast decomposition leads to high execution efficiency.
In addition to developing novel methods for MSD, fusion rules also play a key role in image fusion. Advanced fusion rules and MSD methods form a complementary whole in image fusion, which promotes fusion performance. The fusion rules of multi-focus images are usually designed based on a focus measure between pixels. Commonly used focus measures include the spatial frequency (SF) [15], the sum-modified-Laplacian (SML) [16], the standard deviation (STD) [17], the energy of gradient (EOG) [18], etc. Generally, simple pixel-based fusion rules, such as the direct comparison of decomposition coefficients or spatial-context-based weighted averaging, do not handle anisotropic blur and misregistration well. To improve the fusion results, some more complex fusion rules have been proposed, among them block-based and region-based methods [19,20]. Firstly, the original images are divided into a number of blocks or regions. Then, the focus level and sharpness of each block or region is measured from the image intensity. Finally, the block or region with the higher degree of focus is selected as part of the fused image. However, the quality of image fusion depends on the selection of the image block size or the segmentation algorithm: when the block size is not selected correctly or the segmentation algorithm cannot correctly segment the area, the focus area cannot be correctly determined and extracted, and the boundary between the focused and defocused areas is prone to blur. Zhou et al. [21] proposed a new focus-measure-based fusion method built on a multi-scale structure, which uses large-scale and small-scale focus measures to determine the clear focus area and the weight map of the transition area, respectively. This method can reduce the influence of anisotropic blur and unregistration on image fusion; however, the transition area is set artificially and cannot accurately reflect the focus property of the boundary. Ma et al. [22] proposed a random-walk-based method with a two-scale focus measure for multi-focus image fusion, which estimates a focus map directly from the two-scale imperfect observations obtained using small-scale and large-scale focus measures. Since the random walk algorithm models the estimation from the perspective of probability, this method is relatively time-consuming. In addition to the commonly used linear-model fusion rules mentioned above, there are also fusion rules based on non-linear methods. Dong et al. [23] proposed a multi-focus image fusion scheme based on a memristor-based PCNN. Hao et al. [24] reviewed the state of the art on the use of deep learning in various types of image fusion scenarios. The generative adversarial network proposed by Guo et al. [25] has also been successfully applied to multi-focus image fusion. In deep learning models for multi-focus image fusion, the measurement of pixel activity level is obtained through the model; however, the difficulty of training a large number of parameters on large datasets directly affects the image fusion efficiency and quality. Compared with deep learning methods, conventional fusion methods are more extensible and repeatable, facilitating real-world applications. Thus, this paper mainly aims to improve conventional multi-focus image fusion algorithms.
According to the above analyses, the decomposition scheme and the focus measures involved in the fusion strategy play important roles in multi-focus image fusion. In recent years, many novel algorithms have been proposed to improve image fusion quality, but some problems still need to be addressed. Firstly, due to the diversity of the fused images, the contour and detailed information of the images cannot be fully expressed when the images are decomposed by fixed basis and filter functions. Secondly, the boundary between the focused and defocused areas of the image gives rise to false edges, mainly because the boundary between the two areas is not clearly distinguished or because the two images are not registered. Finally, artifacts are easily generated between focused and defocused flat regions, since the image details in those regions are extremely scarce [11].
In order to solve these problems, a novel multi-focus image fusion method based on multi-scale singular value decomposition (MSVD) is proposed in this paper. The method obtains low-frequency and high-frequency components with complementary information through two groups of double-layer decompositions with complementary structures and scales, and these components contain rich image structure and detail information. The proposed fusion rules are then applied to fuse each component and obtain the final fused image. Concretely, different fusion strategies and focus measures are used to fuse the high-frequency and low-frequency sub-images, respectively, and two initial decision maps with complementary information are obtained. Hence, a definite focus area and a non-definite focus area are obtained. After that, the non-definite focus area is refined and transformed into a definite focus area, and the final decision map used to complete the image fusion is obtained. The proposed method has the following advantages. Firstly, two groups of decomposition schemes with complementary structures and scales are designed to accurately locate the focus property of the boundary. Secondly, the proposed method combines multi-scale analysis and singular value decomposition for multi-focus image fusion; singular value decomposition diagonalizes the image matrix according to the size of the eigenvalues, so there is no redundancy between the decomposed components, which makes it suitable to apply different fusion rules to each component. Finally, by exploiting the image feature information contained in each decomposition layer of the low-frequency and high-frequency components, selecting different focus measures can better extract the image feature information.
Compared with the existing multi-focus image fusion method, the main innovations of the proposed method are as follows:
  • For the first time, the paper uses MSVD decomposition schemes with complementary structures and scales, which enhances the complementarity of the extracted image feature information and improves the ability to detect the focus area, so that the structure and detailed information of the image can be fully extracted.
  • To fully extract the structure and details of the image, the complementary features extracted by different focus measures are used as the external stimulus inputs of the PA-PCNN.
  • Experiments are performed to verify the efficiency of the proposed method. The results show that the proposed method can effectively eliminate the pseudo edges caused by anisotropic blur or unregistration.
The structure of this paper is organized as follows. Section 2 proposes the multi-focus image fusion model based on multi-scale decomposition of information complementary. Section 3 analyses and discusses the results of the comparison with the latest methods. Finally, conclusions for this paper are provided in Section 4.

2. Proposed Multi-Focus Image Fusion Algorithm

Due to object displacement or camera shake during image acquisition, multi-focus images may suffer from unregistration and anisotropic blur. These factors can lead to erroneous focus judgments in the focus map obtained by the focus measure (FM), which makes the fused image appear blurred and distorted. In order to solve the above problems, Zhou et al. [21] proposed a two-scale fusion scheme: a large scale can better reduce blur and unregistration, and a small scale can better retain details, so that the halo effect in the fused image can be mitigated. However, this algorithm calculates its saliency map based on the covariance matrix of the region, and the fusion effect is not good for images without obvious edges or corners. In addition, an unknown area is defined near the boundary pixels of the focus area, and its width is set to $4\delta_1$. This artificially set unknown area cannot accurately reflect the focus property of the boundary and will affect the fusion. In response to the above problems, we propose a multi-focus image decomposition strategy based on multi-scale singular value decomposition. In this strategy, two groups of low-frequency and high-frequency components with complementary information are obtained by a two-level decomposition with complementary structures and scales. According to the proposed fusion rules, each component is fused to obtain the final fusion image.
Figure 1a shows the first group of decomposition schemes. In the first layer, the source image is divided into blocks of size 3 × 5 to achieve the large-scale decomposition of the image; in the second layer, the low-frequency component obtained from the first layer is divided into blocks of size 2 × 3 to achieve the small-scale decomposition. Figure 1b shows the second group of decomposition schemes. In the first layer, the source image is divided into blocks of size 5 × 3 to achieve the large-scale decomposition; in the second layer, the low-frequency component obtained from the first layer is divided into blocks of size 3 × 2 to achieve the small-scale decomposition (see Section 2.1.2 for details of the image partitioning method).
The multi-scale decomposition scheme proposed in this paper uses block operations to achieve the large-scale and small-scale decomposition of the image. Large-scale decomposition better retains the image structure information, and small-scale decomposition better retains the image detail information. Through the proposed fusion rules, the high- and low-frequency components obtained by the two decomposition schemes are fused, and two fusion decision maps with complementary information are obtained. These two complementary decision maps compensate for the poor fusion performance on images without obvious edges or corners, and they can also determine the blurred region near the boundary pixels of the focus region. Figure 2 shows the two complementary fusion decision maps obtained through the two decomposition schemes shown in Figure 1, as well as the initial decision map determined from them. The initial decision map contains the definite focus area and the non-definite focus area.

2.1. Multi-Scale Singular Value Decomposition of Multi-Focus Image

2.1.1. Multi-Scale Singular Value Decomposition

MSVD is an image decomposition method with simple calculations that is suitable for real-time applications. It uses singular value decomposition (SVD) to perform a function similar to that of the FIR filters in the wavelet transform; however, unlike the wavelet transform, MSVD does not rely on a fixed set of basis vectors to decompose images, since its basis vectors depend on the image itself [13].
Let $X \in \mathbb{R}^{m \times n}$ be the matrix form of the image $f(x, y)$. There exist orthogonal matrices $U \in \mathbb{R}^{m \times m}$ and $V \in \mathbb{R}^{n \times n}$ such that:
$$U^{T} X V = \begin{bmatrix} \Lambda_r & 0 \\ 0 & 0 \end{bmatrix} = \Lambda, \quad \Lambda \in \mathbb{R}^{m \times n} \tag{1}$$
According to the transformation of Equation (1), the singular value decomposition of X can be obtained as:
$$X = U \begin{bmatrix} \Lambda_r & 0 \\ 0 & 0 \end{bmatrix} V^{T} \tag{2}$$
In Equation (2), $\Lambda_r = \mathrm{diag}\{\lambda_1, \lambda_2, \ldots, \lambda_r\}$ with $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_r$, where $r$ is the rank of the matrix $X$ and $\lambda_i\ (1 \leq i \leq r)$ are the singular values of $X$. The matrix singular values have strong stability and do not change with image scaling and rotation. $U$ and $V$ contain the eigenvectors corresponding to the singular values, and they depend on the image $X$. The amount of image information represented by an eigenvector is positively related to the size of the corresponding singular value: the larger the singular value, the more image information it carries, which corresponds to the approximate part of the image, while the smaller singular values correspond to the detailed, high-frequency parts of the image. Therefore, the image can be separated into approximate and detailed information according to the size of the singular values.
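As a quick illustration of this separation (plain NumPy, not anything specific to the paper), the rank-one term associated with the largest singular value can be taken as the approximate part and the remainder as the detail part:

```python
import numpy as np

# Split a matrix into an 'approximate' part carried by the largest singular
# value and a 'detail' residual carried by the smaller ones.
X = np.random.rand(6, 8)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
approx = s[0] * np.outer(U[:, 0], Vt[0])   # dominant, smooth structure
detail = X - approx                        # remaining texture / edge content
assert np.allclose(approx + detail, X)
```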

2.1.2. Decomposition of Multi-Focus Image

In order to achieve the multi-scale decomposition of a multi-focus image of size $M \times N$, the image is divided into non-overlapping $m \times n$ blocks, and each sub-block is arranged into an $mn \times 1$ vector. By combining these column vectors, a matrix $X$ of size $mn \times MN/(mn)$ is obtained. The singular value decomposition of $X$ is:
$$X = U \Lambda V^{T} \tag{3}$$
U and V are orthogonal matrices, according to Equation (3):
$$S = U^{T} X = \Lambda V^{T} \tag{4}$$
The size of the matrix $S$ is $mn \times MN/(mn)$.
According to the singular value decomposition described above, the first column vector of $U$ corresponds to the maximum singular value. When $U^T$ left-multiplies the matrix $X$, the first row $S(1,:)$ of $S$ carries the main information of the original image and can be regarded as the approximate or smooth component of the original image. Similarly, the other rows $S(2:mn,:)$ of $S$ correspond to smaller singular values and retain detailed information such as the texture and edges of the original image. Therefore, through singular value decomposition, the image can be decomposed into low-frequency and high-frequency sub-images according to the singular values, thereby achieving the multi-scale decomposition of the image. The schematic diagram of the multi-focus image MSVD scheme proposed in this paper is illustrated in Figure 1. In order to clearly illustrate the image decomposition process, assume a source image with a size of 300 × 300. According to the decomposition scheme in Figure 1a and the image decomposition steps mentioned above, the source image is divided into blocks of size 3 × 5 to achieve the first-layer, large-scale decomposition. After that, 1 low-frequency component and 14 high-frequency components are obtained, and the size of each component is 100 × 60. The second layer of decomposition divides the low-frequency component of the first layer into blocks of size 2 × 3 to achieve the small-scale decomposition; 1 low-frequency component and 5 high-frequency components are obtained, and the size of each component is 50 × 20. After the fusion of the components, the final fused image is acquired through the inverse MSVD transformation.
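The block-to-matrix arrangement and the forward/inverse transforms described above can be sketched as follows. This is a minimal NumPy sketch under our own reading of the scheme (function and variable names are ours, and the image size is assumed to be divisible by the block size); it is not the authors' released code.

```python
import numpy as np

def msvd_decompose(img, m, n):
    """One level of MSVD: partition `img` (M x N) into non-overlapping m x n
    blocks, vectorise each block into an mn x 1 column of X, compute the SVD
    of X and project with U^T. Row 1 of S is the low-frequency sub-image; the
    remaining mn - 1 rows are high-frequency sub-images. Returns the
    sub-images reshaped to (M/m) x (N/n), plus U for the inverse transform."""
    M, N = img.shape
    assert M % m == 0 and N % n == 0, "image size must be divisible by block size"
    # Arrange the blocks as columns of X (row-major scan over the block grid).
    X = img.reshape(M // m, m, N // n, n).transpose(0, 2, 1, 3).reshape(-1, m * n).T
    U, s, Vt = np.linalg.svd(X, full_matrices=False)   # X is mn x (MN/mn)
    S = U.T @ X                                        # = diag(s) @ Vt
    subimages = S.reshape(m * n, M // m, N // n)       # one sub-image per row of S
    return subimages[0], subimages[1:], U

def msvd_reconstruct(low, highs, U, m, n):
    """Inverse of msvd_decompose: re-stack the sub-images into S, left-multiply
    by U, and re-tile the columns back into the image grid."""
    rows, cols = low.shape                             # rows = M/m, cols = N/n
    S = np.concatenate([low[None], highs], 0).reshape(m * n, rows * cols)
    blocks = (U @ S).T.reshape(rows, cols, m, n)
    return blocks.transpose(0, 2, 1, 3).reshape(rows * m, cols * n)
```

With $m \times n = 3 \times 5$ and a 300 × 300 input, `msvd_decompose` returns one 100 × 60 low-frequency sub-image and 14 high-frequency sub-images, matching the sizes quoted above.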

2.2. Low-Frequency Component Fusion

The low-frequency sub-image of the multi-focus image obtained by the MSVD decomposition scheme proposed in this paper reflects the overall characteristics of the image and mainly contains contour and energy information. In this paper, we use the algebraic operations and spatial characteristics of quaternions to calculate the local energy of the low-frequency components. A joint bilateral filter (JBF) is used to obtain the structural information of the low-frequency components, and the energy and structural information are combined to calculate the weights that form the fusion decision map. The fused low-frequency components are then obtained according to the decision map.

2.2.1. Quaternion

Quaternions were first introduced in 1843 by British mathematician Hamilton [26]. They can be considered an extension of complex numbers. The general form of a quaternion is expressed as follows:
$$Q = a + bi + cj + dk \tag{5}$$
where
$$i^2 = j^2 = k^2 = ijk = -1, \quad ij = -ji = k, \quad jk = -kj = i, \quad ki = -ik = j$$
and where a is the real part, bi, cj, and dk are three imaginary parts. If the real part a is zero, Q is called a pure quaternion.
The modulus of a quaternion is defined as:
$$\|Q\| = \sqrt{Q\bar{Q}} = \sqrt{a^2 + b^2 + c^2 + d^2} \tag{6}$$
where $\bar{Q}$ is the conjugate of the quaternion $Q$, $\bar{Q} = a - bi - cj - dk$.
The unit vector of a quaternion Q is defined as:
$$q = \frac{Q}{\|Q\|} \tag{7}$$
Define two quaternions as
$$Q_1 = a + q_{v1} \quad \text{and} \quad Q_2 = b + q_{v2}$$
As shown in Equation (8), their quaternion product can be represented using the dot product and the cross product:
$$Q_1 Q_2 = (a + q_{v1})(b + q_{v2}) = (ab - q_{v1} \cdot q_{v2}) + (a q_{v2} + b q_{v1} + q_{v1} \times q_{v2}) \tag{8}$$
where $q_{v1}$ and $q_{v2}$ are the vector parts of the two quaternions, and $q_{v1} \cdot q_{v2}$ and $q_{v1} \times q_{v2}$ denote the dot product and cross product of the two vectors, respectively.
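For readers who prefer code, Equation (8) translates directly into a few lines of NumPy; the snippet below is a small illustrative sketch (the names are ours), not part of the proposed pipeline.

```python
import numpy as np

def quat_mul(a, va, b, vb):
    """Quaternion product of Q1 = a + va and Q2 = b + vb (scalar + 3-vector parts),
    using the dot/cross form of Equation (8)."""
    scalar = a * b - np.dot(va, vb)
    vector = a * vb + b * va + np.cross(va, vb)
    return scalar, vector

def quat_norm(a, v):
    """Modulus |Q| = sqrt(a^2 + |v|^2)."""
    return np.sqrt(a * a + np.dot(v, v))

# Example: the pure quaternions i and j multiply to k.
s, v = quat_mul(0.0, np.array([1.0, 0.0, 0.0]), 0.0, np.array([0.0, 1.0, 0.0]))
print(s, v)   # 0.0 [0. 0. 1.]
```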

2.2.2. Joint Bilateral Filter

The bilateral filter (BF) is a nonlinear filter that combines the spatial proximity and pixel-value similarity of the image; it can achieve edge preservation and denoising during image fusion. However, the weights of the bilateral filter are not stable enough. The joint bilateral filter (JBF) introduces a guiding image on the basis of the bilateral filter, making the weights more stable. The JBF can be expressed as follows:
$$J(x) = \frac{1}{W}\sum_{y \in \Omega} G(x, y, \delta_s)\, G(O(x), O(y), \delta_r)\, I(y) \tag{9}$$
W is the regularization factor, defined as:
$$W = \sum_{y \in \Omega} G(x, y, \delta_s)\, G(O(x), O(y), \delta_r)$$
The Gaussian kernel function G is expressed as:
$$G(x, y, \delta) = \exp\left(-\frac{\|x - y\|^2}{2\delta^2}\right)$$
In Equation (9), $\Omega$ denotes the set of adjacent pixels, and $\delta_s$ and $\delta_r$ are the parameters of the two Gaussian kernel functions, which control the influence of the Euclidean distance and the pixel similarity, respectively. The Gaussian kernel attenuates as the distance between $x$ and $y$ increases. When the distance between $x$ and $y$ is less than $\delta_s$, or the difference between the two pixel values is less than $\delta_r$, the pixel value $I(y)$ of $y$ has a greater impact on the value of $J(x)$. Different from the bilateral filter, $O(x)$ and $O(y)$ are the guiding pixel values of $x$ and $y$, respectively. The guiding image $O$ provides more reliable information about the structure of the output image and yields a more optimized similarity kernel weight.
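A naive, readable implementation of Equation (9) is sketched below to make the definition concrete. It is a straightforward double loop intended only for illustration (practical code would use a vectorised or approximate variant); the function name and parameters are ours.

```python
import numpy as np

def joint_bilateral_filter(I, O, radius, sigma_s, sigma_r):
    """Joint bilateral filter: the spatial kernel is computed on pixel
    coordinates and the range kernel on the guide image O, while the filtered
    values are taken from the input I."""
    H, W = I.shape
    Ip = np.pad(I.astype(float), radius, mode='reflect')
    Op = np.pad(O.astype(float), radius, mode='reflect')
    # Precompute the spatial Gaussian over the (2r+1) x (2r+1) window.
    ax = np.arange(-radius, radius + 1)
    dx, dy = np.meshgrid(ax, ax)
    Gs = np.exp(-(dx**2 + dy**2) / (2 * sigma_s**2))
    out = np.zeros_like(I, dtype=float)
    for y in range(H):
        for x in range(W):
            win_I = Ip[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            win_O = Op[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            Gr = np.exp(-(win_O - Op[y + radius, x + radius])**2 / (2 * sigma_r**2))
            w = Gs * Gr                              # combined kernel weights
            out[y, x] = np.sum(w * win_I) / np.sum(w)
    return out
```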

2.2.3. Low-Frequency Component Fusion Rule

The low-frequency component contains most of the energy and contour information of the image; therefore, both should be taken into account during low-frequency fusion. In this paper, a new low-frequency component fusion method is proposed. Firstly, the local energy of the low-frequency component is calculated using the pixel neighborhood represented by quaternions. Secondly, the JBF is used to obtain the edge contour information of the low-frequency component. Then, the local energy and the edge energy are combined to calculate the weight of the low-frequency component and obtain the fusion decision map. Finally, the fused low-frequency component is obtained according to the decision map. The detailed fusion process is as follows (a brief code sketch of this rule is given after the list):
1. Select the pixels in the 3 × 3 neighborhood of the target pixel to construct the quaternions $Q_{I1}$ and $Q_{I2}$, and calculate the local energy $E_I^L$ of the low-frequency component:
$$E_I^L(x, y) = \left\| Q_{I1}\, Q_{I2}\, f_I^L(x, y) \right\| \tag{10}$$
$$Q_{I1} = f(x, y+1) + i\,f(x, y-1) + j\,f(x-1, y) + k\,f(x+1, y)$$
$$Q_{I2} = f(x+1, y+1) + i\,f(x+1, y-1) + j\,f(x-1, y-1) + k\,f(x-1, y+1)$$
In Equation (10), $I = A, B$, and $(x, y)$ denotes the position of the low-frequency component pixel. $Q_{I1}$ is the quaternion formed by the four horizontally and vertically adjacent pixels in the neighborhood of pixel $(x, y)$, and $Q_{I2}$ is the quaternion formed by the diagonal pixels in the neighborhood of $(x, y)$. In the calculation of $E_I^L$, $Q_{I1}$ and $Q_{I2}$ are constructed as unit quaternions according to Equation (7).
2. The JBF is used to process the local energy map $E_I^L$ of the low-frequency component to obtain the edge energy map $S_I^L$:
$$S_I^L = \mathrm{JBF}\left(E_I^L, f_I^L, w, \delta_s, \delta_r\right) \tag{11}$$
In Equation (11), $E_I^L$ is the local energy of the low-frequency component, the low-frequency component $f_I^L$ serves as the guide map, $w$ is the local window radius, $\delta_s$ is the standard deviation of the spatial kernel, and $\delta_r$ is the standard deviation of the range kernel.
3. According to the local energy $E_I^L$ and the edge energy $S_I^L$ of the low-frequency component, the weight of the low-frequency component is calculated:
$$d_A^L = \begin{cases} 1, & \text{if } E_A^L S_A^L \geq E_B^L S_B^L \\ 0, & \text{otherwise} \end{cases}, \qquad d_B^L = 1 - d_A^L$$
4. The fused low-frequency component is obtained by the following formula:
$$f_F^L = d_A^L f_A^L + d_B^L f_B^L$$
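The sketch below outlines steps 2-4 of this rule in NumPy/SciPy. One caveat: the quaternion-based local energy of Equation (10) is replaced here by a simple squared-intensity box filter purely as a stand-in, so the sketch only shows where the energy and edge-energy maps enter the rule rather than reproducing the exact measure; `jbf` is assumed to be a joint bilateral filter such as the one sketched in Section 2.2.2, and all names are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_energy(f, size=3):
    """Stand-in local-energy measure (mean of squared values in a 3x3 window);
    the paper builds this from quaternion products of the 3x3 neighbourhood."""
    return uniform_filter(f.astype(float) ** 2, size=size)

def fuse_low_frequency(fA, fB, jbf, w=3, sigma_s=2.0, sigma_r=0.1):
    """Low-frequency fusion rule: combine the local energy E with the JBF edge
    energy S (Equations (10)-(11)) to build the weight map and fuse."""
    EA, EB = local_energy(fA), local_energy(fB)
    SA = jbf(EA, fA, w, sigma_s, sigma_r)    # fA acts as the guide image
    SB = jbf(EB, fB, w, sigma_s, sigma_r)
    dA = (EA * SA >= EB * SB).astype(float)  # weight map d_A^L
    dB = 1.0 - dA
    return dA * fA + dB * fB, dA
```

For example, passing the `joint_bilateral_filter` sketch from Section 2.2.2 as `jbf` returns both the fused low-frequency component and the weight map `dA`.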

2.3. High-Frequency Component Fusion

The high-frequency components correspond to the sharply changing parts of the image, including the texture, details, and edge information, which affect the clarity and visual effect of the fused image. The pulse-coupled neural network (PCNN) is a simplified artificial neural network constructed by Eckhorn based on the visual cortex of the cat; its signal form and processing mechanism are close to the physiological characteristics of the human visual nervous system. In order to improve the quality of the fused image, this paper proposes a parameter-adaptive PCNN strategy to fuse the high-frequency components. For the first layer of image decomposition, the local spatial frequency (SF) is selected as the external stimulus input of the PCNN, and for the second layer, the local standard deviation (STD) is selected as the external stimulus input of the PCNN.

2.3.1. PA-PCNN

The PCNN can capture image edge and detail information without any training process. It is a single-layer feedback network composed of several interconnected neurons and has three functional units: the feedback input domain, the connection input domain, and the pulse generation domain. The traditional PCNN model requires parameters such as the link strength, various amplitudes, and attenuation coefficients to be determined manually. To avoid the drawbacks of manually setting parameters, a simplified PCNN model [27,28] has been proposed, which is described as follows:
$$F_{ij}[n] = S_{ij}$$
$$L_{ij}[n] = V_L \sum_{kl} W_{ijkl} Y_{kl}[n-1]$$
$$U_{ij}[n] = e^{-a_f} U_{ij}[n-1] + F_{ij}[n]\left(1 + \beta L_{ij}[n]\right)$$
$$Y_{ij}[n] = \begin{cases} 1, & \text{if } U_{ij}[n] > E_{ij}[n-1] \\ 0, & \text{otherwise} \end{cases}$$
$$E_{ij}[n] = e^{-a_e} E_{ij}[n-1] + V_E Y_{ij}[n]$$
$F_{ij}[n]$ and $L_{ij}[n]$ are the external stimulus input and the link input of the pixel at position $(i, j)$ during the $n$th iteration, and $S_{ij}$ is the input image. The parameter $V_L$ is the amplitude of the link input $L_{ij}[n]$, which controls $L_{ij}[n]$ together with $W_{ijkl}$ and $Y_{kl}[n-1]$, where $W_{ijkl} = \begin{bmatrix} 0.5 & 1 & 0.5 \\ 1 & 0 & 1 \\ 0.5 & 1 & 0.5 \end{bmatrix}$ is the synaptic weight matrix. The internal activity item $U_{ij}[n]$ consists of two parts: the first part, $e^{-a_f} U_{ij}[n-1]$, is the exponentially decayed internal activity of the previous iteration, where $a_f$ is the exponential decay coefficient; the second part, $F_{ij}[n](1 + \beta L_{ij}[n])$, is the nonlinear modulation of $F_{ij}[n]$ and $L_{ij}[n]$, where the parameter $\beta$ is the link strength. $Y_{ij}[n]$ depends on the current internal activity item $U_{ij}[n]$ and the dynamic threshold $E_{ij}[n-1]$ of the last iteration: when $U_{ij}[n] > E_{ij}[n-1]$, $Y_{ij}[n] = 1$ and the PCNN is in the firing state; otherwise, $Y_{ij}[n] = 0$ and the PCNN is in the unfired state. $a_e$ and $V_E$ are the exponential decay coefficient and the amplitude of $E_{ij}[n]$, respectively. There are five free parameters in the parameter-adaptive PCNN model: $a_f$, $\beta$, $V_L$, $a_e$, and $V_E$. These parameters can be calculated by the following formulas [27,28]:
$$a_f = \log\left(\frac{1}{\delta(S)}\right), \qquad \beta V_L = \frac{1}{6}\left(\frac{S_{max}}{S'} - 1\right)$$
$$V_E = e^{-a_f} + 1 + 6\beta V_L$$
$$a_e = \ln\left[\frac{V_E}{S'\left(\dfrac{1 - e^{-3 a_f}}{1 - e^{-a_f}} + 6\beta V_L\, e^{-a_f}\right)}\right]$$
The smaller the value of $a_f$, the greater the dynamic range of $U_{ij}[n]$. $\delta(S)$ is the standard deviation of the normalized image $S$. Since $\beta$ and $V_L$ only appear through their product $\beta V_L$, this product can be regarded as a single weighted link strength. The maximum intensity value $S_{max}$ of the input image and the optimal histogram threshold $S'$ jointly determine the value of $\beta V_L$, and $\beta V_L$ and $a_f$ are then combined to obtain $V_E$ and $a_e$. Figure 3 shows the PA-PCNN model used in the proposed multi-focus image fusion method.
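The following NumPy sketch runs the simplified PA-PCNN above and returns the per-pixel firing counts used later by the high-frequency fusion rule. It is our own hedged reading of the model: Otsu's threshold is used for the optimal histogram threshold $S'$, the zero initial states and the 110 iterations are common choices in the PA-PCNN literature rather than values stated in this paper, and $V_L$ is folded into the product $\beta V_L$, as the text allows.

```python
import numpy as np
from scipy.ndimage import convolve

def pa_pcnn_fire_counts(S, n_iter=110):
    """Simplified PA-PCNN of Section 2.3.1. S is the external stimulus (e.g. a
    local SF or STD map of a high-frequency band). Returns the accumulated
    firing counts T used by the high-frequency fusion rule."""
    S = S.astype(float)
    S = (S - S.min()) / (S.max() - S.min() + 1e-12)          # normalise to [0, 1]
    # Adaptive parameters; Otsu's threshold stands in for the optimal histogram
    # threshold S', and V_L is folded into the product beta * V_L.
    try:
        from skimage.filters import threshold_otsu
        S_prime = max(threshold_otsu(S), 1e-3)
    except ImportError:
        S_prime = max(S.mean(), 1e-3)                        # crude fallback
    a_f = np.log(1.0 / (S.std() + 1e-12))
    beta_VL = (S.max() / S_prime - 1.0) / 6.0
    V_E = np.exp(-a_f) + 1.0 + 6.0 * beta_VL
    a_e = np.log(V_E / (S_prime * ((1 - np.exp(-3 * a_f)) / (1 - np.exp(-a_f))
                                   + 6.0 * beta_VL * np.exp(-a_f))))
    W = np.array([[0.5, 1.0, 0.5], [1.0, 0.0, 1.0], [0.5, 1.0, 0.5]])
    # Zero initial states are a common choice; the paper does not specify them.
    U = np.zeros_like(S); E = np.zeros_like(S)
    Y = np.zeros_like(S); T = np.zeros_like(S)
    for _ in range(n_iter):
        L = convolve(Y, W, mode='constant')                  # link input (V_L folded in)
        U = np.exp(-a_f) * U + S * (1.0 + beta_VL * L)       # internal activity
        Y = (U > E).astype(float)                            # firing output
        E = np.exp(-a_e) * E + V_E * Y                       # dynamic threshold
        T += Y                                               # accumulate firing counts
    return T
```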

2.3.2. Space Frequency and Standard Deviation

The spatial frequency (SF) and standard deviation (STD) of an image are two important indicators of the details of the image.
Spatial frequency is defined as:
$$SF = \sqrt{RF^2 + CF^2}, \qquad RF = \sqrt{\frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=2}^{N}\left[f(i, j) - f(i, j-1)\right]^2}, \qquad CF = \sqrt{\frac{1}{M \times N}\sum_{i=2}^{M}\sum_{j=1}^{N}\left[f(i, j) - f(i-1, j)\right]^2}$$
RF is the row frequency and CF is the column frequency. The spatial frequency (SF) of the image indicates the clarity of the spatial details of the image.
Standard deviation is defined as:
$$STD = \sqrt{\frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N}\left[f(i, j) - \mu\right]^2}, \qquad \mu = \frac{1}{M \times N}\sum_{i=1}^{M}\sum_{j=1}^{N} f(i, j)$$
The image standard deviation represents the statistical distribution and contrast of the image. The larger the standard deviation, the more scattered the gray level distribution, the greater the contrast, and the more prominent the image details. μ is the mean value of the image.
Spatial frequency and standard deviation reflect the details of the image from different aspects, and the two indicators are complementary.
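Both measures are only a few lines of NumPy; the sketch below follows the definitions above, with the caveat that the $1/(M \times N)$ normalisation is approximated by the mean over the available difference terms.

```python
import numpy as np

def spatial_frequency(f):
    """SF = sqrt(RF^2 + CF^2) from row and column intensity differences."""
    f = f.astype(float)
    rf2 = np.mean(np.diff(f, axis=1) ** 2)   # row frequency (horizontal differences)
    cf2 = np.mean(np.diff(f, axis=0) ** 2)   # column frequency (vertical differences)
    return np.sqrt(rf2 + cf2)

def standard_deviation(f):
    """STD of the image around its mean, as defined above."""
    f = f.astype(float)
    return np.sqrt(np.mean((f - f.mean()) ** 2))
```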

2.3.3. High-Frequency Component Fusion Rule

The high-frequency components of the source image obtained through the multi-scale, multi-layer decomposition contain important details of the image. As the number of decomposition layers increases, the detailed features of the high-frequency components become more prominent. In order to make the fusion effect better match the physiological characteristics of the human visual nervous system, the local spatial frequency (SF) and the local standard deviation (STD) are selected as the external stimulus inputs of the PA-PCNN for the first-layer and second-layer high-frequency components, respectively, to achieve the fusion of the high-frequency components. The fusion procedure of the high-frequency components is as follows:
  • In the first layer of decomposition, SF is used as the external stimulus input of the PA-PCNN, and the firing counts of the high-frequency components are obtained by:
$$T_S^1[n] = T_S^1[n-1] + Y_S^1[n], \quad (S = A, B)$$
  • The weight coefficients of the high-frequency components are obtained by:
$$d_A^{H1} = \begin{cases} 1, & \text{if } T_A^1[n] > T_B^1[n] \\ 0, & \text{otherwise} \end{cases}, \qquad d_B^{H1} = 1 - d_A^{H1}$$
  • The fused high-frequency components are obtained by:
$$f_F^{H1} = d_A^{H1} f_A^{H1} + d_B^{H1} f_B^{H1}$$
In the same way, STD is used as the external stimulus input of PA-PCNN to obtain the fused high-frequency components of the second layer decomposition.
$$f_F^{H2} = d_A^{H2} f_A^{H2} + d_B^{H2} f_B^{H2}$$
Here, H1 denotes the high-frequency components decomposed in the first layer, and H2 denotes the high-frequency components decomposed in the second layer.
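As a compact sketch (our names, not the released code), the rule above reduces to comparing firing counts pixel by pixel; `pcnn` stands for a firing-count function such as the PA-PCNN sketch in Section 2.3.1, applied to the SF map in the first layer and the STD map in the second.

```python
import numpy as np

def fuse_high_frequency(hA, hB, stimA, stimB, pcnn, n_iter=110):
    """High-frequency fusion rule: run the PA-PCNN on the chosen focus measure
    of each source band, compare the accumulated firing counts, and select the
    coefficients pixel-wise."""
    TA = pcnn(stimA, n_iter)          # firing counts for source A's stimulus
    TB = pcnn(stimB, n_iter)          # firing counts for source B's stimulus
    dA = (TA > TB).astype(float)      # weight map d_A^H
    return dA * hA + (1.0 - dA) * hB  # fused high-frequency component
```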

2.4. Non-Definite Focus Region Fusion

Multi-focus image fusion methods commonly obtain the final fused image from decision maps. However, the decision maps are often inaccurate, especially at the boundary between the focused and defocused regions. To better determine the focus attribute of the boundary, we define the aliasing region of the boundaries of the two complementary initial decision maps as the non-definite focus region (the red region in Figure 2e). On this basis, the measurement method combining the local spatial frequency (SF) and the local standard deviation (STD) (Section 2.3.2) is used to convert the non-definite focus region into a definite focus region, so that an accurate fusion decision map is obtained, which can effectively address the out-of-focus blur caused by anisotropic blur and unregistration. The specific fusion process is as follows (a code sketch of this refinement is given after the list):
  • Based on the two complementary decision maps, an initial decision map DF containing the definite focus region and the non-definite focus region is obtained.
$$D_F = (D_1 + D_2)/2, \qquad \begin{cases} D_F(i, j) \in D_{Iden}, & \text{if } D_F(i, j) = 1 \text{ or } D_F(i, j) = 0 \\ D_F(i, j) \in D_{Uniden}, & \text{if } D_F(i, j) = 0.5 \end{cases}$$
where $D_1$ is the fusion decision map obtained by the first group of decomposition schemes (Figure 2c), $D_2$ is the fusion decision map obtained by the second group of decomposition schemes (Figure 2d), and $D_F$ is the initial decision map (Figure 2e). When $D_F(i, j) = 1$ or $D_F(i, j) = 0$, $D_F(i, j)$ belongs to the definite focus region $D_{Iden}$; when $D_F(i, j) = 0.5$, $D_F(i, j)$ belongs to the non-definite focus region $D_{Uniden}$ (the red region in Figure 2e).
  • The weight coefficient of the non-definite focus region is calculated by
$$Q_U^{Uniden} = \frac{1}{m \times n}\sum_{m=0}^{w}\sum_{n=0}^{w}\left(SF_U(i+m, j+n) \cdot STD_U(i+m, j+n)\right), \quad U = A \text{ or } B$$
$$d_A^{Uniden} = \begin{cases} 1, & \text{if } Q_A^{Uniden} \cdot f_A^{Uniden} > Q_B^{Uniden} \cdot f_B^{Uniden} \\ 0, & \text{otherwise} \end{cases}, \qquad d_B^{Uniden} = 1 - d_A^{Uniden}$$
  • The non-definite focus region fusion is calculated by
$$f_F^{Uniden} = d_A^{Uniden} \cdot f_A^{Uniden} + d_B^{Uniden} \cdot f_B^{Uniden}$$
where $f_A^{Uniden}$ and $f_B^{Uniden}$ are the non-definite focus regions of the source multi-focus images.
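The refinement can be sketched as follows. Window-local SF and STD are approximated with box filters, so the exact windowing may differ from the paper's, and the function names are ours.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def refine_decision_map(D1, D2, fA, fB, w=7):
    """Combine the two complementary decision maps and resolve the undetermined
    pixels (value 0.5) with a local SF*STD focus measure, as in Section 2.4."""
    DF = (D1 + D2) / 2.0
    undecided = np.isclose(DF, 0.5)

    def local_sf_std(f):
        f = f.astype(float)
        gx = np.zeros_like(f); gx[:, 1:] = np.diff(f, axis=1)   # row differences
        gy = np.zeros_like(f); gy[1:, :] = np.diff(f, axis=0)   # column differences
        sf = np.sqrt(uniform_filter(gx ** 2, w) + uniform_filter(gy ** 2, w))
        mu = uniform_filter(f, w)
        std = np.sqrt(np.maximum(uniform_filter(f ** 2, w) - mu ** 2, 0.0))
        return sf * std

    QA, QB = local_sf_std(fA), local_sf_std(fB)
    DFF = DF.copy()
    # Undetermined pixels take the source with the larger Q * f response.
    DFF[undecided] = (QA[undecided] * fA[undecided] >
                      QB[undecided] * fB[undecided]).astype(float)
    return DFF
```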

2.5. The Proposed Multi-Focus Image Fusion Method

Step 1: the two-layer MSVD decomposition with complementary structures and scales (Figure 1) is performed on the two multi-focus images A and B, respectively, and two groups of information-complementary low-frequency and high-frequency components are obtained. In each group of decompositions, the source image is decomposed into a low-frequency component $L$ and multiple high-frequency components $H_i^c$.
Step 2: different fusion rules are used to fuse the low-frequency components $L$ and the high-frequency components $H_i^c$, respectively, and the information-complementary decision maps $D_1$ and $D_2$ are obtained.
Step 3: the complementary decision maps in Step 2 are exploited, and the initial decision map DF containing the definite focus region and the non-definite focus region is obtained. The non-definite focus region DUniden in DF is the aliasing area at the boundary of the complementary decision maps. With the adoption of the proposed focus measurement method (in Section 2.4), the non-definite focus region DUniden is transformed into the definite focus region, and the final fusion decision map DFF is obtained.
Step 4: according to the fusion decision map DFF obtained in Step 3, the final fusion image is obtained.
Figure 4 illustrates the principle diagram of the method in this paper, which corresponds to the above fusion steps.
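Steps 1-4 can also be summarised in a short top-level sketch. The two callbacks are placeholders of ours: `decision_from_scheme` stands for Steps 1-2 (two-layer MSVD with the given block sizes plus the low- and high-frequency fusion rules) and `refine` stands for Step 3 (Section 2.4).

```python
import numpy as np

def fuse_multifocus(A, B, decision_from_scheme, refine):
    """Top-level outline of Steps 1-4 (Figure 4). `decision_from_scheme(A, B, s)`
    wraps Steps 1-2 for one block scheme s and returns a binary decision map;
    `refine(D1, D2, A, B)` implements Step 3 and returns the final map DFF."""
    schemes = (((3, 5), (2, 3)), ((5, 3), (3, 2)))   # the two complementary schemes
    D1, D2 = (decision_from_scheme(A, B, s) for s in schemes)
    DFF = refine(D1, D2, A, B)                       # Step 3: final fusion decision map
    return DFF * A + (1.0 - DFF) * B                 # Step 4: select the focused pixels
```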

3. Experiments and Discussion

In order to verify the effectiveness of the proposed method, we first compare the proposed method with some classic and state-of-the-art methods, which are fusion methods based on traditional ideas. They are the curvelet transform (CVT) [29], the singular value decomposition in discrete cosine transform (DCT_SVD) [30], the dual-tree complex wavelet transform (DTCWT) [5,29], the image matting for fusion of multi-focus images (IFM) [31], the Laplacian pyramid (LP) [29], the multi-resolution singular value decomposition (MSVD) [13], the multi-scale weighted gradient-based fusion (MWGF) [21], the nonsubsampled contourlet transform (NSCT) [29,32]. The codes for the eight methods for comparison are provided by the authors of the corresponding papers, the MATLAB programs are all available online, and the parameters are the default values presented in the original papers. In addition, we select 13 pairs of multi-focus images commonly used in image fusion for comparative experiments, where 6 pairs of source images are provided by Lu et al. [1], and 4 pairs of source images are provided by Zhang et al. [33], and 3 other pairs of source images are obtained from the website [34]. In order to verify the performance of the proposed method, unregistered and pre-registered multi-focus images are specially selected for experimental analyses. Then, the proposed method is also compared with FuseGAN and CNN [25] methods, which are related to deep learning. The data sets, objective metrics, and fusion results used in the FuseGAN and CNN all derive from [25]. Finally, an ablation experiment is also carried out to test the effect of eliminating the PCNN method from the fusion result.
The decomposition parameter settings of the proposed method are as follows: in the first group, the first layer is divided into 3 × 5 blocks and the second layer into 2 × 3 blocks; in the second group, the first layer is divided into 5 × 3 blocks and the second layer into 3 × 2 blocks (see Section 2.1.2 and Figure 1 for details of the parameter settings).

3.1. Comparative Analysis of Fusion Results Based on Traditional Methods

3.1.1. Subjective Analysis of Pre-Registered Image Fusion Results

Figure 5 shows the fusion results of the “wine” source images obtained by the different multi-focus image fusion methods. Figure 5a,b are the source images with the front focus and the back focus, respectively. Figure 5c–j are the fusion results obtained by the curvelet, DCT_SVD, DTCWT, IFM, LP, MSVD, MWGF, and NSCT methods, and Figure 5k is the fusion result achieved by the proposed method. Figure 6 and Figure 7 are enlarged regions of the local details of Figure 5. In Figure 6, the parts marked by the red frames show that artifacts and blurred edges are introduced into the fused images produced by the curvelet, DCT_SVD, DTCWT, MSVD, MWGF, and NSCT methods, respectively. In Figure 7, pseudo-edges are also generated in the red regions near the gear by the curvelet, DCT_SVD, DTCWT, IFM, LP, MSVD, MWGF, and NSCT methods. It can be seen that the proposed method achieves the best fusion result among these methods. Figure 8 shows the fusion results of the “newspaper” source images obtained by the different fusion methods. Figure 8a,b are the two source images with the left focus and the right focus, respectively. Figure 8c–j are the comparative fusion results of the eight methods, and (k) is the fusion result of the proposed method. Figure 9 presents the locally magnified detail regions of Figure 8; the red regions are the boundaries between the focused regions and the defocused regions. The fusion results suggest that the proposed method is clearer at the boundary and better preserves the characteristics of the source images than the other methods, whose fusion results show blurred edges and artifacts.

3.1.2. Subjective Analysis of Unregistered Images Fusion Results

Figure 10 shows the fusion results of the “temple” source images obtained by the nine different multi-focus image fusion methods. Figure 10a,b are the two source images with the front focus and the back focus, respectively. From the stones in the lower left corners of the source images (a) and (b), it can be seen that the two images are displaced and have not been registered. Figure 10c–j are the fusion results obtained by the curvelet, DCT_SVD, DTCWT, IFM, LP, MSVD, MWGF, and NSCT methods, and Figure 10k is the fusion result obtained by the proposed method. Figure 11 shows the locally magnified detail regions of Figure 10. Although the source images are misregistered, it can be seen from the parts marked by the red regions in Figure 11 that the fusion result of the proposed method is very clear at the boundary between the stone lion and the background with fonts, whereas the fusion results of the other methods produce varying degrees of edge blur and artifacts. Owing to its precise detection of pixel focus, the proposed method obtains the best fusion result.

3.1.3. Subjective Analysis of More Image Fusion Results

In order to further verify the effectiveness of the proposed method, we selected 10 pairs of popular multi-focus source images for comparative experiments; the source images are shown in Figure 12. Figure 13 shows the fusion results obtained by the proposed method and the other eight methods. By comparison, the proposed method achieves desirable results in the fusion of all 10 pairs of multi-focus images. The proposed method obtains a precise fusion boundary in the fusion results of “book”, “clock”, “flower”, “hoed”, and “lytro”, and clear fusion details in the fusion results of the “craft”, “grass”, and “seascape” images. Even in the cases where there is a significant difference between the source images, such as the student’s eyes in the “lab” images and the girl’s body posture in the “girl” images, the proposed method also obtains satisfactory fusion results.

3.1.4. Objective Analysis of Fusion Results

The quantitative evaluation of fused images is acknowledged to be a challenging task, since, in practice, reference images for the source images are lacking. In this paper, we selected the edge similarity metric QAB/F [25], the normalized mutual information metric QMI [1], the phase-congruency-based fusion metric QPC [33], and the gradient-based fusion performance metric QG [35] to evaluate the fusion results. For all four objective evaluation indicators, the larger the value, the better the fusion result. The highest value in each evaluation is shown in bold in all tables.
Table 1 shows the objective evaluation values of the fusion results of the nine methods; the evaluation objects are the “wine” images in Figure 5, the “newspaper” images in Figure 8, and the “temple” images in Figure 10. We can see that the MWGF method has the largest QAB/F value for the “newspaper” images, while the proposed method obtains the largest values for all the other evaluation indicators, which is consistent with the subjective visual effect of the fusion results.
Table 2 shows the QAB/F objective evaluation values of the fusion results of the 10 pairs of source images for the different methods. The proposed method obtains the best fusion results for “book”, “craft”, “flower”, “girl”, “grass”, “lab”, “lytro”, and “hoed”, while IFM and MWGF obtain the best fusion results for “clock” and “seascape”, respectively. This means that, in most cases, the proposed method can incorporate important edge information into the fused image.
Table 3 shows the QMI objective evaluation values of the fusion results of the 10 pairs of source images for the different methods. The proposed method obtains the best fusion results among the nine methods. Although the DCT_SVD method has the highest evaluation values for “flower” and “hoed”, the evaluation values of the proposed method are very close to them, with a difference of less than 0.04.
Table 4 shows the QPC objective evaluation values of the fusion results of the 10 pairs of source images for the different methods. Except for “seascape”, where the MWGF method obtains the best fusion result, the proposed method has the highest values for all the other images. This means that the proposed method can well retain the important feature information of the source images in the fused image.
Table 5 shows the QG objective evaluation values of the fusion results of the 10 pairs of source images for the different methods. The IFM method achieves the best fusion results for “clock” and “craft”, and the DCT_SVD method for “hoed”; the proposed method fares the best for all the other images. This means that the fused images obtained by the proposed method have high sharpness.
Figure 14a–d show the score line graphs of the 9 methods on the 4 evaluation indicators over the 10 pairs of multi-focus images. The proposed method clearly shows a better scoring trend than the other methods, which means that it performs well not only in terms of visual perception but also in the quantitative analysis.

3.1.5. Comparison of Computational Efficiency

To compare the computational efficiency, we calculate and list the average fusion time of the nine methods in Table 6. Noticeably, the proposed method takes less fusion time than the IFM and MWGF methods; the IFM method consumes the most fusion time, and the LP method consumes the least. Considering the fusion results, the improvement in fusion quality is worth the additional time cost.

3.2. Comparative Analysis of Fusion Results Based on Deep Learning Methods

Deep learning, with powerful feature extraction capabilities, has been widely used in multi-focus image fusion. The fusion model obtained through the learning of a large amount of data generalizes well. In order to further verify the effectiveness of the proposed method, it is compared with the deep learning-based multi-focus image fusion methods FuseGAN and CNN proposed in [25]. The comparative experiment in this paper inherits all of the experimental data in [25], including the source images and the fusion results of deep learning methods. The source images in Figure 15 and Figure 17 are from [36] and the lytro dataset [37].

3.2.1. Subjective Analysis of Image Fusion Results

Figure 15 shows the fusion results obtained by the deep learning fusion methods and the proposed fusion method. (a) and (b) are respectively the source images of the front focus and the back focus. (c) and (d) are the fusion results obtained by the CNN and FuseGAN methods. (e) is the fusion result achieved by the proposed method.
Figure 16 shows an enlarged region of the local details marked with a yellow frame in Figure 15. In Figure 16, the parts marked by the red frames show that the fused images produced by the CNN and FuseGAN methods introduce blurred edges. The results show that, among these methods, the proposed method best preserves the edge information of the source images.
To further verify the effectiveness of the proposed method, 16 pairs of multi-focus source images are selected for comparative experiments. Figure 17 shows the source images and the fusion results. The results reveal that both the proposed method and deep learning method have achieved satisfactory fusion results. Figure 17c,g are the fusion results of the FuseGAN; (d) and (h) are the fusion results achieved by the proposed method.

3.2.2. Objective Analysis of Image Fusion Results

To compare with the deep learning methods, this article selects the four evaluation metrics used in [25] to evaluate the fusion results: the edge similarity metric QAB/F, the spatial frequency metric QSF, the structural similarity metric QY, and the feature contrast metric QCB. For all four evaluation metrics, the larger the value, the better the fusion result.
Table 7 shows the mean objective evaluation values and the average fusion time corresponding to the four metrics when the fusion methods are applied to 29 pairs of source images, with the evaluation values of FuseGAN and CNN derived from [25]. The evaluation results show that the proposed method has the best average values in QSF and QCB. Although the QAB/F and QY values of the proposed method are smaller than those of the other two methods, the differences are not greater than 0.015. In summary, the proposed method shows good performance in both visual perception and quantitative analysis.
Table 7 also lists the computational efficiency of the various methods. As can be seen, FuseGAN and CNN consume the least and the most running time, respectively, and the running time of the proposed method is only slightly longer than that of FuseGAN. Compared with the deep learning methods, the proposed method does not need to train a model and its parameters in advance and is, therefore, more feasible.

3.3. More Analysis

3.3.1. Ablation Research

The parameter-adaptive pulse-coupled neural network (PA-PCNN) model can effectively extract image edge and detail information without any training, and all of its parameters can be adaptively estimated from the input frequency band. In order to fully investigate the role that the PA-PCNN plays in the proposed algorithm, image fusion is also performed without it. Specifically, the PA-PCNN fusion strategy is not used in the high-frequency component fusion; instead, a conventional fusion strategy based directly on the high-frequency decomposition coefficients is applied. Two pairs of images from the lytro dataset are selected for the ablation study. In Figure 18, (c) is the fusion result with the PA-PCNN and (d) is the fusion result without it; the upper right corners of (c) and (d) are enlarged views of the areas marked with red boxes. In the enlarged details, the edge of the boy’s hat and the edge of the Sydney Opera House model in (d) show obvious blurring, while the same areas in (c) are clear. The analysis shows that the PA-PCNN plays a role in enhancing the fusion effect in the proposed fusion approach.

3.3.2. Series Multi-Focus Image Fusion

The proposed method can also realize image fusion with more than two multi-focus source images. Figure 19 shows the fusion results of a sequence of three multi-focus source images. The fusion process of the proposed method is as follows: firstly, two of the three source images are selected and fused; the fused image from the previous step is then fused with the remaining source image to obtain the final fusion image. It can be seen that the focus information of the three source images is well preserved in the final fused image, with good visual effects.

4. Conclusions

In this paper, a multi-focus image fusion method based on the multi-scale decomposition of complementary information is proposed. The proposed method achieves a multi-scale, double-layer decomposition by constructing image decomposition schemes with complementary structures and scales. The decomposition method can accurately extract the structure and detailed information of the source images. In order to further improve the fusion quality, different fusion rules are designed according to the characteristics of each decomposition component. In addition, through decomposition and fusion, decision maps with complementary information are obtained. According to the complementary decision maps, the focused regions and the defocused regions can be accurately determined, which helps solve the fusion problems caused by the anisotropic blur and unregistration of multi-focus images. The experimental results show that the fusion results of the proposed method are slightly better than those of the existing methods for both pre-registered and unregistered images. Nevertheless, the approach has some limitations and needs refinement. Firstly, the settings of the method parameters are mainly based on empirical values, the choice of the decomposition scale being one example; the adaptive selection of parameters will be the focus of future research. Moreover, the application of the proposed method to other areas, such as medical image processing and infrared-visible image processing, should be part of future exploration.

Author Contributions

Conceptualization, H.W.; Funding acquisition, X.T.; Methodology, X.T.; Software, H.W. and Z.Z.; Supervision, W.L.; Visualization, Z.Z.; Writing original draft, H.W.; Writing review & editing, X.T. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant 61673079; in part by the Natural Science Foundation of Chongqing under Grant cstc2018jcyjAX0160; and in part by the Educational Commission Foundation of Chongqing under Grants KJQN201900547 and KJ120611.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Data sharing not applicable.

Acknowledgments

The authors thank the editors and the anonymous reviewers for their careful works and valuable suggestions for this study.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Yu, L.; Lei, W.; Juan, C.; Chang, L.; Xun, C. Multi-focus image fusion: A survey of the state of the art. Inf. Fusion 2020, 64, 71–91.
2. Bin, X.; Bo, X.; Xiu, B.; Wei, L. Global-feature Encoding U-Net (GEU-Net) for Multi-focus Image Fusion. IEEE Trans. Image Process. 2021, 30, 163–175.
3. Petrović, V.S.; Xydeas, C.S. Gradient-based multiresolution image fusion. IEEE Trans. Image Process. 2004, 13, 228–237.
4. Liu, S.; Chen, J. A Fast Multi-Focus Image Fusion Algorithm by DWT and Focused Region Decision Map. In Proceedings of the Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA), Jeju, Korea, 13–16 December 2016; pp. 1–7.
5. Wan, T.; Canagarajah, N.; Achim, A. Segmentation-driven image fusion based on alpha-stable modeling of wavelet coefficients. IEEE Trans. Multimed. 2009, 11, 624–633.
6. Yu, B.; Jia, B.; Ding, L.; Cai, Z.; Wu, Q.; Law, R.; Huang, J.; Song, L.; Fu, S. Hybrid dual-tree complex wavelet transform and support vector machine for digital multi-focus image fusion. Neurocomputing 2016, 182, 1–9.
7. Abas, A.I.; Baykan, N.A. Multi-Focus Image Fusion with Multi-Scale Transform Optimized by Metaheuristic Algorithms. Trait. Signal 2021, 38, 247–259.
8. Li, Y.; Sun, X.; Huang, G.; Qi, M.; Zheng, Z.Z. An image fusion method based on sparse representation and sum modified-Laplacian in NSCT domain. Entropy 2018, 20, 522.
9. Jia, Y.; Dong, H.; Liu, B.; Rong, L.; Zheng, A. Multi-focus Image Fusion Method Based on Laplacian Eigenmaps Dimension Reduction in NSCT Domain. Int. Core J. Eng. 2021, 7, 210–222.
10. Li, S.; Kang, X.; Hu, J. Image fusion with guided filtering. IEEE Trans. Image Process. 2013, 22, 2864–2875.
11. Xiao, B.; Ou, G.; Tang, H.; Bi, X.; Li, W. Multi-Focus Image Fusion by Hessian Matrix Based Decomposition. IEEE Trans. Multimed. 2020, 22, 285–297.
12. Zhang, X.; Yan, H.; He, H. Multi-focus image fusion based on fractional-order derivative and intuitionistic fuzzy sets. Front. Inf. Technol. Electron. Eng. 2020, 21, 834–843.
13. Naidu, V. Image fusion technique using multi-resolution singular value decomposition. Def. Sci. J. 2011, 61, 479–484.
14. Wan, H.; Tang, X.; Zhu, Z.; Xiao, B.; Li, W. Multi-Focus Color Image Fusion Based on Quaternion Multi-Scale Singular Value Decomposition. Front. Neurorobot. 2021, 15, 695960.
15. Cao, L.; Jin, L.; Tao, H.; Li, G.; Zhuang, Z.; Zhang, Y. Multi-focus image fusion based on spatial frequency in discrete cosine transform domain. IEEE Signal Process. Lett. 2015, 22, 220–224.
16. Liu, S.; Shi, M.; Zhu, Z.; Zhao, J. Image fusion based on complex-shearlet domain with guided filtering. Multidimens. Syst. Signal Process. 2017, 28, 207–224.
17. Vishwakarma, A.; Bhuyan, M.K. Image fusion using adjustable non-subsampled shearlet transform. IEEE Trans. Instrum. Meas. 2019, 68, 3367–3378.
18. Zhang, Y.; Wei, W.; Yuan, Y. Multi-focus image fusion with alternating guided filtering. Signal Image Video Process. 2019, 13, 727–735.
19. Li, X.; Li, H.; Yu, Z.; Kong, Y. Multifocus image fusion scheme based on the multiscale curvature in nonsubsampled contourlet transform domain. Opt. Eng. 2015, 54, 1–15.
20. Kou, L.; Zhang, L.; Zhang, K.; Sun, J.; Han, Q.; Jin, Z. A multi-focus image fusion method via region mosaic on Laplacian pyramids. PLoS ONE 2018, 13, e0191085.
21. Zhou, Z.; Li, S.; Wang, B. Multi-scale weighted gradient-based fusion for multi-focus images. Inf. Fusion 2014, 20, 60–72.
22. Ma, J.; Zhou, Z.; Wang, B.; Miao, L.; Zong, H. Multi-focus image fusion using boosted random walks-based algorithm with two-scale focus maps. Neurocomputing 2019, 335, 9–20.
23. Dong, Z.; Lai, C.; Qi, D.; Xu, Z.; Li, C.; Duan, S. A general memristor-based pulse coupled neural network with variable linking coefficient for multi-focus image fusion. Neurocomputing 2018, 308, 172–183.
24. Hao, Z.; Han, X.; Xin, T.; Jun, J.; Jia, M. Image fusion meets deep learning: A survey and perspective. Inf. Fusion 2021, 76, 323–336.
25. Guo, X.; Nie, R.; Cao, J.; Zhou, D.; Mei, L.; He, K. FuseGAN: Learning to fuse multi-focus image via conditional generative adversarial network. IEEE Trans. Multimed. 2019, 21, 1982–1996.
26. Wang, C.; Wang, X.; Li, Y.; Xia, Z.; Zhang, C. Quaternion polar harmonic Fourier moments for color images. Inf. Sci. 2018, 450, 141–156.
27. Chen, Y.; Park, S.K.; Ma, Y.; Rajeshkanna, A. A new automatic parameter setting method of a simplified PCNN for image segmentation. IEEE Trans. Neural Netw. 2011, 22, 880–892.
28. Yin, M.; Liu, X.; Liu, Y.; Chen, X. Medical image fusion with parameter-adaptive pulse coupled neural network in nonsubsampled shearlet transform domain. IEEE Trans. Instrum. Meas. 2019, 68, 49–64.
29. Liu, Y.; Liu, S.; Wang, Z. A general framework for image fusion based on multi-scale transform and sparse representation. Inf. Fusion 2015, 24, 147–164.
30. Mostafa, A.; Pardis, R.; Ali, A. Multi-Focus Image Fusion Using Singular Value Decomposition in DCT Domain. In Proceedings of the 10th Iranian Conference on Machine Vision and Image Processing (MVIP), Isfahan, Iran, 22–23 November 2017; pp. 45–51.
31. Li, S.; Kang, X.; Hu, J.; Yang, B. Image matting for fusion of multi-focus images in dynamic scenes. Inf. Fusion 2013, 14, 147–162.
32. Zhang, Q.; Guo, B. Multifocus image fusion using the nonsubsampled contourlet transform. Signal Process. 2009, 89, 1334–1346.
33. Zhang, Y.; Bai, X.; Wang, T. Boundary finding based multi-focus image fusion through multi-scale morphological focus-measure. Inf. Fusion 2017, 35, 81–110.
34. Available online: https://github.com/sametaymaz/Multi-focus-Image-Fusion-Dataset (accessed on 24 January 2021).
35. Zhu, Z.; Wei, H.; Hu, G.; Li, Y.; Guanqiu, Q.; Mazur, N. A novel fast single image dehazing algorithm based on artificial multiexposure image fusion. IEEE Trans. Instrum. Meas. 2020, 70, 1–23.
36. Saeedi, J.; Faez, K. A classification and fuzzy-based approach for digital multi-focus image fusion. Pattern Anal. Appl. 2013, 16, 365–379.
37. Available online: https://mansournejati.ece.iut.ac.ir/content/lytro-multi-focus-dataset (accessed on 16 October 2020).
Figure 1. Two groups of double-layer multi-scale singular value decomposition schemes with complementary structure and scale. In the two decomposition schemes, $H_1^1$–$H_{14}^1$ are the high-frequency components of the first decomposition layer, $H_1^2$–$H_5^2$ are the high-frequency components of the second decomposition layer, and L is the low-frequency component of the second decomposition layer. (a) The first group of the decomposition scheme; (b) the second group of the decomposition scheme.
Figure 2. (a) The left-focus source image; (b) the right-focus source image; (c) fusion decision map obtained by the first group of the decomposition scheme; (d) fusion decision map obtained by the second group of the decomposition scheme; (e) the initial fusion decision map determined by (c,d): the black area corresponds to the decision value “0”, the white area corresponds to the decision value “1”, and the black and white areas are definite focus areas. The red area is the aliasing area of (c,d), i.e., the non-definite focus area.
Figure 3. The diagram of a neuron in PA-PCNN model.
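
As a rough illustration of the neuron model sketched in Figure 3, the code below iterates a simplified pulse-coupled neural network (SPCNN) over a stimulus map (e.g., a detail feature extracted from a high-frequency component) and accumulates firing times. This is a generic sketch based on the commonly used SPCNN formulation [27,28]; the parameters are passed in as fixed values rather than estimated adaptively from the image as in the PA-PCNN, and all names and defaults are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

def spcnn_firing_times(stimulus, iterations=110,
                       alpha_f=0.1, alpha_e=1.0, beta=0.5, v_l=1.0, v_e=20.0):
    """Simplified PCNN: return the accumulated firing times of each neuron.

    stimulus : 2-D array of external inputs (e.g., normalized detail features).
    """
    s = stimulus.astype(np.float64)
    w = np.array([[0.5, 1.0, 0.5],
                  [1.0, 0.0, 1.0],
                  [0.5, 1.0, 0.5]])            # linking weights of the 8-neighbourhood
    u = np.zeros_like(s)                        # internal activity
    e = np.ones_like(s)                         # dynamic threshold
    y = np.zeros_like(s)                        # firing state
    fire_count = np.zeros_like(s)
    for _ in range(iterations):
        link = v_l * convolve(y, w, mode='constant')        # linking input from neighbours
        u = np.exp(-alpha_f) * u + s * (1.0 + beta * link)  # update internal activity
        y = (u > e).astype(np.float64)                      # fire when activity exceeds threshold
        e = np.exp(-alpha_e) * e + v_e * y                  # raise threshold after firing
        fire_count += y
    return fire_count
```

In a fusion setting of this kind, the high-frequency coefficients whose accumulated firing times are larger would be selected for the fused image, with the external stimulus chosen according to the feature information of each decomposition layer.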
Figure 4. Schematic diagram of the proposed multi-focus image fusion method. D1 is the decision map of the first scheme; D2 is the decision map of the second scheme; the red region in the initial decision map DF is the non-definite focus region DUniden. $H_i^c$ is the i-th high-frequency component in the c-th decomposition layer, where c = 1 or 2.
Figure 5. The source images of “wine” and the fusion results of different methods. (a) Source image A, (b) source image B, (c) curvelet, (d) DCT_SVD, (e) DTCWT, (f) IFM, (g) LP, (h) MSVD, (i) MWGF, (j) NSCT, (k) proposed method.
Figure 6. The partial enlarged regions taken from Figure 5a–k. (a) Source image A, (b) source image B, (c) curvelet, (d) DCT_SVD, (e) DTCWT, (f) IFM, (g) LP, (h) MSVD, (i) MWGF, (j) NSCT, (k) proposed method.
Figure 7. The partial enlarged regions taken from Figure 5a–k. (a) Source image A, (b) source image B, (c) curvelet, (d) DCT_SVD, (e) DTCWT, (f) IFM, (g) LP, (h) MSVD, (i) MWGF, (j) NSCT, (k) proposed method.
Figure 8. The source image of “newspaper” and the fusion results of different methods. (a) Source image A, (b) source image B, (c) curvelet, (d) DCT_SVD, (e) DTCWT, (f) IFM, (g) LP, (h) MSVD, (i) MWGF, (j) NSCT, (k) proposed method.
Figure 9. The partial enlarged regions taken from Figure 8a–k. (a) Source image A, (b) source image B, (c) curvelet, (d) DCT_SVD, (e) DTCWT, (f) IFM, (g) LP, (h) MSVD, (i) MWGF, (j) NSCT, (k) proposed method.
Figure 10. The source image of “temple” and the fusion results of different methods.
Figure 11. The partial enlarged regions taken from Figure 10a–k. (a) Source image A, (b) source image B, (c) curvelet, (d) DCT_SVD, (e) DTCWT, (f) IFM, (g) LP, (h) MSVD, (i) MWGF, (j) NSCT, (k) proposed method.
Figure 12. 10 pairs of multi-focus source images.
Figure 13. Fusion results of different methods.
Figure 14. (a–d) show the score line graphs of the four image evaluation indicators (QAB/F, QMI, QPC, and QG) corresponding to Tables 2–5, respectively. In subfigures (a–d), the horizontal axis represents the image indices ranging from 1 to 10, and the vertical axis represents the values of the image evaluation indicators.
Figure 15. The source images (a,b) are from the lytro dataset; (c) is the fusion result of the CNN; (d) is the fusion result of the FuseGAN; (e) is the fusion result of the proposed method.
Figure 16. The partial enlarged regions taken from Figure 15a–e.
Figure 17. (a,b,e,f) show 16 pairs of source images; (c,g) are the fusion results of the FuseGAN; (d,h) are the fusion results of the proposed method.
Figure 18. Ablation experiment of the PCNN. (a,b) are source images; (c) results with PCNN; (d) results without PCNN.
Figure 19. An example of applying the proposed method to fuse three source images. (a–c) are source images; (d) is the fusion result of the proposed method.
Table 1. The objective assessments of different methods for Figure 5, Figure 8 and Figure 10.

Images     Metrics  Curvelet  DCT_SVD  DTCWT   IFM     LP      MSVD    MWGF    NSCT    Proposed Method
wine       QAB/F    0.6412    0.6942   0.6752  0.7111  0.6889  0.4467  0.6920  0.6290  0.7162
wine       QMI      5.2622    8.4875   5.6663  7.8042  6.5947  4.9732  6.4336  5.4534  8.6158
wine       QPC      0.6299    0.6845   0.6649  0.7040  0.0004  0.4206  0.6901  0.6174  0.7087
wine       QG       0.4809    0.6501   0.5303  0.6531  0.5679  0.3556  0.6251  0.4950  0.6682
newspaper  QAB/F    0.5244    0.6625   0.6270  0.6659  0.6369  0.3098  0.6766  0.4199  0.6751
newspaper  QMI      1.9036    6.4318   2.2117  5.8831  2.9815  1.7481  5.5151  1.9821  6.5558
newspaper  QPC      0.5043    0.6533   0.6118  0.6568  0.0004  0.2827  0.6639  0.3999  0.6665
newspaper  QG       0.4851    0.6382   0.5878  0.6425  0.6162  0.3349  0.6299  0.4266  0.6501
temple     QAB/F    0.5723    0.7512   0.6715  0.7582  0.7429  0.3474  0.6051  0.5369  0.7642
temple     QMI      2.9895    7.2276   3.0351  7.0355  5.1978  3.0224  3.2813  3.1448  7.3391
temple     QPC      0.5832    0.7533   0.6795  0.7619  0.0005  0.3658  0.6047  0.5423  0.7676
temple     QG       0.5109    0.7146   0.6089  0.7193  0.6891  0.3722  0.7124  0.4922  0.7203
Table 2. The objective assessment QAB/F of different methods for Figure 13.

Images     Curvelet  DCT_SVD  DTCWT   IFM     LP      MSVD    MWGF    NSCT    Proposed Method
book       0.7335    0.7594   0.7504  0.7596  0.7532  0.6663  0.7227  0.7408  0.7628
clock      0.6022    0.6713   0.6618  0.7025  0.6920  0.5658  0.5437  0.6190  0.7018
craft      0.6605    0.7195   0.6891  0.7267  0.7086  0.6898  0.4401  0.6941  0.7346
flower     0.6657    0.7098   0.7004  0.7125  0.6985  0.6679  0.3991  0.6848  0.7133
girl       0.6146    0.6777   0.6528  0.6857  0.6696  0.5530  0.5836  0.5913  0.6919
grass      0.5574    0.6459   0.6037  0.6694  0.6320  0.4120  0.4496  0.5502  0.6706
lab        0.6183    0.7116   0.6892  0.7384  0.7194  0.5852  0.6859  0.6046  0.7394
lytro      0.6233    0.7373   0.7013  0.7428  0.7334  0.5163  0.7101  0.6162  0.7445
seascape   0.5358    0.6955   0.6231  0.7038  0.6333  0.4614  0.8794  0.4920  0.7060
hoed       0.6619    0.8207   0.7473  0.8094  0.8074  0.5568  0.7379  0.6323  0.8212
Table 3. The objective assessment QMI of different methods for Figure 13.

Images     Curvelet  DCT_SVD  DTCWT   IFM     LP      MSVD    MWGF    NSCT    Proposed Method
book       7.5469    9.2254   7.8636  9.2650  8.0568  7.0990  8.7403  7.5655  9.4974
clock      6.5747    8.5643   6.6081  8.3289  7.2697  6.6358  6.5600  6.8297  8.5648
craft      6.8147    8.5981   6.9386  8.8035  7.1822  7.2763  5.9810  7.4597  8.8691
flower     5.2993    8.0569   5.8285  7.9184  6.4240  5.0763  3.8654  5.3363  8.0174
girl       5.4226    8.8479   5.7647  8.8677  6.2546  5.3249  7.6408  5.5291  9.0835
grass      4.8071    8.5131   4.9885  8.4649  5.8314  4.6484  4.9709  4.9466  8.9043
lab        6.6219    8.5181   6.9902  8.5302  7.5773  6.9382  7.9229  7.0273  8.6211
lytro      5.7419    8.1906   5.8921  8.0725  6.7115  5.7305  7.9050  5.9048  8.3023
seascape   4.5815    7.9492   4.8031  7.7184  5.5810  4.6547  6.7174  4.8333  8.0761
hoed       4.5654    8.3975   4.7390  7.9818  6.4462  4.5557  6.2599  4.6683  8.3834
Table 4. The objective assessment QPC of different methods for Figure 13.

Images     Curvelet  DCT_SVD  DTCWT   IFM     LP      MSVD    MWGF    NSCT    Proposed Method
book       0.7254    0.7499   0.7416  0.7503  0.0005  0.6542  0.7081  0.7353  0.7535
clock      0.6006    0.6718   0.6608  0.7059  0.0005  0.5644  0.5498  0.6190  0.7067
craft      0.6195    0.6740   0.6547  0.6817  0.0004  0.6505  0.3123  0.6539  0.6910
flower     0.6722    0.7141   0.7058  0.7179  0.0005  0.6861  0.4042  0.6962  0.7181
girl       0.5967    0.6712   0.6389  0.6757  0.0004  0.5306  0.5737  0.5702  0.6827
grass      0.5620    0.6452   0.6090  0.6692  0.0004  0.4238  0.4393  0.5527  0.6705
lab        0.6307    0.7011   0.6992  0.7314  0.0005  0.5891  0.6794  0.6093  0.7320
lytro      0.6094    0.7299   0.6928  0.7354  0.0005  0.4946  0.6973  0.6006  0.7374
seascape   0.5402    0.6994   0.6261  0.7053  0.0004  0.4623  0.8893  0.4891  0.7064
hoed       0.6792    0.8171   0.7538  0.8074  0.0005  0.5842  0.7582  0.6516  0.8174
Table 5. The objective assessment QG of different methods for Figure 13.

Images     Curvelet  DCT_SVD  DTCWT   IFM     LP      MSVD    MWGF    NSCT    Proposed Method
book       0.5636    0.6616   0.6142  0.6704  0.6249  0.5542  0.6482  0.6062  0.6733
clock      0.4730    0.6538   0.5236  0.6700  0.5661  0.4861  0.6509  0.5194  0.6604
craft      0.5124    0.6508   0.5619  0.6629  0.6070  0.5790  0.6507  0.5911  0.6494
flower     0.5936    0.6831   0.6641  0.6863  0.6526  0.6068  0.6770  0.6200  0.6869
girl       0.5924    0.6737   0.6443  0.6829  0.6628  0.5425  0.6826  0.5744  0.6859
grass      0.5253    0.6435   0.5871  0.6696  0.6249  0.3950  0.6423  0.5300  0.6736
lab        0.4604    0.7056   0.5463  0.6946  0.5788  0.4798  0.7153  0.4873  0.7167
lytro      0.5552    0.6987   0.6465  0.7099  0.6879  0.5074  0.7068  0.5691  0.7101
seascape   0.5332    0.6945   0.6295  0.6975  0.6687  0.4765  0.7032  0.5154  0.7116
hoed       0.6216    0.7912   0.7090  0.7836  0.7748  0.5316  0.7806  0.5936  0.7896
Table 6. Average running time of different fusion methods.

Metric          Curvelet  DCT_SVD  DTCWT   IFM     LP      MSVD    MWGF    NSCT    Proposed Method
Time (Seconds)  0.9757    1.1396   0.4036  2.2200  0.3072  0.3162  1.7036  0.7842  1.4473
Table 7. Average objective assessment and running time of different fusion methods.

Method     QAB/F   QSF      QY      QCB     Time (Seconds)
FuseGAN    0.7222  0.0211   0.9925  0.8032  0.53
CNN        0.7177  0.0342   0.9901  0.8001  109.16
Proposed   0.7162  0.07261  0.9776  0.8127  2.85
Publisher’s Note: MDPI stays neutral with regard to jurisdictional claims in published maps and institutional affiliations.
