Detail Enhancement Multi-Exposure Image Fusion Based on Homomorphic Filtering

Due to the large dynamic range of real scenes, it is difficult for images taken by ordinary devices to represent high-quality real scenes. To obtain high-quality images, the exposure fusion of multiple exposure images of the same scene is required. The fusion of multiple images results in the loss of edge detail in areas with large exposure differences. To address this problem, this paper proposes a new method for the fusion of multi-exposure images with detail enhancement based on homomorphic filtering. First, a fusion weight map is constructed using exposure and local contrast. The exposure weight map is calculated by threshold segmentation and an adaptively adjustable Gaussian curve. The algorithm can assign appropriate exposure weights to well-exposed areas so that the fused image retains more details. Then, the weight map is denoised using fast-guided filtering. Finally, a fusion method for the detail enhancement of Laplacian pyramids with homomorphic filtering is proposed to enhance the edge information lost by Laplacian pyramid fusion. The experimental results show that the method can generate high-quality images with clear edges and details as well as a color appearance similar to real scenes, and that it outperforms existing algorithms in both subjective and objective evaluations.


Introduction
In natural scenes, the dynamic range can reach six orders of magnitude, and the normal human eye can capture four orders of magnitude. Dynamic range refers to the ratio between the maximum brightness and the minimum brightness in the same scene, while the dynamic range that can be captured by ordinary digital cameras [1] is only two orders of magnitude [2]; therefore, in images captured by ordinary digital cameras, overexposed and underexposed areas suffer a loss of detail. High dynamic range (HDR) images can truly reflect natural scenes [3].
HDR images can be generated in two ways: in hardware and in software. HDR equipment can directly capture images of natural scenes; however, dedicated equipment is expensive, and HDR images cannot be displayed on ordinary low dynamic range (LDR) equipment, so the hardware approach is difficult to use [4]. The software approach uses HDR imaging technology to make the images displayed by LDR devices richer in detail. It includes two methods: tone mapping [5,6] and multi-exposure image fusion. The first requires estimating the camera response function (CRF) to construct an HDR image and then uses tone mapping to display the HDR image on a normal LDR display device; however, this has a disadvantage.
The calculation of CRF requires multiple exposure parameters, and these exposure parameters need to be calculated separately. Thus, it is time-consuming and limited [7]. In contrast, the computationally simple multi-exposure image fusion method is more efficient as it directly fuses multiple images of different exposures of an HDR scene into a high-quality image that can be displayed on an LDR device [8].
Existing traditional multi-exposure fusion methods can be divided into two categories: based on the spatial domain and based on the transform domain. The spatial domain mainly analyzes and operates on pixel values or structural blocks. Its three-dimensional information is rich [9], the method is simple, and the calculation cost is low; however, the generated images will have problems, such as noise and information loss. For example, Gu et al. [10] used the structural tensor fusion of the input image to iteratively correct the gradient field using quadratic mean filtering and multi-scale nonlinear compression; however, this method made the image noisy.
Li et al. [11] used local contrast, brightness, and color dissimilarity to construct a weight map, followed by recursive filtering for denoising and refinement, and finally, weighted fusion to obtain the result; however, this method produced unnatural artifacts. Huang et al. [12] decomposed the image into contrast extraction, structure preservation, and intensity adjustment, where structure preservation and intensity adjustment were calculated by local weights, global weights, and saliency weights, and finally reconstructed the resulting map. The details of the images generated by this method were well preserved, but the colors were quite different from the input images.
In the transform domain, the image is transformed into the frequency domain through discrete Fourier transform (DFT) [13], pyramid transform [14], etc., and the image color generated by these methods is very close to reality; however, it is easy to lose texture details [15]. For example, Mertens et al. [16] used contrast, saturation, and good exposure to construct a weight map, and finally used multi-resolution fusion, which can generate images with natural colors but cannot preserve the full edge texture details of the image.
Wang et al. [17] added local Laplacian filtering to the Mertens method to enhance details in local overexposed and underexposed regions and proposed discrete sampling and interpolation to speed up the results. With the development of machine learning, methods for multi-exposure image fusion with neural networks have begun to appear. Xu et al. [18] proposed a multi-exposure image fusion method based on generative adversarial networks, in which the generator network and the discriminator network are simultaneously trained to form an adversarial relationship, and a self-attention mechanism was introduced for the problem of large image exposure differences. In addition, some new fusion methods have gradually emerged.
For example, Yang et al. [19] first used the K-means-based K-SVD algorithm to calculate a sparsity exposure dictionary (SED) to construct an exposure estimation map, then used the exposure estimation map and an adaptive guided filter to construct the final fusion decision map, and finally performed pyramid fusion. The method of Ulucan et al. [20] utilizes the histogram and a K-means-classified atlas to extract linear embedding weights and watershed masks for fusion and finally corrects unsatisfactory color intensities.
In this paper, the transform domain pyramid fusion method is used: based on the Mertens method, threshold segmentation and an adaptively adjustable Gaussian curve are proposed to calculate the exposure weight, and a homomorphic filter is used to enhance the detail layer of the Laplacian pyramid. This method not only generates images with rich colors but also preserves the texture details of the images. The image fusion algorithm proposed in this paper has three main contributions:
1. This paper applies homomorphic filtering to the multi-exposure image fusion algorithm for the first time. Other detail enhancement algorithms lose some low-frequency signals when enhancing high-frequency details, while homomorphic filtering can enhance details while retaining the low-frequency signal, i.e., it enhances details while preserving the original image information.
2. An exposure weighting algorithm based on threshold segmentation and an adaptively adjustable Gaussian curve is proposed, which assigns more reasonable weights to well-exposed areas and retains more detailed information.
3. The Laplacian pyramid is improved based on homomorphic filtering, which enhances the edge details of the fused image and generates an image with distinct details.
The rest of this article is organized as follows. Section 2 discusses the most common and state-of-the-art methods related to multi-exposure image fusion. Section 3 introduces the proposed method in detail. Section 4 analyzes the experimental results for subjective and objective evaluation. The last section presents the conclusions of this paper and the next steps.

Related Works
In recent years, many studies have been conducted on multi-exposure fusion algorithms. Mertens et al. [16] first used contrast, saturation, and well-exposedness to construct a fused image weight map, then decomposed the source image sequence and the weight map into a Laplacian pyramid and a Gaussian pyramid, respectively, and finally fused the resulting image with multi-resolution blending. The result of this fusion method is very close to reality, and the naked eye can hardly distinguish any large color difference; however, detail is lost.
Shen et al. [21] proposed a method for exposure fusion using enhanced Laplacian pyramids, the principle of which is to use local weights, global weights, and JND-based saliency weights to estimate the exposure weight map to enhance the detail signal and base signal of the image to construct a novel augmented Laplacian pyramid. This method effectively preserves the texture structure of the image but suffers from artifacts and color distortion due to excessive detail enhancement.
Li et al. [22] proposed extracting image details with a weighted structure tensor, used the Gaussian pyramid of the luminance component of the source image sequence as the guide image, smoothed the weighted Gaussian pyramid of all LDR images using the weighted guided image filter (WGIF) [23], and obtained the final result map by multi-resolution fusion. The detail preservation of this method is relatively good; however, the sharpness is flawed where there is a large difference between light and dark among the source image sequences.
Ma et al. [24] proposed a new structural block decomposition method, which first decomposes the source image sequence into three components: signal intensity, signal structure, and average intensity, then fuses each component of different images separately, and finally reconstructs into a fused image. The algorithm is capable of producing images with distinct color appearances but is prone to over-sharpening and local color distortion in areas with large differences in brightness.
Hayat et al. [25] proposed estimating the initial weights with three indicators: local contrast, brightness, and color dissimilarity, where the local contrast was calculated by the dense SIFT [26] descriptor; they then smoothed the weight map with a fast-guided filter and finally used pyramid fusion to generate the result. The algorithm maintains good global contrast but loses some details.
Qi et al. [27] proposed an accurate multi-exposure image fusion method based on low-order features. The principle is to use guided filters to decompose the source image into base layers and detail layers. For the base layer, the method of Ma et al. is used to decompose the image blocks for calculation, and the detail layer is weighted according to the average level of local brightness changes. Finally, the base layer and the detail layer are weighted and fused. This method performs well on image sharpness and color information but is prone to noise and halos.
Huang et al. [28] proposed a multi-exposure image fusion method based on adaptive factor feature evaluation. This method uses an adaptive exposure factor to evaluate the weight, uses the Sobel operator to calculate the texture change weight, and obtains the image by pyramid fusion. The obtained fusion image has better brightness and can retain certain details; however, details are still lost in areas where the exposure difference is too large. A detailed method comparison is shown in Table 1. Except for the methods of Ma and Qi, the above fusion methods all use pyramid fusion, and pyramid fusion has the defect of losing edge details.
Although Shen's method improves the Laplacian pyramid to enhance image details, the generated image suffers from color distortion: the color of the image is quite different from the actual scene. Ma's results are still poorly exposed in badly exposed regions, and Qi's method, which fuses based on pixel values or image blocks, does not remove noise interference well when fusing the detail layers. To produce high-quality images with comfortable visual perception and clear edge details, this paper proposes a detail enhancement multi-exposure image fusion algorithm based on homomorphic filtering, built on the Mertens method.
In our method, threshold segmentation and an adaptively adjustable Gaussian curve are first used to assign more appropriate weights to well-exposed regions, thereby preserving more detailed information. Second, the detail layers of the Laplacian pyramid are enhanced by homomorphic filtering to strengthen edge information and finally generate an image with realistic colors and rich edge details.

Proposed Method
In this paper, a detail enhancement multi-exposure image fusion algorithm based on homomorphic filtering is proposed. When the average brightness of the image is high, there are more details in the darker areas, and thus a larger weight should be assigned to the low-brightness areas. Similarly, when the average brightness is low, a larger weight should be assigned to the high-brightness areas. To better preserve image details, this paper proposes an algorithm for calculating exposure weights based on threshold segmentation and an adaptively adjustable Gaussian curve.
Since the multi-resolution fusion algorithm loses detailed image information, we propose a detail enhancement algorithm based on homomorphic filtering to solve this problem. The flow of our algorithm is as follows. First, the exposure weight and local contrast weight are calculated to construct an initial weight map; then, a fast-guided filter is used to denoise the weight map; finally, multi-resolution fusion is performed with the improved Laplacian pyramid proposed in this paper. The specific calculation process is explained in the subsequent subsections, the schematic diagram of the proposed method is shown in Figure 1, and Algorithm 1 shows the detailed calculation process.
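At its core, the final blend is a per-pixel weighted average of the exposure stack by its normalized weight maps. The toy single-scale sketch below illustrates only the normalization and weighted sum; the actual method applies this rule per Laplacian-pyramid level, and the function name is ours, not the paper's:

```python
import numpy as np

def fuse_single_scale(images, weights):
    """Per-pixel weighted average of an exposure stack.

    The full method applies this rule per pyramid level; this toy version
    shows the per-pixel weight normalization and the weighted sum only.
    """
    w = np.stack(weights).astype(float)
    w /= w.sum(axis=0, keepdims=True) + 1e-12   # normalize weights per pixel
    return (w * np.stack(images)).sum(axis=0)
```

With two constant images (0 and 1) weighted 1:3, every fused pixel is 0.75, i.e., the normalized weights 0.25 and 0.75 applied to the stack.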


Exposure Weight
The purpose of exposure weighting is to select better-exposed areas, which usually contain more image information. The exposure weight algorithm in the method of Mertens et al. [16] uses a Gaussian curve to assign the exposure weight after the image is normalized, as shown in Figure 2. A gray value close to 0.5 is considered medium exposure and is given a larger weight; the further the value deviates from 0.5, the smaller the assigned weight. The specific formula is as follows:

W1_n(i, j) = exp(-(Î_gray_n(i, j) - 0.5)^2 / (2σ^2)),

where Î_gray_n(i, j) represents the pixel value in the ith row and jth column of the normalized grayscale image of the nth input image; W1_n(i, j) represents the exposure weight value of the nth image in the ith row and jth column; n takes values 1, 2, . . . , N, where N is the number of images in the input sequence; and σ controls the amplitude of the curve and generally takes a value of 0.2.

Algorithm 1 (excerpt):
10: Calculate α_n and β_n separately by Equations (5) and (6)
11: Use Equations (7) and (8) to assign weights to W1_n
12: end for
13: for each image I_gray_n(i, j) do
14: Calculate the local contrast value A_n by Equation (9)
15: Use Equation (11) to assign weights to W2_n
16: end for
17: Use Equation (12) to calculate the initial weight map Ŵ_n
18: Use Equation (13) to denoise Ŵ_n, and get W_n after normalization
19: Use Equations (14)-(16) to calculate L{I_n}^(l)
20: Reconstruct the fused pyramid into F by Equations (17)-(19)
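The Gaussian well-exposedness weight of Mertens et al. can be sketched as follows (a minimal NumPy sketch; the function name is ours):

```python
import numpy as np

def exposure_weight(gray, sigma=0.2):
    """Mertens-style well-exposedness weight for a normalized grayscale image.

    Pixels near mid-gray (0.5) get weights near 1; the weight falls off as a
    Gaussian whose amplitude is controlled by sigma (0.2 in the paper).
    """
    return np.exp(-((gray - 0.5) ** 2) / (2.0 * sigma ** 2))
```

A mid-gray pixel receives the maximum weight of 1, and the weight is symmetric about 0.5, so equally under- and over-exposed pixels are penalized equally.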

The method used by Mertens et al. [16] is suitable for images with moderate overall brightness, and this algorithm is not optimal for images whose overall brightness is too high or too low. When the average brightness of the image is high, there are more details in the darker areas, and thus a larger weight should be assigned to the low-brightness areas. Similarly, when it is low, a larger weight should be assigned to the high-brightness areas.
To this end, this paper proposes an algorithm for calculating exposure weights based on threshold segmentation and an adaptively adjustable Gaussian curve, which can adaptively adjust the weights according to the average brightness of the input image sequence. The algorithm is divided into two steps: threshold segmentation and the calculation of adaptive exposure weights.
First, we perform threshold segmentation and normalize the grayscale image I_gray_n of each image in the sequence to the [0, 1] interval. In a well-exposed grayscale image, the gray value of the darker part is low, while the gray value of the lighter part is high, but both still contain detailed information. To avoid mistaking the dark and light parts for bad exposure, we adopt an iterative threshold algorithm [30] that divides each image into dark and light parts by a threshold and calculates the exposure weight of each part separately. The specific algorithms of the threshold are as follows: where mean{·} represents the mean value calculation, whose specific form is given in Equation (2); A(i, j) represents the area over which the mean is calculated, and the size of A(i, j) is r × c; the initial value of T1 is 0.5; and T0 is a very small number. If Equation (3) holds, then T2 is the optimal threshold; otherwise, we assign the value of T2 to T1, and the above steps are repeated until the optimal threshold T2 is obtained. The image is then divided into two parts, G1 and G2, according to the optimal threshold: G1 is composed of pixels whose gray value is greater than T2, and G2 of pixels whose gray value is less than or equal to T2. Finally, the weight of adaptive exposure is calculated. If the brightness of an image is low within the whole image sequence, then its high-brightness part is well-exposed, and the weight of this part needs to be appropriately increased. Similarly, if the brightness of an image is high within the whole sequence, it is necessary to appropriately increase the weight of the low-brightness part. Therefore, we construct two adaptive variables, α_n and β_n, which reflect the offset between the exposure of the input image and 0.5.
The calculation formulas are as follows: where T_2n is the optimal threshold of the nth image; α_n and β_n represent the adaptive variables of the parts corresponding to G1 and G2, respectively; and the Gaussian curve is used to assign the exposure weight W1_n. In these formulas, σ controls the amplitude of the curve, and we take its value as 0.2. The process of calculating the exposure weight map is shown in Figure 3.
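The threshold-segmentation step above can be sketched as follows, assuming the classic iterative mean-split scheme the text describes (T1 starts at 0.5, the new threshold is the average of the means of the light and dark parts, and T0 is the small convergence tolerance); the function name and tolerance value are ours:

```python
import numpy as np

def iterative_threshold(gray, t0=1e-4):
    """Iterative threshold for a normalized grayscale image.

    Repeat T2 = (mean of pixels above T1 + mean of pixels at or below T1) / 2
    until |T2 - T1| < t0.  The initial threshold T1 is 0.5, as in the text;
    the resulting T2 splits the image into light (G1) and dark (G2) parts.
    """
    t1 = 0.5
    while True:
        g1 = gray[gray > t1]           # lighter part
        g2 = gray[gray <= t1]          # darker part
        m1 = g1.mean() if g1.size else t1
        m2 = g2.mean() if g2.size else t1
        t2 = 0.5 * (m1 + m2)
        if abs(t2 - t1) < t0:
            return t2
        t1 = t2
```

For a bimodal image with values 0.2 and 0.9, the threshold settles at 0.55, midway between the two group means.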

Local Contrast Weight
The computation of local contrast can be used to preserve important details, such as edges and textures. This edge and texture information is contained in the gradient changes, and thus a Laplacian filter, which has good edge detection, is used to calculate the local contrast weight; the Laplacian filter is applied to each grayscale image. The specific algorithm is as follows: where A_n(i, j) represents the local contrast value of the nth input image in the ith row and jth column, |·| represents the absolute value, I_gray_n(i, j) represents the pixel value of the nth grayscale image in the ith row and jth column, * represents the convolution operation, and h is the Laplacian filter kernel, whose value is as follows. We use the maximum value at the same pixel position across all images as the local contrast weight W2_n(i, j), and the specific algorithm is as follows: where W2_n(i, j) represents the local contrast weight value of the nth image in the ith row and jth column.



Pyramid Fusion Based on Homomorphic Filter Detail Enhancement
The initial weight map is constructed from the two calculated indicators. The obtained initial weight map is noisy and discontinuous; therefore, it is very important to refine it. The fast-guided filter uses the input image as the guide image and refines the weight map without damaging the edges, which removes noise very well. Therefore, we use the fast-guided filter to refine the weight map and normalize it. The specific algorithms are as follows: where GF_r,ε(I, G) represents fast-guided filtering, r represents the filter radius, ε controls the blur degree of the filtering, I represents the input image, and G represents the guide image. The fast-guided filter parameters are set the same as in the algorithm of Hayat et al. [25]. Direct weighted fusion can lead to seams and blurring in the output image, which can be solved by pyramid-based multi-resolution methods; however, pyramid fusion loses some edge texture details. In order to preserve the detailed information of the image, a pyramid fusion based on homomorphic filter detail enhancement is proposed. Many detail enhancement algorithms lose some low-frequency signals when enhancing high-frequency details, while homomorphic filtering [31] can preserve low-frequency signals while enhancing details; therefore, we use homomorphic filtering for detail enhancement. The principle of the method is to decompose the weight map with a Gaussian pyramid and the input image with a Laplacian pyramid. The Laplacian pyramid decomposition splits the input image into a base layer and detail layers: the highest layer is the base layer, and the other layers are the detail layers. We enhance each detail layer with homomorphic filtering.
For the detailed calculation process of homomorphic filtering, see Algorithm 2 and Equations (14)-(16): where floor(·) means rounding towards negative infinity; r and c are the height and width of the input image, respectively; min(·) is the minimum value function; I^l_n(i, j) is the pixel value of the lth layer of the nth image in the ith row and jth column; upsample(·) is an up-sampling operation; L{·}^(l) represents the lth layer of the Laplacian pyramid; and homomorphic(·) represents the homomorphic filtering operation. After the enhanced Laplacian pyramid is obtained, it is fused and reconstructed with the Gaussian pyramid of the weight map to obtain the final fused image. The equations are as follows: where G{·}^(l) represents the lth layer of the Gaussian pyramid, and F(i, j) is the pixel value of the fused image in the ith row and jth column, that is, the final output image.
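The homomorphic enhancement of one detail layer follows the standard log / FFT / emphasis-filter / inverse-FFT / exp chain. The sketch below is a minimal illustration of that chain; the Gaussian high-frequency-emphasis filter and the parameter values (gamma_l, gamma_h, d0) are our illustrative assumptions, not the paper's exact Algorithm 2:

```python
import numpy as np

def homomorphic(layer, gamma_l=0.5, gamma_h=2.0, d0=10.0):
    """Homomorphic enhancement of one Laplacian detail layer:
    log -> FFT -> high-frequency-emphasis filter -> inverse FFT -> exp.

    The emphasis filter boosts high frequencies towards gamma_h while keeping
    low frequencies at gamma_l > 0, so the low-frequency signal survives the
    enhancement.  Filter shape and parameters are illustrative assumptions.
    """
    r, c = layer.shape
    offset = layer.min()                 # detail layers can be negative
    z = np.log1p(layer - offset)         # log(1 + x) keeps the argument >= 0
    Z = np.fft.fftshift(np.fft.fft2(z))
    # Gaussian high-frequency-emphasis filter H(u, v).
    u = np.arange(r) - r / 2.0
    v = np.arange(c) - c / 2.0
    d2 = u[:, None] ** 2 + v[None, :] ** 2
    Hf = (gamma_h - gamma_l) * (1.0 - np.exp(-d2 / (2.0 * d0 ** 2))) + gamma_l
    z_f = np.real(np.fft.ifft2(np.fft.ifftshift(Hf * Z)))
    return np.expm1(z_f) + offset
```

Because gamma_l is strictly positive, the DC and low-frequency components are attenuated rather than removed, which is exactly the property the text relies on for preserving the base signal while sharpening details.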

Proposed Method
In this paper, a detailed enhancement multi-exposure image fusion algorithm based on homomorphic filtering is proposed. When the average brightness of the image is high, there are more details in the darker areas, and thus a larger weight should be assigned to the low-brightness areas. Similarly, when it is low, a larger weight should be assigned to the place with high brightness. To better preserve image details, this paper proposes an algorithm for calculating exposure weights based on threshold segmentation and an adaptively adjustable Gaussian curve.
Since the multi-resolution fusion algorithm will lose the detailed information of the image, to solve this problem, we propose a detail enhancement algorithm based on homomorphic filtering. The flow of our algorithm is as follows. First, the exposure weight and local contrast weight are calculated to construct an initial weight map, then a fastguided filter is used to denoise the weight map, and finally, the improved Laplacian pyramid in this paper is performed for multi-resolution fusion. The specific calculation process will be explained in the subsequent subsections, and the schematic diagram of the proposed method is shown in Figure 1. Algorithm 1 shows the detailed calculation process.   (3) log L{I n } (l) + 1

3: The Fourier transformL f f t {I n } (l)
Electronics 2022, 11, x FOR PEER REVIEW 5 produce high-quality images with comfortable visual perception and clear edge det in this paper, a detail enhancement multi-exposure image fusion algorithm based on momorphic filtering is proposed based on the Mertens method. In our method, threshold segmentation and adaptively adjustable Gaussian curve first used to assign more appropriate weights to well-exposed regions, thereby, pres ing more detailed information. Second, the detail layer of the Laplacian pyramid is hanced by homomorphic filtering, which is used to enhance the edge information finally to generate an image with realistic colors and rich edge details.

Proposed Method
In this paper, a detailed enhancement multi-exposure image fusion algorithm b on homomorphic filtering is proposed. When the average brightness of the image is h there are more details in the darker areas, and thus a larger weight should be assigne the low-brightness areas. Similarly, when it is low, a larger weight should be assigne the place with high brightness. To better preserve image details, this paper propose algorithm for calculating exposure weights based on threshold segmentation and adaptively adjustable Gaussian curve.
Since the multi-resolution fusion algorithm will lose the detailed information of image, to solve this problem, we propose a detail enhancement algorithm based on momorphic filtering. The flow of our algorithm is as follows. First, the exposure we and local contrast weight are calculated to construct an initial weight map, then a guided filter is used to denoise the weight map, and finally, the improved Laplacian amid in this paper is performed for multi-resolution fusion. The specific calculation cess will be explained in the subsequent subsections, and the schematic diagram of proposed method is shown in Figure 1. Algorithm 1 shows the detailed calculation cess.  produce high-quality images with comfortable visual perception and clear edge details, in this paper, a detail enhancement multi-exposure image fusion algorithm based on homomorphic filtering is proposed based on the Mertens method. In our method, threshold segmentation and adaptively adjustable Gaussian curve are first used to assign more appropriate weights to well-exposed regions, thereby, preserving more detailed information. Second, the detail layer of the Laplacian pyramid is enhanced by homomorphic filtering, which is used to enhance the edge information and finally to generate an image with realistic colors and rich edge details.

Experimental Results and Analysis
In this section, to examine the performance of the proposed fusion method, seventeen different LDR multi-exposure image sequences [24] covering different scenes were selected, as listed in Table 2. We tested our method on these seventeen natural scenes with different exposure levels and compared it with seven popular existing algorithms [16,21,22,24,25,27,28]. Four representative sets of input images were selected for presentation in Figure 4, and Figures 5-8 show the results of fusing these four sets with the different methods. All experiments were run in MATLAB R2019a on a PC with an Intel i5-6200U @ 2.40 GHz processor and 4.00 GB RAM.

Subjective Analysis
Figures 5-8 show the overall results and partial magnified images of the four experiments. The method of Mertens [16] has adequate color vibrancy; however, there is a certain loss of details, as shown in Figures 7a and 8a. The method of Shen [21] has rich texture details; however, there are artifacts at the junction of light and dark and serious color distortion of the entire image due to excessive detail enhancement. For example, there are obvious black shadows at the junction of light and dark at the entrance of the cave in Figure 5b, and the color of the lamp is severely distorted in Figure 6b.
The method of Li [22] uses a weighted structure tensor to extract details for detail enhancement; however, the brightness and clarity are poor in some places; for instance, the branches and leaves in Figure 7c are missing. In Ma's method [24], based on structural block decomposition, each image block is decomposed into signal intensity, signal structure, and average intensity; the results exhibit good global contrast but are prone to over-sharpening, resulting in local color distortion, as shown in Figures 5d, 6d and 7d.
Hayat [25] used the dense SIFT descriptor to calculate the contrast and the smoothing weight of the guided filter. The generated image maintains a good global contrast; however, it is easy to lose details. As shown in Figure 6e, the exposure of the desk lamp is too high, and in Figure 8e, the texture details of the cloud are lost and the clarity is poor.
The method of Qi [27] performs well in image clarity and color saturation but is prone to noise and halos at the junction between light and dark. For example, there are orange noise spots at the entrance of the cave in Figure 5f, there are obvious noise spots on the surface of the desk lamp in Figure 6f, and the leaves have unnatural halos in Figure 7f. The method of Huang [28] has moderate brightness and can retain certain details; however, it still loses detail in areas where the exposure difference is too large. For example, part of the leaves and branches in Figure 7g is missing.
The algorithm proposed in this paper shows advantages in all aspects. As shown in Figure 5h, the light-dark transition area of the cave entrance is rich in details, and the color is more realistic. The edge and texture of the lamp in Figure 6h can be seen, and the leaves and branches are visible in Figure 7h. The detailed texture of the cloud is richer without distortion in Figure 8h. To summarize, compared with the other seven methods, our method can preserve more details and edge information and exhibits a comfortable visual effect.

Objective Evaluation
An evaluation metric can objectively reflect the comprehensive performance of an algorithm. We used two objective evaluation indicators. Detail-preserving assessment (DPA) [39] evaluates detail preservation, and Q AB/F [40] uses gradients to measure the edge information transferred between the source and fused images. The DPA and Q AB/F values of seventeen different groups of fused images were obtained for the eight algorithms, as shown in Table 3, where bold entries mark the maximum DPA and Q AB/F values across the algorithms.
The higher the DPA score, the higher the detail retention rate. DPA is defined as follows:

DPA = 1 − (1 / (r × c)) Σ_{x=1}^{r} Σ_{y=1}^{c} | E_R*(x, y) − max_A E_A(x, y) |,

where E_R*(x, y) is the exposure weight of the resulting image in row x and column y, max_A E_A(x, y) is the maximum exposure weight over the input images at that position, and the image size is r × c.
The DPA score is calculated by comparing the resulting image with the input images; thus, when the resulting image has more detail than all the input images, the numerator in the above equation grows and the DPA value becomes low. Table 3 shows the numerical results of the method in this paper and the seven other popular methods. Among the seventeen comparison groups, the algorithm in this paper achieves the maximum DPA value in ten; in the remaining seven it ranks second. Thus, the detail retention ability of the algorithm in this paper is excellent.
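One plausible reading of this behavior (an assumption on our part, since the exact formula of [39] is not reproduced here) is a normalized pixelwise comparison between the result's exposure weight map and the pixelwise maximum of the inputs' weight maps:

```python
import numpy as np

def dpa(result_weight, input_weights):
    """Hypothetical DPA-style score.

    Penalizes pixels where the result's exposure weight deviates from
    the best (maximum) input weight at that position.  This is an
    illustrative reading of the text, not the formula of [39].
    Weights are assumed to lie in [0, 1].
    """
    best = np.max(input_weights, axis=0)   # max_A E_A(x, y)
    r, c = result_weight.shape
    return 1.0 - np.sum(np.abs(result_weight - best)) / (r * c)
```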
The larger the Q AB/F value, the better the ability to maintain edge information. As shown in Table 3, among the seventeen sets of comparison results, the Q AB/F value of the algorithm in this paper is the largest in twelve groups, and the remaining five groups are also competitive; in general, the proposed method has better edge preservation performance.
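For intuition, a simplified gradient-magnitude stand-in for Q AB/F can be sketched as follows; the actual Xydeas-Petrović metric [40] additionally compares gradient orientation and applies sigmoid-shaped preservation weighting, both of which this sketch omits:

```python
import numpy as np

def grad_mag(img):
    """Gradient magnitude from simple forward differences."""
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    gx[:, 1:] = np.diff(img, axis=1)   # horizontal differences
    gy[1:, :] = np.diff(img, axis=0)   # vertical differences
    return np.hypot(gx, gy)

def edge_preservation(sources, fused, eps=1e-12):
    """Simplified, gradient-magnitude-only stand-in for Q AB/F.

    At each pixel the fused gradient is compared with the strongest
    source gradient; values near 1 mean edges were carried over into
    the fused image.
    """
    gf = grad_mag(fused)
    gs = np.max([grad_mag(s) for s in sources], axis=0)
    ratio = np.minimum(gf, gs) / (np.maximum(gf, gs) + eps)
    return float(np.mean(ratio))
```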
As shown in Figure 9, in the mean test results, the proposed method ranks second in DPA and first in Q AB/F. The first place in mean DPA is the method proposed by Shen et al. [21]; however, the Q AB/F mean of their method ranks last, indicating that its ability to preserve edge information is comparatively weak. According to the line graph of the average DPA and Q AB/F values in Figure 9, the method proposed in this paper ranks first overall. In general, compared with the tested methods, the proposed image fusion method better preserves the edge information of the source images and retains most of the details of the image.

The Comparative Experiment of Adaptive Exposure Weight Calculation
We conducted experiments on 10 images with and without our adaptive exposure weighting algorithm to show its effectiveness. We replaced only our exposure weighting algorithm with the Mertens exposure weighting calculation, leaving the other steps unchanged. Figure 10 shows the Q AB/F comparison of the proposed and alternative exposure weighting algorithms. The Q AB/F value with the adaptive exposure weight calculation is higher than that of the alternative; that is, our adaptive exposure weight algorithm better maintains edge information.

Conclusions
In this paper, we proposed a multi-exposure image fusion method for detail enhancement based on homomorphic filtering. We observed that high-exposure images preserve more detail in the dark regions of a scene and, similarly, low-exposure images preserve more detail in the bright regions. Therefore, we proposed an exposure-weighting algorithm based on threshold segmentation and an adaptively adjustable Gaussian curve. The initial weight map was constructed using the exposure weight and local contrast weight, and then denoised and refined by fast-guided filtering to obtain a comprehensive weight map.
Finally, each input image was decomposed into the improved Laplacian pyramid, the comprehensive weights were decomposed into a Gaussian pyramid, and multi-resolution fusion of the pyramids produced the result. We evaluated seventeen sets of static natural image sequences with different exposure levels and analyzed the algorithms from both subjective and objective aspects. The experimental results show that the proposed method better preserves the edge details of the source images and generates images with a uniform illumination distribution. At present, the algorithm is only applicable to static scenes and cannot solve the ghosting problem of dynamic image fusion. In the future, we plan to study ghost removal in dynamic scenes to enhance the practicability of the algorithm.