Improved Bilateral Filtering for a Gaussian Pyramid Structure-Based Image Enhancement Algorithm

Abstract: To address the problem of unclear images affected by occlusion from fog, we propose an improved Retinex image enhancement algorithm based on the Gaussian pyramid transformation. Our algorithm features bilateral filtering as a replacement for the Gaussian function used in the original Retinex algorithm. Operation of the technique is as follows. To begin, we deduced the mathematical model for an improved bilateral filtering function based on the spatial domain kernel function and the pixel difference parameter. The input RGB image was subsequently converted into the Hue Saturation Intensity (HSI) color space, where the reflection component of the intensity channel was extracted to obtain an image whose edges are retained and unaffected by changes in brightness. Following reconversion to the RGB color space, color images of this reflection component were obtained at different resolutions using Gaussian pyramid down-sampling. Each of these images was then processed using the improved Retinex algorithm to improve the contrast of the final image, which was reconstructed using the Laplace algorithm. Results from experiments show that the proposed algorithm can enhance image contrast effectively, and the color of the processed image is in line with what would be perceived by a human observer.


Introduction
With the development of intelligent monitoring technology, a large number of cameras have been deployed in intelligent cars, traffic monitoring, and military investigation. However, weather conditions have a great influence on the quality of video images. In particular, images captured in foggy weather may require enhancement before proceeding to subsequent tasks such as image monitoring [1][2][3]. The scattering of atmospheric particles in fog introduces defects such as uneven exposure, low contrast, and serious color distortion in images captured by a charge-coupled device (CCD) camera. Since it is not possible to engineer these devices to eliminate the influence of atmospheric scattering, owing to the physical limitations of CCD sensors, the development of algorithms that reduce the effect of fluctuations in incident light, and thereby improve the quality of images, is of great significance.
Retinex theory was originally developed by Edwin Land as a model for explaining human color perception [4][5][6]. Jobson et al. [7,8] subsequently extended this theory into a general-purpose image enhancement algorithm to address problems caused by changing the lighting and atmospheric conditions inherent to the process of acquiring images of Earth from space. The single-scale Retinex (SSR) algorithm provides the dynamic range compression, color constancy, and sharpening that are required to alleviate these problems. In essence, based on convolution with a Gaussian function with a specific scale factor, it operates by separating incident light components from reflected light components to improve the contrast in an image. As such, it is suitable for processing multispectral satellite imagery, and applicable in diverse areas such as aerospace exploration [9], aviation safety [10,11], medical radiography [12], underwater photography [13,14], forensic investigations [15], and general-purpose photography [16,17].
Several improvements have been made to the SSR algorithm since its initial development, in order to achieve a balance between dynamic range compression and tonal rendition. As the SSR's image enhancement effect depends on the scale factor of the Gaussian function, the multi-scale Retinex (MSR) algorithm [8] was developed to combine dynamic range compression, color consistency, and tonal rendition, producing images that compare favorably with human visual perception. Here, the input image is processed using three different scale factors (small, intermediate, and large), to ensure that the color fidelity and resolution of the image are retained. In the multi-scale Retinex with color restoration (MSRCR) algorithm, a supplementary color restoration process is included with the MSR algorithm to compensate for the color desaturation inherent in its operation. The resulting algorithm yields images with good color rendition even for severe gray-world violations. In Refs. [18] and [19], a different image enhancement framework based on the Retinex algorithm was proposed, in which the Gaussian filter (GSF) used in the traditional Retinex algorithm is replaced with a region covariance filter (RCF). In contrast to GSFs, for each pixel, the output of an RCF depends on the covariance between local image features. Therefore, an RCF is self-adapting to different pixels in an image, and it can thus estimate the nature of incident illumination more accurately than a GSF. Because of its ability to enhance contrast, eliminate noise, and enhance details in an image, the RCF-Retinex algorithm can be considered to have the best overall performance of the Retinex-based enhancement methods. Zhang et al. [20] proposed a mixed multi-Retinex method combined with fractional differentiation that can adjust its ambient illumination processing capacity. 
Hence, the dynamic range of an image captured in low light conditions can be modified to reduce noise caused by poor lighting and enhance the quality of nighttime images. A new technique, similar to conventional enhancement methods in which the illumination component of the processed image is manipulated, was introduced in [21]. The technique makes use of an augmented Lagrange multiplier based algorithm for optimization of simultaneous estimation of a smoothed illumination component and the reflectance component of the image.
While Retinex-based algorithms are able to enhance most images adequately, the output image can sometimes be optimized through visual analysis, depending on the user's experience. As previously stated, with traditional Retinex methods, the image enhancement effect depends on the scale factor of the Gaussian function, leading to some limitations. Hence, we propose an improved Retinex image enhancement algorithm based on Gaussian pyramid transformation with bilateral filtering to eliminate the impact of outliers in the histogram of the image and further improve its contrast, thus improving the overall visual performance of the Retinex algorithm.
The rest of the paper is structured as follows. The improved Retinex model is briefly described in Section 2. Details of the improved bilateral filtering function are discussed in Section 3. An overview of the Gaussian-Laplacian multi-scale pyramid algorithm is given in Section 4. Analysis of the results of experiments is given in Section 5, and conclusions are presented in Section 6.

Single-Scale Retinex Model
The SSR model is a color constant visual image enhancement method based on Land's theory of human visual perception. In this model, the color of an object is considered to be fixed, while the corresponding image obtained by the camera is composed of a low-frequency incident light component and a high-frequency component resulting from the reflections of light on the surface of the object, as shown in Figure 1. For a given image, the relationship between incident and reflected light components can be expressed as

S(x, y) = L(x, y) · R(x, y), (1)

where S(x, y), R(x, y), and L(x, y) denote the full color component, the reflected light component, and the incident light component of a given image, respectively. The principle of traditional Retinex image enhancement is to remove or reduce the impact of incident light components on an image by deconstructing it into separate incident and reflection components. The contrast of the reflection image is consequently enhanced, the texture information of the image is compressed within a certain range, and at the same time, the basic tone of the original image is retained. For ease of calculation, logarithms of both sides of (1) are taken, yielding the following:

ln(S(x, y)) = ln(L(x, y)) + ln(R(x, y)). (2)

The mathematical form of the reflection component of the image can be obtained by rearranging (2) as follows:

ln(R_i(x, y)) = ln(S_i(x, y)) − ln(L_i(x, y)), (3)

where S_i(x, y), R_i(x, y), and L_i(x, y) denote the color component, the reflection component, and the incident light component of the i-th color channel, respectively. Inspection of (3) shows that the reflection component of the image can be obtained by deducing the incident light component. Hence, the accuracy of incident light estimation directly affects the fog removal effect. Since L is low-frequency information, a Gaussian convolution function can be used to better estimate the incident light component from the image, i.e.,

L(x, y) = S(x, y) * G(x, y), (4)

where * represents the convolution operation and G(x, y) is a Gaussian function, defined as G(x, y) = (1 / (2πσ²)) exp(−(x² + y²) / (2σ²)), with σ the standard deviation of the Gaussian function, also known as the scale factor.
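Equations (1)-(4) can be sketched in a few lines; the following is an illustrative NumPy implementation (function names and the separable-kernel Gaussian blur are our own choices, not code from the paper):

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian convolution approximating S * G in Eq. (4)."""
    r = max(1, int(3 * sigma))                       # truncate kernel at 3 sigma
    x = np.arange(-r, r + 1, dtype=np.float64)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()                                     # normalized 1-D kernel
    pad = np.pad(img, r, mode="edge")
    tmp = np.apply_along_axis(lambda row: np.convolve(row, k, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda col: np.convolve(col, k, mode="valid"), 0, tmp)

def single_scale_retinex(s, sigma=30.0):
    """SSR: estimate L = S * G (Eq. (4)), then ln R = ln S - ln L (Eq. (3))."""
    s = np.asarray(s, dtype=np.float64) + 1.0        # offset avoids log(0)
    log_r = np.log(s) - np.log(gaussian_blur(s, sigma))
    lo, hi = log_r.min(), log_r.max()
    return (log_r - lo) / (hi - lo + 1e-12)          # reflectance stretched to [0, 1]
```

Applied per channel to an RGB image, this produces the reflectance estimate that the rest of the pipeline enhances.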


HSI Color Space
The HSI color space is an alternative to the RGB color space that better reflects human perception of color information. With this model, color is encoded using three different attributes: Hue, saturation, and intensity. Here, hue and saturation are related to human perception of color, while, in contrast, intensity reflects the apparent brightness of an image, determined by the lighting conditions. Intensity is calculated as follows:

I(x, y) = (R(x, y) + G(x, y) + B(x, y)) / 3, (5)

where R, G, and B are the red, green, and blue components of an RGB-encoded image, respectively.
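As a concrete illustration, the intensity channel on which the algorithm operates is simply the per-pixel channel mean (a small hypothetical helper, not from the paper):

```python
import numpy as np

def hsi_intensity(rgb):
    """Intensity channel of the HSI model: I(x, y) = (R + G + B) / 3."""
    rgb = np.asarray(rgb, dtype=np.float64)
    return (rgb[..., 0] + rgb[..., 1] + rgb[..., 2]) / 3.0
```

Hue and saturation are left untouched by the enhancement, so only this channel needs to be filtered before converting back to RGB.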


The Improved Retinex Algorithm
The dependence of the SSR's enhancement effect on the Gaussian scale factor, σ, can be illustrated as follows. When the value of σ is large, the color fidelity of the processed image is good. However, details in the image become difficult to identify. Conversely, when this value is reduced such that the details are highlighted after enhancement, the color fidelity of the processed image is poor. In particular, in areas where the local color changes significantly, there is a "halo" effect after SSR enhancement (Figure 2b).

To address the defects of the traditional Retinex algorithm, we propose the use of Gaussian pyramid transformation and an improved bilateral filtering function that increases edge retention and improves noise removal to replace the Gaussian filter function. The mathematical model for this improved bilateral filter function is deduced by improving the spatial kernel function and using a rotating window function to determine the pixel difference scale parameter.
The operation of our technique proceeds as follows. The color space of the image is transformed from RGB to HSI in order to separate brightness from color information. Improved bilateral filtering with Retinex enhancement is subsequently performed on the intensity component of the HSI image in order to obtain a reflection image whose edges are maintained and are unaffected by changes in brightness. This enhanced image is then returned to the RGB color space, following which a Gaussian-Laplacian pyramid is generated by successively applying a Gaussian filter to the input image and scaling down the resolution of the resulting image. Finally, sub-images in the pyramid are enhanced using the improved Retinex algorithm for multi-scale processing before Laplacian reconstruction is applied to complete image enhancement. A flowchart detailing the operation of this improved Retinex algorithm is shown in Figure 3.


Traditional Bilateral Filtering Functions
In a traditional Retinex algorithm, the central-surround filter function is typically a Gaussian function. Hence, once operation of the algorithm is complete, the edges of the image are blurred, and details in the image disappear. To address this, in our algorithm we have adopted a bilateral filter [22] as an "edge maintenance-noise reduction" filter, which is defined mathematically as

f̂(i, j) = Σ_{(m,n)∈Ω_{p,i,j}} w_d(m, n) w_s(m, n) f(m, n) / Σ_{(m,n)∈Ω_{p,i,j}} w_d(m, n) w_s(m, n), (6)

where

w_d(m, n) = exp(−((m − i)² + (n − j)²) / (2σ_d²)), (7)

w_s(m, n) = exp(−(f(m, n) − f(i, j))² / (2σ_r²)). (8)

Here, f(m, n) and f̂(i, j) are respectively the input and filtered images, w_d and w_s are kernel functions in the space and range domains, and σ_d and σ_r are the spatial distance and pixel difference scale parameters, respectively. In addition, Ω_{p,i,j} is the set of pixel points in the input image within a (2p + 1) × (2p + 1) window centered on (i, j), where p is the radius of the filter. With this algorithm, a larger filter radius corresponds to a larger filter interval; however, calculation is made correspondingly more complex.
From (6), (7), and (8), it can be noted that the efficiency of a bilateral filter is determined by σ_d and σ_r. Experimentation has revealed [23] that increasing the value of σ_d leads to a blurrier image after filtering. Similarly, increasing the value of σ_r increases the size of the 2p + 1 pixel interval and reduces the correlation between pixels. In typical situations, the values of these two parameters are manually adjusted according to the requirements of the particular application. Hence, in this paper, we propose an adaptive adjustment method for selecting the values of σ_d and σ_r.
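A direct, unoptimized sketch of the bilateral filter described by (6)-(8) is shown below; the function name and default parameter values are our own illustrative choices, not the paper's:

```python
import numpy as np

def bilateral_filter(img, p=2, sigma_d=1.0, sigma_r=25.0):
    """Naive bilateral filter over a (2p+1) x (2p+1) window."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    pad = np.pad(img, p, mode="edge")
    out = np.empty_like(img)
    ax = np.arange(-p, p + 1, dtype=np.float64)
    mm, nn = np.meshgrid(ax, ax, indexing="ij")
    w_d = np.exp(-(mm**2 + nn**2) / (2 * sigma_d**2))   # spatial kernel, cf. (7)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * p + 1, j:j + 2 * p + 1]
            w_s = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # range kernel, cf. (8)
            weights = w_d * w_s
            out[i, j] = (weights * patch).sum() / weights.sum()       # weighted mean, cf. (6)
    return out
```

Because w_s collapses toward zero across strong intensity edges, pixels on the far side of an edge contribute little, which is what preserves edges while smoothing noise.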

Improved Spatial Domain Kernel Function
To improve the efficiency of the w_d kernel function, we considered the influence of different pixels on noise smoothing. In the spatial domain, a (2p + 1) × (2p + 1) filter window is set with (i, j) defined as the central filter point. Inspection of (7) shows that pixels located closer to the central filter point have a greater influence on noise smoothing in the spatial domain. Hence, to address this, we defined our improved spatial domain kernel function with the spatial distance scale parameter set adaptively to σ_d = p/2, since 95% of the mass of a Gaussian function is concentrated in the [−2σ_d, 2σ_d] range, so that 2σ_d matches the filter radius p. Thus, in essence, σ_d is a spatial standard deviation; following experimentation, we chose p = 5.
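As a sanity check on this choice, the spatial kernel of (7) under the σ_d = p/2 reading of the scale parameter can be generated as follows (a small illustrative helper, not code from the paper):

```python
import numpy as np

def spatial_kernel(p=5):
    """Spatial-domain kernel of Eq. (7) with the adaptive scale sigma_d = p / 2."""
    sigma_d = p / 2.0
    ax = np.arange(-p, p + 1, dtype=np.float64)
    mm, nn = np.meshgrid(ax, ax, indexing="ij")
    return np.exp(-(mm**2 + nn**2) / (2 * sigma_d**2))
```

With p = 5 this yields an 11 × 11 kernel whose weight at the window boundary has fallen to exp(−2) ≈ 0.14 of the central weight, i.e., the window radius sits at the 2σ_d point.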

Improved Selection of Pixel Difference Scale Parameters
In the Ω_{p,i,j} point set, which is used as a sample for estimating the standard deviation, σ_n, of pixel intensities in an image, the gray values of the pixels are [a_(2p−1,2p−1), a_(2p−1,2p), ..., a_(2p+1,2p+1)]. However, unlike in the spatial domain, direct selection of this standard deviation as the scale parameter leads to an excessively large value for σ_r, as shown in [24]. Hence, a rotating window function for determining the value of σ_r is proposed as follows. A (2p + 1) × (p + 1) rectangular window centered on the (x, y) pixel is selected, with an offset angle defined as 2π/K. The standard deviation of the pixels in this rectangular window is calculated, following which the window is rotated by the offset angle. This calculation and rotation process is successively iterated for a complete revolution (i.e., K rotations are completed). Finally, the minimum pixel standard deviation computed, min(σ_n), is selected as σ_r, and the window in which this value was calculated is used in subsequent filtering. That is,

σ_r = min_{0 ≤ k < K} σ_n(2πk/K),

where σ_n(2πk/K) is the standard deviation of the pixel values f(x, y)_{2πk/K} in the window rotated by 2πk/K, and K = 8.
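The rotating-window selection can be sketched as below. This is a simplified approximation (our own, with the rotated rectangle sampled by rounding rotated integer offsets and clipping at the image border), intended only to show the min-over-orientations idea:

```python
import math
import numpy as np

def adaptive_sigma_r(img, x, y, p=5, K=8):
    """Minimum pixel standard deviation over K rotated (2p+1) x (p+1) windows."""
    img = np.asarray(img, dtype=np.float64)
    h, w = img.shape
    us = np.arange(-p, p + 1)   # long axis of the rectangle, through the centre
    vs = np.arange(0, p + 1)    # short axis
    best = math.inf
    for k in range(K):
        a = 2 * math.pi * k / K           # offset angle 2*pi*k / K
        ca, sa = math.cos(a), math.sin(a)
        vals = []
        for u in us:
            for v in vs:
                xi = int(round(x + u * ca - v * sa))
                yj = int(round(y + u * sa + v * ca))
                if 0 <= xi < h and 0 <= yj < w:
                    vals.append(img[xi, yj])
        if len(vals) > 1:
            best = min(best, float(np.std(vals)))
    return best
```

Picking the orientation with the smallest standard deviation biases the estimate toward the homogeneous side of an edge, which keeps σ_r from being inflated by edge pixels.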

Gaussian-Laplacian Multi-Scale Pyramid Algorithm
Gaussian-Laplacian pyramid [24] generation is a multi-scale image decomposition method with a sampling rate of 2, i.e., the resolution of the resulting image is halved in each successive decomposition. The resulting pyramid consists of layers of decomposed images decreasing in resolution from bottom to top, with the original image defined as the base level of the pyramid (level 0). In detail, beginning from level 0, decomposition consists of convolution of the input image with a Gaussian low-pass filter followed by a down-sampling operation in order to halve the horizontal and vertical resolution of the processed image. The filtered, down-sampled image is subsequently defined as the image at level 1 of the pyramid (F_1). This operation is repeated as many times as the number of levels required in the Gaussian pyramid. Mathematically, this process is defined as

F_l(i, j) = Σ_m Σ_n G(m, n) F_{l−1}(2i + m, 2j + n),

where G(x, y) is a low-pass filter whose support the sums run over, l is the level of the pyramid, N is the number of layers in the pyramid, 1 ≤ l ≤ N, i ≤ R_l, j ≤ C_l, and R_l and C_l represent the number of rows and columns in F_l, respectively; in our experiments, N = 3. As each image in the pyramid, F_l, is of a different resolution, iterative image generation and boundary closure may be affected by noise and dispersed sampling. To ensure the structural information in the highest resolution image is retained in subsequent pyramid levels, image enhancement is required. To do this, we employed the improved Retinex algorithm to recover details in images lost in processing. Enhanced pyramid images are subsequently termed F′_l.
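The decomposition can be sketched as follows, using the classic 5-tap low-pass kernel as a stand-in for G(x, y) (the paper does not specify its filter taps, so this choice is an assumption):

```python
import numpy as np

K5 = np.array([1.0, 4.0, 6.0, 4.0, 1.0]) / 16.0  # classic 5-tap low-pass kernel

def smooth(img):
    """Separable low-pass filtering with edge padding."""
    pad = np.pad(img, 2, mode="edge")
    tmp = np.apply_along_axis(lambda r: np.convolve(r, K5, mode="valid"), 1, pad)
    return np.apply_along_axis(lambda c: np.convolve(c, K5, mode="valid"), 0, tmp)

def gaussian_pyramid(img, levels=3):
    """F_0 is the input; each further level is low-pass filtered, then halved."""
    pyr = [np.asarray(img, dtype=np.float64)]
    for _ in range(levels):
        pyr.append(smooth(pyr[-1])[::2, ::2])  # halve rows and columns
    return pyr
```

With levels = 3 (as in the paper's experiments), a 16 × 16 input yields levels of size 16, 8, 4, and 2 on a side.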
To complete the image enhancement process, the modified Gaussian pyramid is reconstructed to yield a single image with the same resolution as the original input. Reconstruction is completed by generating a Laplacian pyramid from the image-enhanced Gaussian pyramid, as follows:

(1) The topmost image in the enhanced Gaussian pyramid, F′_{l+1}, is interpolated to obtain an image termed D_l, which has the same resolution as the image in the preceding layer (F′_l).

(2) D_l is subtracted from F′_l, with the difference stored in the Laplacian residual set. This residual is subsequently added back to D_l to yield the reconstructed image at level l, which is interpolated in turn for reconstruction of the image in the preceding layer.

(3) Computation of the Laplacian residuals and image interpolation continues iteratively until the reconstructed image is of the same resolution as the original input. Using the terminology described above, the process can be described as (12). A Gaussian-Laplacian multi-scale pyramid with intermediate results at different pyramid levels is presented in Figure 4. In our experiment, the number of sampling levels of the pyramid was 3.
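Steps (1)-(3) can be sketched as below; `upsample` is a hypothetical nearest-neighbour stand-in for the interpolation step, and without the per-level Retinex enhancement the round trip simply reproduces the base image:

```python
import numpy as np

def upsample(img, shape):
    """Nearest-neighbour stand-in for the interpolation step."""
    rows = np.arange(shape[0]) * img.shape[0] // shape[0]
    cols = np.arange(shape[1]) * img.shape[1] // shape[1]
    return img[np.ix_(rows, cols)]

def laplacian_reconstruct(pyr):
    """Rebuild full resolution from the coarsest level plus Laplacian residuals."""
    img = pyr[-1]
    for level in reversed(pyr[:-1]):
        d = upsample(img, level.shape)  # step (1): interpolate to the next layer's size
        residual = level - d            # step (2): Laplacian residual for this layer
        img = d + residual              # reconstructed layer (enhanced levels in the paper)
    return img
```

In the paper's pipeline, each `level` would be an enhanced image F′_l, so the residuals carry detail recovered by the improved Retinex algorithm back into the full-resolution result.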

Experiments and Results
To verify the effectiveness of our algorithm, we compared its operation in experiments with those of the original SSR, MSRCR, Non-local Retinex (NLR) [25], and Illumination Invariant Face Recognition (INF) [26,27] algorithms based on a database [28], evaluating their performance using objective and subjective measures of image perception. The proposed operator was implemented in MATLAB 2014b on a computer running the Windows 7 (64 bit) operating system.

Gaussian noise of standard deviation σ was added to these images, an example of which is shown in Figure 5, to degrade their quality such that the performance of the different filtering algorithms could be observed. We note that with the original bilateral filter, edges in the images were lost, as they had all been smoothed. In contrast, Gaussian noise is barely perceptible following operation of our algorithm, as shown in Figure 5(d). Moreover, the overall image quality following operation of our algorithm shows a greater improvement, as seen in Figure 5.
For additional verification, we compared the two filtering algorithms with respect to Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), Structural Similarity (SSIM) [30], Multi-scale Structural Similarity (MS-SSIM) [31], Visual Information Fidelity (VIF) [32], Full Reference Visual Information Fidelity (FR-VIF) [33], and Visual Signal-to-Noise Ratio (VSNR) [34], as shown in Table 1. MATLAB wrapper code for these measures is available in [35]. In this case, the objective measure scores are directly averaged, and we highlight the best result of the two algorithms in boldface. We note that the intensity of noise in the images following processing is higher with the traditional bilateral filtering algorithm than it is with our modified solution, based on a comparison of the values of the seven Image Quality Assessment (IQA) measures.

Subjective Analysis of Image Enhancement Algorithms
To verify the effectiveness of our improved image enhancement technique, we conducted experiments comparing its performance to those of the SSR, MSRCR, NLR, and INF algorithms based on a database [28] with 4322 images, a real foggy test dataset provided by Gac Research Institute. Partial results of these experiments are shown in Figure 6, with Column 1 depicting the original images, Column 2 images processed using the SSR algorithm, Column 3 the MSRCR algorithm, Column 4 the NLR algorithm, Column 5 the INF algorithm, and Column 6 our improved image enhancement technique.
There are degraded and dark regions in the original images in Figure 7, which are characteristic of uneven lighting, making them suitable for image enhancement. We note that with all five algorithms considered, most of the foreground details are preserved, and the color in the resulting images is enhanced following processing. However, with the SSR, MSRCR, and NLR algorithms, the "halo" effect can also be observed, as seen most obviously in Figure 7(a-3-f-3) and Figure 7(a-5-f-5). While details in the blurred areas of the original images can be enhanced using the other four algorithms, there is a deviation between the color in the input and output images. Hence, the color of the imaged object is not restored, and the visual representation of this object is thus distorted. In addition, the MSRCR and NLR algorithms were not effective in removing the fog from the foreground of the images, as shown in Figure 7(a-4-f-4).
In contrast, the texture and color contrast of images processed using our technique are improved, and enhancement is effective for images in different low light situations. In addition, following processing, the deviation of color between the input and output images is small. As color fidelity is better, the overall visual representation of the image is retained. Finally, details in the image are more clearly identified, as can be seen from Figure 7(a-6-f-6).
For a more quantitative representation of the effect of the different algorithms, histograms of the grayscale intensity of the images in Row (a) of Figure 6 are shown in Figure 8. From this, it can be seen that the distribution of pixel intensities is more uniform and the average grayscale intensity of the image is reduced. This is indicative of the fog removal effect obtained with our algorithm; fog is observed as a brighter area in an image, with a correspondingly large grayscale intensity. A reduction in the average grayscale intensity of an image thus suggests the removal of fog.

Objective Evaluation of Image Enhancement Algorithms
For objective evaluation of the different image enhancement algorithms, we calculated the Peak Signal to Noise Ratio (PSNR), Mean Squared Error (MSE), SSIM, MS-SSIM, VIF, Information Content Weighted Structural Similarity (IW-SSIM) [35], and VSNR of the images in the database, as shown in Table 2. In this case, the correlation scores are directly averaged, and we highlight the best result in boldface, as each of these parameters characterizes a different aspect of the image. Table 2 shows that images processed using our algorithm achieved the best values, suggesting that more high-frequency information is preserved using our technique, and details in the image are subsequently enhanced. The PSNR and SSIM values were maximized using our algorithm, indicating that the contrast of the images is improved with our technique, and details are more identifiable. The MS-SSIM and VIF values of the images were likewise maximized with our technique, indicating its improved reconstruction quality. Finally, the remaining measures also improved using our algorithm, demonstrating that it delivers significant and consistent image enhancement performance.
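For reference, the two simplest measures in Tables 1 and 2 can be computed as follows (standard textbook definitions, not the authors' wrapper code [35]):

```python
import numpy as np

def mse(a, b):
    """Mean squared error between two images."""
    d = a.astype(np.float64) - b.astype(np.float64)
    return float(np.mean(d ** 2))

def psnr(a, b, peak=255.0):
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10.0 * np.log10(peak ** 2 / m)
```

The structural measures (SSIM, MS-SSIM, IW-SSIM) and the information-theoretic ones (VIF, VSNR) require considerably more machinery and are best taken from the referenced implementations.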

Conclusions
In this paper, we proposed a new Retinex-based image enhancement algorithm to address the problem of image blurring caused by uneven illumination. The technique features an improved bilateral filtering function, addressing the problem of an over-enhanced luminance image and the associated loss of texture. In addition, we combine our modified Retinex model with Gaussian pyramid down-sampling for multi-scale processing, in order to eliminate blurring problems and increase the contrast of the final image. We compared the performance of our technique with those of the SSR, MSRCR, NLR, and INF algorithms in experiments, with the results highlighting its improved image enhancement. However, the combination of the improved bilateral filtering function and the Gaussian pyramid transformation increases the time complexity of the technique, a problem which will be addressed in future studies.