Article

Improved Bilateral Filtering for a Gaussian Pyramid Structure-Based Image Enhancement Algorithm

Chang Lin, Hai-feng Zhou and Wu Chen
1 Marine Engineering College and Key Laboratory of Fujian Province Marine and Ocean Engineering, Jimei University, Xiamen 361021, China
2 School of Mechanical and Electrical Engineering, Putian University, Putian 351100, China
3 Key Laboratory of Modern Precision Measurement and Laser Nondestructive Detection Colleges and Universities in Fujian Province, Putian 351100, China
* Author to whom correspondence should be addressed.
Algorithms 2019, 12(12), 258; https://doi.org/10.3390/a12120258
Submission received: 7 November 2019 / Revised: 24 November 2019 / Accepted: 26 November 2019 / Published: 1 December 2019

Abstract

To address the problem of unclear images affected by occlusion from fog, we propose an improved Retinex image enhancement algorithm based on the Gaussian pyramid transformation. Our algorithm features bilateral filtering as a replacement for the Gaussian function used in the original Retinex algorithm. The technique operates as follows. First, we derive the mathematical model of an improved bilateral filtering function based on the spatial domain kernel function and the pixel difference scale parameter. The input RGB image is then converted into the Hue Saturation Intensity (HSI) color space, where the reflection component of the intensity channel is extracted to obtain an image whose edges are retained and unaffected by changes in brightness. Following reconversion to the RGB color space, color images of this reflection component are obtained at different resolutions using Gaussian pyramid down-sampling. Each of these images is then processed using the improved Retinex algorithm to improve the contrast of the final image, which is reconstructed using Laplacian pyramid reconstruction. Experimental results show that the proposed algorithm enhances image contrast effectively, and the color of the processed image is in line with what would be perceived by a human observer.

1. Introduction

With the development of intelligent monitoring technology, a large number of cameras have been deployed in intelligent cars, traffic monitoring, and military reconnaissance. However, weather conditions have a great influence on the quality of video images. In particular, images captured in foggy weather often require enhancement before subsequent tasks such as image monitoring can proceed [1,2,3]. The scattering of atmospheric particles in fog introduces defects such as uneven exposure, low contrast, and serious color distortion in images captured by a charge-coupled device (CCD) camera. Since the physical limitations of CCD sensors make it impossible to engineer these devices to eliminate the influence of atmospheric scattering, the development of algorithms that reduce the effect of fluctuations in incident light, and thereby improve image quality, is of great significance.
Retinex theory was originally developed by Edwin Land as a model for explaining human color perception [4,5,6]. Jobson et al. [7,8] subsequently extended this theory into a general-purpose image enhancement algorithm to address problems caused by the changing lighting and atmospheric conditions inherent to acquiring images of Earth from space. The single-scale Retinex (SSR) algorithm provides the dynamic range compression, color constancy, and sharpening required to alleviate these problems. In essence, it operates by separating incident light components from reflected light components, based on convolution with a Gaussian function with a specific scale factor, to improve the contrast of an image. As such, it is suitable for processing multispectral satellite imagery and applicable in diverse areas such as aerospace exploration [9], aviation safety [10,11], medical radiography [12], underwater photography [13,14], forensic investigations [15], and general-purpose photography [16,17].
Several improvements have been made to the SSR algorithm since its initial development in order to achieve a balance between dynamic range compression and tonal rendition. As the SSR's image enhancement effect depends on the scale factor of the Gaussian function, the multi-scale Retinex (MSR) algorithm [8] was developed to combine dynamic range compression, color consistency, and tonal rendition, producing images that compare favorably with human visual perception. Here, the input image is processed using three different scale factors (small, intermediate, and large) to ensure that the color fidelity and resolution of the image are retained. In the multi-scale Retinex with color restoration (MSRCR) algorithm, a supplementary color restoration process is added to the MSR algorithm to compensate for the color desaturation inherent in its operation. The resulting algorithm yields images with good color rendition even for severe gray-world violations. In Refs. [18] and [19], a different image enhancement framework based on the Retinex algorithm was proposed, in which the Gaussian filter (GSF) used in the traditional Retinex algorithm is replaced with a region covariance filter (RCF). In contrast to a GSF, the output of an RCF at each pixel depends on the covariance between local image features. An RCF therefore adapts to different pixels in an image and can estimate the incident illumination more accurately than a GSF. Because of its ability to enhance contrast, eliminate noise, and enhance details in an image, the RCF–Retinex algorithm can be considered to have the best overall performance among the Retinex-based enhancement methods. Zhang et al. [20] proposed a mixed multi-scale Retinex method combined with fractional differentiation that can adjust its ambient illumination processing capacity. Hence, the dynamic range of an image captured in low light conditions can be modified to reduce noise caused by poor lighting and enhance the quality of nighttime images. A technique similar to conventional enhancement methods, in which the illumination component of the processed image is manipulated, was introduced in [21]. That technique uses an augmented-Lagrange-multiplier-based algorithm to simultaneously estimate a smoothed illumination component and the reflectance component of the image.
While Retinex-based algorithms are able to enhance most images adequately, the output image often still needs to be tuned through visual analysis, which depends on the user's experience. As previously stated, with traditional Retinex methods the image enhancement effect depends on the scale factor of the Gaussian function, which leads to some limitations. Hence, we propose an improved Retinex image enhancement algorithm based on the Gaussian pyramid transformation with bilateral filtering to eliminate the impact of outliers in the histogram of the image and further improve its contrast, thus improving the overall visual performance of the Retinex algorithm.
The rest of the paper is structured as follows. The improved Retinex model is briefly described in Section 2. Details of the improved bilateral filtering function are discussed in Section 3. An overview of the Gaussian–Laplacian multi-scale pyramid algorithm is given in Section 4. Analysis of the results of experiments is given in Section 5, and conclusions are presented in Section 6.

2. Improved Retinex Model

2.1. Single-Scale Retinex Model

The SSR model is a color constant visual image enhancement method based on Land’s theory of human visual perception. In this model, the color of an object is considered to be fixed, while the corresponding image obtained by the camera is composed of a low-frequency incident light component and a high-frequency component resulting from the reflections of light on the surface of the object, as shown in Figure 1.
For a given image, the relationship between incident and reflected light components can be expressed as
$$ S(x, y) = L(x, y)\, R(x, y), \qquad (1) $$
where $S(x, y)$, $R(x, y)$, and $L(x, y)$ denote the full color component, the reflected light component, and the incident light component of a given image, respectively.
The principle of traditional Retinex image enhancement is to remove or reduce the impact of incident light components on an image by deconstructing this into separate incident and reflection components. The contrast of the reflection image is consequently enhanced, the texture information of the image is compressed within a certain range, and at the same time, the basic tone of the original image is retained. For ease of calculation, logarithms of both sides of (1) are taken, yielding the following:
$$ \ln\big(S(x, y)\big) = \ln\big(L(x, y)\big) + \ln\big(R(x, y)\big). \qquad (2) $$
The mathematical form of the reflection component of the image can be obtained by rearranging (2) as follows:
$$ \ln\big(R_i(x, y)\big) = \ln\big(S_i(x, y)\big) - \ln\big(L_i(x, y)\big), \qquad (3) $$
where $S_i(x, y)$, $R_i(x, y)$, and $L_i(x, y)$ denote the color component, the reflection component, and the incident light component of the $i$-th color channel, respectively.
Inspection of (3) shows that the reflection component of the image can be obtained by estimating the incident light component. Hence, the accuracy of incident light estimation directly affects the fog removal effect. Since $L$ is low-frequency information, a Gaussian convolution can be used to estimate the incident light component from the image, i.e.,
$$ L(x, y) = G(x, y) \ast S(x, y), \qquad (4) $$
where $\ast$ represents the convolution operation, $G(x, y)$ is a Gaussian function defined as $G(x, y) = \frac{1}{2\pi\sigma^2}\exp\!\left(-\frac{x^2 + y^2}{2\sigma^2}\right)$, and $\sigma$ is the standard deviation of the Gaussian function, also known as the scale factor.
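To make the decomposition of Eqs. (1)–(4) concrete, the following Python/NumPy sketch applies single-scale Retinex to one image channel. It is illustrative only: the paper's own implementation was in MATLAB, the function name ssr is ours, and the scale factor sigma = 80 and the use of OpenCV's GaussianBlur as the surround function are assumptions, not the authors' exact settings.

```python
import cv2
import numpy as np

def ssr(channel: np.ndarray, sigma: float = 80.0) -> np.ndarray:
    """Single-scale Retinex on one channel: ln R = ln S - ln(G * S)."""
    s = channel.astype(np.float64) + 1.0                # offset avoids log(0)
    illumination = cv2.GaussianBlur(s, (0, 0), sigma)   # estimate L = G * S, Eq. (4)
    log_r = np.log(s) - np.log(illumination)            # reflectance, Eq. (3)
    # Stretch the log-reflectance back to a displayable 8-bit range.
    return cv2.normalize(log_r, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)

# Example usage: apply the operator to each channel of a foggy image.
# img = cv2.imread("foggy.png")
# enhanced = cv2.merge([ssr(c) for c in cv2.split(img)])
```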

2.2. HSI Color Space

The HSI color space is an alternative to the RGB color space that better reflects human perception of color information. With this model, color is encoded using three different attributes: Hue, saturation, and intensity. Here, hue and saturation are related to human perception of color, while, in contrast, intensity reflects the apparent brightness of an image, determined by the lighting conditions. Intensity is calculated as follows:
$$ I(x, y) = \frac{R(x, y) + G(x, y) + B(x, y)}{3}, \qquad (5) $$
where $R$, $G$, and $B$ are the red, green, and blue components of an RGB-encoded image, respectively.
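A minimal sketch of how the intensity channel of Eq. (5) can be extracted from, and re-imposed on, an RGB image, assuming a float array scaled to [0, 1]. The ratio-based recombination in replace_intensity is one simple way to map an enhanced I back to RGB and is our assumption, not necessarily the authors' exact HSI round trip.

```python
import numpy as np

def intensity(rgb: np.ndarray) -> np.ndarray:
    """HSI intensity channel: I(x, y) = (R + G + B) / 3, Eq. (5)."""
    return rgb.mean(axis=2)

def replace_intensity(rgb: np.ndarray, new_i: np.ndarray) -> np.ndarray:
    """Scale each pixel so its channel mean matches the enhanced intensity."""
    old_i = intensity(rgb)
    ratio = new_i / np.maximum(old_i, 1e-6)
    return np.clip(rgb * ratio[..., None], 0.0, 1.0)
```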

2.3. The Improved Retinex Algorithm

The dependence of the SSR’s enhancement effect on the Gaussian scale factor, σ , can be illustrated as follows. When the value of σ is large, the color fidelity of the processed image is good. However, details in the image become difficult to identify. Conversely, when this value is reduced such that the details are highlighted after enhancement, the color fidelity of the processed image is poor. In particular, in areas where the local color changes significantly, there is a “halo” effect after SSR enhancement (Figure 2b).
To address the defects of the traditional Retinex algorithm, we propose replacing the Gaussian filter function with an improved bilateral filtering function, which better retains edges and removes noise, combined with a Gaussian pyramid transformation. The mathematical model of this improved bilateral filter is derived by improving the spatial domain kernel function and using a rotating window function to determine the pixel difference scale parameter.
The operation of our technique proceeds as follows. The color space of the image is transformed from RGB to HSI in order to separate brightness from color information. Improved bilateral filtering of Retinex enhancement is subsequently performed on the intensity component of the HSI image in order to obtain a reflection image whose edges are maintained and are unaffected by changes in brightness. This enhanced image is then returned to the RGB color space, following which a Gaussian–Laplacian pyramid is generated by successively applying a Gaussian filter to an input image and scaling down the resolution of the resulting image. Finally, sub-images in the pyramid are enhanced using the improved Retinex algorithm for multi-scale processing before Laplacian reconstruction is applied to complete image enhancement. A flowchart detailing the operation of this improved Retinex algorithm is shown in Figure 3.
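The flow of Figure 3 can be summarised by the self-contained sketch below. It is a rough stand-in rather than the authors' code: cv2.bilateralFilter substitutes for the improved bilateral filter of Section 3, a simple per-level contrast stretch substitutes for the per-level improved Retinex, and all parameter values (filter size, sigmas, number of levels) are illustrative assumptions.

```python
import cv2
import numpy as np

def enhance(bgr: np.ndarray, levels: int = 3) -> np.ndarray:
    img = bgr.astype(np.float32) / 255.0
    # 1. HSI intensity channel (Eq. 5): brightness separated from colour.
    i = img.mean(axis=2)
    # 2. Retinex on I with an edge-preserving surround (stand-in filter).
    surround = cv2.bilateralFilter(i, d=9, sigmaColor=0.1, sigmaSpace=5)
    log_r = np.log(i + 1e-3) - np.log(surround + 1e-3)
    i_enh = (log_r - log_r.min()) / (log_r.max() - log_r.min() + 1e-6)
    # 3. Back to RGB by rescaling each pixel to the enhanced intensity.
    img = np.clip(img * (i_enh / np.maximum(i, 1e-3))[..., None], 0, 1)
    # 4. Multi-scale stage: Gaussian pyramid, per-level enhancement
    #    (contrast stretch as a stand-in), Laplacian-style reconstruction.
    pyr = [img]
    for _ in range(levels):
        pyr.append(cv2.pyrDown(pyr[-1]))
    stretch = lambda x: (x - x.min()) / (x.max() - x.min() + 1e-6)
    out = stretch(pyr[-1])
    for l in range(levels - 1, -1, -1):
        size = (pyr[l].shape[1], pyr[l].shape[0])
        up = cv2.pyrUp(out, dstsize=size)
        out = np.clip(pyr[l] - up + stretch(pyr[l]), 0, 1)   # residual + enhanced level
    return (out * 255).astype(np.uint8)
```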

3. Improved Bilateral Filtering Function

3.1. Traditional Bilateral Filtering Functions

In a traditional Retinex algorithm, the central-surround filter function is typically a Gaussian function. Hence, once operation of the algorithm is complete, the edges of the image are blurred, and details in the image disappear. To address this, in our algorithm we have adopted a bilateral filter [22] as an “edge maintenance–noise reduction” filter, which is defined mathematically as
$$ \hat{f}(i, j) = \frac{\sum_{(m, n) \in \Omega_{p, i, j}} w_d(m, n)\, w_r(m, n)\, f(m, n)}{\sum_{(m, n) \in \Omega_{p, i, j}} w_d(m, n)\, w_r(m, n)}, \qquad (6) $$
where
$$ w_d(m, n) = \exp\!\left(-\frac{(i - m)^2 + (j - n)^2}{2\sigma_d^2}\right), \qquad (7) $$
$$ w_r(m, n) = \exp\!\left(-\frac{\big(f(i, j) - f(m, n)\big)^2}{2\sigma_r^2}\right). \qquad (8) $$
Here, $f(m, n)$ and $\hat{f}(i, j)$ are the input and filtered images, respectively; $w_d$ and $w_r$ are the kernel functions in the spatial and range domains; and $\sigma_d$ and $\sigma_r$ are the spatial distance and pixel difference scale parameters, respectively. In addition, $\Omega_{p, i, j}$ is the set of pixel points of the input image within the $(2p + 1) \times (2p + 1)$ window centered on $(i, j)$, where $p$ is the radius of the filter. A larger filter radius corresponds to a larger filtering neighborhood but makes the calculation correspondingly more complex.
From (6), (7), and (8), it can be noted that the performance of a bilateral filter is determined by $\sigma_d$ and $\sigma_r$. Experimentation has revealed [23] that increasing the value of $\sigma_d$ leads to a blurrier image after filtering. Similarly, increasing the value of $\sigma_r$ enlarges the effective $(2p + 1)$ pixel interval and reduces the correlation between pixels. In typical situations, the values of these two parameters are adjusted manually according to the requirements of the particular application. Hence, in this paper, we propose an adaptive adjustment method for selecting the values of $\sigma_d$ and $\sigma_r$.
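For reference, a direct (unvectorised) NumPy implementation of the traditional bilateral filter of Eqs. (6)–(8) for a single-channel image is sketched below; the default values of p, sigma_d, and sigma_r are placeholders for illustration, and the double loop is kept deliberately simple for clarity rather than speed.

```python
import numpy as np

def bilateral(f: np.ndarray, p: int = 5, sigma_d: float = 2.5,
              sigma_r: float = 0.1) -> np.ndarray:
    """Classic bilateral filter, Eqs. (6)-(8), on a single-channel image."""
    f = f.astype(np.float64)
    h, w = f.shape
    padded = np.pad(f, p, mode="reflect")
    # Precompute the spatial kernel w_d over the (2p+1) x (2p+1) window.
    yy, xx = np.mgrid[-p:p + 1, -p:p + 1]
    w_d = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma_d ** 2))            # Eq. (7)
    out = np.empty_like(f)
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 2 * p + 1, j:j + 2 * p + 1]
            w_r = np.exp(-(patch - f[i, j]) ** 2 / (2.0 * sigma_r ** 2))  # Eq. (8)
            weights = w_d * w_r
            out[i, j] = np.sum(weights * patch) / np.sum(weights)         # Eq. (6)
    return out
```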

3.2. Improved Spatial Domain Kernel Function

To improve the effectiveness of the $w_d$ kernel function, we considered the influence of different pixels on noise smoothing. In the spatial domain, a filter window of size $2p + 1$ is set with $(i, j)$ defined as the central filter point. Inspection of (7) shows that pixels located closer to the central filter point have a greater influence on noise smoothing in the spatial domain. Hence, to exploit this, we defined our improved spatial domain kernel function as follows:
$$ w_d(m, n) = \left(1 - \frac{(i - m)^2 + (j - n)^2}{2p + 1}\right)\exp\!\left(-\frac{(i - m)^2 + (j - n)^2}{2\sigma_d^2}\right), \qquad (9) $$
where the spatial distance scale parameter is set to $\sigma_d = p/2$, since 95% of the mass of the Gaussian function is concentrated in the $[-2\sigma_d, 2\sigma_d]$ range. Thus, in essence, $\sigma_d$ is a spatial standard deviation; based on experiments, we chose $p = 5$.
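A sketch of the modified spatial kernel of Eq. (9) with sigma_d = p/2, as described above. Clipping negative weights far from the centre is our own guard, since the leading attenuation factor as written can become negative at the window corners.

```python
import numpy as np

def improved_spatial_kernel(p: int = 5) -> np.ndarray:
    """Improved spatial-domain kernel w_d of Eq. (9) on a (2p+1) x (2p+1) grid."""
    sigma_d = p / 2.0            # 95% of the Gaussian mass lies in [-2*sigma_d, 2*sigma_d]
    yy, xx = np.mgrid[-p:p + 1, -p:p + 1]
    d2 = xx ** 2 + yy ** 2
    # Distance-dependent attenuation multiplied by the usual Gaussian term.
    w_d = (1.0 - d2 / (2 * p + 1)) * np.exp(-d2 / (2.0 * sigma_d ** 2))
    return np.clip(w_d, 0.0, None)   # our guard against negative corner weights
```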

3.3. Improved Selection of Pixel Difference Scale Parameters

In the $\Omega_{p, i, j}$ point set, which is used as a sample to find the standard deviation $\sigma_n$ of pixel intensities in the image, the gray value of each pixel is taken from $[a(2p-1, 2p-1), a(2p-1, 2p), \ldots, a(2p+1, 2p+1)]$. However, unlike in the spatial domain, selecting this standard deviation directly as the scale parameter leads to an excessively large value for the pixel difference scale parameter $\sigma_r$, as shown in [24]. Hence, a rotating window function for determining the value of $\sigma_r$ is proposed as follows. A $(2p + 1) \times (p + 1)$ rectangular window centered on the pixel $(x, y)$ is selected, with an offset angle defined as $2\pi/K$. The standard deviation of the pixels in this rectangular window is calculated, after which the window is rotated by the offset angle. This calculation and rotation process is iterated for a complete revolution (i.e., $K$ rotations are completed). Finally, the minimum computed pixel standard deviation, $\min(\sigma_n)$, is selected as $\sigma_r$; the window in which this value was calculated is used in subsequent filtering. $\sigma_r$ is thus defined as
$$ \sigma_r = \min(\sigma_n) = \min_{k = 0, 1, \ldots, K-1}\big\{ \mathrm{std}\big[f(x, y)_{2\pi k / K}\big]\big\}, \qquad (10) $$
where $f(x, y)_{2\pi k / K}$ denotes the pixel values of the window rotated by $2\pi k / K$, and $K = 8$.
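A sketch of the rotating-window selection of the pixel difference scale parameter (Eq. (10)); the nearest-neighbour sampling of the rotated rectangle and the clipping at the image border are our simplifications of the procedure described above.

```python
import numpy as np

def rotating_window_sigma(f: np.ndarray, x: int, y: int,
                          p: int = 5, K: int = 8) -> float:
    """Minimum std over K rotated (2p+1) x (p+1) windows centred at (x, y)."""
    h, w = f.shape
    # Offsets of a (2p+1) x (p+1) rectangle centred on the pixel.
    jj, ii = np.meshgrid(np.arange(-p // 2, p // 2 + 1), np.arange(-p, p + 1))
    coords = np.stack([ii.ravel(), jj.ravel()], axis=1).astype(float)
    sigmas = []
    for k in range(K):
        a = 2.0 * np.pi * k / K
        rot = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        r = np.rint(coords @ rot.T).astype(int)        # rotate then snap to pixels
        rows = np.clip(x + r[:, 0], 0, h - 1)
        cols = np.clip(y + r[:, 1], 0, w - 1)
        sigmas.append(f[rows, cols].std())             # std inside the rotated window
    return float(min(sigmas))                          # Eq. (10)
```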

4. Gaussian–Laplacian Multi-Scale Pyramid Algorithm

Gaussian–Laplacian pyramid [24] generation is a multi-scale image decomposition method with a sampling rate of 2, i.e., the resolution of a resulting image is halved in each successive decomposition. The resulting pyramid consists of layers of decomposed images decreasing in resolution from bottom to top, with the original image defined as the base level of the pyramid (level 0). In detail, beginning from level 0, decomposition consists of convolution of the input image with a Gaussian low-pass filter followed by a down-sample operation in order to halve the horizontal and vertical resolution of the processed image. The filtered down-sampled image is subsequently defined as the image at level 1 of the pyramid ( F 1 ). This operation is repeated, as defined by the number of levels required in the Gaussian pyramid. Mathematically, this process is defined as
$$ F_l(i, j) = \sum_{m=-2}^{2}\sum_{n=-2}^{2} G(m, n)\, F_{l-1}(2i + m,\, 2j + n), \qquad (11) $$
where $G(m, n)$ is a Gaussian low-pass filter kernel, $l$ is the level of the pyramid, $N$ is the number of layers in the pyramid ($0 \le l \le N$; we used $N = 3$), and $R_l$ and $C_l$ denote the numbers of rows and columns of $F_l$, with $0 \le i < R_l$ and $0 \le j < C_l$.
As each image in the pyramid, $F_l$, is of a different resolution, iterative image generation and boundary closure may be affected by noise and dispersed sampling. To ensure that the structural information of the highest-resolution image is retained in subsequent pyramid levels, image enhancement is required. To do this, we employed the improved Retinex algorithm to recover details lost in processing. The enhanced pyramid images are subsequently denoted $\tilde{F}_l$.
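Pyramid construction per Eq. (11) can be sketched with OpenCV, whose pyrDown routine performs the same blur-then-halve REDUCE step; building three extra levels matches the N = 3 used here. This is a sketch, not the authors' code.

```python
import cv2
import numpy as np

def build_gaussian_pyramid(img: np.ndarray, levels: int = 3) -> list:
    """Gaussian pyramid, Eq. (11): 5x5 low-pass blur + 2x down-sample per level."""
    pyramid = [img]                                   # level 0 is the original image
    for _ in range(levels):
        pyramid.append(cv2.pyrDown(pyramid[-1]))      # REDUCE: blur then drop every 2nd row/col
    return pyramid
```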
To complete the image enhancement process, the modified Gaussian pyramid is reconstructed to yield a single image with the same resolution as the original input. Reconstruction is completed by generating a Laplacian pyramid from the image-enhanced Gaussian pyramid. This reconstruction process is detailed as follows:
(1) The topmost image in the Gaussian pyramid, $F_{l+1}$, is interpolated to obtain an image termed $D_l$, which has the same resolution as the image in the preceding layer, $F_l$.
(2) $D_l$ is subtracted from $F_l$, and the difference, $F_N'$, is stored in the Laplacian residual set. $F_N'$ is then added to the enhanced image $\tilde{F}_l$ to yield $F_N''$, which is interpolated for reconstruction of the image in the preceding layer.
(3) Computation of the Laplacian residual and image interpolation continue iteratively until the reconstructed image has the same resolution as the original input. Using the terminology described above, the process can be described by (12); a code sketch of this reconstruction follows the equation. A Gaussian–Laplacian multi-scale pyramid built from the intermediate results of the different pyramid levels is presented in Figure 4. In our experiments, the number of pyramid levels was 3.
$$ \begin{cases} F_N' = F_l - D_l, \\ F_N'' = F_N' + \tilde{F}_l. \end{cases} \qquad (12) $$
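A sketch of the reconstruction described by steps (1)–(3) and Eq. (12). The parameter enhance_fn stands in for the improved Retinex applied to each pyramid level; its default (identity), the 8-bit value range, and starting the recursion from the enhanced topmost level are our assumptions about the procedure.

```python
import cv2
import numpy as np

def reconstruct(gaussian_pyr: list, enhance_fn=lambda x: x) -> np.ndarray:
    """Laplacian-style reconstruction of an enhanced Gaussian pyramid, Eq. (12)."""
    enhanced = [enhance_fn(level) for level in gaussian_pyr]
    current = enhanced[-1]                            # start from the coarsest level
    for l in range(len(gaussian_pyr) - 2, -1, -1):
        size = (gaussian_pyr[l].shape[1], gaussian_pyr[l].shape[0])
        d_l = cv2.pyrUp(current, dstsize=size)        # D_l: up-sampled coarser result
        residual = gaussian_pyr[l].astype(np.float64) - d_l.astype(np.float64)
        # F_N'' = (F_l - D_l) + enhanced level, clipped to the 8-bit range.
        current = np.clip(residual + enhanced[l].astype(np.float64),
                          0, 255).astype(gaussian_pyr[l].dtype)
    return current
```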

5. Experiments and Results

To verify the effectiveness of our algorithm, we compared its operation in experiments with those of the original SSR, MSRCR, non-local Retinex (NLR) [25], and illumination-invariant face recognition (INF) [26,27] algorithms on a database [28], evaluating their performance using objective and subjective measures of image perception. The proposed operator was implemented in MATLAB 2014b on a computer running the Windows 7 (64-bit) operating system.

5.1. Results of Improved Bilateral Filtering

In this experiment, we selected the database [29], which includes 24 .png images (typically 768 × 512 pixels or a similar size), for processing. Gaussian white noise with mean $\mu = 0$ and standard deviation $\sigma = 0.01$, $0.02$, $0.05$, $0.1$, or $0.2$ was added to these images (an example is shown in Figure 5) to degrade their quality such that the performance of the different filtering algorithms could be observed. We note that with the original bilateral filter, edges in the images were lost, as they had all been smoothed. In contrast, Gaussian noise is barely perceptible following operation of our algorithm, and the overall image quality is noticeably better, as shown in Figure 5d. The edges in the image are also retained after denoising, as shown in Figure 6c.
For additional verification, we compared the two filtering algorithms with respect to the peak signal-to-noise ratio (PSNR), mean squared error (MSE), structural similarity (SSIM) [30], multi-scale structural similarity (MS-SSIM) [31], visual information fidelity (VIF) [32], full-reference visual information fidelity (FR-VIF) [33], and visual signal-to-noise ratio (VSNR) [34], as shown in Table 1. The MATLAB wrapper code used to compute these measures is presented in [35]. In this case, the objective measure scores are directly averaged over the database; the modified algorithm achieves the better score on every measure. Based on a comparison of the values of the seven image quality assessment (IQA) measures, the intensity of residual noise in the processed images is higher with the traditional bilateral filtering algorithm than with our modified solution.
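For a quick check, the full-reference measures available in scikit-image (version 0.19 or later assumed) can be computed as below; MS-SSIM, VIF, FR-VIF, and VSNR come from the MATLAB toolbox cited above and are not reproduced here.

```python
import numpy as np
from skimage.metrics import (peak_signal_noise_ratio, mean_squared_error,
                             structural_similarity)

def basic_iqa(reference: np.ndarray, test: np.ndarray) -> dict:
    """PSNR, MSE, and SSIM between an 8-bit reference image and a test image."""
    return {
        "PSNR": peak_signal_noise_ratio(reference, test, data_range=255),
        "MSE": mean_squared_error(reference, test),
        "SSIM": structural_similarity(reference, test,
                                      channel_axis=-1, data_range=255),
    }
```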

5.2. Subjective Analysis of Image Enhancement Algorithms

To verify the effectiveness of our improved image enhancement technique, we conducted experiments comparing its performance with those of the SSR, MSRCR, NLR, and INF algorithms on a database [28] of 4322 images, a real foggy test dataset provided by Gac Research Institute. Partial results of these experiments are shown in Figure 7, with Column 1 depicting the original images, Column 2 images processed using the SSR algorithm, Column 3 images processed using the MSRCR algorithm, Column 4 images processed using the NLR algorithm, Column 5 images processed using the INF algorithm, and Column 6 images processed using our improved image enhancement technique.
The original images in Figure 7 contain degraded and dark regions, which are characteristic of uneven lighting, making them suitable for image enhancement. We note that with all five algorithms considered, most of the foreground details are preserved, and the color in the resulting images is enhanced following processing. However, with the SSR, MSRCR, and NLR algorithms, the “halo” effect can also be observed, most obviously in Figure 7(a-2), Figure 7(d-3), Figure 7(e-3), Figure 7(f-3), and Figure 7(f-4). The MSRCR and INF algorithms darken the image and lose details, as shown in Figure 7(a-3–f-3) and Figure 7(a-5–f-5). While details in the blurred areas of the original images can be enhanced using the other four algorithms, there is a deviation between the color in the input and output images. Hence, the color of the imaged object is not restored, and the visual representation of this object is thus distorted. In addition, the MSRCR and NLR algorithms were not effective in removing the fog from the foreground of the images, as shown in Figure 7(a-4–f-4).
In contrast, the texture and color contrast of images processed using our technique are improved, and enhancement is effective for images in different low light situations. In addition, following processing, the deviation of color between the input and output images is small. As color fidelity is better, the overall visual representation of the image is retained. Finally, details in the image are more clearly identified, as can be seen from Figure 7(a-6–f-6).
For a more quantitative representation of the effect of the different algorithms, histograms of the grayscale intensity of the images in Row (a) of Figure 7 are shown in Figure 8. It can be seen that, after processing with our algorithm, the distribution of pixel intensities is more uniform and the average grayscale intensity of the image is reduced. This is indicative of the fog removal effect obtained with our algorithm: fog appears as a brighter area in an image, with a correspondingly large grayscale intensity, so a reduction in the average grayscale intensity suggests the removal of fog.
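The grey-level statistics behind Figure 8 can be reproduced with a few lines; a flatter histogram and a lower mean grey level are read as indicators of fog removal. The file names in the usage comment are placeholders.

```python
import cv2
import numpy as np

def grey_histogram(bgr: np.ndarray) -> tuple:
    """Return the 256-bin grey-level histogram and the mean grey level."""
    grey = cv2.cvtColor(bgr, cv2.COLOR_BGR2GRAY)
    hist = cv2.calcHist([grey], [0], None, [256], [0, 256]).ravel()
    return hist, float(grey.mean())

# Usage: compare mean grey level before and after enhancement.
# hist_in, mean_in = grey_histogram(cv2.imread("input.png"))
# hist_out, mean_out = grey_histogram(enhanced_bgr)
```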

5.3. Objective Evaluation of Image Enhancement Algorithms

For an objective evaluation of the different image enhancement algorithms, we calculated the peak signal-to-noise ratio (PSNR), mean squared error (MSE), SSIM, MS-SSIM, VIF, information content weighted structural similarity (IWSSIM) [35], and VSNR of the images in the database, as shown in Table 2. The objective scores are directly averaged over the database, as each of these measures characterizes a different aspect of image quality.
Table 2 shows that images processed using our algorithm obtained the best values for most measures, suggesting that more high-frequency information is preserved by our technique and that details in the image are enhanced accordingly. In particular, the PSNR and SSIM values were maximized using our algorithm, indicating that the contrast of the images is improved and details are more identifiable. The MS-SSIM value was also maximized with our technique, and its VIF value is close to that of the reference image, indicating improved reconstruction quality. Finally, the remaining measures also favor our algorithm, demonstrating that it delivers significant and consistent image enhancement performance.

6. Conclusions

In this paper, we proposed a new Retinex-based image enhancement algorithm to address the problem of image blurring caused by uneven illumination. The technique features an improved bilateral filtering function, addressing the problem of an over-enhanced luminance image and the associated loss of texture. In addition, we combine our modified Retinex model with Gaussian pyramid down-sampling for multi-scale processing, in order to eliminate blurring problems and increase the contrast of the final image. We compared the performance of our technique with those of the SSR, MSRCR, NLR, and INF algorithms in experiments, with the results highlighting its improved image enhancement. However, the combination of the improved bilateral filtering function and the Gaussian pyramid transformation increases the time complexity of the technique, a problem which will be addressed in future studies.

Author Contributions

Conceptualization, C.L., H.Z. and W.C.; methodology, C.L. and H.Z.; validation, C.L., H.Z. and W.C.; formal analysis, C.L.; writing—original draft preparation, C.L. and H.Z.; writing—review and editing, C.L., H.Z. and W.C.; funding acquisition, H.Z.

Funding

This research was supported in part by the National Natural Science Foundation of China under Grant 51179074; the Natural Science Foundation of Fujian Province under Grant 2018J01495; the Young and Middle-Aged Teachers Projects of Fujian Province under Grant JAT170507, JAT170507(p); the Putian Science and Technology bureau project under Grant 2018RP4002; in part by Modern Precision Measurement and Laser Nondestructive Testing under Grant B17119; and the Doctoral Research Start-up Fund of Jimei University under Grant ZQ2013007.

Acknowledgments

The authors would like to thank Sun Hui and Juntao Chen for their help.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Chang, H.; Ng, M.K.; Wang, W.; Zeng, T. Retinex image enhancement via a learned dictionary. Opt. Eng. 2015, 54, 013107. [Google Scholar] [CrossRef]
  2. Soori, U.; Yuen, P.W.; Han, J.W.; Ibrahim, I.; Chen, W.; Hong, K.; Merfort, C.; James, D.B.; Richardson, M.A. Target recognitions in multiple-camera closed-circuit television using color constancy. Opt. Eng. 2013, 52, 047202. [Google Scholar] [CrossRef]
  3. Yoon, J.; Choi, J.; Choe, Y. Efficient image enhancement using sparse source separation in the Retinex theory. Opt. Eng. 2017, 56, 113103. [Google Scholar] [CrossRef]
  4. Land, E.H. Recent advances in retinex theory and some implications for cortical computations: Color vision and the natural image. Proc. Natl. Acad. Sci. USA 1989, 80, 5163–5169. [Google Scholar] [CrossRef] [PubMed]
  5. Land, E.H. An alternative technique for the computation of the designator in the retinex theory of color vision. Proc. Natl. Acad. Sci. USA 1986, 83, 3078–3080. [Google Scholar] [CrossRef]
  6. Land, E.H. Recent Advances in Retinex Theory; Vision Research: Wayne, NJ, USA, 1986; Volume 26, pp. 7–21. [Google Scholar]
  7. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Properties and Performance of a Center/Surround Retinex. IEEE Trans. Image Process. 1997, 6, 451–462. [Google Scholar] [CrossRef]
  8. Jobson, D.; Rahman, Z.; Woodell, G. A multiscale retinex for bridging the gap between color images and the human observation of scenes. IEEE Trans. Image Process. 1997, 6, 965–976. [Google Scholar] [CrossRef]
  9. Jobson, D.J.; Rahman, Z.U.; Woodell, G.A. Retinex image processing: Improved fidelity to direct visual observation. In Proceedings of the Color and Imaging Conference, Scottsdale, AZ, USA, 19–22 November 1996; No. 1. pp. 124–125. [Google Scholar]
  10. Jiang, B.; Rahman, Z. Runway hazard detection in poor visibility conditions. Proc. SPIE 2012, 8300, 83000H. [Google Scholar]
  11. Zavalaromero, O.; Meyerbaese, A.; Meyerbaese, U. Multiplatform GPGPU implementation of the active contours without edges algorithm. Proc. SPIE 2012, 8399, 1083990E. [Google Scholar]
  12. Rahman, Z.; Woodell, G.A.; Jobson, D.J. Retinex image enhancement: Application to medical images. In Proceedings of the NASA Workshop on New Partnerships in Medical Diagnostic Imaging, Greenbelt, MD, USA, 17–18 July 2001; pp. 1–23. [Google Scholar]
  13. Jobson, D.J.; Rahman, Z.; Woodell, G. Spatial aspect of color and scientific implications of retinex image processing. Vis. Inf. Process. X 2001, 4388, 117–128. [Google Scholar]
  14. Wang, Y.X.; Diao, M.; Han, C. Underwater imaging enhancement algorithm under conventional light source based on iterative histogram equalization. Acta Photonica Sin. 2018, 47, 1101002. [Google Scholar] [CrossRef]
  15. Woodell, G.; Rahman, Z.; Jobson, D.J.; Hines, G. Enhanced images for checked and carry-on baggage and cargo screening. Proc. SPIE 2004, 5403, 582. [Google Scholar]
  16. Rahman, Z.; Jobson, D.J.; Woodell, G.A. Retinex processing for automatic image enhancement. J. Electron. Imaging 2004, 13, 100–111. [Google Scholar]
  17. Loh, Y.P.; Liang, X.; Chan, C.S. Low-light image enhancement using Gaussian Process for features retrieval. Signal Process. Image Commun. 2019, 74, 175–190. [Google Scholar] [CrossRef]
  18. Tao, F.; Yang, X.; Wu, W.; Liu, K.; Zhou, Z.; Liu, Y. Retinex-based image enhancement framework by using region covariance filter. Soft Comput. 2018, 22, 1399–1420. [Google Scholar] [CrossRef]
  19. Karacan, L.; Erdem, E.; Erdem, A. Structure-preserving image smoothing via region covariances. ACM Trans. Graph. (TOG) 2013, 32, 176. [Google Scholar] [CrossRef]
  20. Zhang, Y.; Zheng, J.; Kou, X.; Xie, Y. Night View Road Scene Enhancement Based on Mixed Multi-scale Retinex and Fractional Differentiation. In Proceedings of the International Conference on Brain Inspired Cognitive Systems, Xi’an, China, 7–8 July 2018; Springer: Cham, Switzerland; pp. 818–826. [Google Scholar]
  21. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-revealing low-light image enhancement via robust Retinex model. IEEE Trans. Image Process. 2018, 27, 2828–2841. [Google Scholar] [CrossRef]
  22. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 4–7 January 1998; Volume 98, p. 1. [Google Scholar]
  23. Shi, K.Q.; Wei, W.G. Image denoising method of surface defect on cold rolled aluminum sheet by bilateral filtering. Surf. Technol. 2018, 47, 317–323. [Google Scholar]
  24. Burt, P.; Adelson, E. The Laplacian pyramid as a compact image code. IEEE Trans. Commun. 1983, 31, 532–540. [Google Scholar] [CrossRef]
  25. Zosso, D.; Tran, G.; Osher, S.J. Non-local retinex—A Unifying Framework and Beyond. SIAM J. Imaging Sci. 2015, 8, 787–826. [Google Scholar] [CrossRef]
  26. Zosso, D.; Tran, G.; Osher, S. A unifying retinex model based on non-local differential operators. Comput. Imaging XI 2013, 8657, 865702. [Google Scholar]
  27. The INface Toolbox v2.0 for Illumination Invariant Face Recognition. Available online: https://www.mathworks.com/matlabcentral/fileexchange/26523-the-inface-toolbox-v2-0-for-illumination-invariant-face-recognition (accessed on 10 October 2019).
  28. Liu, Y.; Zhao, G.; Gong, B.; Li, Y.; Raj, R.; Goel, N.; Tao, D. Improved techniques for learning to dehaze and beyond: A collective study. arXiv 2018, arXiv:1807.00202. [Google Scholar]
  29. Kodak Image Dataset. Available online: http://www.cs.albany.edu/~xypan/research/snr/Kodak.html (accessed on 15 October 2019).
  30. Wang, Z.; Bovik, A.C.; Sheikh, H.R.; Simoncelli, E.P. Image quality assessment: From error visibility to structural similarity. IEEE Trans. Image Proc. 2004, 13, 600–612. [Google Scholar] [CrossRef] [PubMed]
  31. Wang, Z.; Simoncelli, E.P.; Bovik, A.C. Multi-scale structural similarity for image quality assessment; Invited Paper. In Proceedings of the IEEE Asilomar Conference on Signals, Systems and Computers, Pacific Grove, CA, USA, 9–12 November 2003; Volume 2, pp. 1398–1402. [Google Scholar]
  32. Sheikh, H.R.; Bovik, A.C. Image information and visual quality. IEEE Trans. Image Process. 2006, 15, 430–444. [Google Scholar] [CrossRef]
  33. Chandler, D.M.; Hemami, S.S. VSNR: A wavelet-based visual signal-to-noise ratio for natural images. IEEE Trans. Image Process. 2007, 16, 2284–2298. [Google Scholar] [CrossRef]
  34. Image-Quality-Tools. Available online: https://github.com/sattarab/image-quality-tools/tree/master/metrix_mux (accessed on 15 October 2019).
  35. Wang, Z.; Li, Q. Information Content Weighting for Perceptual Image Quality Assessment. IEEE Trans. Image Process. 2011, 20, 1185–1198. [Google Scholar] [CrossRef]
Figure 1. Image capture principle according to Retinex theory.
Figure 2. Illustration of the limitations of the single-scale Retinex (SSR) algorithm. (a) Input image. (b) Result of SSR enhancement. The “halo” effect observed in areas with significant changes in local color is marked in red.
Figure 3. Flowchart detailing image enhancement using the improved Retinex algorithm.
Figure 4. A Gaussian–Laplacian multi-scale pyramid from intermediate results of different pyramid levels. (Column 1) The Gaussian pyramid images of different pyramid levels. (Column 2) The enhanced pyramid images of different pyramid levels. (Column 3) Processed images of the Laplacian reconstruction; the final reconstructed image is the first picture in Column 3.
Figure 5. Comparison of results of bilateral filtering (one image of the database: “kodim08”). (a) Original image. (b) Image with Gaussian noise ( σ = 0.01 ). (c) Image processed using original bilateral filtering algorithm. (d) Image processed using improved bilateral filtering algorithm.
Figure 6. Magnified view of the effects of bilateral filtering, focusing on the region defined by the red box in Figure 5. (a) Image with Gaussian noise. (b) Image processed using the original bilateral filtering algorithm. (c) Image processed using the improved bilateral filtering algorithm.
Figure 7. Subjective comparison of the performance of different enhancement algorithms (six images of the dataset). (Column 1) Original images. (Column 2) Images processed using the SSR algorithm. (Column 3) Images processed using the multi-scale Retinex with color restoration (MSRCR) algorithm. (Column 4) Images processed using NLR algorithm. (Column 5) Images processed using INF algorithm. (Column 6) Images processed using our algorithm.
Figure 8. Grayscale-based comparison of the performance of different enhancement algorithms. Histograms were computed using the images in Row (a) of Figure 7. (a) Original image. (b) Image processed using the SSR algorithm. (c) Image processed using the MSRCR algorithm. (d) Image processed using the NLR algorithm. (e) Image processed using the INF algorithm. (f) Image processed using our algorithm.
Table 1. Evaluation of bilateral filtering algorithms on the database [29] (24 images).
Gaussian white noise | Algorithm | PSNR | MSE | SSIM | MS-SSIM | VIF | FR-VIF | VSNR
σ = 0.01 | Original algorithm | 22.493 | 376.772 | 0.542 | 0.912 | 0.361 | 0.231 | 18.221
σ = 0.01 | Modified algorithm | 27.034 | 129.231 | 0.799 | 0.964 | 0.566 | 0.379 | 26.314
σ = 0.02 | Original algorithm | 20.505 | 587.780 | 0.488 | 0.898 | 0.327 | 0.205 | 17.375
σ = 0.02 | Modified algorithm | 28.414 | 93.752 | 0.928 | 0.993 | 0.794 | 0.463 | 34.386
σ = 0.05 | Original algorithm | 18.014 | 1035.936 | 0.451 | 0.881 | 0.284 | 0.182 | 16.101
σ = 0.05 | Modified algorithm | 27.220 | 123.450 | 0.946 | 0.991 | 0.691 | 0.469 | 30.743
σ = 0.1 | Original algorithm | 16.222 | 1560.708 | 0.431 | 0.869 | 0.257 | 0.170 | 15.226
σ = 0.1 | Modified algorithm | 27.507 | 115.525 | 0.970 | 0.995 | 0.713 | 0.502 | 32.404
σ = 0.2 | Original algorithm | 16.127 | 1610.014 | 0.418 | 0.856 | 0.249 | 0.165 | 14.992
σ = 0.2 | Modified algorithm | 22.286 | 481.923 | 0.860 | 0.952 | 0.446 | 0.365 | 21.867
Table 2. Objective comparison of the performance of different enhancement algorithms in terms of PSNR, MSE, SSIM, MS-SSIM, VIF, IWSSIM, and VSNR.
Algorithm | PSNR | MSE | SSIM | MS-SSIM | VIF | IWSSIM | VSNR
OI | N/A | 0.000 | 1.000 | 1.000 | 1.000 | 1.000 | N/A
SSR | 18.584 | 1097.904 | 0.734 | 0.761 | 1.192 | 0.728 | 7.194
MSRCR | 12.034 | 4438.279 | 0.547 | 0.761 | 0.826 | 0.731 | 12.376
NLR | 13.751 | 3424.179 | 0.767 | 0.799 | 0.867 | 0.662 | 6.932
INF | 10.357 | 6132.800 | 0.632 | 0.480 | 0.355 | 0.736 | 7.956
Our | 24.071 | 333.602 | 0.851 | 0.914 | 0.994 | 0.894 | 13.224
