Sensors
  • Article
  • Open Access

16 September 2020

SIDE—A Unified Framework for Simultaneously Dehazing and Enhancement of Nighttime Hazy Images

1 School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
2 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
This article belongs to the Section Sensing and Imaging

Abstract

Single image dehazing is a difficult problem because of its ill-posed nature, and it has recently received increasing attention owing to its high potential in many visual applications. Although single image dehazing has made remarkable progress in recent years, existing methods are mainly designed for haze removal in the daytime. At night, dehazing is more challenging: most daytime dehazing methods become invalid due to the multiple scattering phenomenon and the non-uniformly distributed, dim ambient illumination. While a few approaches have been proposed for nighttime image dehazing, the low ambient light is largely ignored. In this paper, we propose a novel unified nighttime hazy image enhancement framework that addresses haze removal and illumination enhancement simultaneously. Specifically, both the halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination existing in low-light hazy conditions are considered for the first time in our approach. More importantly, most current daytime dehazing methods can be effectively incorporated into the nighttime dehazing task based on our framework. Firstly, we decompose the observed hazy image into a halo layer and a scene layer to remove the influence of multiple scattering. After that, we estimate the spatially varying ambient illumination based on the Retinex theory. We then employ classic daytime dehazing methods to recover the scene radiance. Finally, we generate the dehazing result by combining the adjusted ambient illumination and the scene radiance. Compared with various daytime dehazing methods and state-of-the-art nighttime dehazing methods, both quantitative and qualitative experimental results on real-world and synthetic hazy image datasets demonstrate the superiority of our framework in terms of halo mitigation, visibility improvement and color preservation.

1. Introduction

Images captured in outdoor scenes are often degraded by atmospheric phenomena. Phenomena such as haze, fog and smoke are mainly generated by the substantial presence of suspended atmospheric particles, which absorb, emit or scatter light. As a result, acquired outdoor scenery images are often of low visual quality, with reduced contrast, limited visibility and weak color fidelity. The performance of computer vision algorithms for tasks such as detection, recognition and surveillance is severely limited under such hazy conditions. The goal of image dehazing is to mitigate the influence of haze and recover a clear scene image, which is beneficial for both computational photography and computer vision applications. Based on analysis of the mechanism of haze formation [], a number of approaches relying on various image assumptions and priors [,,,,,,] have been proposed in the past decades to solve this challenging problem. While most existing methods focus on daytime image dehazing, nighttime image dehazing remains a challenging problem. During the night, the performance of existing dehazing methods is significantly limited, because their priors and assumptions become invalid due to the weak illumination and the multiple scattering of the ambient light in the scene.
Unlike the uniformly distributed sunlight that serves as the dominant light source in the daytime, illumination varies spatially during the night. In addition, the overall brightness of the scene can be extremely weak, resulting in failures of commonly used priors and assumptions, such as the dark channel prior [] and the color attenuation prior []. Moreover, a significant halo effect caused by multiple scattering can be observed around light sources on a hazy night, which is not considered in the commonly used daytime dehazing model. To overcome these problems, several techniques with novel assumptions have been introduced in recent years, such as bright alpha blending [], color transfer [], the maximum reflection prior [], illumination correction [], glow removal [] and image fusion [,]. However, although the nighttime haze can be removed from the observed images, color distortion still exists in the dehazed results. In addition, visibility and scene details are reduced and hidden due to the low ambient illumination. While various methods have been proposed for low-light image enhancement and have achieved satisfying results [,,,], in practice, dehazing and brightening are regarded as two independent problems in nighttime image processing. As a result, the ambient illumination remains dim in dehazed nighttime images, while haze remains in low-light enhancement results. Although a result can be obtained by a dehazing-and-enhancing flow, severe color distortion may appear. Therefore, we argue that both brightness enhancement and dehazing are crucial for visibility improvement of nighttime hazy images.
In this paper, a novel unified framework for SImultaneously Dehazing and Enhancement of nighttime hazy images (SIDE) is proposed by considering both halo mitigation and ambient illumination enhancement. More importantly, the classic daytime dehazing methods can be effectively incorporated into nighttime dehazing based on the proposed framework. Specifically, a layer extraction approach is firstly introduced to mitigate the halo artifacts caused by multiple scattering. Then, a Retinex based decomposition is proposed to estimate the spatially varying ambient illumination. After that, daytime dehazing methods can be applied to recover the scene radiance. To the best of our knowledge, the proposed SIDE is the first attempt which considers both haze removal and illumination enhancement for nighttime hazy images. Experimental results on both real-world and synthetic datasets demonstrate the effectiveness of the proposed framework for classic daytime dehazing methods under nighttime hazy conditions. In the comparisons with the state-of-the-art nighttime dehazing methods, both the quantitative and qualitative evaluations indicate the superiority of our proposed SIDE in terms of halo mitigation, visibility improvement and color preservation.
The rest of the paper is organized as follows: Section 2 introduces related work and the limitations of existing image dehazing and low-light enhancement methods. Section 3 thoroughly describes the proposed SIDE. Section 4 shows the experimental results and analysis. Finally, Section 5 concludes the work.

3. Methodology of the Proposed SIDE

The scheme of the proposed SIDE is illustrated in Figure 1; it includes the Halo Decomposition Module, the Illumination Decomposition Module, the Image Dehazing Module and the Enhancement Module. We first introduce a halo extraction module to mitigate the halo artifacts caused by multiple scattering. After that, we propose a Retinex-based illumination decomposition method to estimate the spatially varying ambient illumination. With the illumination extracted, classic daytime dehazing methods are employed for haze removal. Finally, we generate the output scene image by combining the adjusted illumination and the dehazed scene layer. Each module is described in detail in the following subsections.
Figure 1. Framework of the proposed SImultaneously Dehazing and Enhancement of nighttime hazy images (SIDE).
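As a sketch of how the four modules compose, the following Python outline chains hypothetical stand-in functions. All function bodies here are illustrative placeholders, not the authors' optimization-based implementations:

```python
import numpy as np

def decompose_halo(hazy):
    """Hypothetical stand-in for the Halo Decomposition Module: treat a
    fraction of the above-median brightness as a crude halo layer."""
    halo = 0.1 * np.clip(hazy - np.median(hazy), 0.0, None)
    return hazy - halo, halo

def decompose_illumination(scene):
    """Hypothetical stand-in for the Illumination Decomposition Module:
    channel-wise maximum as the ambient illumination."""
    light = scene.max(axis=-1)
    return light, scene / np.maximum(light[..., None], 1e-6)

def dehaze(reflectance):
    """Stand-in for any classic daytime dehazing method (DCP, BC, CAP, NL)."""
    return reflectance  # identity placeholder

def side(hazy, gamma=0.8):
    """Compose the modules; gamma < 1 brightens the adjusted illumination."""
    scene, _halo = decompose_halo(hazy)
    light, reflectance = decompose_illumination(scene)
    radiance = dehaze(reflectance)
    return np.clip((light ** gamma)[..., None] * radiance, 0.0, 1.0)
```

The point of the sketch is the data flow: halo removal first, then illumination/reflectance separation, then daytime dehazing on the reflectance, and finally recombination with the gamma-adjusted illumination.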

3.1. Halo Decomposition Module

As discussed above, one major problem of nighttime hazy images is the detail and visibility degradation of objects around light sources, caused by the multiple scattering of light from nearby illuminants. Inspired by Li’s work [], an observed hazy image I ( x ) can be modeled as a linear superposition of a scene layer S, which contains haze and scene information, and a halo layer H, which represents the halos and glows around illuminants. Consequently, Equation (3) can thus be written as:
$$ I(x) = S(x) + H(x), \tag{4} $$
where S and H indicate the scene layer and the halo layer, respectively.
The halo layer has the characteristics of high intensity and smooth variation around light sources, whereas the hazy scene layer only contains scene structure and texture details with relatively dim brightness. It is noticed that the gradients of halo patches have a sharper and sparser distribution compared with non-halo patches. Therefore, a probabilistic model with prior knowledge on the gradient distributions of the two layers can be employed to extract the halo layer. By assuming S and H are independent, the two layers can be recovered by minimizing the following objective function:
$$ \min_{S,H} \sum_{x} \left[ \sum_{i \in \Omega_S} \left| f_i * S(x) \right| + \frac{\lambda}{2} \sum_{j \in \Omega_H} \left\| f_j * H(x) \right\|^2 \right], \tag{5} $$
where ∗ denotes the convolution operator, and f_k indicates a derivative filter from the sets Ω_S = {[1, −1], [1, −1]^T} and Ω_H = {[1, −2, 1], [1, −2, 1]^T}, which contain the first-order derivative filters in the two directions and the second-order Laplacian filters, respectively. The scale weight λ controls the smoothness of the halo layer.
According to convolutional system theory, the convolution of a signal with a finite-length sequence can be expressed as the product of a Toeplitz matrix and the signal, where the Toeplitz matrix is uniquely determined by the finite-length sequence. By substituting H = I − S, Equation (5) can be rewritten as follows:
$$ \min_{S} \sum_{x} \left[ \sum_{i \in \Omega_S} \left| D_i S(x) \right| + \frac{\lambda}{2} \sum_{j \in \Omega_H} \left\| D_j I(x) - D_j S(x) \right\|^2 \right], \tag{6} $$
where D_k denotes the Toeplitz matrix generated by the convolution kernel k.
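The Toeplitz identity invoked here can be checked numerically; in Python, `scipy.linalg.convolution_matrix` builds exactly the matrix determined by a 1-D kernel (the kernel is the 1-D Laplacian from Ω_H; the signal is an arbitrary example):

```python
import numpy as np
from scipy.linalg import convolution_matrix

kernel = np.array([1.0, -2.0, 1.0])             # 1-D second-order Laplacian filter
signal = np.array([0.0, 1.0, 3.0, 2.0, 0.0])

# The Toeplitz matrix is uniquely determined by the kernel: multiplying by it
# reproduces full convolution with that kernel.
D = convolution_matrix(kernel, len(signal), mode="full")
assert np.allclose(D @ signal, np.convolve(kernel, signal, mode="full"))
```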
The non-convex problem in (6) can be optimized via the alternating direction method of multipliers (ADMM) as in References [,]. Figure 2 shows a result of halo layer separation. Compared with the observed nighttime hazy image, the halo effect around light sources is clearly mitigated in the scene layer.
Figure 2. Illustration of halo layer extraction. (a): a nighttime hazy image, (b): the halo layer, (c): the scene layer.
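As a rough illustration of the layer split in Equation (6), the sketch below minimizes a smoothed-ℓ1 surrogate by plain gradient descent instead of ADMM. The step size, iteration count and smoothing constant are illustrative, and the filter adjoints are approximated by flipped-kernel convolutions:

```python
import numpy as np
from scipy.signal import convolve2d

def extract_halo(I, lam=3.0, iters=100, step=2e-3, eps=1e-3):
    """Approximate solver for the layer split of Eq. (6): gradient descent on
    a smoothed-L1 surrogate (the paper uses ADMM; this is only a sketch)."""
    dx = np.array([[1.0, -1.0]])                 # first-order derivative filters
    dy = dx.T
    lap = np.array([[0.0, 1.0, 0.0],
                    [1.0, -4.0, 1.0],
                    [0.0, 1.0, 0.0]])            # 2-D Laplacian filter
    S = I.copy()
    for _ in range(iters):
        grad = np.zeros_like(S)
        for k in (dx, dy):                       # d/dS of sum sqrt(g^2 + eps)
            g = convolve2d(S, k, mode="same")
            grad += convolve2d(g / np.sqrt(g**2 + eps), k[::-1, ::-1], mode="same")
        r = convolve2d(I - S, lap, mode="same")  # d/dS of the quadratic term
        grad -= lam * convolve2d(r, lap, mode="same")
        S = np.clip(S - step * grad, 0.0, None)
    return S, I - S                              # scene layer, halo layer
```

The quadratic term pulls the residual H = I − S toward Laplacian smoothness (halos), while the ℓ1 term keeps sparse gradients (scene texture) in S.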

3.2. Illumination Decomposition Module

With the halo layer H extracted, the dehazing problem becomes:
$$ S(x) = J(x)\,t(x) + L(x)\bigl(1 - t(x)\bigr). \tag{7} $$
Unlike the commonly used haze imaging model in (1), which assumes a globally constant atmospheric light, the ambient illumination in our work is assumed to be an inhomogeneous, spatially varying map L ( x ). Consequently, the local maximum assumption of atmospheric light in daytime dehazing no longer holds for estimating L ( x ) during the night. Inspired by the Retinex theory, which models an observed image as the combination of reflectance and illumination, we resort to the illumination decomposition model to overcome these limitations.
According to the Retinex theory [], the scene layer image can be formulated as the pixel-wise product of a reflectance component and a light-dependent illumination component as follows:
$$ S(x) = R(x) \circ L(x), \tag{8} $$
where R is the reflectance component, L is the illumination component, and ∘ represents pixel-wise multiplication.
Denoting J ( x ) = L ( x ) ∘ ρ ( x ), Equation (7) can be reformulated in a Retinex-like form as follows:
$$ S(x) = L(x) \circ \bigl[ \rho(x)\,t(x) + 1 - t(x) \bigr], \tag{9} $$
where ρ is the intrinsic reflectance of objects in the scene [,].
The illumination is presumed to be spatially smooth and to contain the overall structure, which matches the characteristics of the ambient illumination of the scene layer. Therefore, (9) can be solved by minimizing the following objective function:
$$ \min_{R,\,L,\,\omega} \; \bigl\| R \circ L - S \bigr\|_F^2 + \mathrm{TGV}_{\alpha}^{2}(L) + \beta \bigl\| \nabla R - \nabla S \bigr\|_0, \tag{10} $$
where R = ρ ( x ) t ( x ) + 1 − t ( x ), β is a regularization parameter, and ∇ is the first-order differential operator. TGV_α²(L) indicates the second-order total generalized variation (TGV) of L, which can be expressed in terms of the following ℓ1 minimization problem:
$$ \mathrm{TGV}_{\alpha}^{2}(L) = \min_{\omega} \; \alpha_1 \bigl\| D\,(\nabla L - \omega) \bigr\|_1 + \alpha_0 \bigl\| \nabla \omega \bigr\|_1, \tag{11} $$
where α = { α₀, α₁ } is a weighting vector and ω is a vector field with low variation. D is a diffusion tensor formulated as follows:
$$ D = \exp\!\bigl( -\zeta\, \left| \nabla S \right|^{\mu} \bigr)\, n\,n^{T} + n^{\perp} n^{\perp T}, \tag{12} $$
where n = ∇S / |∇S| indicates the normalized direction of the image gradient, and n⊥ is the vector normal to the gradient. ζ and μ are parameters controlling the magnitude and the sharpness of D.
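The tensor in Equation (12) can be assembled per pixel from the image gradient; in this sketch, ζ and μ take illustrative values rather than the paper's tuned parameters:

```python
import numpy as np

def diffusion_tensor(S, zeta=2.0, mu=0.5):
    """Per-pixel 2x2 anisotropic diffusion tensor of Eq. (12).
    zeta and mu are illustrative values."""
    gy, gx = np.gradient(S)                         # image gradient
    mag = np.hypot(gx, gy)
    nx, ny = gx / (mag + 1e-8), gy / (mag + 1e-8)   # n: gradient direction
    px, py = -ny, nx                                # n_perp: normal to n
    w = np.exp(-zeta * mag**mu)                     # edge-stopping weight
    D = np.empty(S.shape + (2, 2))                  # D = w*n n^T + p p^T
    D[..., 0, 0] = w * nx * nx + px * px
    D[..., 0, 1] = w * nx * ny + px * py
    D[..., 1, 0] = D[..., 0, 1]
    D[..., 1, 1] = w * ny * ny + py * py
    return D
```

Across an edge (large |∇S|) the weight w shrinks, so smoothing along the gradient is suppressed while smoothing along the edge direction n⊥ is preserved.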
In practice, we apply the ℓ1 norm as a convex relaxation [] of the original ℓ0 minimization problem. In addition, (10) is a non-convex and non-smooth optimization problem because of the pixel-wise multiplication and the TGV regularization; the alternating direction method of multipliers (ADMM) [,] is adopted to solve it. Figure 3 demonstrates an example of ambient illumination estimation. It is observed that the unevenly distributed ambient illumination is effectively estimated. Color distortion caused by artificial illuminants is also corrected in the reflectance.
Figure 3. A sample of ambient illumination decomposition. (a): scene layer, (b): estimated ambient illumination, (c): estimated reflectance.
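As a crude stand-in for this module, the sketch below initializes the illumination with the channel-wise maximum (as in LIME) and replaces the TGV-regularized ADMM refinement with simple box filtering; it is a simplification, not the solver described above:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def estimate_illumination(S, size=15, eps=1e-3):
    """Crude illumination/reflectance split for a scene layer S (H x W x 3):
    the per-pixel channel maximum gives an initial illumination map, and a
    box filter stands in for the TGV-regularized refinement."""
    L0 = S.max(axis=-1)                           # brightest channel per pixel
    L = np.maximum(uniform_filter(L0, size=size), eps)
    R = np.clip(S / L[..., None], 0.0, 1.0)       # reflectance estimate
    return L, R
```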

3.3. Scene Recovery Module

As expressed in (9), the reflectance component R in Equation (8) is composed as follows:
$$ R(x) = \rho(x)\,t(x) + 1 - t(x), $$
which can be expressed in the following form:
$$ R(x) = J_1(x)\,t(x) + A_1\bigl(1 - t(x)\bigr). \tag{13} $$
It can be observed that Equation (13) has the same form as the original single scattering model in Equation (1), with the atmospheric light A₁ = (1, 1, 1)ᵀ and the scene radiance J₁(x) = ρ(x). Therefore, the assumptions and priors of existing daytime dehazing approaches can be employed to effectively solve Equation (13). In addition, to preserve the naturalness of the scene, we also manipulate the ambient illumination L(x) with a gamma transformation to increase the global brightness. We demonstrate and analyze the results in the next section.
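To make the claim concrete, the sketch below recovers ρ from Equation (13) with the dark channel prior: because the "atmospheric light" equals (1, 1, 1)ᵀ, the transmission estimate reduces to one minus the dark channel of R. The parameter values (ω, t₀, γ, patch size) are illustrative, not the paper's settings:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def recover_scene(R, L, omega=0.95, t0=0.1, gamma=0.8, patch=15):
    """Dark-channel-prior scene recovery for Eq. (13) with A1 = (1,1,1)^T,
    followed by gamma adjustment of the illumination L and recombination."""
    dark = minimum_filter(R.min(axis=-1), size=patch)   # dark channel of R
    t = np.clip(1.0 - omega * dark, t0, 1.0)            # transmission estimate
    rho = (R - 1.0) / t[..., None] + 1.0                # invert R = rho*t + 1 - t
    rho = np.clip(rho, 0.0, 1.0)
    return np.clip((L ** gamma)[..., None] * rho, 0.0, 1.0)
```

With γ < 1 and L in [0, 1], the gamma transformation lifts the dim ambient illumination before recombination.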

4. Experimental Results and Analysis

In this section, we first demonstrate the effectiveness of the proposed SIDE in terms of halo layer extraction. After that, we compare the performances of various daytime dehazing methods with and without the proposed SIDE. In addition, comparisons with conventional low-light image enhancement methods are also illustrated. Next, comprehensive experiments are conducted to demonstrate the superiority of the proposed SIDE. Since it is hard to obtain pairs of nighttime hazy and daytime haze-free images, the proposed SIDE is evaluated on Zhang’s dataset [], which contains 20 real-world nighttime hazy images. We also test the proposed SIDE on synthesized nighttime hazy images for quantitative comparisons. The proposed algorithm is implemented in MATLAB 2019b on a PC with a 9700K CPU and 32 GB RAM. In the implementation, the parameter λ is set to 3 for halo layer extraction, and the regularization parameters for ambient illumination decomposition are set as α₀ = 0.5 and α₁ = 0.05. While most conventional daytime dehazing methods can be applied for scene radiance recovery, He’s dark channel prior (DCP) [], Meng’s boundary constraint method (BC) [], the Color Attenuation Prior (CAP) [] and Berman’s non-local method (NL) [] are selected for demonstration. The performance of the proposed SIDE is also compared with several state-of-the-art nighttime dehazing approaches, including Zhang’s Maximum Reflectance Prior (MRP) [,], Li’s Glow and Multiple Light Colors (GMLC) [], Yu’s Pixel-wise Alpha Blending (PAB) [] and Lou’s Haze Density Features (HDF) [], where the parameters are set as defined in the references.

4.1. Results on Hazy Scene Estimation

To demonstrate the effectiveness of our halo layer extraction, we first illustrate the extracted halo layers and scene layers in Figure 4. It can be seen that the halos around illuminants are effectively mitigated in the scene layers.
Figure 4. Illustration of halo layer extraction. (a,d,g): nighttime hazy images, (b,e,h): the halo layers, (c,f,i): the scene layers.

4.2. Verification on Daytime Dehazing Methods

To demonstrate the effectiveness of the proposed SIDE, we compare the performances of four classic daytime dehazing methods with and without SIDE on test nighttime hazy images. Figure 5a–d show the dehazing results without our SIDE using He’s dark channel prior (DCP) [], Meng’s boundary constraint method (BC) [], the Color Attenuation Prior (CAP) [] and Berman’s non-local method (NL) [], respectively. Figure 5e–h show the results of the enhancing-and-dehazing flow of the corresponding methods, where the observed hazy image is first enhanced using LIME []. The bottom row of Figure 5 shows the dehazing results of the corresponding methods with the proposed SIDE. It is observed in the top row that applying daytime dehazing methods directly to nighttime hazy images causes halo artifacts, color distortion and contrast reduction in the outputs. Details are also lost in dark regions. In the middle row, the halo artifacts and color distortion become more serious after enhancement by LIME []. It is also noticed in Figure 5g that the visibility is worse after the enhancement. On the contrary, the results in the bottom row have better visibility and ambient color. In addition, the halo artifacts are significantly mitigated. It is clear that these daytime dehazing methods can be effectively applied to nighttime dehazing under the proposed SIDE framework.
Figure 5. Illustration of the proposed SIDE. (a–d): results of conventional daytime dehazing methods without SIDE; (e–h): results with the enhancing-and-dehazing flow (enhanced by LIME []); (i–l): results of the corresponding methods with the proposed SIDE. From left to right: dark channel prior (DCP) [], boundary constraint method (BC) [], Color Attenuation Prior (CAP) [] and Berman’s non-local method (NL) [].
We also compare our SIDE with different low-light enhancement methods in Figure 6. Specifically, the probabilistic image enhancement method (PIE) [], the naturalness preserved enhancement method (NPE) [], the low-light image enhancement via illumination map estimation (LIME) [] and the structure-revealing low-light image enhancement method (SLIE) [] are employed for comparisons. It is easily observed that although existing low-light enhancement methods can significantly increase the contrast of the nighttime hazy images, haze remains in the enhanced results.
Figure 6. Comparisons of conventional low-light enhancement methods. From left to right: real nighttime hazy images, results of low-light enhancement methods PIE [], NPE [], LIME [], SLIE [] and the result of proposed SIDE.
To further verify the effectiveness of the proposed SIDE, we compare the results of our SIDE with the results of applying the traditional daytime dehazing method BC [] directly, of applying the low-light enhancement method LIME [], and of the enhancing-and-dehazing flow, respectively. As shown in Figure 7, although conventional daytime dehazing methods can mitigate haze in certain regions, residual haze can be observed globally. In addition, the global illumination remains dark in the dehazing results. On the other hand, while traditional low-light enhancement methods are capable of increasing visibility, haze and halo artifacts become more severe. It is also noticed that the performance of dehazing-after-enhancement is not satisfying. Clearly, based on the proposed SIDE, conventional daytime dehazing methods can be employed for nighttime dehazing.
Figure 7. Comparisons of conventional dehazing methods and low-light enhancement methods. From left to right: observed nighttime hazy images, dehazing results using BC [] directly, enhancement results using LIME [], dehazing results after enhancement, dehazing results using BC with the proposed SIDE.

4.3. Qualitative Comparisons on Real Nighttime Hazy Images

We also compare the proposed SIDE with existing nighttime dehazing methods on real nighttime hazy images. In our implementation, Meng’s BC [] is employed for scene recovery. Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 show seven comparison results on real test images.
Figure 8. Comparisons with other nighttime dehazing methods on test image Pavilion. (a): the nighttime hazy image, (b): Zhang’s Maximum Reflectance Prior (MRP) [] result, (c): Lou’s Haze Density Features (HDF) [] result, (d): Li’s Glow and Multiple Light Colors (GMLC) [] result, (e): Yu’s Pixel-wise Alpha Blending (PAB) [] result, (f): SIDE result.
Figure 9. Comparisons with other nighttime dehazing methods on test image Lake. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
Figure 10. Comparisons with other nighttime dehazing methods on test image Street. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
Figure 11. Comparisons with other nighttime dehazing methods on test image Cityscape. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
Figure 12. Comparisons with other nighttime dehazing methods on test image Church. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
Figure 13. Comparisons with other nighttime dehazing methods on test image Riverside. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
Figure 14. Comparisons with other nighttime dehazing methods on test image Railway. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
As observed in Figure 8, all the compared methods are capable of removing haze from nighttime hazy images. Although PAB [] is capable of increasing the contrast of the scene to a certain degree, an apparent halo can be observed in Figure 8e. In addition, the brightness of the result remains dim. While MRP [], HDF [], GMLC [] and the proposed SIDE can significantly increase the visibility and suppress the halo effect, the proposed SIDE achieves better contrast improvement in local regions. Color distortion can be seen in the grove regions of the MRP [] and GMLC [] results, while the color is more natural in our result. HDF [] also leaves residual haze in the grove region. Although no ground-truth reference image is available, the result of SIDE has the best subjective performance in terms of color constancy. In Figure 9, while all the methods are able to remove haze and increase scene visibility, the proposed SIDE has the best performance in detail recovery. The reflection of the woods in the water is recovered well in our result. In Figure 10, it is noticed in the MRP [] result that over-saturation around lamps can be observed, and halos are also significant in the dehazed result. Although haze is removed in the PAB [] and HDF [] results, halo artifacts still exist. Moreover, the results suffer from dim and distorted illumination. While both GMLC [] and our SIDE are capable of mitigating halo artifacts significantly, more details and better visibility are observed in our result. In Figure 11, halo artifacts are observed in the HDF and PAB results. In the GMLC [] result, although halos around illuminants are well suppressed, structural halos are generated around buildings. Moreover, details in dark regions are lost. As observed in Figure 11f, our result has the best performance in terms of halo suppression and detail recovery. Figure 12 shows the comparison on the real test image Church. All the nighttime dehazing methods are capable of removing haze from the scene effectively.
Although GMLC [] can suppress halos around artificial illumination, structural halos are noticed around building boundaries. In addition, while all the compared methods have limitations in recovering scene details hidden in the dark, it is easily noticed that our result has the best performance in halo mitigation and detail recovery. Figure 13 demonstrates the comparison on the test image Riverside. Since the haze in the scene is quite light, all the methods can effectively remove the haze, and the halos around lights are not significant. As observed in the comparison, our method achieves the best visual performance in terms of detail recovery and global illumination enhancement. Figure 14 presents a comparison on the test image Railway, where the color temperature of the illumination is quite warm, and the illumination is mainly distributed at the top of the image. As observed in the comparison, all the methods can increase the visibility of the hazy image. Halos are noticed in the MRP [] and GMLC [] results due to over-saturation in bright regions. It is observed that in our result halos are well suppressed and details in foreground regions are significantly enhanced. However, it is also noticed that the lights at the top are slightly over-enhanced due to the illumination enhancement. Figure 15 presents a comparison on the test image Tomb, where the flashlight was activated. As seen in the comparison, halos are suppressed in the results of GMLC and our SIDE. However, all the compared nighttime dehazing methods have limitations when recovering details of foreground regions (bottom-left corner). Figure 16 shows a comparison on the test image Building, which shows a city view on a hazy night. As observed in the comparison, GMLC generates obvious halo artifacts around building boundaries, and HDF increases the visibility only slightly.
Figure 15. Comparisons with other nighttime dehazing methods on test image Tomb. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
Since no ground-truth images are available, the no-reference haze density metric (HDM) and the no-reference image quality metric for contrast distortion (NIQMC) [] are employed for evaluation. The HDM includes three evaluators, namely e, Σ and r̄, which indicate the rate of newly appearing edges, the percentage of pure white or pure black pixels, and the mean ratio of edge gradients, respectively. A better dehazed image should have higher values of e, r̄ and NIQMC, and a lower value of Σ, indicated by ↑ and ↓, respectively. Table 1 shows the HDM evaluation results for Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16, as well as the average evaluation results of the different methods on the real-world dataset. It is shown that the proposed SIDE achieves the best performance on most indicators. Due to the illumination enhancement of the proposed SIDE, our method also shows a clear advantage in terms of the metrics r̄ and NIQMC.
Table 1. Quantitative Comparison on Real-world Benchmark Dataset. The bold indicates the best scores of the quantitative comparisons.
Figure 16. Comparisons with other nighttime dehazing methods on test image Building. (a): the nighttime hazy image, (b): MRP [] result, (c): HDF [] result, (d): GMLC [] result, (e): PAB [] result, (f): SIDE result.
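The evaluators e and Σ can be sketched directly from their definitions; the edge detector below is a simplified gradient-threshold version, not the exact implementation behind Table 1:

```python
import numpy as np

def edge_rate(before, after, thresh=0.1):
    """Evaluator e: rate of edges newly visible after dehazing
    (a simplified gradient-threshold variant of the blind assessment)."""
    def n_edges(img):
        gy, gx = np.gradient(img)
        return np.count_nonzero(np.hypot(gx, gy) > thresh)
    n_o, n_r = n_edges(before), n_edges(after)
    return (n_r - n_o) / max(n_o, 1)

def saturated_ratio(img):
    """Evaluator Sigma: percentage of pure black or pure white pixels
    (for images scaled to [0, 1])."""
    return 100.0 * np.mean((img <= 0.0) | (img >= 1.0))
```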

4.4. Comparisons on Synthesized Nighttime Hazy Images

Besides the comparisons on the real image dataset, we also evaluate the performance of the proposed SIDE objectively on a synthetic test image generated according to Li’s work [], where the hazy image is rendered using PBRT. Figure 17 demonstrates the comparison of MRP [], HDF [], GMLC [], PAB [] and SIDE. Table 2 shows the PSNR and structural similarity (SSIM) evaluation results of the different methods. It is observed that our proposed SIDE achieves the best results in terms of both PSNR and SSIM compared with the existing state-of-the-art nighttime dehazing methods.
Figure 17. Comparisons on the synthetic test image. (a): the ground-truth image, (b): result of MRP [], (c): result of HDF [], (d): result of GMLC [], (e): result of PAB [], (f): result of SIDE.
Table 2. Quantitative comparison on the synthetic test image. The bold indicates the best scores of the quantitative comparisons.
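For reference, the PSNR reported in Table 2 follows the standard definition below (SSIM additionally requires a windowed computation over local means, variances and covariances, and is omitted here):

```python
import numpy as np

def psnr(ref, img, data_range=1.0):
    """Peak signal-to-noise ratio in dB for images scaled to [0, data_range]."""
    mse = np.mean((np.asarray(ref, dtype=float) - np.asarray(img, dtype=float)) ** 2)
    return np.inf if mse == 0.0 else 10.0 * np.log10(data_range**2 / mse)
```

For example, a uniform error of 0.1 on a [0, 1] image gives an MSE of 0.01 and hence a PSNR of 20 dB.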

5. Conclusions

In this paper, we propose a novel unified framework, namely SIDE, to simultaneously remove haze and enhance illumination for nighttime hazy images. Specifically, both the halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination are considered in our approach. In addition, we show that conventional daytime dehazing approaches can be effectively incorporated into the nighttime dehazing task based on the proposed SIDE. To mitigate the halo artifacts caused by multiple scattering, a robust layer decomposition method is first introduced to separate the halo layer from the hazy image. A Retinex-based illumination decomposition method is then proposed to estimate the non-uniformly distributed ambient illumination. By removing the ambient illumination, the original nighttime dehazing problem can be effectively solved using various daytime dehazing methods. Experimental results demonstrate the effectiveness of the proposed framework for classic daytime dehazing methods under nighttime hazy conditions. In addition, compared with the state-of-the-art nighttime dehazing methods, both quantitative and qualitative comparisons indicate the superiority of the proposed SIDE in terms of halo mitigation, visibility improvement and color preservation.

Author Contributions

Methodology, R.H.; software, X.G.; validation, X.G.; investigation, R.H.; writing—original draft preparation, R.H.; writing—review and editing, Z.S.; visualization, X.G.; supervision, Z.S.; project administration, R.H.; funding acquisition, R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by the Shaanxi Basic Research Program of Natural Science (2019JQ-572) and the National Natural Science Foundation of China (61420106007).

Conflicts of Interest

The authors declare no conflict of interest.

References
