Article

SIDE—A Unified Framework for Simultaneously Dehazing and Enhancement of Nighttime Hazy Images

Renjie He, Xintao Guo and Zhongke Shi
1 School of Automation, Northwestern Polytechnical University, Xi’an 710129, China
2 School of Electronics and Information, Northwestern Polytechnical University, Xi’an 710129, China
* Author to whom correspondence should be addressed.
Sensors 2020, 20(18), 5300; https://doi.org/10.3390/s20185300
Submission received: 3 August 2020 / Revised: 29 August 2020 / Accepted: 8 September 2020 / Published: 16 September 2020
(This article belongs to the Section Sensing and Imaging)

Abstract

Single image dehazing is a difficult problem because of its ill-posed nature, and it has received increasing attention recently owing to its high potential in many visual tasks. Although single image dehazing has made remarkable progress in recent years, existing methods are mainly designed for haze removal in the daytime. At night, dehazing is more challenging: most daytime dehazing methods become invalid due to multiple scattering phenomena and non-uniformly distributed, dim ambient illumination. While a few approaches have been proposed for nighttime image dehazing, low ambient light is largely ignored. In this paper, we propose a novel unified nighttime hazy image enhancement framework to address the problems of haze removal and illumination enhancement simultaneously. Specifically, both the halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination of low-light hazy conditions are considered for the first time in our approach. More importantly, most current daytime dehazing methods can be effectively incorporated into the nighttime dehazing task based on our framework. Firstly, we decompose the observed hazy image into a halo layer and a scene layer to remove the influence of multiple scattering. After that, we estimate the spatially varying ambient illumination based on the Retinex theory. We then employ classic daytime dehazing methods to recover the scene radiance. Finally, we generate the dehazing result by combining the adjusted ambient illumination and the scene radiance. Compared with various daytime dehazing methods and the state-of-the-art nighttime dehazing methods, quantitative and qualitative experimental results on both real-world and synthetic hazy image datasets demonstrate the superiority of our framework in terms of halo mitigation, visibility improvement and color preservation.

1. Introduction

Images captured in outdoor scenes are often degraded by atmospheric phenomena. Phenomena such as haze, fog and smoke are mainly generated by the substantial presence of suspended atmospheric particles which absorb, emit or scatter light. As a result, acquired outdoor scenery images are often of low visual quality, with reduced contrast, limited visibility and weak color fidelity. The performance of computer vision algorithms for tasks like detection, recognition and surveillance is severely limited under such hazy conditions. The goal of image dehazing is to mitigate the influence of haze and recover a clear scene image, which is beneficial for both computational photography and computer vision applications. Based on analysis of the mechanism of haze formation [1], a number of approaches relying on various image assumptions and priors have been proposed in the past decades to solve this challenging problem [2,3,4,5,6,7,8]. While most existing methods focus on daytime image dehazing, nighttime image dehazing remains a challenging problem. During the night, the performance of existing dehazing methods is significantly limited, because their priors and assumptions become invalid due to the weak illumination and multiple scattering of the scene ambient light.
Unlike the uniformly distributed sunlight that is the dominant light source in the daytime, illumination varies spatially during the night. In addition, the overall brightness of the scene can be extremely weak, resulting in failures of commonly used priors and assumptions, such as the dark channel prior [2] and the color attenuation prior [6]. Moreover, a significant halo effect caused by multiple scattering can be observed around light sources on hazy nights, which is not considered in the commonly used daytime dehazing model. To overcome these problems, several techniques with novel assumptions have been introduced in recent years, such as bright alpha blending [9], color transfer [10], the maximum reflectance prior [11], illumination correction [12], glow removal [13] and image fusion [14,15]. However, although the nighttime haze can be removed from the observed images, color distortion still exists in the dehazed results. In addition, visibility and scene details are reduced and hidden due to the low ambient illumination. While various methods have been proposed for low-light image enhancement and have achieved satisfying results [16,17,18,19], in practice, dehazing and brightening are treated as two independent problems in nighttime image processing. As a result, the ambient illumination remains dim in dehazed nighttime images, while haze remains in low-light enhancement results. Although a result can be obtained by a dehazing-and-enhancing flow, severe color distortion can appear. Therefore, we argue that both brightness enhancement and dehazing are crucial for visibility improvement of nighttime hazy images.
In this paper, a novel unified framework for SImultaneously Dehazing and Enhancement of nighttime hazy images (SIDE) is proposed by considering both halo mitigation and ambient illumination enhancement. More importantly, classic daytime dehazing methods can be effectively incorporated into nighttime dehazing based on the proposed framework. Specifically, a layer extraction approach is first introduced to mitigate the halo artifacts caused by multiple scattering. Then, a Retinex-based decomposition is proposed to estimate the spatially varying ambient illumination. After that, daytime dehazing methods can be applied to recover the scene radiance. To the best of our knowledge, the proposed SIDE is the first attempt that considers both haze removal and illumination enhancement for nighttime hazy images. Experimental results on both real-world and synthetic datasets demonstrate the effectiveness of the proposed framework for classic daytime dehazing methods under nighttime hazy conditions. In comparisons with state-of-the-art nighttime dehazing methods, both quantitative and qualitative evaluations indicate the superiority of the proposed SIDE in terms of halo mitigation, visibility improvement and color preservation.
The rest of the paper is organized as follows: Section 2 introduces related work and the limitations of existing image dehazing and low-light enhancement methods. Section 3 thoroughly describes the proposed SIDE. Section 4 presents the experimental results and analysis. Finally, Section 5 concludes the work.

2. Related Work

On hazy days, a beam of light traveling through the atmosphere is attenuated along the incident path and scattered in other directions. Sensors receive not only the attenuated reflection from scene objects but also the additive light scattered by the atmosphere. To describe the hazy imaging process, a hazy image is commonly modeled in computer vision as the pixel-wise convex combination of the scene radiance and the airlight according to Koschmieder’s law as follows:
I(x) = J(x)\,t(x) + A\,(1 - t(x)),      (1)
where I(x) and J(x) represent the observed hazy image and the haze-free scene radiance in RGB color space, respectively, A is the global atmospheric light, which is assumed to be spatially constant, and x indicates the pixel index. t(x) ∈ [0, 1] is the depth-dependent medium transmission function describing the portion of the light that reaches the camera without scattering, which can be expressed as follows:
t(x) = e^{-\beta d(x)},      (2)
where β is the scattering coefficient of the atmospheric medium and d is the scene depth.
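To make the imaging model concrete, the following minimal sketch renders a hazy image from a clear image and a depth map according to Equations (1) and (2). It is an illustrative forward model only; the atmospheric light A and scattering coefficient beta below are arbitrary example values, not settings from this paper.

```python
import numpy as np

def synthesize_haze(J, d, A=(0.8, 0.8, 0.8), beta=1.0):
    """Render a hazy image via Koschmieder's law.

    J: clear scene radiance, (H, W, 3) array in [0, 1].
    d: scene depth map, (H, W) array.
    A, beta: illustrative atmospheric light and scattering coefficient.
    """
    t = np.exp(-beta * d)[..., None]           # Eq. (2): transmission
    return J * t + np.asarray(A) * (1.0 - t)   # Eq. (1): attenuation + airlight
```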
Based on the single scattering haze model in (1), various approaches have been proposed to address the single image dehazing problem [1,2,3,4,20,21,22,23]. In recent years, encouraged by a milestone in single image dehazing, the dark channel prior [2], the performance of single image dehazing has improved continuously [4,5,6,7,24,25,26,27]. With the development of machine learning techniques, learning-based methods, especially convolutional neural network (CNN) based methods, have also been introduced to single image dehazing [28,29,30,31,32,33,34,35,36]. Cai et al. proposed an end-to-end DehazeNet to estimate the transmission map [29]; the scene radiance was then recovered according to the single scattering model. Ren et al. further improved the accuracy of transmission estimation by introducing a multi-scale deep neural network [30]. Li et al. introduced AOD-Net, a lightweight end-to-end CNN for image dehazing that generates haze-free images directly [31]. Zhang et al. also addressed the end-to-end dehazing problem with a deep learning based densely connected pyramid dehazing network [34]. Approaches using generative adversarial networks have also been studied in recent years [37,38,39,40]. Although these methods are capable of recovering satisfying results in the daytime, their performance on nighttime dehazing is quite limited. In addition, learning-based methods, especially data-driven deep learning approaches, rely on sufficient training data. However, datasets containing a large number of paired real nighttime hazy images and corresponding clear scene images are practically unobtainable due to the complex environmental illumination.
Compared with daytime dehazing, less attention has been paid to nighttime dehazing. Pei and Lee [10] proposed a color transfer method for nighttime hazy image mapping, in which haze was removed using a modified dark channel prior. Although visibility could be increased, their results are visually unrealistic. Zhang et al. introduced an imaging model that combines gamma correction and color correction [12]. Although their results are visually better, a severe halo effect is also observed. They further proposed a maximum reflectance prior to estimate the ambient illumination [11]; however, their method has limitations in illuminant regions. Li et al. addressed the halo effect in nighttime dehazing by introducing an atmospheric point spread function [13]. They removed the glow around light sources through layer decomposition; however, the illumination in their dehazed results remains dim. Ancuti et al. investigated local airlight estimation and introduced a multi-scale fusion technique for both daytime and nighttime hazy image enhancement by employing a patch-based dark channel prior [14,15].
However, the patch size requires careful selection. More recently, Yu et al. proposed a pixel-wise alpha blending method to improve the dark channel prior, in which the illumination is estimated using a guided filter [9]. Although color constancy is well preserved, the halo effect is still obvious due to the use of the guided filter. Lou et al. constructed a linear model to connect transmission and haze-relevant features and employed a learning approach to solve the model [41]. Kuanar et al. introduced a CNN-based DeGlow model with an embedded DeHaze module for nighttime dehazing [42].
The main limitation of the single scattering model is that Equation (1) only considers the single scattering effect, which means each pixel in the observed image I(x) corresponds to a sole scene pixel in J(x), while the flux scattered in other directions by each particle is ignored. The value of a pixel in the observed image is not only composed of the direct attenuation and airlight, but is also influenced by its neighboring points due to the multiple scattering phenomenon. In the daytime, a constant atmospheric light is presumed since homogeneous sunlight is the main light source in the scene. However, the ambient illumination is no longer uniformly distributed during the night: the inhomogeneous ambient light generated by multicolor light sources varies spatially, and the influence of multiple scattering becomes significant.
On the other hand, continuous contributions have been made to low-light image enhancement. Based on the Retinex theory, early methods enhance low-light images by removing the illumination component with Gaussian filtering [43,44,45]. However, due to the ill-posed nature of the Retinex model, results are often over-enhanced unnaturally. To overcome this problem, many recent works resort to variational methods, applying priors and assumptions on the illumination and reflectance with different regularized models [46,47,48,49,50]. Kimmel et al. introduced a smooth illumination prior in the regularization term [46]. Ng et al. proposed an ℓ2 fidelity prior with a TV-based regularized model by considering both the illumination and the reflectance [47]. Inspired by Ng’s work, Wang et al. proposed a constrained variational model with barrier functionals [50]. Based on the assumption that the illumination is spatially smooth and the reflectance is piece-wise continuous, Fu et al. proposed a probabilistic algorithm and a weighted variational method to decompose the illumination and the reflectance simultaneously [51,52]. Guo et al. proposed an ℓ1-norm-based regularization framework to refine an initial illumination map under a structure-aware prior [17]. Learning-based methods have also been developed for low-light image enhancement [53,54,55,56,57,58]. Although visibility and contrast can be increased, these approaches cannot handle haze in the scene.
In this paper, we address both haze removal and illumination enhancement for nighttime hazy images in a unified framework. In order to better understand and describe nighttime haze, the halo effect and the spatially varying illumination should be considered. Inspired by Li’s work [13], we employ the following nighttime haze model in our work:
I(x) = J(x)\,t(x) + L(x)\,(1 - t(x)) + H(x),      (3)
where L ( x ) denotes the varying ambient illumination, and H ( x ) indicates the additive halo layer.
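For contrast with the daytime model, a forward rendering of Equation (3) needs only two extra inputs: a spatially varying illumination map and an additive halo layer. The sketch below is illustrative only; the illumination and halo inputs are placeholders rather than quantities estimated by the method.

```python
import numpy as np

def synthesize_nighttime_haze(J, d, L, halo, beta=1.0):
    """Render Eq. (3): I = J t + L (1 - t) + H.

    L:    spatially varying ambient illumination, (H, W, 3) in [0, 1].
    halo: additive halo layer H(x), e.g. blurred bright blobs around lights.
    """
    t = np.exp(-beta * d)[..., None]
    return np.clip(J * t + L * (1.0 - t) + halo, 0.0, 1.0)
```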

3. Methodology of the Proposed SIDE

The scheme of the proposed SIDE is illustrated in Figure 1; it includes a Halo Decomposition Module, an Illumination Decomposition Module, an Image Dehazing Module and an Enhancement Module. We first introduce a halo extraction module to mitigate the halo artifacts caused by multiple scattering. After that, we propose a Retinex-based illumination decomposition method to estimate the spatially varying ambient illumination. With the illumination extracted, classic daytime dehazing methods are employed for haze removal. Finally, we generate the output scene image by combining the adjusted illumination and the dehazed scene layer. We describe each module in detail in the following subsections.

3.1. Halo Decomposition Module

As discussed above, one major problem of nighttime hazy images is the detail and visibility degradation of objects around light sources caused by the multiple scattering of nearby illuminants. Inspired by Li’s work [13], an observed hazy image I(x) can be modeled as a linear superposition of a scene layer S, which contains haze and scene information, and a halo layer H, which contains the halos and glows around illuminants. Equation (3) can thus be written as:
I(x) = S(x) + H(x),      (4)
where S and H indicate the scene layer and the halo layer, respectively.
The halo layer has the characteristics of high intensity and smooth variation around light sources, whereas the hazy scene layer itself contains only scene structure and texture details with relatively dim brightness. It is noticed that the gradients of the scene layer follow a sparser, heavier-tailed distribution than those of the smoothly varying halo layer. Therefore, a probabilistic model with prior knowledge of the gradient distributions of the two layers can be employed to extract the halo layer. By assuming S and H are independent, the optimal S and H can be obtained by minimizing the objective function as follows:
\min_{S,H} \sum_x \Big( \sum_{i \in \Omega_S} |f_i * S(x)| + \sum_{j \in \Omega_H} \tfrac{\lambda}{2} |f_j * H(x)|^2 \Big),      (5)
where ∗ denotes the convolution operator and f_k indicates a derivative filter from the sets Ω_S = {[1, −1], [1, −1]ᵀ} and Ω_H = {[1, −2, 1], [1, −2, 1]ᵀ}, which contain the first-order derivative filters in two directions and the second-order Laplacian filters, respectively. The scale weight λ controls the smoothness of the halo layer.
According to convolutional system theory, the convolution of a signal with a finite-length sequence can be expressed as the product of a Toeplitz matrix and the signal, where the Toeplitz matrix is uniquely determined by the finite-length sequence. By substituting H = I − S, Equation (5) can be rewritten as follows:
\min_{S} \sum_x \Big( \sum_{i \in \Omega_S} |D_i S(x)| + \sum_{j \in \Omega_H} \tfrac{\lambda}{2} |D_j I(x) - D_j S(x)|^2 \Big),      (6)
where D_k denotes the Toeplitz matrix generated by the convolution kernel f_k.
The non-convex problem in (6) can be optimized via the alternating direction method of multipliers (ADMM), as in References [59,60]. Figure 2 shows a result of halo layer separation. It is observed that, compared with the observed nighttime hazy image, the halo effect around light sources is mitigated in the scene layer.
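For concreteness, a minimal split Bregman solver for (6) is sketched below, assuming periodic boundary conditions so that the quadratic subproblem diagonalizes in the Fourier domain. This is one possible instantiation of the ADMM scheme mentioned above, not the authors' released code; only the grayscale case is shown, lam = 3 follows the paper's setting of λ, and mu and the iteration count are illustrative solver settings.

```python
import numpy as np

def psf2otf(k, shape):
    """Pad filter k to `shape`, centre it at the origin, and take its FFT."""
    pad = np.zeros(shape)
    pad[:k.shape[0], :k.shape[1]] = k
    pad = np.roll(pad, (-(k.shape[0] // 2), -(k.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(pad)

def shrink(v, tau):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def split_layers(I, lam=3.0, mu=0.05, iters=40):
    """Split a grayscale image I in [0, 1] into a scene layer S and a halo
    layer H = I - S by minimizing objective (6)."""
    fx, fy = np.array([[1.0, -1.0]]), np.array([[1.0], [-1.0]])                # Omega_S
    l2a, l2b = np.array([[1.0, -2.0, 1.0]]), np.array([[1.0], [-2.0], [1.0]])  # Omega_H
    Fx, Fy = psf2otf(fx, I.shape), psf2otf(fy, I.shape)
    Kq = sum(np.abs(psf2otf(k, I.shape)) ** 2 for k in (l2a, l2b))
    FI = np.fft.fft2(I)
    denom = mu * (np.abs(Fx) ** 2 + np.abs(Fy) ** 2) + lam * Kq
    denom[0, 0] = 1.0          # all filters are zero-mean; pin the free DC mode
    S = I.copy()
    ux = uy = bx = by = np.zeros_like(I)
    for _ in range(iters):
        gx = np.real(np.fft.ifft2(Fx * np.fft.fft2(S)))
        gy = np.real(np.fft.ifft2(Fy * np.fft.fft2(S)))
        ux, uy = shrink(gx + bx, 1.0 / mu), shrink(gy + by, 1.0 / mu)
        # S-update: quadratic subproblem solved exactly in the Fourier domain
        rhs = mu * (np.conj(Fx) * np.fft.fft2(ux - bx)
                    + np.conj(Fy) * np.fft.fft2(uy - by)) + lam * Kq * FI
        S = np.real(np.fft.ifft2(rhs / denom))
        gx = np.real(np.fft.ifft2(Fx * np.fft.fft2(S)))
        gy = np.real(np.fft.ifft2(Fy * np.fft.fft2(S)))
        bx, by = bx + gx - ux, by + gy - uy   # Bregman multiplier update
    H = I - S
    off = H.min()               # shift the unconstrained offset so H >= 0
    return S + off, H - off
```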

3.2. Illumination Decomposition Module

With the halo layer H extracted, the dehazing problem becomes:
S(x) = J(x)\,t(x) + L(x)\,(1 - t(x)).      (7)
Unlike the commonly used haze imaging model in (1), which assumes a globally constant atmospheric light, the ambient illumination in our work is assumed to be an inhomogeneous, spatially varying map L(x). Consequently, the local maximum assumption on the atmospheric light used in daytime dehazing no longer holds for estimating L(x) at night. Inspired by the Retinex theory, which models an observed image as the combination of reflectance and illumination, we resort to an illumination decomposition model to overcome these limitations.
According to the Retinex theory [61], the scene layer image can be formulated as the pixel-wise product of a reflectance component and a light-dependent illumination component as follows:
S(x) = R(x) \circ L(x),      (8)
where R is the reflectance component, and L is the illumination component. ∘ represents the pixel-wise multiplication.
Denoting J(x) = L(x) \circ ρ(x), Equation (7) can be reformulated in a Retinex-like pattern as follows:
S(x) = L(x) \circ [\rho(x)\,t(x) + 1 - t(x)],      (9)
where ρ is the intrinsic reflectance of objects in the scene [20,62].
The illumination is presumed to be spatially smooth and to contain the overall structure, which is also characteristic of the ambient illumination of the scene layer. Therefore, (9) can be solved by minimizing the following objective function:
\min_{R,L,\omega} \|R \circ L - S\|_F^2 + \mathrm{TGV}_\alpha^2(L) + \beta \|\nabla R - \nabla S\|_0,      (10)
where R(x) = ρ(x)\,t(x) + 1 − t(x), β is the regularization parameter and ∇ is the first-order differential operator. TGV²_α(L) indicates the second-order total generalized variation (TGV) with respect to L, which can be expressed in terms of the following ℓ1 minimization problem:
\mathrm{TGV}_\alpha^2(L) = \min_\omega \alpha_1 \|D(\nabla L - \omega)\|_1 + \alpha_0 \|\nabla \omega\|_1,      (11)
where α = {α₀, α₁} is a weighting vector and ω is a vector field with low variation. D is a diffusion tensor formulated as follows:
D = \exp(-\zeta |\nabla S|^{\mu})\, n n^{T} + n^{\perp} n^{\perp T},      (12)
where n = ∇S/|∇S| indicates the normalized direction of the image gradient, and n^⊥ is the normal vector to the gradient. ζ and μ are parameters controlling the magnitude and the sharpness of D.
In practice, we apply the ℓ1 norm as the convex relaxation [63] of the original ℓ0 minimization problem. In addition, (10) is a non-convex and non-smooth optimization problem because of the pixel-wise multiplication and the TGV regularization; the alternating direction method of multipliers (ADMM) [59,60] is adopted to solve it. Figure 3 demonstrates an example of ambient illumination estimation. It is observed that the unevenly distributed ambient illumination is effectively estimated. Color distortion caused by artificial illuminants is also corrected in the reflectance.
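A faithful solver for (10) is fairly involved. As a lightweight stand-in that conveys the intent of this module, the sketch below initializes the illumination as a smoothed upper envelope of the scene layer (in the spirit of LIME's max-RGB initialization [17]) and obtains the reflectance by division. Note the substitution: plain dilation-plus-Gaussian smoothing replaces the TGV-regularized estimate of the paper, and all parameter values are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter, grey_dilation

def estimate_illumination(S, sigma=15, eps=1e-3):
    """Crude stand-in for model (10): build a smooth illumination map L from
    the max-RGB upper envelope of the scene layer S, then divide it out to
    get the (still hazy) reflectance R, per the Retinex split S = R o L."""
    env = S.max(axis=2)                      # per-pixel channel maximum
    env = grey_dilation(env, size=(7, 7))    # local upper envelope
    L = gaussian_filter(env, sigma)          # enforce spatial smoothness
    L = np.clip(L, eps, 1.0)[..., None]
    R = np.clip(S / L, 0.0, 1.0)
    return L, R
```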

3.3. Scene Recovery Module

As expressed in (9), the reflectance component R in Equation (8) is composed as follows:
R(x) = \rho(x)\,t(x) + 1 - t(x),
which can be rewritten in the following form:
R(x) = J_1(x)\,t(x) + A_1 (1 - t(x)).      (13)
It can be easily observed that Equation (13) has a similar form to the original single scattering model in Equation (1), with the atmospheric light A₁ = (1, 1, 1)ᵀ and the scene radiance J₁(x) = ρ(x). Therefore, various assumptions and priors of existing daytime dehazing approaches can be employed to effectively solve Equation (13). In addition, to preserve the naturalness of the scene, we also manipulate the ambient illumination L(x) with a gamma transformation to increase the global brightness. We will demonstrate and analyze the results in the next section.
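Putting the modules together, the final composition step might look like the sketch below, which reuses the split_layers and estimate_illumination sketches from the previous subsections. Here daytime_dehaze stands for any classic daytime method (e.g., DCP, BC, CAP or NL) applied to R with the atmospheric light fixed at white; its signature and the brightening value gamma = 0.7 are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def side_pipeline(I, daytime_dehaze, gamma=0.7):
    """End-to-end SIDE-style sketch: halo removal, illumination split,
    daytime dehazing of the reflectance, and recombination."""
    # 1. Halo decomposition (applied per channel here for brevity).
    S = np.stack([split_layers(I[..., c])[0] for c in range(3)], axis=2)
    # 2. Retinex-like illumination/reflectance split, Eq. (9).
    L, R = estimate_illumination(S)
    # 3. Eq. (13): any daytime dehazer with atmospheric light A1 = (1, 1, 1).
    rho = daytime_dehaze(R, A=np.ones(3))
    # 4. Gamma-brighten the ambient illumination and recombine.
    return np.clip(rho * np.power(L, gamma), 0.0, 1.0)
```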

4. Experimental Results and Analysis

In this section, we first demonstrate the effectiveness of the proposed SIDE in terms of halo layer extraction. After that, we compare the performance of various daytime dehazing methods with and without the proposed SIDE. In addition, comparisons with conventional low-light image enhancement methods are also illustrated. Next, comprehensive experiments are conducted to demonstrate the superiority of the proposed SIDE. Since it is hard to obtain nighttime hazy and daytime haze-free image pairs, the proposed SIDE is evaluated on Zhang’s dataset [11], which contains 20 real-world nighttime hazy images. We also test the proposed SIDE on synthesized nighttime hazy images for quantitative comparisons. The proposed algorithm is implemented in MATLAB 2019b on a PC with a 9700K CPU and 32 GB RAM. In the implementation, the parameter λ is set to 3 for halo layer extraction, and the regularization parameters for ambient illumination decomposition are set as α₀ = 0.5 and α₁ = 0.05. While most conventional daytime dehazing methods can be applied for scene radiance recovery, He’s dark channel prior (DCP) [2], Meng’s boundary constraint method (BC) [4], the Color Attenuation Prior (CAP) [6] and Berman’s non-local method (NL) [7] are selected for demonstration. The performance of the proposed SIDE is also compared with several state-of-the-art nighttime dehazing approaches, including Zhang’s Maximum Reflectance Prior (MRP) [11,12], Li’s Glow and Multiple Light Colors (GMLC) [13], Yu’s Pixel-wise Alpha Blending (PAB) [9] and Lou’s Haze Density Features (HDF) [41], where the parameters are set as defined in the references.

4.1. Results on Hazy Scene Estimation

To demonstrate the effectiveness of our halo layer extraction, we first illustrate the extracted halo layers and scene layers in Figure 4. It can be seen that the halos around illuminants are effectively mitigated in the scene layers.

4.2. Verification on Daytime Dehazing Methods

To demonstrate the effectiveness of the proposed SIDE, we compare the performance of four classic daytime dehazing methods with and without SIDE on test nighttime hazy images. Figure 5a–d show the dehazing results without our SIDE using He’s dark channel prior (DCP) [2], Meng’s boundary constraint method (BC) [4], the Color Attenuation Prior (CAP) [6] and Berman’s non-local method (NL) [7], respectively. Figure 5e–h show the results of the enhancing-and-dehazing flow of the corresponding methods, where the observed hazy image is first enhanced using LIME [17]. The bottom row of Figure 5 shows the dehazing results of the corresponding methods with the proposed SIDE. It is observed in the top row that applying daytime dehazing methods directly to nighttime hazy images causes halo artifacts, color distortion and contrast reduction in the outputs; details are also lost in dark regions. In the middle row, halo artifacts and color distortion become more serious after enhancement by LIME [17]. It is also noticed in Figure 5g that the visibility is worse after the enhancement. On the contrary, the results in the bottom row have better visibility and ambient color, and the halo artifacts are significantly mitigated. It is clear that these daytime dehazing methods can be effectively applied to nighttime dehazing under the proposed SIDE framework.
We also compare our SIDE with different low-light enhancement methods in Figure 6. Specifically, the probabilistic image enhancement method (PIE) [51], the naturalness preserved enhancement method (NPE) [49], the low-light image enhancement via illumination map estimation (LIME) [17] and the structure-revealing low-light image enhancement method (SLIE) [18] are employed for comparisons. It is easily observed that although existing low-light enhancement methods can significantly increase the contrast of the nighttime hazy images, haze remains in the enhanced results.
To further verify the effectiveness of the proposed SIDE, we compare the results of our SIDE with the results of using the traditional daytime dehazing method BC [4] directly, using the low-light enhancement method LIME [17], and using the enhancing-and-dehazing flow, respectively. As shown in Figure 7, although conventional daytime dehazing methods can mitigate haze in certain regions, residual haze can be observed globally. In addition, the global illumination remains dark in the dehazing results. On the other hand, while traditional low-light enhancement methods are capable of increasing visibility, haze and halo artifacts become more severe. It is also noticed that the performance of dehazing after enhancement is not satisfying. Clearly, based on the proposed SIDE, conventional daytime dehazing methods can be employed in nighttime dehazing.

4.3. Qualitative Comparisons on Real Nighttime Hazy Images

We also compare the proposed SIDE with existing nighttime dehazing methods on real nighttime hazy images. In our implementation, Meng’s BC [4] is employed for scene recovery. Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13 and Figure 14 show seven comparison results on real test images.
As observed in Figure 8, all the compared methods are capable of removing haze from nighttime hazy images. Although PAB [9] can increase the contrast of the scene to a certain degree, an apparent halo can be observed in Figure 8e, and the brightness of the result remains dim. While MRP [11], HDF [41], GMLC [13] and the proposed SIDE can significantly increase the visibility and suppress the halo effect, the proposed SIDE achieves better contrast improvement in local regions. Color distortion can be seen in the grove regions of the MRP [11] and GMLC [13] results, while the color is more natural in our result; HDF [41] also leaves residual haze in the grove region. Although no ground-truth reference image is available, the result of SIDE has the best subjective performance in terms of color constancy. In Figure 9, while all the methods are able to remove haze and increase scene visibility, the proposed SIDE has the best performance on detail recovery; the reflection of the woods in the water is recovered well in our result. In Figure 10, over-saturation around the lamps can be observed in the MRP [11] result, and halos are also significant. Although haze is removed in the PAB [9] and HDF [41] results, halo artifacts still exist, and the results suffer from dim and distorted illumination. While both GMLC [13] and our SIDE are capable of mitigating halo artifacts significantly, more details and better visibility are observed in our result. In Figure 11, halo artifacts are observed in the HDF and PAB results. In the GMLC [13] result, although halos around illuminants are well suppressed, structural halos are generated around buildings, and details in dark regions are lost. As observed in Figure 11f, our result has the best performance in terms of halo suppression and detail recovery. Figure 12 shows the comparison on the real test image Church. All the nighttime dehazing methods are capable of removing haze from the scene effectively. Although GMLC [13] can suppress halos around artificial illumination, structural halos are noticed around building boundaries. In addition, while all the compared methods have limitations in recovering scene details hidden in the dark, it is easily noticed that our result has the best performance on halo mitigation and detail recovery. Figure 13 demonstrates the comparison on the test image Riverside. Since the haze in the scene is quite light, all the methods can effectively remove the haze, and the halos around lights are not significant. As observed in the comparison, our method achieves the best visual performance in terms of detail recovery and global illumination enhancement. Figure 14 presents a comparison on the test image Railway, where the color temperature of the illumination is quite warm and the illumination is mainly distributed at the top of the image. As observed in the comparison, all the methods can increase the visibility of the hazy image. Halos are noticed in the MRP [11] and GMLC [13] results due to over-saturation in bright regions. In our result, halos are well suppressed and details in foreground regions are significantly enhanced; however, the lights at the top are slightly over-enhanced due to the illumination enhancement. Figure 15 presents a comparison on the test image Tomb, where a flashlight was activated. As seen in the comparison, halos are suppressed in the results of GMLC and our SIDE.
However, all the compared nighttime dehazing methods have limitations when recovering details of foreground regions (bottom left corner). Figure 16 shows a comparison on the test image Building, which shows a city view on a hazy night. As observed in the comparison, GMLC generates obvious halo artifacts around building boundaries, and HDF only slightly increases the visibility.
Since no ground-truth images are available, the no-reference haze density metric (HDM) and the no-reference image quality metric for contrast distortion (NIQMC) [64] are employed for evaluation. The HDM includes three evaluators, namely e, Σ and r̄, which indicate the rate of newly appearing edges, the percentage of pure white or pure black pixels, and the mean ratio of edge gradients, respectively. A better dehazed image should have higher values of e, r̄ and NIQMC, and a lower value of Σ, indicated by ↑ and ↓, respectively. Table 1 shows the evaluation results on Figure 8, Figure 9, Figure 10, Figure 11, Figure 12, Figure 13, Figure 14, Figure 15 and Figure 16, as well as the average results on the real-world dataset for the different methods. The proposed SIDE achieves the best performance on most indicators. Due to the illumination enhancement of the proposed SIDE, our method shows a clear advantage in terms of the metrics r̄ and NIQMC.
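For reference, the e and Σ evaluators can be approximated as follows. The sketch uses a simple Sobel-magnitude threshold to decide edge visibility instead of the original visibility-level criterion, so it is an illustrative approximation rather than the exact metric used in the paper.

```python
import numpy as np
from scipy import ndimage

def hdm_e_sigma(before, after, edge_thresh=0.1):
    """Approximate two HDM evaluators on grayscale images in [0, 1]:
    e     : rate of edges newly visible after restoration,
    sigma : fraction of pixels saturated to pure black or pure white."""
    def visible_edges(img):
        gx, gy = ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1)
        return np.hypot(gx, gy) > edge_thresh
    n0 = visible_edges(before).sum()
    nr = visible_edges(after).sum()
    e = (nr - n0) / max(n0, 1)                    # newly appearing edge rate
    sigma = np.mean((after <= 0.0) | (after >= 1.0))
    return e, sigma
```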

4.4. Comparisons on Synthesized Nighttime Hazy Images

Besides comparisons on a real image dataset, we also evaluate the performance of the proposed SIDE objectively on a synthetic test image generated using PBRT, following Li’s work [13]. Figure 17 demonstrates the comparison of MRP [11], HDF [41], GMLC [13], PAB [9] and SIDE. Table 2 shows the PSNR and structural similarity (SSIM) results of the different methods. It is observed that the proposed SIDE achieves the best results in terms of both PSNR and SSIM compared with the existing state-of-the-art nighttime dehazing methods.
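The full-reference scores in Table 2 can in principle be reproduced with standard implementations such as those in scikit-image; the snippet below is an assumed evaluation setup, since the exact window and data-range settings are not specified in the paper.

```python
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate(gt, result):
    """PSNR/SSIM against the synthetic ground truth (float RGB images in [0, 1])."""
    psnr = peak_signal_noise_ratio(gt, result, data_range=1.0)
    ssim = structural_similarity(gt, result, data_range=1.0, channel_axis=2)
    return psnr, ssim
```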

5. Conclusions

In this paper, we propose a novel unified framework, namely SIDE, to simultaneously remove haze and enhance illumination for nighttime hazy images. Specifically, both the halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination are considered in our approach. In addition, we show that conventional daytime dehazing approaches can be effectively incorporated into the nighttime dehazing task based on the proposed SIDE. In order to mitigate the halo artifacts caused by multiple scattering, a robust layer decomposition method is first introduced to separate the halo layer from the hazy image. A Retinex-based illumination decomposition method is then proposed to estimate the non-uniformly distributed ambient illumination. By removing the ambient illumination, the original nighttime dehazing problem can be effectively solved using various daytime dehazing methods. Experimental results demonstrate the effectiveness of the proposed framework for classic daytime dehazing methods under nighttime hazy conditions. In addition, compared with state-of-the-art nighttime dehazing methods, both quantitative and qualitative comparisons indicate the superiority of the proposed SIDE in terms of halo mitigation, visibility improvement and color preservation.

Author Contributions

Methodology, R.H.; software, X.G.; validation, X.G.; investigation, R.H.; writing—original draft preparation, R.H.; writing—review and editing, Z.S.; visualization, X.G.; supervision, Z.S.; project administration, R.H.; funding acquisition, R.H. All authors have read and agreed to the published version of the manuscript.

Funding

This research was partially supported by Shaanxi basic research program of Natural Science (2019JQ-572) and the Natural Science Foundation of China (61420106007).

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9.
2. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Miami, FL, USA, 20–25 June 2009; pp. 1956–1963.
3. Ancuti, C.; Ancuti, C.; Hermans, C.; Bekaert, P. A Fast Semi-inverse Approach to Detect and Remove the Haze from a Single Image. In Proceedings of the Asian Conference on Computer Vision, Queenstown, New Zealand, 8–12 November 2011; pp. 501–514.
4. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, NSW, Australia, 3–6 December 2013; pp. 617–624.
5. Fattal, R. Dehazing Using Color-Lines. ACM Trans. Graph. 2014, 34, 1–14.
6. Zhu, Q.; Mai, J.; Shao, L. A Fast Single Image Haze Removal Algorithm Using Color Attenuation Prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
7. Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682.
8. He, R.; Huang, X. Single Image Dehazing Using Non-local Total Generalized Variation. In Proceedings of the IEEE International Conference on Industrial Engineering and Applications, Xi’an, China, 19–21 June 2019; pp. 19–24.
9. Yu, T.; Song, K.; Miao, P.; Yang, G.; Yang, H.; Chen, C. Nighttime Single Image Dehazing via Pixel-Wise Alpha Blending. IEEE Access 2019, 7, 114619–114630.
10. Pei, S.; Lee, T. Nighttime haze removal using color transfer pre-processing and dark channel prior. In Proceedings of the IEEE International Conference on Image Processing, Orlando, FL, USA, 30 September–3 October 2012; pp. 957–960.
11. Zhang, J.; Cao, Y.; Fang, S.; Kang, Y.; Chen, C.W. Fast haze removal for nighttime image using maximum reflectance prior. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 7418–7426.
12. Zhang, J.; Cao, Y.; Wang, Z. Nighttime haze removal based on a new imaging model. In Proceedings of the IEEE International Conference on Image Processing, Paris, France, 27–30 October 2014; pp. 4557–4561.
13. Li, Y.; Tan, R.T.; Brown, M.S. Nighttime haze removal with glow and multiple light colors. In Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile, 7–13 December 2015; pp. 226–234.
14. Ancuti, C.; Ancuti, C.; De Vleeschouwer, C.; Bovik, A. Night-time dehazing by fusion. In Proceedings of the IEEE International Conference on Image Processing, Phoenix, AZ, USA, 25–28 September 2016; pp. 2256–2260.
15. Ancuti, C.; Ancuti, C.O.; De Vleeschouwer, C.; Bovik, A.C. Day and night-time dehazing by local airlight estimation. IEEE Trans. Image Process. 2020, 29, 6264–6275.
16. Park, S.; Yu, S.; Moon, B.; Ko, S.; Paik, J. Low-Light Image Enhancement using Variational Optimization-based Retinex Model. IEEE Trans. Consum. Electron. 2017, 63, 178–184.
17. Guo, X.; Li, Y.; Ling, H. LIME: Low-Light Image Enhancement via Illumination Map Estimation. IEEE Trans. Image Process. 2017, 26, 982–993.
18. Li, M.; Liu, J.; Yang, W.; Sun, X.; Guo, Z. Structure-Revealing Low-Light Image Enhancement via Robust Retinex Model. IEEE Trans. Image Process. 2018, 27, 2828–2841.
19. He, R.; Guan, M.; Wen, C. SCENS: Simultaneous Contrast Enhancement and Noise Suppression for Low-light Images. IEEE Trans. Ind. Electron. 2020, in press.
20. Tan, R. Visibility in bad weather from a single image. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
21. Joshi, N.; Cohen, M. Seeing Mt. Rainier: Lucky imaging for multi-image denoising, sharpening, and haze removal. In Proceedings of the IEEE International Conference on Computational Photography, Cambridge, MA, USA, 29–30 March 2010; pp. 1–8.
22. Matlin, E.; Milanfar, P. Removal of haze and noise from a single image. In Proceedings of the IS&T/SPIE Electronic Imaging, Burlingame, CA, USA, 22–26 January 2012; pp. 177–188.
23. Huang, S.; Chen, B.; Wang, W. Visibility restoration of single hazy images captured in real-world weather conditions. IEEE Trans. Circuits Syst. Video Technol. 2014, 24, 1814–1824.
24. Li, Z.; Zheng, J. Edge-preserving decomposition-based single image haze removal. IEEE Trans. Image Process. 2015, 24, 5432–5441.
25. Li, Z.; Zheng, J.; Zhu, Z.; Yao, W.; Wu, S. Weighted guided image filtering. IEEE Trans. Image Process. 2015, 24, 120–129.
26. Li, Z.; Zheng, J. Single image de-hazing using globally guided image filtering. IEEE Trans. Image Process. 2018, 27, 442–450.
27. Liu, Y.; Shang, J.; Pan, L.; Wang, A.; Wang, M. A Unified Variational Model for Single Image Dehazing. IEEE Access 2019, 7, 15722–15736.
28. Tang, K.; Yang, J.; Wang, J. Investigating Haze-Relevant Features in a Learning Framework for Image Dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Columbus, OH, USA, 23–28 June 2014; pp. 2995–3002.
29. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. DehazeNet: An End-to-End System for Single Image Haze Removal. IEEE Trans. Image Process. 2016, 25, 5187–5198.
30. Ren, W.; Liu, S.; Zhang, H.; Pan, J.; Cao, X.; Yang, M. Single image dehazing via multi-scale convolutional neural networks. In Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; pp. 154–169.
31. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-One Dehazing Network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4770–4778.
32. Li, R.; Pan, J.; Li, Z.; Tang, J. Single image dehazing via conditional generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 8202–8211.
33. Li, J.; Li, G.; Fan, H. Image dehazing using residual-based deep CNN. IEEE Access 2018, 6, 26831–26842.
34. Zhang, H.; Patel, V. Densely connected pyramid dehazing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3194–3203.
35. Ren, W.; Pan, J.; Zhang, H.; Cao, X.; Yang, M.H. Single image dehazing via multi-scale convolutional neural networks with holistic edges. Int. J. Comput. Vis. 2020, 128, 240–259.
36. Zhang, X.; Wang, T.; Wang, J.; Tang, G.; Zhao, L. Pyramid Channel-based Feature Attention Network for image dehazing. Comput. Vis. Image Underst. 2020, 197, 103003.
37. Zhu, H.; Peng, X.; Chandrasekhar, V.; Li, L.; Lim, J.H. DehazeGAN: When Image Dehazing Meets Differential Programming. In Proceedings of the International Joint Conferences on Artificial Intelligence, Stockholm, Sweden, 13–19 July 2018; pp. 1234–1240.
38. Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced pix2pix dehazing network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 8160–8168.
39. Dudhane, A.; Singh Aulakh, H.; Murala, S. RI-GAN: An end-to-end network for single image haze removal. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019; pp. 2014–2023.
40. Dong, Y.; Liu, Y.; Zhang, H.; Chen, S.; Qiao, Y. FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing. In AAAI; AAAI Press: Palo Alto, CA, USA, 2020; pp. 10729–10736.
41. Lou, W.; Li, Y.; Yang, G.; Chen, C.; Yang, H.; Yu, T. Integrating Haze Density Features for Fast Nighttime Image Dehazing. IEEE Access 2020, 8, 3318–3330.
42. Kuanar, S.; Rao, K.; Mahapatra, D.; Bilas, M. Night time haze and glow removal using deep dilated convolutional network. arXiv 2019, arXiv:1902.00855.
43. Rahman, Z.; Jobson, D.; Woodell, G. Retinex Processing for Automatic Image Enhancement. J. Electron. Imaging 2004, 13, 100–111.
44. Jobson, D.; Rahman, Z.; Woodell, G. Properties and Performance of a Center/Surround Retinex. IEEE Trans. Image Process. 1997, 6, 451–462.
45. Jobson, D.J.; Rahman, Z.; Woodell, G. A Multiscale Retinex for Bridging the Gap Between Color Images and the Human Observation of Scenes. IEEE Trans. Image Process. 1997, 6, 965–976.
46. Kimmel, R.; Elad, M.; Shaked, D.; Keshet, R.; Sobel, I. A Variational Framework for Retinex. Int. J. Comput. Vis. 2003, 52, 7–23.
47. Ng, M.; Wang, W. A Total Variation Model for Retinex. SIAM J. Imaging Sci. 2011, 4, 345–365.
48. Ma, W.; Morel, J.M.; Osher, S.; Chien, A. An ℓ1-based Variational Model for Retinex Theory and Its Application to Medical Images. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Providence, RI, USA, 20–25 June 2011; pp. 153–160.
49. Wang, S.; Zheng, J.; Hu, H.M.; Li, B. Naturalness Preserved Enhancement Algorithm for Non-Uniform Illumination Images. IEEE Trans. Image Process. 2013, 22, 3538–3548.
50. Wang, W.; He, C. A Variational Model with Barrier Functionals for Retinex. SIAM J. Imaging Sci. 2015, 8, 1955–1980.
51. Fu, X.; Liao, Y.; Zeng, D.; Huang, Y.; Zhang, X.; Ding, X. A Probabilistic Method for Image Enhancement With Simultaneous Illumination and Reflectance Estimation. IEEE Trans. Image Process. 2015, 24, 4965–4977.
52. Fu, X.; Zeng, D.; Huang, Y.; Zhang, X.; Ding, X. A Weighted Variational Model for Simultaneous Reflectance and Illumination Estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 2782–2790.
53. Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A Deep Autoencoder Approach to Natural Low-Light Image Enhancement. Pattern Recognit. 2017, 61, 650–662.
54. Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to See in the Dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 3291–3300.
55. Park, S.; Yu, S.; Kim, M.; Park, K.; Paik, J. Dual Autoencoder Network for Retinex-based Low-Light Image Enhancement. IEEE Access 2018, 6, 22084–22093.
56. Wei, C.; Wang, W.; Yang, W.; Liu, J. Deep Retinex Decomposition for Low-Light Enhancement. In Proceedings of the British Machine Vision Conference, Newcastle, UK, 3–6 September 2018; pp. 1–12.
57. Cai, J.; Gu, S.; Zhang, L. Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images. IEEE Trans. Image Process. 2018, 27, 2049–2062.
58. Zhang, Y.; Zhang, J.; Guo, X. Kindling the Darkness: A Practical Low-light Image Enhancer. In Proceedings of the ACM International Conference on Multimedia, Nice, France, 21–25 October 2019; pp. 1632–1640.
59. Goldstein, T.; Osher, S. The Split Bregman Method for ℓ1-Regularized Problems. SIAM J. Imaging Sci. 2009, 2, 323–343.
60. Wang, Y.; Yin, W.; Zeng, J. Global Convergence of ADMM in Nonconvex Nonsmooth Optimization. J. Sci. Comput. 2019, 78, 29–63.
61. Land, E. The Retinex Theory of Color Vision. Sci. Am. 1977, 237, 108–129.
62. Li, Y.; You, S.; Brown, M.S.; Tan, R.T. Haze visibility enhancement: A survey and quantitative benchmarking. Comput. Vis. Image Underst. 2017, 165, 1–16.
63. Jenatton, R.; Mairal, J.; Obozinski, G.; Bach, F. Proximal Methods for Sparse Hierarchical Dictionary Learning. In Proceedings of the International Conference on Machine Learning, Haifa, Israel, 21–24 June 2010; pp. 487–494.
64. Gu, K.; Lin, W.; Zhai, G.; Yang, X.; Zhang, W.; Chen, C.W. No-Reference Quality Metric of Contrast-Distorted Images Based on Information Maximization. IEEE Trans. Cybern. 2016, 47, 4559–4565.
Figure 1. Framework of the proposed SImultaneously Dehazing and Enhancement of nighttime hazy images (SIDE).
Figure 2. Illustration of halo layer extraction. (a): a nighttime hazy image, (b): the halo layer, (c): the scene layer.
Figure 3. A sample of ambient illumination decomposition. (a): scene layer, (b): estimated ambient illumination, (c): estimated reflectance.
Figure 4. Illustration of halo layer extraction. (a,d,g): nighttime hazy images, (b,e,h): the halo layers, (c,f,i): the scene layers.
Figure 5. Illustration of the proposed SIDE. (a–d): results of conventional daytime dehazing methods without SIDE; (e–h): results with the enhancing-and-dehazing flow (enhanced by LIME [17]); (i–l): results of corresponding methods with the proposed SIDE. From left to right: dark channel prior (DCP) [2], boundary constraint method (BC) [4], Color Attenuation Prior (CAP) [6] and Berman’s non-local method (NL) [7].
Figure 6. Comparisons of conventional low-light enhancement methods. From left to right: real nighttime hazy images, results of low-light enhancement methods PIE [51], NPE [49], LIME [17], SLIE [18] and the result of the proposed SIDE.
Figure 7. Comparisons of conventional dehazing methods and low-light enhancement methods. From left to right: observed nighttime hazy images, dehazing results using BC [4] directly, enhancement results using LIME [17], dehazing results after enhancement, dehazing results using BC with the proposed SIDE.
Figure 8. Comparisons with other nighttime dehazing methods on test image Pavilion. (a): the nighttime hazy image, (b): Zhang’s Maximum Reflectance Prior (MRP) [11] result, (c): Lou’s Haze Density Features (HDF) [41] result, (d): Li’s Glow and Multiple Light Colors (GMLC) [13] result, (e): Yu’s Pixel-wise Alpha Blending (PAB) [9] result, (f): SIDE result.
Figure 9. Comparisons with other nighttime dehazing methods on test image Lake. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 10. Comparisons with other nighttime dehazing methods on test image Street. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 11. Comparisons with other nighttime dehazing methods on test image Cityscape. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 12. Comparisons with other nighttime dehazing methods on test image Church. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 13. Comparisons with other nighttime dehazing methods on test image Riverside. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 14. Comparisons with other nighttime dehazing methods on test image Railway. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 15. Comparisons with other nighttime dehazing methods on test image Tomb. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 16. Comparisons with other nighttime dehazing methods on test image Building. (a): the nighttime hazy image, (b): MRP [11] result, (c): HDF [41] result, (d): GMLC [13] result, (e): PAB [9] result, (f): SIDE result.
Figure 17. Comparisons on the synthetic test image. (a): the ground-truth image, (b): result of MRP [11], (c): result of HDF [41], (d): result of GMLC [13], (e): result of PAB [9], (f): result of SIDE.
Table 1. Quantitative comparison on the real-world benchmark dataset. Bold indicates the best scores.

| Image | Metric | MRP [11] | HDF [41] | GMLC [13] | PAB [9] | SIDE |
|---|---|---|---|---|---|---|
| Pavilion | e | 0.08 | 0.13 | 0.15 | 0.09 | **0.17** |
| | Σ | 0.01 | 0.01 | **0** | 0.03 | **0** |
| | r̄ | 4.35 | 2.97 | 5.01 | 4.67 | **5.08** |
| | NIQMC | 4.67 | 4.82 | 4.93 | 5.01 | **5.16** |
| Lake | e | 0.03 | 0.08 | 0.13 | 0.19 | **0.23** |
| | Σ | 0.01 | 0.02 | **0** | **0** | **0** |
| | r̄ | 2.33 | 3.54 | 5.28 | 5.66 | **7.21** |
| | NIQMC | 4.91 | 4.88 | 5.14 | 4.99 | **5.37** |
| Street | e | 0.20 | 0.17 | 0.11 | 0.09 | **0.22** |
| | Σ | 0.03 | 0.18 | 0.05 | 0.24 | **0.01** |
| | r̄ | 4.55 | 3.82 | 4.39 | 1.61 | **4.74** |
| | NIQMC | 4.89 | 4.28 | **5.32** | 4.47 | 5.15 |
| Cityscape | e | 0.11 | 0.13 | 0.02 | 0.08 | **0.19** |
| | Σ | **0.03** | 0.44 | 0.25 | 0.19 | 0.05 |
| | r̄ | 3.69 | 3.17 | 2.01 | 1.87 | **3.96** |
| | NIQMC | 3.98 | 4.96 | **5.11** | 4.20 | 5.03 |
| Church | e | 0.11 | 0.10 | 0.12 | 0.18 | **0.29** |
| | Σ | 0.07 | 0.06 | 0.08 | **0.03** | 0.04 |
| | r̄ | 4.77 | 3.83 | 2.98 | 3.95 | **5.10** |
| | NIQMC | 3.97 | 4.98 | 5.05 | 4.19 | **5.61** |
| Riverside | e | 0.06 | 0.09 | 0.10 | 0.14 | **0.25** |
| | Σ | 0.09 | 0.10 | 0.04 | 0.05 | **0.03** |
| | r̄ | 2.85 | 3.68 | 4.17 | 4.23 | **6.51** |
| | NIQMC | 4.71 | 4.39 | 4.97 | 4.85 | **5.43** |
| Railway | e | 0.18 | 0.13 | 0.15 | 0.07 | **0.27** |
| | Σ | 0.08 | 0.12 | 0.04 | 0.16 | **0.02** |
| | r̄ | 2.68 | 2.74 | 2.30 | 2.87 | **4.98** |
| | NIQMC | 4.25 | 4.19 | 5.03 | 5.24 | **5.48** |
| Tomb | e | 0.26 | 0.18 | 0.09 | 0.31 | **0.45** |
| | Σ | 0.03 | 0.07 | 0.16 | 0.02 | **0** |
| | r̄ | 5.13 | 4.25 | 3.97 | 5.72 | **7.11** |
| | NIQMC | 5.04 | 4.61 | 4.02 | 5.26 | **6.07** |
| Building | e | 0.11 | 0.18 | 0.09 | 0.21 | **0.30** |
| | Σ | 0.04 | 0.06 | 0.29 | 0.03 | **0.01** |
| | r̄ | 3.79 | 4.05 | 2.88 | 3.81 | **4.74** |
| | NIQMC | 4.56 | 4.80 | 2.73 | 5.76 | **6.20** |
| Average | e | 0.12 | 0.16 | 0.11 | 0.05 | **0.29** |
| | Σ | 0.09 | 0.13 | 0.08 | 0.19 | **0.04** |
| | r̄ | 3.29 | 3.45 | 2.16 | 2.37 | **5.62** |
| | NIQMC | 4.80 | 4.93 | 5.15 | 5.04 | **5.86** |
Table 2. Quantitative comparison on the synthetic test image. Bold indicates the best scores.

| | MRP [11] | HDF [41] | GMLC [13] | PAB [9] | SIDE |
|---|---|---|---|---|---|
| SSIM | 0.7133 | 0.7558 | 0.7605 | 0.7591 | **0.7616** |
| PSNR | 17.4122 | 17.9537 | 17.2907 | 17.8926 | **18.0025** |
