Article

Dehazing with Offset Correction and a Weighted Residual Map

Chang Su, Wensheng Wang, Xingxiang Zhang and Longxu Jin *

1 Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China
2 College of Materials Science and Opto-Electronic Technology, University of Chinese Academy of Sciences, Beijing 100049, China
3 College of Mechanical and Electrical Engineering, Beijing Information Science and Technology University, Beijing 100192, China
* Author to whom correspondence should be addressed.
Electronics 2020, 9(9), 1419; https://doi.org/10.3390/electronics9091419
Submission received: 3 June 2020 / Revised: 26 July 2020 / Accepted: 30 July 2020 / Published: 1 September 2020
(This article belongs to the Section Computer Science & Engineering)

Abstract: In hazy environments, image quality is degraded by haze: the degraded photos have reduced visibility, making them less vivid and less visually attractive. This paper proposes a method for recovering image information from a single hazy image. The dark channel prior algorithm tends to underestimate the transmission of bright areas. To address this problem, an improved dehazing algorithm is proposed. Assuming that haze shifts the intensity of the dark channel by the same offset everywhere, the expected value of the dark channel of a hazy image is used as an approximation of this offset to correct the transmission. However, this correction may neglect scene differences and affect the clarity of the recovered images. Therefore, a weighted residual map is used to enhance contrast and recover more information. Experimental results demonstrate that our algorithm effectively lessens color oversaturation and restores images with enhanced details. The algorithm provides a more accurate transmission estimation method that can be combined with a weighted residual map to eliminate haze and improve contrast.

1. Introduction

Aerosols in the air scatter light into the atmosphere. This scattering impairs the direct transmission of scene radiance and degrades image quality, especially on hazy days [1]. The degraded images usually exhibit low contrast and saturation, loss of detail, and hue shift, which impairs both visual quality and subsequent image processing. Hence, many approaches have been developed to eliminate haze and produce realistic clear images.
Based on the atmospheric scattering model, which describes the attenuation and distribution of light through aerosols, a hazy image is expressed as a convex combination of the scene radiance and the atmospheric light, with coefficients determined by the scene transmission at each pixel. In a color (RGB) image, each pixel has four unknowns under this model: the scene radiance in each color channel (one each for R, G, and B) and a transmission value. However, a single image provides only three constraints per pixel, i.e., the intensities of the three channels. Therefore, more constraints are needed to resolve this ambiguity. Some methods use additional information about the scene, such as multiple images taken under diverse conditions [2], polarization angles [3], or geometric features of the scene [4], to determine the transmission and obtain haze-free images.
More recently, dehazing methods using a single image have been developed; for example, by assuming that local transmission is uncorrelated with surface shading [5], by relaxing the physical model and maximizing image contrast [6,7], or by introducing statistics of clear images, such as the dark channel prior, which observes that in most local areas of haze-free images the minimum intensity is a small value close to zero [8].
A dehazing method using the dark channel prior (DCP) was proposed in 2009; in 2010, its speed was further improved by replacing the soft matting step with guided image filtering [9]. The dark channel prior is based on the observation that clear outdoor images usually contain points with low values in some color channel in local areas, and that atmospheric light raises the intensity of these points. He et al. exploited this change to estimate the transmission and remove haze. Although this algorithm effectively recovers haze-free images for most outdoor scenes, it has a limitation: in bright regions where the dark channel prior is invalid, the transmission is often underestimated, causing color shift and oversaturation in the restored image.
Subsequently, some novel algorithms have been proposed. Meng [10] proposed a regularization method that removes haze from a single input image by combining it with weighted L1-norm-based contextual regularization to estimate the unknown scene transmission. Fattal [11] derived a local formation model that explains the color lines in hazy images and used it to recover images. Zhu [12] modeled the scene depth of hazy images and trained this model with a supervised learning method to recover depth information. Berman [13] assumed that the colors of a clean image are well approximated by a few hundred distinct colors and used them to remove haze. Follow-up works based on the dark channel prior are still in progress [14]. Wang [15] used layered total variation, multichannel total variation, and color total variation models to denoise and preserve edges. Zhu [16] combined the dark channel prior with a patch-based prior to avoid artifacts. Bui [17] proposed a color ellipsoid prior for dehazing, of which the dark channel prior is a special case. Golts [18] minimized an unsupervised dark channel prior energy function to train a convolutional neural network for dehazing.
Inspired by the success of convolutional neural networks (CNNs) in object detection [19], recognition [20], and related tasks [21], learning-based methods have been used to extract features for dehazing [22]. Li [23] proposed an image dehazing model built with a CNN that generates clean images. Generative adversarial networks (GANs), which have made great progress in text-to-image synthesis [24], image-to-image translation [25], and other applications [26], have also been applied to dehazing [27]. Li [28] developed an encoder-decoder architecture in the generative network, with a loss function built on pretrained VGG features and an L1-regularized gradient prior, to solve the image dehazing problem. Zhang [29] proposed a densely connected pyramid dehazing network (DCPDN), which embeds the atmospheric scattering model into the network and connects an encoder-decoder structure with a multi-level pyramid pooling module to estimate the transmission map. Suárez [30] employed a stacked conditional GAN to remove haze on each color channel independently; however, this approach requires ground truth images for training. The gated fusion network [31] is based on the principle of image fusion and learns to generate the clear image directly, without restrictive assumptions on scene transmission and atmospheric light. The Enhanced Pix2pix Dehazing Network (EPDN) [32], which consists of three parts (a discriminator, a generator, and an enhancer), transforms a hazy image into a clear image pixel by pixel without relying on the physical scattering model.
We propose an improved algorithm based on the dark channel prior that restores images to their original colors while preserving details. This progress stems from two key improvements over previous work. First, we use the expected value of the dark channel of a hazy image to correct the transmission and thereby avoid color oversaturation. Second, a weighted residual map is introduced to increase the contrast of haze-free images. This approach significantly lessens color oversaturation and sharpens edges and details.
The remainder of this paper is structured as follows. In Section 2, we review the image degradation model and the dark channel prior, introduce a transmission estimation algorithm that corrects the offset using the expected value of the dark channel, and then apply a weighted residual map to further improve the contrast of recovered images. In Section 3, we present the results of the proposed algorithm on hazy images; conclusions are presented in Section 4.

2. Materials and Methods

2.1. Image Degradation Model and Dark Prior

According to the atmospheric scattering model [33], images captured in hazy weather can be expressed as
$$I(x) = t(x)\,J(x) + \left[1 - t(x)\right] A \tag{1}$$
where $I(x)$ is the observed intensity; $J(x)$ is the radiance of the real scene, i.e., the haze-free image to be recovered; $x = (m, n)$ denotes the image coordinates; and the transmission $t(x) = \exp[-\beta\, d(x)]$ describes the portion of the radiance that is not scattered during propagation through the atmosphere to the imaging system, where $\beta$ is the atmospheric scattering coefficient, which is related to the wavelength, and $d(x)$ is the scene depth. The atmospheric light $A$ represents the intensity of ambient light and is usually approximated as a global constant.
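To make Equation (1) concrete, the sketch below synthesizes a hazy image from a clear one under the model; the use of numpy and the default values of `beta` and `A` are illustrative assumptions rather than values prescribed by this paper.

```python
import numpy as np

def synthesize_haze(J, d, beta=1.0, A=0.9):
    """Render Equation (1): I(x) = t(x) J(x) + [1 - t(x)] A,
    with transmission t(x) = exp(-beta * d(x)).

    J : clear image, float array in [0, 1], shape (H, W, 3)
    d : scene depth map, shape (H, W)
    """
    t = np.exp(-beta * d)          # transmission falls off with scene depth
    t = t[..., np.newaxis]         # broadcast over the three color channels
    return t * J + (1.0 - t) * A   # convex combination of radiance and airlight
```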
Analyzing Equation (1), when the transmission and atmospheric light are known, a dehazed image can be obtained as follows:
$$J(x) = \frac{I(x) - A}{t(x)} + A \tag{2}$$
Therefore, based on this model, the key for dehazing methods is to estimate the transmission and the atmospheric light of the scene.
To estimate the transmission, He et al. [8] used the dark channel prior (DCP), a statistic of the locally minimized images (dark channels) of clear images, to compute the transmission and restore the image. The dark channel is defined as
$$J^{dark}(x) = \min_{c \in \{r,g,b\}} \min_{y \in \Omega(x)} J^{c}(y) \tag{3}$$
where $J^{dark}(x)$ is the dark channel of $J(x)$, $J^{c}(x)$ is a color channel of $J(x)$, and $\Omega(x)$ is the $15 \times 15$ neighborhood of $x$.
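A minimal sketch of this computation, assuming a float RGB image scaled to [0, 1] and using scipy's minimum filter for the neighborhood minimum:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Equation (3): per-pixel channel minimum, then a minimum over
    the patch x patch neighborhood Omega(x)."""
    channel_min = img.min(axis=2)                   # min over c in {r, g, b}
    return minimum_filter(channel_min, size=patch)  # min over y in Omega(x)
```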
He et al. [8] observed that the brightest 0.1% of pixels in the dark channel usually belong to the most haze-opaque regions. Picking the brightest color among these pixels is equivalent to choosing the largest intensity in the area with the heaviest haze, which serves as the estimate of the atmospheric light A. This paper estimates A for each channel with the same method as [8].
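The following sketch mirrors that estimation; taking the channel sum as the measure of the "brightest color" among the candidates is our simplification, not a detail fixed by [8].

```python
import numpy as np

def atmospheric_light(img, dark):
    """Estimate A from the brightest 0.1% of dark-channel pixels [8].

    img  : hazy image, float array of shape (H, W, 3)
    dark : its dark channel, shape (H, W)
    """
    n = max(dark.size // 1000, 1)                # 0.1% of all pixels
    idx = np.argsort(dark.ravel())[-n:]          # most haze-opaque candidates
    candidates = img.reshape(-1, 3)[idx]
    brightest = candidates.sum(axis=1).argmax()  # brightest color among them
    return candidates[brightest]                 # A as an RGB triple
```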
Minimizing both sides of Equation (2) after normalizing by the atmospheric light, the transmission can be estimated as
$$t(x) = \frac{1 - \min_{c \in \{r,g,b\}} \min_{y \in \Omega(x)} \dfrac{I^{c}(y)}{A^{c}}}{1 - \min_{c \in \{r,g,b\}} \min_{y \in \Omega(x)} \dfrac{J^{c}(y)}{A^{c}}} \tag{4}$$
According to the dark channel prior, for most local areas that do not cover the sky, there are always pixels with at least one color (RGB) channel of very low radiance intensity, i.e., $J^{dark}(x) \to 0$. Then, the transmission can be generated as follows:
$$t(x) = 1 - \min_{c \in \{r,g,b\}} \min_{y \in \Omega(x)} \frac{I^{c}(y)}{A^{c}} \tag{5}$$
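Equation (5) translates directly into code; the sketch below reuses the 15 × 15 neighborhood minimum and assumes A is given as an RGB triple:

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dcp_transmission(img, A, patch=15):
    """Equation (5): t(x) = 1 - min_c min_{y in Omega(x)} I^c(y) / A^c."""
    normalized = img / A                    # I^c(y) / A^c, channel by channel
    return 1.0 - minimum_filter(normalized.min(axis=2), size=patch)
```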
The dark channel prior is valid and effective when the scene contains shadows, colorful objects or surfaces, or dark objects or surfaces. However, when a hazy image contains an extensive bright region, such as a whitish sky or bright objects, this prior leads to underestimation of the transmission, which results in color cast or oversaturation in these regions of the recovered images.

2.2. Transmission Estimation Using Offset Correction

To solve this problem, this study devises an improved algorithm based on an offset correction to more accurately estimate the transmission of bright regions.
Considering that the dark channel of haze-free images may have a high value in bright areas, the dark channel prior is extended to a full-scene prior:
$$J^{dark}(x) \to \alpha(x) \tag{6}$$
When $\alpha(x) = 0$, this reduces to the dark channel prior; when $\alpha(x) \in (0, 1)$, it is a prior that accommodates bright scenes. Owing to the variability of scenes, the statistic $\alpha(x)$ of the bright regions that do not conform to the dark channel prior is difficult to calculate exactly and can only be approximated.
To reasonably estimate $\alpha(x)$, this paper proposes a method to estimate the transmission using the expected value of the dark channel of hazy images. It is known from the dark channel prior that the expected value of the dark channel of clear outdoor images is zero. For hazy images, the intensity of the dark channel increases overall under the influence of haze. Correspondingly, its expected value is offset relative to that of clear images.
In this study, we assume that the brightness of the dark channel across the scene is affected by haze to the same degree, i.e., it has the same increment everywhere. Then, the statistic $\alpha(x)$ can be approximated by the difference between the dark channel $I^{dark}(x)$ of a hazy image and its expected value. To ensure $\alpha(x) \ge 0$, $\alpha(x)$ is defined as
$$\alpha(x) := \max\left( I^{dark}(x) - \mu_{cen},\, 0 \right) \tag{7}$$
where $\max(\cdot)$ is the maximum function, $I^{dark}(x)$ is the dark channel of the hazy image $I(x)$, and $\mu_{cen}$ is calculated as
$$\mu_{cen} = \frac{\sum_{x} I^{dark}(x)}{MN} \tag{8}$$
where $MN$ denotes the number of pixels.
Substituting Equation (6) into Equation (4) gives an offset-corrected estimate of the transmission:
$$\tilde{t}(x) = \frac{1 - \min_{c \in \{r,g,b\}} \min_{y \in \Omega(x)} \dfrac{I^{c}(y)}{A^{c}}}{1 - \min_{c \in \{r,g,b\}} \dfrac{\alpha(x)}{A^{c}}} \tag{9}$$
When the offset-corrected transmission $\tilde{t}(x)$ has been calculated, the dehazed image $\tilde{J}(x)$ can be estimated according to Equation (2) as
$$\tilde{J}(x) = \frac{I(x) - A}{\tilde{t}(x)} + A \tag{10}$$
This assumption neglects differences in how strongly haze affects different scene regions, which may affect the clarity of the dehazed image. Fortunately, the contrast can be improved using a weighted residual map, as discussed next.
Although the expected value is only an approximation, this approach effectively reduces the transmission estimation error in bright regions and avoids color oversaturation. Figure 1 compares DCP and our method in terms of the transmission and the recovered image. As shown in Figure 1d, our method estimates the transmission of bright regions more accurately and exhibits better transitions at the edges.
Unlike simply setting a minimum threshold for the transmission of bright areas, this method not only reflects the particular shooting conditions, such as scene and illumination, but also accounts for intensity differences within bright areas, yielding an estimate closer to the true transmission.
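A sketch of Equations (7)–(10) under the assumptions above; the denominator floor, the clipping of the transmission to [0, 1], and the transmission lower bound in the recovery step are numerical safeguards we add (the last follows common practice in [8]), not part of the derivation itself.

```python
import numpy as np

def offset_corrected_transmission(t_dcp, dark, A, eps=1e-3):
    """Equations (7)-(9): correct the DCP transmission estimate using
    the expected value of the hazy image's dark channel.

    t_dcp : baseline transmission from Equation (5)
    dark  : dark channel I_dark(x) of the hazy image
    A     : atmospheric light, numpy array of 3 channel values
    """
    mu_cen = dark.mean()                    # Equation (8): expected value
    alpha = np.maximum(dark - mu_cen, 0.0)  # Equation (7): full-scene prior
    denom = 1.0 - alpha / A.max()           # min_c alpha/A^c = alpha / max_c A^c
    return np.clip(t_dcp / np.maximum(denom, eps), 0.0, 1.0)

def recover(img, A, t, t_floor=0.1):
    """Equation (10): J~(x) = (I(x) - A) / t~(x) + A."""
    t = np.maximum(t, t_floor)[..., np.newaxis]  # avoid division by ~0
    return np.clip((img - A) / t + A, 0.0, 1.0)
```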

2.3. A Weighted Residual Map

As the expected value is a measure of the central tendency of the dark channel's intensity, it cannot accurately determine the transmission of each pixel or thoroughly remove haze in regions where local intensities in the dark channel are significantly higher than the expected value. Therefore, we propose a dehazing approach that further recovers image details using a weighted residual map.
The fundamental idea of the weighted residual map is to exploit the residual between the actual observed image $I(x)$ and the image $\tilde{J}(x)$ estimated by the offset-correcting approach. The residual map $R_{residual}(x)$ is defined as
$$R_{residual}(x) = I(x) - \tilde{J}(x) \tag{11}$$
As shown in Figure 2a, this residual map contains unrecovered scene and structure information that we can use to recover the original image more accurately.
Substituting Equation (10) into Equation (11) gives
$$R_{residual}(x) = \left[\frac{1}{\tilde{t}(x)} - 1\right]\left[A - I(x)\right] \tag{12}$$
From Equation (12), the residual map $R_{residual}(x)$ depends on the offset-corrected transmission through the factor $1/\tilde{t}(x) - 1$. Since $t(x) = \exp[-\beta\, d(x)]$, the transmission is negatively correlated with the distance $d(x)$, as is the offset-corrected transmission $\tilde{t}(x)$; by Equation (12), the residual map is therefore negatively correlated with $\tilde{t}(x)$. Consequently, regions of the residual map with lower transmission, i.e., farther from the imaging device, receive larger intensity magnification, which is the opposite of what we desire. We therefore apply the residual map negatively for contrast enhancement and obtain a dehazed image as follows:
$$J(x) = \tilde{J}(x) - \omega\, R_{residual}(x) \tag{13}$$
where $\omega$ is a weight function used to further adjust the influence of different regions of the residual map on the final restored image $J(x)$. In this paper, the offset-corrected transmission $\tilde{t}(x)$ is chosen as $\omega$:
$$\tilde{t}(x)\, R_{residual}(x) = \left[1 - \tilde{t}(x)\right]\left[A - I(x)\right] \tag{14}$$
The appealing property of this selection lies in converting an inverse function into a linear function while preserving the distance dependence. Choosing $\tilde{t}(x)$ as $\omega$ makes the two multiplicative terms in Equation (14) of the same order of magnitude, thus avoiding overamplification, and the values can be limited to the range $[0, 255]$. Moreover, this selection balances the restored information against excessive sharpening, as shown in Figure 2b.
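A sketch of this final step, combining Equations (11), (13), and (14) with $\omega = \tilde{t}(x)$; the inputs are assumed to come from the sketches in Section 2.2.

```python
import numpy as np

def weighted_residual_dehaze(img, J_tilde, t_tilde):
    """J(x) = J~(x) - t~(x) * R_residual(x), Equations (11) and (13)."""
    residual = img - J_tilde                             # Equation (11)
    out = J_tilde - t_tilde[..., np.newaxis] * residual  # weight omega = t~(x)
    return np.clip(out, 0.0, 1.0)                        # keep intensities in range
```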
The pseudocode of the dehazing algorithm with offset correction and a weighted residual map is given in Table 1. First, the offset-corrected transmission $\tilde{t}(x)$ is estimated by calculating the full-scene prior $\alpha(x)$. Second, the final dehazed image $J(x)$ is computed from the dehazed image $\tilde{J}(x)$ produced by the offset-correcting approach and the weighted residual map $\tilde{t}(x)\, R_{residual}(x)$.
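For orientation, a minimal driver composing the sketches above in the order of Table 1; it assumes those helper functions are in scope and that `img` is a float RGB image in [0, 1].

```python
import numpy as np

def dehaze(img):
    """End-to-end sketch of the algorithm in Table 1."""
    dark = dark_channel(img)                                 # I_dark(x)
    A = np.asarray(atmospheric_light(img, dark))             # global atmospheric light
    t_dcp = dcp_transmission(img, A)                         # Equation (5)
    t_tilde = offset_corrected_transmission(t_dcp, dark, A)  # Equation (9)
    J_tilde = recover(img, A, t_tilde)                       # Equation (10)
    return weighted_residual_dehaze(img, J_tilde, t_tilde)   # Equation (13)
```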
Figure 2 shows an example of residual-based dehazing. To illustrate the effect, Figure 1a again serves as the input. Compared with the results in Figure 1c,e, the quality of the recovered image in Figure 2c, which has clearer edges and more natural colors, is further improved by this residual-based approach.

3. Results and Discussion

In this section, we conduct experiments on both real-world images and a synthetic dataset to demonstrate the effectiveness of the proposed method. We also conduct an ablation study to verify the effectiveness of the offset correction and the weighted residual map.

3.1. Ablation Study

To better reveal the effectiveness of the offset correction in transmission estimation and of the weighted residual map, we conduct an ablation study on images with and without sky regions, comparing our approach with DCP in Figure 3 and Figure 4. In Figure 3, owing to the underestimated transmission of extensive bright regions, the results of DCP show oversaturation and artifacts in the sky regions in Figure 3c. Figure 3d,e demonstrate that our offset correction method works well in these regions.
In Figure 4, although there is no significant difference between the DCP transmission estimate in Figure 4b and our transmission estimate in Figure 4d for an image without sky regions, the weighted residual map can further enhance the final result by capturing fine structural details. As shown in Figure 3e,g and Figure 4e,g, whether or not whitish regions are present, the weighted residual map improves image clarity.
The ablation study demonstrates that offset correction in transmission estimation and the weighted residual map are effective for image dehazing.

3.2. Performance on Real-World Hazy Images

To demonstrate the performance of offset correction and the weighted residual map in haze removal, we apply our dehazing algorithm to the real-world hazy images "Florence", "Community", and "Buildings", which contain large grayish-white sky or objects, and to the real-world hazy images "Flags", "Forest", and "Girls", which contain no sky regions.
In Figure 5, we compare the proposed method with DCP and with the algorithms of Berman et al. [13], Bui et al. [17], DCPDN [29], Zhu et al. [16], and Golts et al. [18]. In DCP, the colors of restored images are often oversaturated, especially in the sky areas, because the transmission is underestimated. Berman et al.'s results contain some saturated or dark pixels, and haze may not be sufficiently removed in some regions, for instance, the buildings in the lower left corner of "Community" and "Buildings". Bui et al.'s results have greater contrast and fewer halo artifacts, but some pixels are oversaturated. Owing to the requirements of the network, Zhang et al.'s algorithm (DCPDN) resizes hazy images to 512 × 512 and generates images of the same size; to facilitate comparison, we resize Zhang et al.'s results back to the original image size. In Zhang et al.'s results, the color is natural but the clarity is compromised. Zhu et al.'s [16] method and Golts et al.'s method leave haze in the results, as seen in "Community" and "Flags". In contrast, our approach restores images with clearer structures and enhanced details.

3.3. Performance on Synthetic Dataset

To further illustrate the effectiveness of the proposed algorithm, we use the large-scale RESIDE (REalistic Single Image DEhazing) test dataset [22], which contains 1000 synthetic images. The RESIDE test dataset is divided into indoor and outdoor parts, called "SOTS-indoor" and "SOTS-outdoor". We use the Peak Signal-to-Noise Ratio (PSNR) for objective evaluation: the PSNR is a logarithmic function of the Mean Squared Error (MSE) between two images, and the larger the PSNR, the smaller the distortion. For subjective evaluation, a quality assessment test was conducted with thirty subjects, most of whom were college students. Each subject assigned each image a score between 1 and 5, where 5 represents the best quality and 1 the worst; the 30 scores for each image were averaged to obtain its final mean opinion score (MOS).
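For reference, a minimal PSNR computation as described here, assuming float images scaled so that the peak value is 1.0:

```python
import numpy as np

def psnr(reference, restored, peak=1.0):
    """PSNR = 10 * log10(peak^2 / MSE); larger values mean less distortion."""
    diff = (np.asarray(reference, dtype=np.float64)
            - np.asarray(restored, dtype=np.float64))
    mse = np.mean(diff ** 2)  # mean squared error between the two images
    return 10.0 * np.log10(peak ** 2 / mse)
```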
From Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11, it can be observed that although DCP, Berman et al.'s [13] method, and Zhu et al.'s [16] method, which, like ours, are based on the atmospheric scattering model, can recover dehazed images, they tend to produce either saturation or color shifting. The learning-based methods, DCPDN [29] and Golts et al.'s [18] method, recover images with more natural colors; however, DCPDN tends to generate overexposed images, and Golts et al.'s method cannot fully remove heavier haze. In contrast, our results preserve more details and sharper contours.
The comparison results are shown in Table 2 and Table 3, which list the PSNR and MOS, respectively, of the dehazed images from Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11. Bold values indicate the optimal result in each column.
Table 4 shows the average PSNR of the dehazed images of the RESIDE test dataset. The quantitative results for each part are shown in Figure 12. From Table 2, Table 3 and Table 4 and Figure 12, it can be observed that our results have higher PSNR and MOS, which indicates that our method performs well in terms of subjective visual quality, recovers more information and edges from hazy images, and produces images with higher contrast.

4. Conclusions

In this paper, we proposed an improved dehazing algorithm based on the dark channel prior. This haze removal algorithm adopts an offset-correcting scheme built on the assumption that the intensity of the dark channel in bad weather has the same haze-induced increment everywhere; the increment is approximated by the expected value of the dark channel and used to update the transmission in bright regions. Subsequently, a weighted residual map is introduced to recover more details and improve contrast. The results show that the combination of offset correction and the residual map not only reduces color oversaturation but also enhances details in dehazed images. Beyond its practical effect, from the perspective of image understanding, this method focuses on the residual information in images and uses it to improve the quality of recovered images.
Similar to other dehazing algorithms based on the atmospheric scattering model, our method suffers from color shifting in defogged images. More advanced methods [34] focus on color shifting and preserving color information. In the future, we intend to investigate dehazing models and color balance algorithms [35] to address color shifting.

Author Contributions

Conceptualization, L.J.; Data curation, C.S.; Funding acquisition, X.Z. and L.J.; Investigation, C.S. and W.W.; Software, C.S. and W.W.; Supervision, X.Z. and L.J.; Validation, C.S. and W.W.; Visualization, C.S.; Writing—original draft preparation, C.S.; Writing—review and editing, W.W., X.Z. and L.J. All authors have read and agreed to the published version of the manuscript.

Funding

This research was supported by the Science and Technology Development Program of Jilin under Grant 20170204029GX.

Acknowledgments

We acknowledge the anonymous reviewers for their comments and suggestions that significantly improved the quality of this paper. We would also like to thank all authors who provided the program codes and test pictures online.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Lin, C.J.; Lin, C.H.; Wang, S.H. Using a Hybrid of Interval Type-2 RFCMAC and Bilateral Filter for Satellite Image Dehazing. Electronics 2020, 9, 710.
  2. Narasimhan, S.G.; Nayar, S.K. Chromatic framework for vision in bad weather. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2000), Hilton Head Island, SC, USA, 15 June 2000; Volume 1, pp. 598–605.
  3. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR 2001), Kauai, HI, USA, 8–14 December 2001; Volume 1, pp. 325–332.
  4. Kopf, J.; Neubert, B.; Chen, B.; Cohen, M.; Cohen-Or, D.; Deussen, O.; Uyttendaele, M.; Lischinski, D. Deep photo: Model-based photograph enhancement and viewing. ACM Trans. Graph. 2008, 27, 1–10.
  5. Fattal, R. Single image dehazing. ACM Trans. Graph. 2008, 27, 1–9.
  6. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8.
  7. Kim, J.H.; Jang, W.D.; Sim, J.Y.; Kim, C.S. Optimized contrast enhancement for real-time image and video dehazing. J. Vis. Commun. Image Represent. 2013, 24, 410–425.
  8. He, K.; Sun, J.; Tang, X. Single image haze removal using dark channel prior. IEEE Trans. Pattern Anal. Mach. Intell. 2010, 33, 2341–2353.
  9. He, K.; Sun, J.; Tang, X. Guided image filtering. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 35, 1397–1409.
  10. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient image dehazing with boundary constraint and contextual regularization. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 617–624.
  11. Fattal, R. Dehazing using color-lines. ACM Trans. Graph. 2014, 34, 1–14.
  12. Zhu, Q.; Mai, J.; Shao, L. A fast single image haze removal algorithm using color attenuation prior. IEEE Trans. Image Process. 2015, 24, 3522–3533.
  13. Berman, D.; Avidan, S. Non-local image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682.
  14. Pei, T.; Ma, Q.; Xue, P.; Ding, Y.; Hao, L.; Yu, T. Nighttime Haze Removal Using Bilateral Filtering and Adaptive Dark Channel Prior. In Proceedings of the 2019 IEEE 4th International Conference on Image, Vision and Computing (ICIVC), Xiamen, China, 5–7 July 2019; pp. 218–222.
  15. Wang, Z.; Hou, G.; Pan, Z.; Wang, G. Single image dehazing and denoising combining dark channel prior and variational models. IET Comput. Vis. 2018, 12, 393–402.
  16. Zhu, M.; He, B.; Wu, Q. Single Image Dehazing Based on Dark Channel Prior and Energy Minimization. IEEE Signal Process. Lett. 2018, 25, 174–178.
  17. Bui, T.M.; Kim, W. Single Image Dehazing Using Color Ellipsoid Prior. IEEE Trans. Image Process. 2018, 27, 999–1009.
  18. Golts, A.; Freedman, D.; Elad, M. Unsupervised Single Image Dehazing Using Dark Channel Prior Loss. IEEE Trans. Image Process. 2020, 29, 2692–2701.
  19. Mohamed, A.; Qian, K.; Elhoseiny, M.; Claudel, C. Social-STGCNN: A Social Spatio-Temporal Graph Convolutional Neural Network for Human Trajectory Prediction. arXiv 2020, arXiv:2002.11927.
  20. Wang, K.; Peng, X.; Yang, J.; Meng, D.; Qiao, Y. Region attention networks for pose and occlusion robust facial expression recognition. IEEE Trans. Image Process. 2020, 29, 4057–4069.
  21. Wang, S.Y.; Wang, O.; Zhang, R.; Owens, A.; Efros, A.A. CNN-generated images are surprisingly easy to spot... for now. arXiv 2019, arXiv:1912.11035.
  22. Li, B.; Ren, W.; Fu, D.; Tao, D.; Feng, D.; Zeng, W.; Wang, Z. Benchmarking single-image dehazing and beyond. IEEE Trans. Image Process. 2018, 28, 492–505.
  23. Li, B.; Peng, X.; Wang, Z.; Xu, J.; Feng, D. AOD-Net: All-in-one dehazing network. In Proceedings of the IEEE International Conference on Computer Vision, Venice, Italy, 22–29 October 2017; pp. 4770–4778.
  24. Qiao, T.; Zhang, J.; Xu, D.; Tao, D. MirrorGAN: Learning text-to-image generation by redescription. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019; pp. 1505–1514.
  25. He, Z.; Zuo, W.; Kan, M.; Shan, S.; Chen, X. AttGAN: Facial attribute editing by only changing what you want. IEEE Trans. Image Process. 2019, 28, 5464–5478.
  26. Guo, T.; Xu, C.; Huang, J.; Wang, Y.; Shi, B.; Xu, C.; Tao, D. On Positive-Unlabeled Classification in GAN. arXiv 2020, arXiv:2002.01136.
  27. Dong, Y.; Liu, Y.; Zhang, H.; Chen, S.; Qiao, Y. FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing. arXiv 2020, arXiv:2001.06968.
  28. Li, R.; Pan, J.; Li, Z.; Tang, J. Single image dehazing via conditional generative adversarial network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 8202–8211.
  29. Zhang, H.; Patel, V.M. Densely Connected Pyramid Dehazing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018.
  30. Suárez, P.L.; Sappa, A.D.; Vintimilla, B.X.; Hammoud, R.I. Deep learning based single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 1169–1176.
  31. Ren, W.; Ma, L.; Zhang, J.; Pan, J.; Cao, X.; Liu, W.; Yang, M.H. Gated fusion network for single image dehazing. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3253–3261.
  32. Qu, Y.; Chen, Y.; Huang, J.; Xie, Y. Enhanced Pix2pix Dehazing Network. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 16–20 June 2019.
  33. Middleton, W.E.K. Vision through the Atmosphere; University of Toronto Press: Toronto, ON, Canada, 1952.
  34. El Khoury, J.; Thomas, J.B.; Mansouri, A. Does dehazing model preserve color information? In Proceedings of the 2014 Tenth International Conference on Signal-Image Technology and Internet-Based Systems, Marrakech, Morocco, 23–27 November 2014; pp. 606–613.
  35. Limare, N.; Lisani, J.L.; Morel, J.M.; Petro, A.B.; Sbert, C. Simplest color balance. Image Process. Line 2011, 1, 297–315.
Figure 1. Effect of offset correction in our proposed method. (a) Foggy image; (b) the transmission of DCP; (c) the result of DCP; (d) our offset-corrected transmission; (e) our result with offset correction.
Figure 2. Effect of the residual-based approach. (a) The residual map; (b) the weighted residual map; (c) our final result.
Figure 3. Dehazing example on an image with sky regions. (a) Hazy image; (b) the transmission of DCP; (c) the result of DCP; (d) our offset-corrected transmission; (e) our result with offset correction; (f) our weighted residual map; (g) our final result.
Figure 4. Dehazing example on an image without sky regions. (a) Hazy image; (b) the transmission of DCP; (c) the result of DCP; (d) our offset-corrected transmission; (e) our result with offset correction; (f) our weighted residual map; (g) our final result.
Figure 5. Dehazing results (best viewed in color on a display). (a) Hazy images; (b) results of DCP; (c) Berman et al.'s results; (d) Bui et al.'s results; (e) results of DCPDN; (f) Zhu et al.'s results; (g) Golts et al.'s [18] results; (h) our results.
Figure 6. Desk of the SOTS-indoor dataset. (a) Ground truth image; (b) input hazy image; (c) DCP; (d) Berman; (e) DCPDN; (f) Zhu; (g) Golts; (h) ours.
Figure 7. Drawing room of the SOTS-indoor dataset. (a) Ground truth image; (b) input hazy image; (c) DCP; (d) Berman; (e) DCPDN; (f) Zhu; (g) Golts; (h) ours.
Figure 8. Sunlight of the SOTS-outdoor dataset. (a) Ground truth image; (b) input hazy image; (c) DCP; (d) Berman; (e) DCPDN; (f) Zhu; (g) Golts; (h) ours.
Figure 9. Bank of the SOTS-outdoor dataset. (a) Ground truth image; (b) input hazy image; (c) DCP; (d) Berman; (e) DCPDN; (f) Zhu; (g) Golts; (h) ours.
Figure 10. Buildings of the SOTS-outdoor dataset. (a) Ground truth image; (b) input hazy image; (c) DCP; (d) Berman; (e) DCPDN; (f) Zhu; (g) Golts; (h) ours.
Figure 11. Bird's Nest stadium of the SOTS-outdoor dataset. (a) Ground truth image; (b) input hazy image; (c) DCP; (d) Berman; (e) DCPDN; (f) Zhu; (g) Golts; (h) ours.
Figure 12. The quantitative results of dehazed images in the RESIDE test dataset.
Table 1. Dehazing algorithm implementation with offset correction and a weighted residual map.

Parameters:
Hazy image: $I(x)$; atmospheric light: $A$; number of pixels: $MN$
Procedure:
• Estimation of transmission
      - Calculate the dark channel: $I^{dark}(x) = \min_{c \in \{r,g,b\}} \min_{y \in \Omega(x)} I^{c}(y)$
      - Calculate the full-scene prior: $\mu_{cen} = \dfrac{\sum_{x} I^{dark}(x)}{MN}$, $\quad \alpha(x) = \max\left( I^{dark}(x) - \mu_{cen},\, 0 \right)$
      - Calculate the offset-corrected transmission: $\tilde{t}(x) = \dfrac{1 - \min_{c} \min_{y \in \Omega(x)} I^{c}(y)/A^{c}}{1 - \min_{c} \alpha(x)/A^{c}}$
• Estimation of the dehazed image
      - Estimate the dehazed image $\tilde{J}(x)$ by the offset-correcting approach: $\tilde{J}(x) = \dfrac{I(x) - A}{\tilde{t}(x)} + A$
      - Calculate the residual map: $R_{residual}(x) = I(x) - \tilde{J}(x)$
      - Calculate the final dehazed image: $J(x) = \tilde{J}(x) - \tilde{t}(x)\, R_{residual}(x)$
Table 2. Comparison of PSNR of the dehazed images from Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.

| Method | Desk | Drawing Room | Sunlight | Bank | Buildings | Bird's Nest Stadium |
|---|---|---|---|---|---|---|
| DCP | 16.3196 | 17.6363 | 16.7733 | 17.5155 | 16.9791 | 16.1003 |
| Berman et al. | 14.4116 | 15.7852 | 20.6931 | 13.8929 | 21.2273 | 14.9177 |
| DCPDN | **18.4555** | 14.2824 | 19.5346 | 18.5636 | 10.8738 | 14.5544 |
| Zhu et al. | 16.2172 | 20.0551 | 17.7547 | 15.9751 | 17.2879 | 19.1754 |
| Golts et al. | 14.0569 | 14.7816 | 18.1528 | 19.5449 | **23.4645** | 17.6003 |
| Our study | 18.2516 | **20.5433** | **20.7599** | **22.0458** | 21.1791 | **22.0905** |
Table 3. Comparison of MOS of the dehazed images from Figure 6, Figure 7, Figure 8, Figure 9, Figure 10 and Figure 11.

| Method | Desk | Drawing Room | Sunlight | Bank | Buildings | Bird's Nest Stadium |
|---|---|---|---|---|---|---|
| DCP | 4.1469 | 3.9854 | 4.2557 | 3.6922 | 3.6805 | 3.6869 |
| Berman et al. | 3.7576 | 3.9218 | 3.6357 | 3.4712 | 3.8265 | 3.9898 |
| DCPDN | 4.5575 | 4.2922 | 4.4491 | 4.1431 | 3.3413 | 3.9456 |
| Zhu et al. | 4.5649 | **4.4157** | 4.5340 | 3.9555 | 3.7579 | 4.1463 |
| Golts et al. | 4.5572 | 3.6419 | 4.2787 | 4.1060 | 4.2186 | 4.2094 |
| Our study | **4.5706** | 4.3003 | **4.5595** | **4.1577** | **4.4403** | **4.2952** |
Table 4. The average PSNR of dehazed images in the RESIDE test dataset.

| Method | PSNR |
|---|---|
| DCP | 18.7392 |
| Berman et al. | 17.3103 |
| DCPDN | 16.7077 |
| Zhu et al. | 19.4270 |
| Golts et al. | 19.3258 |
| Our study | **20.5538** |
