Article

Iterative Refinement of Transmission Map for Stereo Image Defogging Using a Dual Camera Sensor

Department of Image, Chung-Ang University, 84 Heukseok-ro, Dongjak-gu, Seoul 06974, Korea
* Author to whom correspondence should be addressed.
Sensors 2017, 17(12), 2861; https://doi.org/10.3390/s17122861
Submission received: 28 October 2017 / Revised: 27 November 2017 / Accepted: 6 December 2017 / Published: 9 December 2017
(This article belongs to the Special Issue Video Analysis and Tracking Using State-of-the-Art Sensors)

Abstract

Recently, the stereo imaging-based image enhancement approach has attracted increasing attention in the field of video analysis. This paper presents a dual camera-based stereo image defogging algorithm. Optical flow is first estimated from the stereo foggy image pair, and the initial disparity map is generated from the estimated optical flow. Next, an initial transmission map is generated using the initial disparity map. Atmospheric light is then estimated using the color line theory. The defogged result is finally reconstructed using the estimated transmission map and atmospheric light. The proposed method can refine the transmission map iteratively. Experimental results show that the proposed method can successfully remove fog without color distortion. The proposed method can be used as a pre-processing step in outdoor video analysis systems and in high-end smartphones with a dual camera system.

1. Introduction

Image analysis using multiple images has recently attracted growing attention in the fields of autonomous driving, unmanned surveillance cameras, and drone imaging services. Many sophisticated image analysis applications need to acquire additional depth information as well as high-quality images. Another market-leading application is the dual camera in a smartphone. The proposed stereo image-based defogging algorithm can be applied to an asymmetric dual camera system in a smartphone, with a proper geometric transformation, to improve the visibility of an outdoor foggy scene acquired by the smartphone. Specifically, the fog component in the atmosphere decreases contrast and, as a result, makes it difficult to extract features or recognize objects in image analysis. Therefore, an image enhancement method that reduces the fog component is important for increasing the reliability of an image analysis system.
Fog particles absorb and scatter the light that is reflected from the object and transmitted to the camera. They also distort the original color and edges in a random manner. The amount of atmospheric distortion increases with the distance between a scene point and the camera. This phenomenon can be quantified by a transmission coefficient at each pixel. For this reason, the degraded foggy image is modeled as a pixel-wise combination of the original scene reflectance, the atmospheric component, and the transmission coefficient. Because of its importance in various image analysis applications, the image defogging problem has been intensively studied in the fields of image processing and computer vision.
Narasimhan et al. corrected color distortion by estimating the distribution of fog according to the distance [1,2]. They acquired multiple images of the same scene under different weather conditions to construct a scene structure. Shwartz et al. and Schechner et al. proposed a defogging method by measuring the distribution of fog using two different polarized filters in the same scene [3,4]. These methods can successfully remove the fog component using a physically reasonable degradation model at the cost of inconvenience to acquire multiple images of the same scene.
To solve these problems, various single image defogging methods were proposed. Tan proposed a defogging method based on two observations: a fog-free image has higher contrast than a foggy image, and color distortion caused by fog increases with the distance from the camera [5]. Based on these two characteristics, a Markov random field model was estimated and a fog-free image was obtained by maximizing the local contrast, at the cost of contrast saturation and halo effects. Fattal removed fog using the property that the surface reflectance of the object is constant and the transmission depends on the fog density and the scene depth [6]. However, it is difficult to measure the reflectance in a region of dense fog. He et al. proposed a method of estimating the transmission map using the dark channel prior (DCP) [7]. The DCP theory is based on the observation that the minimum intensity of one of the RGB channels in a fog-free region is close to zero. However, it cannot avoid color distortion, since the transmission map is estimated using the color of the object. In addition, in order to remove the blocking and halo effects that appear in the process of estimating the transmission map, a computationally expensive soft-matting algorithm is used. To solve this problem, Gibson et al. replaced the soft-matting step with a standard median filter [8], Xiao et al. removed the blocking and halo effects using joint bilateral filtering [9], Chen et al. used a gain intervention refinement filter [10], and Jha et al. used an $\ell_2$-norm prior [11]. Yoon et al. proposed a defogging method using the multiphase level set in the HSV color space and corrected colors between adjacent video frames [12]. Meng et al. proposed a method to estimate the transmission map using a boundary constraint, and they refined the transmission map through $\ell_1$-norm-based regularization [13]. Ancuti et al. proposed a multiscale fusion-based defogging algorithm using a Laplacian pyramid and a Gaussian pyramid, both of which improved a single foggy image using white balance and contrast enhancement, respectively [14]. Berman et al. proposed a nonlocal defogging algorithm using a transmission estimated from color lines [15]. However, these methods are not free from color distortion, since they do not consider the depth information. Recently, several learning-based defogging methods have been proposed [16,17,18,19]. Chen et al. proposed a radial basis function (RBF) network to restore a foggy image while recovering visible edges [16]. Cai et al. proposed a trainable end-to-end system to estimate the medium transmission, called DehazeNet [17]. Eigen et al. proposed a convolutional neural network (CNN) architecture to remove raindrops and lens dirt [19].
Many defogging methods using depth information have been proposed. Caraffa et al. proposed a depth-based defogging method that uses a Markov random field model to generate the disparity map from a stereo image pair [20]. Lee et al. estimated the scattering coefficient of the atmosphere from the stereo image [21]. Park et al. estimated the depth from the stereo image pair and removed the fog by estimating the atmospheric light in the farthest region [22]. However, accurate estimation of the transmission map is still an open problem, since the features for obtaining the disparity map are generally distorted in a foggy image.
To address the limitations of existing single-image-based defogging algorithms, this paper presents a novel image defogging algorithm using a stereo foggy image pair. The proposed algorithm removes fog by estimating the depth information from the stereo image pair and iteratively improving it. The disparity of the input stereo foggy image pair is first obtained using optical flow, and the depth map is generated from the disparity. Next, the transmission map is estimated using the generated depth map to remove the fog component. The optical flow and transmission map estimation steps are repeated until the defogged solution converges. The proposed stereo-based defogging algorithm is suitable for the dual cameras embedded in high-end smartphone models that were recently released on the consumer market.
The paper is organized as follows: Section 2 describes a physical degradation model for foggy image acquisition, and Section 3 presents the proposed stereo-based defogging algorithm based on the degradation model. Section 4 summarizes experimental results, Section 5 presents an application of the proposed defogging algorithm to an asymmetric dual camera system, and Section 6 concludes the paper.

2. Physical Degradation Model of Foggy Image Acquisition

Figure 1 shows the physical degradation model of foggy image acquisition. The light reflected by the object is absorbed and scattered by fog particles in the atmosphere, and arrives at the camera sensor. Therefore, the greater the distance between the object and the camera is, the greater the atmospheric degradation becomes. The foggy image g is defined according to the Koschmieder model [23] as
$$g(x,y) = f(x,y)\,t(x,y) + A\,\big(1 - t(x,y)\big), \tag{1}$$
where $(x,y)$ represents the pixel coordinate, $f(x,y)$ the fog-free image, and the constant $A$ the global atmospheric light. $t(x,y)$ represents the transmission coefficient at pixel $(x,y)$, and can be expressed as
$$t(x,y) = e^{-\beta \cdot d(x,y)}, \tag{2}$$
where $\beta$ represents the scattering coefficient of the atmosphere, and $d(x,y)$ the depth between the scene point and the camera. From (1), an intuitive estimate of the fog-free image is given as
$$\hat{f}(x,y) = \frac{g(x,y) - A}{\max\big(t(x,y),\, t_0\big)} + A. \tag{3}$$
The defogged image $\hat{f}(x,y)$ is obtained by substituting the estimated $A$ and $t(x,y)$ into (3). $t_0$ is a lower bound on $t(x,y)$, set to a small constant to avoid division by zero.
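For illustration, the following is a minimal Python sketch of the degradation model and its inversion in (1)–(3); the values of `beta` and `t0` are illustrative choices, not taken from this paper.

```python
import numpy as np

def transmission_from_depth(depth, beta=1.0):
    # Equation (2): t(x, y) = exp(-beta * d(x, y)); beta is an assumed value.
    return np.exp(-beta * depth)

def defog(g, A, t, t0=0.1):
    """Equation (3): f(x, y) = (g(x, y) - A) / max(t(x, y), t0) + A.

    g  : H x W x 3 foggy image with values in [0, 1]
    A  : estimated atmospheric light (RGB vector)
    t  : H x W transmission map
    t0 : lower bound on t to avoid division by zero (0.1 is illustrative)
    """
    t = np.maximum(t, t0)[..., np.newaxis]   # broadcast over color channels
    return np.clip((g - A) / t + A, 0.0, 1.0)
```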

3. Image Defogging Based on Iteratively Refined Transmission

Most existing defogging algorithms estimate the disparity map from the stereo foggy image pair, and then obtain the defogged image by estimating the transmission map using (2). However, it is difficult to detect the features needed to estimate the disparity map, since the foggy image is distorted by the fog component. To solve this problem, the proposed algorithm estimates an accurate transmission map by iteratively improving the disparity map. The disparity map is generated by estimating optical flow from the stereo foggy image pair, and the initial transmission map is generated from the disparity. Atmospheric light $A$ is estimated using the color line theory [24], and each stereo foggy image is restored using (3). By repeating the set of optical flow estimation, transmission map generation, and defogging steps, a progressively improved transmission map and a better defogged image are obtained. This process repeats until the absolute difference between the $k$-th and $(k-1)$-th defogged images is less than a pre-specified threshold $\tau$. Figure 2 shows the block diagram of the proposed algorithm.
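A compact sketch of this outer iteration (Figure 2) is given below. The helper names `estimate_airlight`, `estimate_disparity`, and `disparity_to_transmission` are placeholders for the color-line, CLG–TV, and depth-to-transmission steps described in Sections 3.1–3.3 (illustrative sketches of the latter two appear there); `tau` and the iteration cap are illustrative values.

```python
import numpy as np

def stereo_defog(g_L, g_R, tau=1e-3, max_iter=10):
    """Outer loop of the pipeline in Figure 2 (sketch under assumed helpers)."""
    A_L, A_R = estimate_airlight(g_L), estimate_airlight(g_R)
    f_L, f_R = g_L.copy(), g_R.copy()
    for k in range(max_iter):
        # re-estimate disparity on the current (partially defogged) pair
        t_L = disparity_to_transmission(estimate_disparity(f_L, f_R))
        t_R = disparity_to_transmission(estimate_disparity(f_R, f_L))
        f_L_new = defog(g_L, A_L, t_L)   # Equation (3)
        f_R_new = defog(g_R, A_R, t_R)
        # stop when the k-th and (k-1)-th results differ by less than tau
        if max(np.abs(f_L_new - f_L).mean(), np.abs(f_R_new - f_R).mean()) < tau:
            break
        f_L, f_R = f_L_new, f_R_new
    return f_L_new, f_R_new
```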

3.1. Atmospheric Light Estimation

Most single-image-based defogging algorithms set the atmospheric light $A$ to an arbitrary constant or to the brightest pixel value in the image, under the assumption that the fog color is white [5,6,7]. Since these methods do not estimate the atmospheric light accurately, the quality of the defogged images is degraded. In this paper, the atmospheric light $A$ is estimated using the color line-based estimation method originally proposed by Sulami et al. [25].
In (1), the fog-free image $f(x,y)$ can be expressed as
$$f(x,y) = l(x,y)\,R(x,y), \tag{4}$$
where the surface shading $l(x,y)$ is a scalar indicating the magnitude of the reflected light, and the surface albedo $R(x,y)$ is an RGB vector representing the chromaticity of the reflected light. In general, when a natural image is divided into small patches, the surface albedo and transmission of each patch are approximately constant. Therefore, using this property and (4), the foggy image formation model in (1) can be expressed as follows:
$$g(x,y) = t_i\, l(x,y)\, R_i + A\,(1 - t_i), \tag{5}$$
where $t_i$ represents the transmission value of the $i$-th image patch, and $R_i$ the surface albedo of the patch. To create color lines in the RGB color space using image patches with the same surface albedo and transmission, image patches satisfying (5) are selected using principal component analysis (PCA). The strongest principal axis of a patch should correspond to the orientation of the color line, and there should be a single significant principal component. Additionally, the color line of the patch should not pass through the origin of the RGB space, and the patch should not contain an edge. The color lines are generated using image patches that satisfy these conditions, and the orientation and magnitude of the vector $A$ are estimated.
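As an illustration of this patch-screening step, the sketch below tests the single-dominant-component and origin conditions on one RGB patch; the thresholds `dominance` and `min_offset` are assumed values, and the edge test is omitted for brevity.

```python
import numpy as np

def is_color_line_patch(patch, dominance=0.95, min_offset=0.05):
    """Screen a patch for the color-line conditions of Section 3.1 (sketch).

    patch : N x N x 3 RGB block with values in [0, 1]. Returns True if its
    pixels form a single dominant color line that avoids the RGB origin.
    """
    pixels = patch.reshape(-1, 3).astype(np.float64)
    mean = pixels.mean(axis=0)
    centered = pixels - mean
    # principal axes of the patch in RGB space
    cov = centered.T @ centered / len(pixels)
    eigvals, eigvecs = np.linalg.eigh(cov)      # ascending eigenvalue order
    total = eigvals.sum()
    if total < 1e-12:
        return False                            # flat patch: no line at all
    # condition 1: one significant principal component
    if eigvals[-1] / total < dominance:
        return False
    # condition 2: the line {mean + s * v} must stay away from the origin
    v = eigvecs[:, -1]
    dist_to_origin = np.linalg.norm(mean - (mean @ v) * v)
    return dist_to_origin > min_offset
```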

3.2. Stereo Image Defogging

To generate the transmission map of a stereo image pair, the disparity map is estimated using the combined local–global approach with total variation (CLG–TV) [26]. The CLG–TV approach integrates the Lucas–Kanade [27] and Horn–Schunck [28] models to estimate motion boundary-preserving optical flow using a variational method. The quadratic ($\ell_2$-norm) error function of the Horn–Schunck model is defined as
$$E_{HS} = \int_\Omega \lambda\, r(u,v)^2 + |\nabla u|^2 + |\nabla v|^2, \tag{6}$$
where $r(u,v)$ represents the residual between the left and right images:
$$r(u,v) = g_R(\mathbf{x} + \mathbf{u}_0) - g_L(\mathbf{x}) + (\mathbf{u} - \mathbf{u}_0)^T \nabla g_R(\mathbf{x} + \mathbf{u}_0), \tag{7}$$
where $\mathbf{u} = (u,v): \Omega \to \mathbb{R}^2$ is the optical flow to be estimated, with initial value $\mathbf{u}_0$, and the left and right images are related through the linearization
$$g_L(\mathbf{x}) = g_R(\mathbf{x}+\mathbf{u}) = g_R(\mathbf{x}+\mathbf{u}_0+\mathbf{u}-\mathbf{u}_0) \approx g_R(\mathbf{x}+\mathbf{u}_0) + (\mathbf{u}-\mathbf{u}_0)^T \nabla g_R(\mathbf{x}+\mathbf{u}_0), \tag{8}$$
where $\mathbf{x} = (x,y) \in \Omega \subset \mathbb{R}^2$. $E_{HS}$ can be minimized by solving the Euler–Lagrange equation using Jacobi iteration. To make the estimated optical flow as uniform as possible in a small region, the residual $r(u,v)$ in (6) is replaced by the Lucas–Kanade error function
$$E_{LK} = \sum_{\text{window}} w \cdot r(u,v)^2, \tag{9}$$
where $w$ represents a weighting factor. $E_{LK}$ is minimized by solving a least-squares problem. Based on the Lucas–Kanade model, the total error over the same window is minimized. The following error function combines the Horn–Schunck and Lucas–Kanade models:
$$E_{CLG\text{-}HS} = \int_\Omega \lambda \sum_{\text{window}} r(u,v)^2 + |\nabla u|^2 + |\nabla v|^2. \tag{10}$$
To solve the over-smoothing problem in motion boundary regions, the $\ell_1$-norm is minimized instead of the $\ell_2$-norm:
$$E_{CLG\text{-}TV} = \int_\Omega \lambda \sum_{\text{window}} r(u,v)^2 + |\nabla u| + |\nabla v|. \tag{11}$$
To minimize the CLG–TV error function, we use an alternative Horn–Schunck model [29], where $E_{CLG\text{-}TV}$ is decomposed into three terms, as shown below:
$$E_{CLG\text{-}TV} = \int_\Omega \lambda \sum_{\text{window}} r(u,v)^2 + \frac{1}{2\beta}|u - \hat{u}|^2 + \frac{1}{2\beta}|v - \hat{v}|^2, \tag{12}$$
$$E_{TV}^{u} = \int_\Omega \frac{1}{2\beta}|u - \hat{u}|^2 + |\nabla u|, \tag{13}$$
$$E_{TV}^{v} = \int_\Omega \frac{1}{2\beta}|v - \hat{v}|^2 + |\nabla v|. \tag{14}$$
$E_{CLG\text{-}TV}$ is minimized in a point-wise manner, whereas $E_{TV}^{u}$ and $E_{TV}^{v}$ are minimized using the procedure proposed by Chambolle [30]. In this paper, only the horizontal flow component $u$ is used to compute the disparity map from the stereo image.
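The sketch below estimates a disparity map from the horizontal flow component. OpenCV's Farneback flow is used here only as a readily available stand-in for the CLG–TV method [26], so it illustrates the data flow rather than the exact optimizer used in the paper.

```python
import cv2
import numpy as np

def estimate_disparity(f_L, f_R):
    """Estimate disparity as |u|, the horizontal flow component (sketch).

    f_L, f_R : H x W x 3 float images with values in [0, 1].
    """
    gray_L = cv2.cvtColor((f_L * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    gray_R = cv2.cvtColor((f_R * 255).astype(np.uint8), cv2.COLOR_BGR2GRAY)
    # Farneback dense flow as a placeholder for CLG-TV [26]
    flow = cv2.calcOpticalFlowFarneback(gray_L, gray_R, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return np.abs(flow[..., 0])   # keep only the horizontal component u
```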
Figure 3 shows the result of transmission map estimation and fog removal using a stereo input image pair. Since the disparity map is generated from features distorted by the fog component, the initially estimated transmission map is not sufficiently accurate.

3.3. Iterative Refinement of Transmission Map

In this subsection, an iterative process is performed to refine the transmission map. The disparity map is estimated again on the defogged image $\hat{f}_s(x,y)$, which was obtained using the initial transmission map $t_s(x,y)$, and the transmission map is then updated. The $k$-th defogged image $\hat{f}_s^k(x,y)$ is obtained using the updated transmission map $t_s^k(x,y)$.
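A sketch of the transmission update follows: depth is taken as inversely proportional to the refined disparity (as in a rectified stereo setup, where $d = Bf/\text{disparity}$ up to a constant), and then converted through Equation (2). The constants `beta`, `eps`, and the normalization are illustrative choices, not values from the paper.

```python
import numpy as np

def disparity_to_transmission(disp, beta=1.0, eps=1e-6):
    # Depth is inversely proportional to disparity (up to the constant B*f).
    depth = 1.0 / np.maximum(disp, eps)
    depth = depth / depth.max()        # normalize so exp() stays well scaled
    return np.exp(-beta * depth)       # Equation (2)
```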
Figure 4 shows the estimated transmission map and the result of fog removal through the iterative process. As shown in Figure 4, color distortion in the sky region is gradually reduced.
Figure 5 shows the iteratively refined transmission maps and the corresponding defogged results using the FRIDA3 (Foggy Road Image DAtabase) dataset [20]. As the fog is removed in the distant region, the transmission map is gradually improved over the iterations. As a result, the region marked by the red circle is iteratively improved, and the initially invisible vehicle appears.
Figure 6 shows the iteratively defogged results using real-world videos [31]. We extracted two adjacent frames from a video to simulate image acquisition with a dual camera. During the iteration process, the transmission map is gradually refined and the defogged result is improved.

4. Experimental Results

To evaluate the performance of the proposed defogging method, experimental results are compared with those of state-of-the-art defogging algorithms. Figure 7a shows a set of test foggy images, and Figure 7b–f respectively shows the results of He's method [7], Ancuti's method [14], Meng's method [13], Berman's method [15], and the proposed method.
In the Cityscape, River1, and River2 results, Figure 7b,d shows that color distortion and low-saturation artifacts occur in the sky region because the atmospheric light is not accurately estimated. Figure 7c shows that the color of the sky region is distorted and the color around the buildings is faded because the same amount of fog is removed without considering spatially varying depth. Figure 7e shows a slight amount of color distortion, since the initial transmission map is regularized using Gaussian Markov random fields with only local neighbors. Figure 7f shows that the proposed result is clearer in the sky region than that of any other method, and the color contrast is increased.
In the Road1 and Road2 results, Figure 7c shows that the color around the road is faded and distorted because depth information is not considered. Figure 7e shows that the color tends to be oversaturated when the atmospheric light is significantly brighter than the scene. Figure 7b,d shows excellent defogging results because the color of the artificially added fog is mostly white, so the transmission map is well estimated by the DCP-based defogging algorithm. Figure 7f shows a well-defogged result without color distortion or saturation. The experimental results demonstrate that the proposed algorithm outperforms existing algorithms in terms of both fog removal and color preservation.
Table 1 shows two quantitative measures for objective evaluation: the no-reference image quality metric for contrast distortion (NIQMC) [32] and entropy, both measuring the contrast of the defogged results. A higher NIQMC value indicates superior color contrast and edges in the image. A high entropy value indicates that the average amount of information in the image is high; in other words, a greater amount of edge or feature information yields better color contrast. In Table 1, the highest value for each image is shown in bold, and the proposed method performs better than the other existing methods.
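For reference, the entropy column of Table 1 can be computed as the Shannon entropy of the gray-level histogram; a minimal sketch is shown below (NIQMC [32] relies on its authors' implementation and is not reproduced here).

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy of an 8-bit grayscale image's histogram (in bits)."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins before taking the log
    return float(-(p * np.log2(p)).sum())
```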

5. Application to Asymmetric Dual Camera System

The proposed stereo-based defogging algorithm is particularly suitable for a dual camera system, which has attracted increasing attention in the fields of robot vision, autonomous driving, and high-end smartphones. Figure 8 shows the block diagram of the proposed defogging algorithm applied to an asymmetric dual camera system. To estimate the depth information, features in the stereo foggy image pair are first matched, and the scale of the longer focal-length image is then corrected. The proposed defogging method is applied to the overlapped region of the two images with different focal lengths. To remove the fog in the non-overlapping region, a single-image-based defogging method is first applied, and the color distortion is then corrected using histogram matching with the overlapped region as reference, as sketched below.
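The following is a sketch of the histogram-matching color correction for the non-overlapping region, with the defogged overlapped region as the reference; per-channel matching on float-valued images is assumed.

```python
import numpy as np

def match_histogram(source, reference):
    """Map each channel of `source` so its CDF matches that of `reference`.

    source, reference : H x W x 3 float images (shapes may differ).
    """
    matched = np.empty(source.shape, dtype=np.float64)
    for c in range(source.shape[2]):
        src_vals, src_idx, src_counts = np.unique(
            source[..., c].ravel(), return_inverse=True, return_counts=True)
        ref_vals, ref_counts = np.unique(
            reference[..., c].ravel(), return_counts=True)
        src_cdf = np.cumsum(src_counts) / source[..., c].size
        ref_cdf = np.cumsum(ref_counts) / reference[..., c].size
        # quantile mapping: source quantile -> reference gray level
        mapped = np.interp(src_cdf, ref_cdf, ref_vals)
        matched[..., c] = mapped[src_idx].reshape(source.shape[:2])
    return matched
```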
Figure 9 demonstrates that the proposed algorithm can be applied to an asymmetric dual camera system. For the experiment, Figure 9b was obtained by cropping the center region of Figure 9a; as a result, Figure 9a,b can be considered a stereo pair of input foggy images with different focal lengths. Figure 9c shows the defogged result of Figure 9a using the single-image defogging method. Figure 9d shows the defogged result of Figure 9b using the stereo image defogging method proposed in this paper. Figure 9e shows the stitching result of Figure 9c,d. Based on this result, the proposed stereo-based method can remove fog while preserving the original color information in an asymmetric dual camera system.

6. Conclusions

In this paper, a stereo defogging algorithm is proposed to accurately estimate the transmission map based on depth information. The major contribution of this work is twofold: (i) the stereo-based iterative defogging process provides greatly enhanced results compared with existing state-of-the-art methods; and (ii) the framework of the stereo-based algorithm is particularly suitable for the dual camera systems embedded in high-end consumer smartphones. The proposed method first obtains the disparity map by estimating optical flow from the stereo foggy image pair, and generates the initial transmission map from the disparity. The defogged image is restored by generating the transmission map and estimating the atmospheric light $A$ based on the color line theory. By repeating the set of optical flow estimation, transmission map generation, and defogging steps until convergence, significantly improved defogged results were obtained. Experimental results show that the proposed method successfully removed fog without color distortion, while the transmission map and the defogged image were iteratively refined. The proposed method can be used as a pre-processing step for outdoor image analysis in intelligent video surveillance, autonomous driving, and the asymmetric dual cameras of high-end smartphones.

Acknowledgments

This work was supported by the Institute for Information & communications Technology Promotion (IITP) grant funded by the Korea government (MSIT) (2017-0-00250, Intelligent Defense Boundary Surveillance Technology Using Collaborative Reinforced Learning of Embedded Edge Camera and Image Analysis).

Author Contributions

Heegwang Kim performed the experiments. Jinho Park and Hasil Park initiated the research and designed the experiments. Joonki Paik wrote the paper.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Narasimhan, S.G.; Nayar, S.K. Vision and the Atmosphere. Int. J. Comput. Vis. 2002, 48, 233–254. [Google Scholar] [CrossRef]
  2. Narasimhan, S.G.; Nayar, S.K. Contrast restoration of weather degraded images. IEEE Trans. Pattern Anal. Mach. Intell. 2003, 25, 713–724. [Google Scholar] [CrossRef]
  3. Shwartz, S.; Namer, E.; Schechner, Y.Y. Blind Haze Separation. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; Volume 2, pp. 1984–1991. [Google Scholar]
  4. Schechner, Y.Y.; Narasimhan, S.G.; Nayar, S.K. Instant dehazing of images using polarization. In Proceedings of the 2001 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Kauai, HI, USA, 8–14 December 2001; Volume 1. [Google Scholar]
  5. Tan, R.T. Visibility in bad weather from a single image. In Proceedings of the 2008 IEEE Conference on Computer Vision and Pattern Recognition, Anchorage, AK, USA, 23–28 June 2008; pp. 1–8. [Google Scholar]
  6. Fattal, R. Single Image Dehazing. ACM Trans. Graph. 2008, 27, 72:1–72:9. [Google Scholar] [CrossRef]
  7. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353. [Google Scholar] [PubMed]
  8. Gibson, K.B.; Vo, D.T.; Nguyen, T.Q. An investigation of dehazing effects on image and video coding. IEEE Trans. Image Process. 2012, 21, 662–673. [Google Scholar] [CrossRef] [PubMed]
  9. Xiao, C.; Gan, J. Fast image dehazing using guided joint bilateral filter. Vis. Comput. 2012, 28, 713–721. [Google Scholar]
  10. Chen, B.H.; Huang, S.C.; Cheng, F.C. A high-efficiency and high-speed gain intervention refinement filter for haze removal. J. Disp. Technol. 2016, 12, 753–759. [Google Scholar] [CrossRef]
  11. Jha, D.K. l2-norm-based prior for haze-removal from single image. IET Comput. Vis. 2016, 10, 331–343. [Google Scholar] [CrossRef]
  12. Yoon, I.; Kim, S.; Kim, D.; Hayes, M.H.; Paik, J. Adaptive defogging with color correction in the HSV color space for consumer surveillance system. IEEE Trans. Consum. Electron. 2012, 58, 111–116. [Google Scholar] [CrossRef]
  13. Meng, G.; Wang, Y.; Duan, J.; Xiang, S.; Pan, C. Efficient Image Dehazing with Boundary Constraint and Contextual Regularization. In Proceedings of the 2013 IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 617–624. [Google Scholar]
  14. Ancuti, C.O.; Ancuti, C. Single Image Dehazing by Multi-Scale Fusion. IEEE Trans. Image Process. 2013, 22, 3271–3282. [Google Scholar] [CrossRef] [PubMed]
  15. Berman, D.; Treibitz, T.; Avidan, S. Non-local Image Dehazing. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 1674–1682. [Google Scholar]
  16. Chen, B.H.; Huang, S.C.; Li, C.Y.; Kuo, S.Y. Haze removal using radial basis function networks for visibility restoration applications. IEEE Trans. Neural Netw. Learn. Syst. 2017, 1–11. [Google Scholar] [CrossRef] [PubMed]
  17. Cai, B.; Xu, X.; Jia, K.; Qing, C.; Tao, D. Dehazenet: An end-to-end system for single image haze removal. IEEE Trans. Image Process. 2016, 25, 5187–5198. [Google Scholar]
  18. Chen, B.H.; Huang, S.C.; Kuo, S.Y. Error-Optimized Sparse Representation for Single Image Rain Removal. IEEE Trans. Ind. Electron. 2017, 64, 6573–6581. [Google Scholar] [CrossRef]
  19. Eigen, D.; Krishnan, D.; Fergus, R. Restoring an image taken through a window covered with dirt or rain. In Proceedings of the IEEE International Conference on Computer Vision, Sydney, Australia, 1–8 December 2013; pp. 633–640. [Google Scholar]
  20. Caraffa, L.; Tarel, J.P. Stereo Reconstruction and Contrast Restoration in Daytime Fog. In Revised Selected Papers, Part IV, Proceedings of the Computer Vision—ACCV 2012: 11th Asian Conference on Computer Vision, Daejeon, Korea, 5–9 November 2012; Lee, K.M., Matsushita, Y., Rehg, J.M., Hu, Z., Eds.; Springer: Berlin/Heidelberg, Germany, 2013; pp. 13–25. [Google Scholar]
  21. Lee, Y.; Gibson, K.B.; Lee, Z.; Nguyen, T.Q. Stereo image defogging. In Proceedings of the 2014 IEEE International Conference on Image Processing (ICIP), Paris, France, 27–30 October 2014; pp. 5427–5431. [Google Scholar]
  22. Park, H.; Park, J.; Kim, H.; Paik, J. Improved DCP-based image defogging using stereo images. In Proceedings of the 2016 IEEE 6th International Conference on Consumer Electronics, Berlin, Germany, 5–7 September 2016; pp. 48–49. [Google Scholar]
  23. Koschmieder, H. Theorie der horizontalen sichtweite. Beitrage Physik Freien Atmosphare 1924, 12, 33–53. [Google Scholar]
  24. Omer, I.; Werman, M. Color lines: Image specific color representation. In Proceedings of the 2004 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Washington, DC, USA, 27 June–2 July 2004; Volume 2. [Google Scholar]
  25. Sulami, M.; Glatzer, I.; Fattal, R.; Werman, M. Automatic recovery of the atmospheric light in hazy images. In Proceedings of the 2014 IEEE International Conference on Computational Photography (ICCP), Santa Clara, CA, USA, 2–4 May 2014; pp. 1–11. [Google Scholar]
  26. Drulea, M.; Nedevschi, S. Total variation regularization of local-global optical flow. In Proceedings of the 14th International IEEE Conference on Intelligent Transportation Systems (ITSC), Washington, DC, USA, 5–7 October 2011; pp. 318–323. [Google Scholar]
  27. Lucas, B.D.; Kanade, T. An iterative image registration technique with an application to stereo vision. In Proceedings of the 7th International Joint Conference on Artificial Intelligence, Vancouver, BC, Canada, 24–28 August 1981. [Google Scholar]
  28. Horn, B.K.; Schunck, B.G. Determining optical flow. Artif. Intell. 1981, 17, 185–203. [Google Scholar] [CrossRef]
  29. Zach, C.; Pock, T.; Bischof, H. A duality based approach for realtime TV-L1 optical flow. Pattern Recognit. 2007, 4713, 214–223. [Google Scholar]
  30. Chambolle, A. An algorithm for total variation minimization and applications. J. Math. Imaging Vis. 2004, 20, 89–97. [Google Scholar]
  31. Li, Z.; Tan, P.; Tan, R.T.; Zou, D.; Zhou, Z.S.; Cheong, L.F. Simultaneous video defogging and stereo reconstruction. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 4988–4997. [Google Scholar]
  32. Gu, K.; Lin, W.; Zhai, G.; Yang, X.; Zhang, W.; Chen, C.W. No-reference quality metric of contrast-distorted images based on information maximization. IEEE Trans. Cybern. 2016, 47, 4559–4565. [Google Scholar] [CrossRef] [PubMed]
Figure 1. Illustration of the foggy image formation model.
Figure 2. Block diagram of the proposed defogging algorithm: For $s \in \{L, R\}$, $g_s$ is the input stereo image pair, $A_s$ is the global atmospheric light, and $d_s^k$, $t_s^k$, and $\hat{f}_s^k$, respectively, are the disparity map, the transmission map, and the defogged image at the $k$-th iteration.
Figure 3. Results of defogging using the initial transmission map. (a) A stereo pair of input foggy images; (b) initial transmission maps; and (c) defogged results.
Figure 4. Iterative refinement process of the transmission map. (a–c) The 1st, 3rd, and 5th refined transmission maps; and (d–f) the 1st, 3rd, and 5th defogged results.
Figure 5. Defogging results of a synthetic stereo image. (a–c) Input image, and the 4th and 7th defogged images; (d–f) enlarged versions of (a–c); and (g–i) corresponding transmission maps.
Figure 6. Defogging results using real-world videos from [31]. (a) Input foggy images; (b) defogged results after three iterations; and (c) defogged results after five iterations. Corresponding transmission maps are shown below each image.
Figure 7. Experimental results of various defogging methods. (a) Input foggy images; (b) He et al. [7]; (c) Ancuti et al. [14]; (d) Meng et al. [13]; (e) Berman et al. [15]; (f) the proposed method.
Figure 8. Block diagram of the proposed defogging algorithm applied to an asymmetric dual camera system.
Figure 9. Defogging results using a simulated pair of dual camera images. (a) An input foggy image; (b) simulated longer focal length; (c) defogged version of (a) using a single-image-based method; (d) defogged version of (b) using the proposed stereo-based method; (e) stitched result of (c,d).
Table 1. Quantitative results for each method using stereo images. NIQMC: no-reference image quality metric for contrast distortion. The highest value for each image is shown in bold.

| Metric | Method | Cityscape | River1 | River2 | Road1 | Road2 | Average |
|---|---|---|---|---|---|---|---|
| NIQMC | He et al. [7] | 5.1098 | **5.6812** | 5.3248 | 4.3460 | 4.0199 | 4.8963 |
| | Ancuti et al. [14] | 4.8153 | 5.2894 | 5.2402 | 4.3295 | 4.4500 | 4.8248 |
| | Meng et al. [13] | **5.3644** | 5.6608 | 5.2500 | 3.9786 | 3.7448 | 4.7997 |
| | Berman et al. [15] | 4.9471 | 5.0327 | 5.4835 | **4.4345** | 4.8695 | 4.9534 |
| | Proposed method | 4.9757 | 5.1463 | **5.6014** | 4.1673 | **4.9004** | **4.9582** |
| Entropy | He et al. [7] | 7.2919 | **7.7846** | 7.4451 | 4.2163 | 4.2776 | 6.2031 |
| | Ancuti et al. [14] | 7.1363 | 7.4826 | 7.4117 | 3.0374 | 4.0065 | 5.9594 |
| | Meng et al. [13] | 7.5326 | 7.6925 | 7.5026 | 4.1155 | 4.1647 | 6.2015 |
| | Berman et al. [15] | **7.6657** | 6.7250 | 6.6159 | 4.4345 | 4.4563 | 5.9594 |
| | Proposed method | 7.1546 | 7.2941 | **7.7196** | **4.5470** | **5.5631** | **6.4556** |
