Anti-Shake HDR Imaging Using RAW Image Data
Abstract
1. Introduction
- A new dataset of multi-exposure raw image sequences captured under camera shake is collected; each raw image has a corresponding JPEG version.
- A reference image selection method, based on information entropy, is proposed for multi-exposure raw image sequences with camera shake.
- A pipeline that produces ghost-free HDR images from raw image sequences with camera shake is designed, with more robust performance on extreme-exposure raw image pairs.
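The entropy-based reference selection named in the second contribution can be sketched as follows. This is a minimal illustration, not the paper's implementation: it assumes 8-bit single-channel NumPy arrays, and the function names are ours.

```python
import numpy as np

def shannon_entropy(img: np.ndarray) -> float:
    """Shannon entropy (in bits) of an image's 256-bin intensity histogram."""
    hist, _ = np.histogram(img, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]  # drop empty bins; 0 * log(0) is taken as 0
    return float(-np.sum(p * np.log2(p)))

def select_reference(sequence) -> int:
    """Pick the index of the most informative (maximum-entropy) image
    in a multi-exposure sequence to serve as the alignment reference."""
    return int(np.argmax([shannon_entropy(im) for im in sequence]))
```

A constant (fully under- or over-exposed) frame scores 0 bits, while a frame that uses all 256 levels equally scores the maximum of 8 bits, so well-exposed frames are preferred as the reference.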
2. Related Work
3. The Proposed Method
3.1. Reference Image Selection
3.2. Feature Point Detection and Matching
3.3. Image Alignment and Registration
3.4. Image Fusion
4. Experiments and Results
4.1. Dataset
4.2. Experiments
4.2.1. Experimental Results
4.2.2. Comparison with Other Methods
1. Entropy. Information entropy describes how much information the signal or image provides on average, as defined in Equation (1).
2. PSNR. PSNR is the ratio between the maximum possible power of a signal and the power of the corrupting noise that affects its fidelity. The larger the PSNR, the better the fusion result. The calculation formulae are given in Equations (11) and (12).
3. Average gradient. The average gradient reflects the sharpness and texture of the image, as shown in Equation (13). A higher average gradient indicates a more effective method.
4. SSIM. SSIM describes the similarity between two images. Equation (14) is used to calculate the similarity between the reference images and the fused HDR images. A larger SSIM value indicates a smaller difference between the two images and a higher-quality fusion result.
5. HDR-VDP-2. The HDR-VDP-2 [43] metric is based on a model of human visual perception that predicts visibility and quality differences between two images. For quality differences, the metric produces a mean-opinion score (Q-score), which quantifies the quality degradation of the fused HDR images with respect to the reference images.
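The first four metrics have standard definitions that can be sketched compactly (HDR-VDP-2 requires the full perceptual model of [43] and is omitted). This is an illustrative sketch using textbook formulas, not the paper's Equations (11)–(14); it assumes float images scaled to [0, 1], and the global single-window SSIM below is a simplification of the usual sliding-window form.

```python
import numpy as np

def psnr(ref: np.ndarray, img: np.ndarray, peak: float = 1.0) -> float:
    """Peak signal-to-noise ratio in dB; higher means less distortion."""
    mse = np.mean((ref - img) ** 2)
    return float(10.0 * np.log10(peak ** 2 / mse))

def average_gradient(img: np.ndarray) -> float:
    """Mean magnitude of horizontal/vertical finite differences;
    higher values indicate sharper, more textured images."""
    gx = np.diff(img, axis=1)[:-1, :]   # horizontal differences
    gy = np.diff(img, axis=0)[:, :-1]   # vertical differences
    return float(np.mean(np.sqrt((gx ** 2 + gy ** 2) / 2.0)))

def ssim_global(x: np.ndarray, y: np.ndarray, peak: float = 1.0) -> float:
    """SSIM computed over the whole image as a single window."""
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2  # stabilizing constants
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = np.mean((x - mx) * (y - my))
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

For example, a uniform offset of 0.1 against a reference yields an MSE of 0.01 and hence a PSNR of 20 dB, and an image compared against itself gives an SSIM of exactly 1.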
4.2.3. Image Fusion with the Extreme-Exposure Sequence
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Shafie, S.; Kawahito, S.; Itoh, S. A dynamic range expansion technique for CMOS image sensors with dual charge storage in a pixel and multiple sampling. Sensors 2008, 8, 1915–1926.
2. Shafie, S.; Kawahito, S.; Halin, I.A.; Hasan, W.Z.W. Non-linearity in wide dynamic range CMOS image sensors utilizing a partial charge transfer technique. Sensors 2009, 9, 9452–9467.
3. Martínez-Sánchez, A.; Fernández, C.; Navarro, P.J.; Iborra, A. A novel method to increase LinLog CMOS sensors’ performance in high dynamic range scenarios. Sensors 2011, 11, 8412–8429.
4. Agusanto, K.; Li, L.; Chuangui, Z.; Sing, N.W. Photorealistic rendering for augmented reality using environment illumination. In Proceedings of the Second IEEE and ACM International Symposium on Mixed and Augmented Reality, Tokyo, Japan, 10 October 2003; pp. 208–216.
5. Quevedo, E.; Delory, E.; Callicó, G.; Tobajas, F.; Sarmiento, R. Underwater video enhancement using multi-camera super-resolution. Opt. Commun. 2017, 404, 94–102.
6. Ward, G.; Reinhard, E.; Debevec, P. High dynamic range imaging & image-based lighting. In ACM SIGGRAPH 2008 Classes; Association for Computing Machinery: New York, NY, USA, 2008; pp. 1–137.
7. Debevec, P. Image-based lighting. In ACM SIGGRAPH 2006 Courses; Association for Computing Machinery: New York, NY, USA, 2006; p. 4.
8. Debevec, P.; McMillan, L. Image-based modeling, rendering, and lighting. IEEE Comput. Graph. Appl. 2002, 22, 24–25.
9. Pece, F.; Kautz, J. Bitmap movement detection: HDR for dynamic scenes. In Proceedings of the 2010 Conference on Visual Media Production, London, UK, 17–18 November 2010; pp. 1–8.
10. Dong, Y.; Pourazad, M.T.; Nasiopoulos, P. Human visual system-based saliency detection for high dynamic range content. IEEE Trans. Multimed. 2016, 18, 549–562.
11. Lin, Y.-T.; Wang, C.-M.; Chen, W.-S.; Lin, F.-P.; Lin, W. A novel data hiding algorithm for high dynamic range images. IEEE Trans. Multimed. 2016, 19, 196–211.
12. Ravuri, C.S.; Sureddi, R.; Dendi, S.V.R.; Raman, S.; Channappayya, S.S. Deep no-reference tone mapped image quality assessment. In Proceedings of the 2019 53rd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 3–6 November 2019; pp. 1906–1910.
13. Hadizadeh, H.; Bajić, I.V. Full-reference objective quality assessment of tone-mapped images. IEEE Trans. Multimed. 2017, 20, 392–404.
14. Eden, A.; Uyttendaele, M.; Szeliski, R. Seamless image stitching of scenes with large motions and exposure differences. In Proceedings of the 2006 IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR’06), New York, NY, USA, 17–22 June 2006; pp. 2498–2505.
15. Endo, Y.; Kanamori, Y.; Mitani, J. Deep reverse tone mapping. ACM Trans. Graph. 2017, 36, 177.
16. Liu, Y.; Wang, Z. Dense SIFT for ghost-free multi-exposure fusion. J. Vis. Commun. Image Represent. 2015, 31, 208–224.
17. Li, Z.; Wei, Z.; Wen, C.; Zheng, J. Detail-enhanced multi-scale exposure fusion. IEEE Trans. Image Process. 2017, 26, 1243–1252.
18. Eilertsen, G.; Unger, J.; Mantiuk, R.K. Evaluation of tone mapping operators for HDR video. In High Dynamic Range Video; Academic Press: Cambridge, MA, USA, 2016; pp. 185–207.
19. Shen, J.; Zhao, Y.; Yan, S.; Li, X. Exposure fusion using boosting Laplacian pyramid. IEEE Trans. Cybern. 2014, 44, 1579–1590.
20. Lee, D.-H.; Fan, M.; Kim, S.-W.; Kang, M.-C.; Ko, S.-J. High dynamic range image tone mapping based on asymmetric model of retinal adaptation. Signal Process. Image Commun. 2018, 68, 120–128.
21. Ma, K.; Yeganeh, H.; Zeng, K.; Wang, Z. High dynamic range image tone mapping by optimizing tone mapped image quality index. In Proceedings of the 2014 IEEE International Conference on Multimedia and Expo (ICME), Chengdu, China, 14–18 July 2014; pp. 1–6.
22. Liu, Z.; Yin, H.; Fang, B.; Chai, Y. A novel fusion scheme for visible and infrared images based on compressive sensing. Opt. Commun. 2015, 335, 168–177.
23. Kinoshita, Y.; Yoshida, T.; Shiota, S.; Kiya, H. Pseudo multi-exposure fusion using a single image. In Proceedings of the 2017 Asia-Pacific Signal and Information Processing Association Annual Summit and Conference (APSIPA ASC), Kuala Lumpur, Malaysia, 12–15 December 2017; pp. 263–269.
24. Huo, Y.; Zhang, X. Single image-based HDR imaging with CRF estimation. In Proceedings of the 2016 International Conference on Communication Problem-Solving (ICCP), Taipei, Taiwan, 7–9 September 2016; pp. 1–3.
25. Reinhard, E.; Stark, M.; Shirley, P.; Ferwerda, J. Photographic tone reproduction for digital images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, TX, USA, 23–26 July 2002; pp. 267–276.
26. Mantiuk, R.; Daly, S.; Kerofsky, L. Display adaptive tone mapping. In ACM SIGGRAPH 2008 Papers; Association for Computing Machinery: New York, NY, USA, 2008; pp. 1–10.
27. Durand, F.; Dorsey, J. Fast bilateral filtering for the display of high-dynamic-range images. In Proceedings of the 29th Annual Conference on Computer Graphics and Interactive Techniques, San Antonio, TX, USA, 23–26 July 2002; pp. 257–266.
28. Drago, F.; Myszkowski, K.; Annen, T.; Chiba, N. Adaptive logarithmic mapping for displaying high contrast scenes. In Computer Graphics Forum; Blackwell Publishing: Oxford, UK, 2003; pp. 419–426.
29. Zhang, W.; Liu, X.; Wang, W.; Zeng, Y. Multi-exposure image fusion based on wavelet transform. Int. J. Adv. Robot. Syst. 2018, 15, 1729881418768939.
30. Vanmali, A.V.; Kelkar, S.G.; Gadre, V.M. Multi-exposure image fusion for dynamic scenes without ghost effect. In Proceedings of the 2015 Twenty First National Conference on Communications (NCC), Mumbai, India, 27 February–1 March 2015; pp. 1–6.
31. Kinoshita, Y.; Shiota, S.; Kiya, H.; Yoshida, T. Multi-exposure image fusion based on exposure compensation. In Proceedings of the 2018 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Calgary, AB, Canada, 15–20 April 2018; pp. 1388–1392.
32. Sen, P.; Kalantari, N.K.; Yaesoubi, M.; Darabi, S.; Goldman, D.B.; Shechtman, E. Robust patch-based HDR reconstruction of dynamic scenes. ACM Trans. Graph. 2012, 31, 203:1–203:11.
33. Li, Z.; Zheng, J.; Zhu, Z.; Wu, S. Selectively detail-enhanced fusion of differently exposed images with moving objects. IEEE Trans. Image Process. 2014, 23, 4372–4382.
34. Ying, Z.; Li, G.; Ren, Y.; Wang, R.; Wang, W. A new image contrast enhancement algorithm using exposure fusion framework. In Proceedings of the International Conference on Computer Analysis of Images and Patterns, Ystad, Sweden, 22–24 August 2017; pp. 36–46.
35. Chen, C.; Chen, Q.; Xu, J.; Koltun, V. Learning to see in the dark. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–22 June 2018; pp. 3291–3300.
36. Shen, L.; Yue, Z.; Feng, F.; Chen, Q.; Liu, S.; Ma, J. MSR-net: Low-light image enhancement using deep convolutional network. arXiv 2017, arXiv:1711.02488.
37. Yang, B.; Zhong, J.; Li, Y.; Chen, Z. Multi-focus image fusion and super-resolution with convolutional neural network. Int. J. Wavelets Multiresolution Inf. Process. 2017, 15, 1750037.
38. Hasinoff, S.W.; Sharlet, D.; Geiss, R.; Adams, A.; Barron, J.T.; Kainz, F.; Chen, J.; Levoy, M. Burst photography for high dynamic range and low-light imaging on mobile cameras. ACM Trans. Graph. 2016, 35, 1–12.
39. Brooks, T.; Mildenhall, B.; Xue, T.; Chen, J.; Sharlet, D.; Barron, J.T. Unprocessing images for learned raw denoising. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Long Beach, CA, USA, 15–20 June 2019; pp. 11036–11045.
40. Ihara, S. Information Theory for Continuous Systems; World Scientific: Hackensack, NJ, USA, 1993.
41. Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the 2011 International Conference on Computer Vision, Barcelona, Spain, 6–13 November 2011; pp. 2564–2571.
42. Fischler, M.A.; Bolles, R.C. Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography. Commun. ACM 1981, 24, 381–395.
43. Narwaria, M.; Mantiuk, R.; Da Silva, M.P.; Le Callet, P. HDR-VDP-2.2: A calibrated method for objective quality prediction of high-dynamic range and standard images. J. Electron. Imaging 2015, 24, 010501.
Entropy of each image in the exposure sequence:

Format | Image 1 | Image 2 | Image 3 | Image 4 | Image 5
---|---|---|---|---|---
RAW | 4.80025180 | 6.02060459 | 7.64435717 | 7.36287806 | 6.95316271
JPEG | 4.66945082 | 5.99022103 | 7.61556499 | 7.32166800 | 6.08631492
Method/Metrics | Entropy | PSNR | Average Gradient | SSIM | Q-Score |
---|---|---|---|---|---|
JPEG | 7.24006583 | 20 dB | 0.0089 | 0.35 | 37.35 |
Drago [28] | 4.73977225 | 28 dB | 0.0082 | 0.49 | 49.33 |
Durand [27] | 4.86853024 | 36 dB | 0.0316 | 0.68 | 62.18 |
Reinhard [25] | 6.48690328 | 33 dB | 0.0191 | 0.54 | 53.26 |
Mantiuk [26] | 4.80421760 | 35 dB | 0.0239 | 0.78 | 64.07 |
Our method | 7.71679023 | 41 dB | 0.0451 | 0.91 | 73.50 |
Feature matching results for the scenes of Figure 8. Nd: number of feature points detected; Nr: number of feature points rejected; matching rate = (Nd − Nr)/Nd.

Figure 8 | Nd (JPEG) | Nd (RAW) | Nr (JPEG) | Nr (RAW) | Rate (JPEG) | Rate (RAW) | Time, ms (JPEG) | Time, ms (RAW)
---|---|---|---|---|---|---|---|---
(a) | 2893 | 3415 | 1367 | 1134 | 52.75% | 66.80% | 14.23 | 13.95 |
(b) | 3135 | 3389 | 1945 | 1870 | 37.96% | 44.82% | 14.78 | 14.59 |
(c) | 2520 | 2765 | 1937 | 2087 | 23.13% | 24.52% | 5.69 | 4.15 |
(d) | 2693 | 2905 | 1263 | 1256 | 48.26% | 56.76% | 9.28 | 8.49 |
(e) | 2960 | 3218 | 1458 | 1282 | 50.74% | 60.16% | 12.36 | 11.48 |
(f) | 3045 | 3319 | 1363 | 1427 | 55.24% | 57.01% | 8.04 | 6.01 |
(g) | 3157 | 3578 | 1950 | 2218 | 38.02% | 38.01% | 15.63 | 13.04 |
(h) | 2875 | 3176 | 1469 | 1589 | 48.90% | 49.97% | 18.55 | 14.37 |
Images/Metrics | Entropy | PSNR | Average Gradient | SSIM | Q-Score |
---|---|---|---|---|---|
JPEG | 7.50736193 | 34 dB | 0.3967 | 0.74 | 63.75 |
RAW | 7.70380035 | 40 dB | 0.0463 | 0.87 | 69.29 |
© 2020 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (http://creativecommons.org/licenses/by/4.0/).
Liu, Y.; Lv, B.; Huang, W.; Jin, B.; Li, C. Anti-Shake HDR Imaging Using RAW Image Data. Information 2020, 11, 213. https://doi.org/10.3390/info11040213