An Improved Multi-Exposure Image Fusion Method for Intelligent Transportation System
Abstract
1. Introduction
- (1) By calculating the retention degree of different features in the source images, the fusion result adaptively preserves similarity to the source images. An end-to-end unsupervised image fusion network overcomes two problems common to most image fusion tasks: ground truth is unavailable, and no reference metric exists;
- (2) Multi-exposure image fusion is applied to the transportation field. The source images are preprocessed according to weather conditions and environmental noise, and the fusion result is adaptively optimized. The final image reflects the information of the source images and offers strong practicability and effectiveness;
- (3) We release a new multi-exposure image dataset, TrafficSign [7], aimed at the fusion of traffic-sign images in the intelligent transportation field, which provides a new option for image fusion benchmark evaluation.
2. Related Work
3. Proposed Method
Algorithm 1: Training procedure of the proposed method
Parameter: ϕ_k^n denotes the feature map of the k-th input image before the n-th max-pooling layer; g_I denotes the information measurement of image I; θ denotes the parameters of DenseNet; D is the training dataset; α is set to 20; c is set to 3e3, 3.5e3, and 1e2.
Input: RGB image pairs I1 and I2, where n indexes the n-th image pair.
Output: the DenseNet parameters θ.
1: I1 ← bilateral filtering and dehazing on I1;
2: I2 ← bilateral filtering and dehazing on I2;
3: Set the number of training iterations;
4: for the number of training images do
5: Feed the input images into VGGNet-16 and extract the feature maps ϕ(I1) and ϕ(I2);
6: Compute the gradients ∇ϕ(I1) and ∇ϕ(I2), using g_I to measure the information of the input images;
7: Define two weights ω1 and ω2 as the information preservation degrees, computed from g_I1 and g_I2; these weights preserve the information of the source images in the fused result;
8: Use SSIM and MSE to obtain the loss L(θ, D), weighted by ω1, ω2, and α;
9: Update θ;
10: Decrease the number of training iterations by 1;
11: if the number of training iterations is 0 then
12: break;
13: end if
14: end for
15: return θ;
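For concreteness, the following is a minimal sketch of steps 5–7 of Algorithm 1, in the spirit of U2Fusion [6]. The torchvision pretrained VGG-16, the Laplacian kernel standing in for the gradient operator, and all function names are illustrative assumptions, not the authors' exact implementation.

```python
# Minimal sketch of the information measurement g_I and the preservation
# weights omega_1, omega_2 (Algorithm 1, steps 5-7). Requires torchvision >= 0.13.
import torch
import torch.nn.functional as F
from torchvision.models import vgg16

_vgg = vgg16(weights="IMAGENET1K_V1").features.eval()
for p in _vgg.parameters():
    p.requires_grad_(False)

# 3x3 Laplacian used as a stand-in for the gradient operator in step 6.
_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]]).view(1, 1, 3, 3)

def information_measure(img):
    """g_I: mean gradient energy of the VGG-16 feature maps of img (B,3,H,W)."""
    g, x = 0.0, img
    for i, layer in enumerate(_vgg):
        x = layer(x)
        if i in (3, 8, 15, 22, 29):  # ReLU outputs before each max-pooling layer
            fm = x.reshape(-1, 1, *x.shape[-2:])           # one map per channel
            grad = F.conv2d(fm, _LAPLACIAN.to(fm), padding=1)
            g = g + grad.pow(2).mean()
    return g / 5.0

def preservation_weights(i1, i2, c=3e3):
    """omega_1, omega_2 of step 7: softmax-scaled information preservation degrees."""
    with torch.no_grad():
        g = torch.stack([information_measure(i1), information_measure(i2)])
    return torch.softmax(g / c, dim=0)
```

With ω1 and ω2 in hand, one consistent reading of step 8 is L(θ, D) = ω1·(1 − SSIM(If, I1)) + ω2·(1 − SSIM(If, I2)) + α·[ω1·MSE(If, I1) + ω2·MSE(If, I2)], with α = 20 from the parameter list.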
3.1. Image Preprocessing
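Steps 1–2 of Algorithm 1 preprocess each source image by bilateral filtering [22] followed by dark-channel-prior dehazing [18]. A minimal OpenCV sketch follows; the filter parameters and the constants omega, t0, and patch size are common defaults from the dark-channel literature, not values given in this paper.

```python
# Sketch of the preprocessing stage: bilateral smoothing, then dehazing.
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel RGB minimum followed by a patch-wise minimum filter."""
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(img.min(axis=2), kernel)

def dehaze(img, omega=0.95, t0=0.1, patch=15):
    """Single-image haze removal with the dark channel prior [18]."""
    img = img.astype(np.float64) / 255.0
    dc = dark_channel(img, patch)
    # Atmospheric light A: mean color of the brightest 0.1% dark-channel pixels.
    n = max(1, int(dc.size * 0.001))
    rows, cols = np.unravel_index(np.argsort(dc, axis=None)[-n:], dc.shape)
    A = img[rows, cols].mean(axis=0)
    # Transmission estimate, clipped to avoid over-amplification, then recovery.
    t = np.clip(1.0 - omega * dark_channel(img / A, patch), t0, 1.0)
    J = (img - A) / t[..., None] + A
    return np.clip(J * 255.0, 0.0, 255.0).astype(np.uint8)

def preprocess(img):
    """Bilateral filtering and dehazing, as in Algorithm 1, steps 1-2."""
    return dehaze(cv2.bilateralFilter(img, d=9, sigmaColor=75, sigmaSpace=75))
```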
3.2. Fusion Module
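The fusion module is built on DenseNet. Below is a minimal PyTorch sketch of a densely connected fusion network; the depth, growth rate, activations, and single-channel (luminance) inputs are assumptions rather than the paper's exact architecture (see DenseFuse [26] for a related densely connected fusion design).

```python
# Sketch of a DenseNet-style fusion module: the concatenated over-/under-
# exposed pair goes in, a single fused image comes out.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    """Each layer receives the concatenation of all earlier feature maps."""
    def __init__(self, in_ch: int, growth: int = 44, n_layers: int = 4):
        super().__init__()
        self.layers = nn.ModuleList()
        ch = in_ch
        for _ in range(n_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(ch, growth, kernel_size=3, padding=1),
                nn.LeakyReLU(0.2, inplace=True)))
            ch += growth
        self.out_ch = ch

    def forward(self, x):
        for layer in self.layers:
            x = torch.cat([x, layer(x)], dim=1)
        return x

class FusionNet(nn.Module):
    """Two 1-channel source images in, one fused 1-channel image out."""
    def __init__(self):
        super().__init__()
        self.dense = DenseBlock(in_ch=2)
        self.head = nn.Sequential(
            nn.Conv2d(self.dense.out_ch, 1, kernel_size=1), nn.Tanh())

    def forward(self, i1, i2):
        return self.head(self.dense(torch.cat([i1, i2], dim=1)))
```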
3.3. Image Optimization
Algorithm 2: Process of the optimization
Parameter: B denotes the brightness of the fused image If; Mgray denotes the gray-level average of an image.
Input: fused image If from DenseNet, the high threshold Th, and the low threshold Tl.
Output: the final image after optimization.
1: Compute the brightness B of If as B = Mgray/255.0;
2: if 1 > B > Th then
3: If ← brightness reduction on If;
4: return If;
5: else if 0 < B < Tl then
6: If ← brightness enhancement on If;
7: return If;
8: else if Tl ≤ B ≤ Th then
9: return If;
10: else
11: return false;
12: end if
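A compact rendering of Algorithm 2 is given below. The threshold values and the gamma-correction used for brightness reduction and enhancement are illustrative assumptions; the algorithm itself specifies only the branching on B = Mgray/255.0.

```python
# Sketch of Algorithm 2: brightness-based optimization of the fused image.
import cv2
import numpy as np

def optimize_brightness(fused, t_low=0.4, t_high=0.7):
    gray = cv2.cvtColor(fused, cv2.COLOR_BGR2GRAY)
    b = float(gray.mean()) / 255.0          # step 1: B = Mgray / 255.0
    if not 0.0 < b < 1.0:
        return None                         # the "return false" branch (B is 0 or 1)
    if t_low <= b <= t_high:
        return fused                        # already well exposed: return If
    # Gamma correction as a stand-in: gamma > 1 darkens, gamma < 1 brightens.
    gamma = 1.5 if b > t_high else 0.67
    lut = (((np.arange(256) / 255.0) ** gamma) * 255.0).astype(np.uint8)
    return cv2.LUT(fused, lut)
```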
4. Experimental Results and Analysis
4.1. Qualitative Comparisons
4.2. Quantitative Comparisons
4.3. Extended Experiment
5. Conclusions
Author Contributions
Funding
Acknowledgments
Conflicts of Interest
References
1. Yang, Z.X.; Wang, Z.M.; Wu, J.; Yang, C.; Yu, Y.; He, Y.H. Image Fusion Scheme in Intelligent Transportation System. In Proceedings of the 2006 6th International Conference on ITS Telecommunications, Chengdu, China, 21–26 June 2006; pp. 935–938.
2. Kou, F.; Wei, Z.; Chen, W.; Wu, X.; Wen, C.; Li, Z. Intelligent Detail Enhancement for Exposure Fusion. IEEE Trans. Multimed. 2017, 20, 484–495.
3. Li, S.; Kang, X.; Fang, L.; Hu, J.; Yin, H. Pixel-level image fusion: A survey of the state of the art. Inf. Fusion 2017, 33, 100–112.
4. Gu, B.; Li, W.; Wong, J.; Zhu, M.; Wang, M. Gradient field multi-exposure images fusion for high dynamic range image visualization. J. Vis. Commun. Image Represent. 2012, 23, 604–610.
5. Mertens, T.; Kautz, J.; Reeth, F.V. Exposure Fusion. In Proceedings of the 15th Pacific Conference on Computer Graphics and Applications (PG'07), Washington, DC, USA, 29 October–2 November 2007; pp. 382–390.
6. Xu, H.; Ma, J.; Jiang, J.; Guo, X.; Ling, H. U2Fusion: A Unified Unsupervised Image Fusion Network. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 1.
7. Chen, Y. Dataset: TrafficSign. Available online: https://github.com/chenyi-real/TrafficSign (accessed on 27 December 2020).
8. Li, H.; Wu, X.; Kittler, J. Infrared and Visible Image Fusion using a Deep Learning Framework. In Proceedings of the 2018 24th International Conference on Pattern Recognition (ICPR), Beijing, China, 20–24 August 2018; pp. 2705–2710.
9. Paul, S.; Sevcenco, I.S.; Agathoklis, P. Multi-Exposure and Multi-Focus Image Fusion in Gradient Domain. J. Circuits Syst. Comput. 2016, 25, 25.
10. Dong, Z.; Lai, C.S.; Qi, D.; Xu, Z.; Li, C.; Duan, S. A general memristor-based pulse coupled neural network with variable linking coefficient for multi-focus image fusion. Neurocomputing 2018, 308, 172–183.
11. Ma, J.; Yu, W.; Liang, P.; Li, C.; Jiang, J. FusionGAN: A generative adversarial network for infrared and visible image fusion. Inf. Fusion 2019, 48, 11–26.
12. Qiu, X.; Li, M.; Zhang, L.; Yuan, X. Guided filter-based multi-focus image fusion through focus region detection. Signal Process. Image Commun. 2019, 72, 35–46.
13. Prabhakar, K.; Srikar, V.; Babu, R. DeepFuse: A Deep Unsupervised Approach for Exposure Fusion with Extreme Exposure Image Pairs. In Proceedings of the 2017 IEEE International Conference on Computer Vision (ICCV), Venice, Italy, 22–29 October 2017; pp. 4724–4732.
14. Liu, Y.; Liu, S.; Wang, Z. Multi-focus image fusion with dense SIFT. Inf. Fusion 2015, 23, 139–155.
15. Yang, Y.; Cao, W.; Wu, S.; Li, Z. Multi-Scale Fusion of Two Large-Exposure-Ratio Images. IEEE Signal Process. Lett. 2018, 25, 1885–1889.
16. Fu, M.; Li, W.; Lian, F. The research of image fusion algorithms for ITS. In Proceedings of the 2010 International Conference on Mechanic Automation and Control Engineering, Wuhan, China, 26–28 June 2010; pp. 2867–2870.
17. Goshtasby, A.A. Fusion of multi-exposure images. Image Vis. Comput. 2005, 23, 611–618.
18. He, K.; Sun, J.; Tang, X. Single Image Haze Removal Using Dark Channel Prior. IEEE Trans. Pattern Anal. Mach. Intell. 2011, 33, 2341–2353.
19. Dong, Z.; Lai, C.S.; He, Y.; Qi, D.; Duan, S. Hybrid dual-complementary metal–oxide–semiconductor/memristor synapse-based neural network with its applications in image super-resolution. IET Circuits Devices Syst. 2019, 13, 1241–1248.
20. Dong, Z.; Du, C.; Lin, H.; Lai, C.S.; Hu, X.; Duan, S. Multi-channel Memristive Pulse Coupled Neural Network Based Multi-frame Images Super-resolution Reconstruction Algorithm. J. Electron. Inf. Technol. 2020, 42, 835–843.
21. Cai, J.; Gu, S.; Zhang, L. Learning a Deep Single Image Contrast Enhancer from Multi-Exposure Images. IEEE Trans. Image Process. 2018, 27, 2049–2062.
22. Tomasi, C.; Manduchi, R. Bilateral filtering for gray and color images. In Proceedings of the Sixth International Conference on Computer Vision, Bombay, India, 4–7 January 1998; pp. 839–846.
23. Zhou, Z.; Wang, B.; Li, S.; Dong, M. Perceptual fusion of infrared and visible images through a hybrid multi-scale decomposition with Gaussian and bilateral filters. Inf. Fusion 2016, 30, 15–26.
24. He, K.; Zhang, X.; Ren, S.; Sun, J. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 26 June–1 July 2016; pp. 770–778.
25. Ma, J.; Liang, P.; Yu, W.; Chen, C.; Guo, X.; Wu, J.; Jiang, J. Infrared and visible image fusion via detail preserving adversarial learning. Inf. Fusion 2020, 54, 85–98.
26. Li, H.; Wu, X.-J. DenseFuse: A Fusion Approach to Infrared and Visible Images. IEEE Trans. Image Process. 2019, 28, 2614–2623.
27. Liu, Y.; Chen, X.; Peng, H.; Wang, Z. Multi-focus image fusion with a deep convolutional neural network. Inf. Fusion 2017, 36, 191–207.
28. Wang, Z.; Bovik, A. A universal image quality index. IEEE Signal Process. Lett. 2002, 9, 81–84.
29. Ma, K.; Duanmu, Z.; Yeganeh, H.; Wang, Z. Multi-Exposure Image Fusion by Optimizing A Structural Similarity Index. IEEE Trans. Comput. Imaging 2017, 4, 60–72.
30. Van Vliet, L.J.; Young, Y.T.; Beckers, G.L. A nonlinear Laplace operator as edge detector in noisy images. Comput. Vis. Graph. Image Process. 1989, 45, 167–195.
31. Abdullah-Al-Wadud, M.; Kabir, H.; Dewan, M.A.A.; Chae, O. A Dynamic Histogram Equalization for Image Contrast Enhancement. IEEE Trans. Consum. Electron. 2007, 53, 593–600.
32. Muniyappan, S.; Allirani, A.; Saraswathi, S. A novel approach for image enhancement by using contrast limited adaptive histogram equalization method. In Proceedings of the 2013 Fourth International Conference on Computing, Communications and Networking Technologies (ICCCNT), Tiruchengode, India, 4–6 July 2013; pp. 1–6.
33. ACDSee. Available online: https://www.acdsee.cn/ (accessed on 29 October 2020).
34. Eskicioglu, A.; Fisher, P. Image quality measures and their performance. IEEE Trans. Commun. 1995, 43, 2959–2965.
35. Rao, Y.J. In-fibre Bragg grating sensors. Meas. Sci. Technol. 1997, 8, 355.
36. Van Aardt, J.; Roberts, J.W.; Ahmed, F.B. Assessment of image fusion procedures using entropy, image quality, and multispectral classification. J. Appl. Remote Sens. 2008, 2, 1–28.
37. Vranjes, M.; Rimac-Drlje, S.; Grgic, K. Locally averaged PSNR as a simple objective video quality metric. In Proceedings of the 2008 50th International Symposium ELMAR, Zadar, Croatia, 10–12 September 2008; pp. 17–20.
38. Hossain, M.A.; Jia, X.; Pickering, M. Improved feature selection based on a mutual information measure for hyperspectral image classification. In Proceedings of the 2012 IEEE International Geoscience and Remote Sensing Symposium, Munich, Germany, 22–27 June 2012; pp. 3058–3061.
39. Gao, M.; Chen, C.; Shi, J.; Lai, C.S.; Yang, Y.; Dong, Z. A Multiscale Recognition Method for the Optimization of Traffic Signs Using GMM and Category Quality Focal Loss. Sensors 2020, 20, 4850.
Method | SF | STD | EN | SSIM | MI | PSNR |
---|---|---|---|---|---|---|
DSIFT | 8.415 | 32.449 | 6.597 | 0.833 | 1.644 | 11.583 |
FLER | 10.750 | 54.242 | 7.103 | 0.861 | 3.152 | 14.283 |
GBM | 8.877 | 35.244 | 6.420 | 0.888 | 2.407 | 15.364 |
DeepFuse | 9.295 | 67.090 | 6.884 | 0.907 | 5.716 | 14.742 |
Ours | 12.990 | 70.907 | 7.328 | 0.923 | 4.409 | 21.720 |
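For reference, three of the no-reference columns in the table above follow standard definitions: spatial frequency (SF, cf. [34]), entropy (EN, cf. [36]), and the gray-level standard deviation (STD). The sketch below uses those common formulations and is not the authors' evaluation code.

```python
# Standard no-reference fusion metrics over an 8-bit grayscale image.
import numpy as np

def spatial_frequency(img):
    """SF = sqrt(RF^2 + CF^2), from horizontal/vertical first differences."""
    img = img.astype(np.float64)
    rf = np.mean(np.diff(img, axis=1) ** 2)   # row (horizontal) frequency^2
    cf = np.mean(np.diff(img, axis=0) ** 2)   # column (vertical) frequency^2
    return float(np.sqrt(rf + cf))

def entropy(img):
    """EN: Shannon entropy of the gray-level histogram, in bits."""
    hist = np.bincount(np.asarray(img, dtype=np.uint8).ravel(), minlength=256)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def std_dev(img):
    """STD: gray-level standard deviation, a simple contrast proxy."""
    return float(np.asarray(img, dtype=np.float64).std())
```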
Traffic Signs | ||||
---|---|---|---|---|
Red Light for Go Straight | Go Straight Slot | No Entry | Bus-Only Lane | Speed Limit 120 |
Green Light for Go Straight | Left Turn Slot | No Trucks | One-way Road | Weight Limit 15 Tons |
Red Light for Left Turn | Right Turn Slot | No U-Turn | Motor Vehicles Only | Weight Limit 40 Tons |
Green Light for Left Turn | Strictly No Parking | Yield Ahead | Speed Limit 30 | Weight Limit 60 Tons |
Red Light for Right Turn | No Left or Right Turn | Keep Right | Speed Limit 60 | School Crossing Ahead |
Green Light for Right Turn | No Motor Vehicles | Stop | Speed Limit 80 | Pedestrian Crossing Ahead |
Traffic Sign | Over-Exposure | Under-Exposure | Fusion |
---|---|---|---|
Green Light for Go Straight | 0.4622 | 0.4618 | 0.4629 |
Red Light for Left Turn | 0.5259 | 0.5333 | 0.5418 |
Green Light for Left Turn | 0.3867 | 0.3867 | 0.3991 |
Green Light for Right Turn | 0.4072 | 0.4627 | 0.4649 |
Go Straight Slot | 0.8468 | 0.8509 | 0.8573 |
Left Turn Slot | 0.8494 | 0.8559 | 0.8588 |
Strictly No Parking | 0.9149 | 0.9153 | 0.9178 |
No Left or Right Turn | 0.8669 | 0.8537 | 0.8726 |
School Crossing Ahead | 0.8340 | 0.8335 | 0.8439 |
Stop | 0.7017 | 0.7017 | 0.7037 |
All Classes of Signs | 0.7977 | 0.7797 | 0.8117 |
Traffic Sign | DeepFuse | DSIFT | FLER | GBM | Ours |
---|---|---|---|---|---|
Green Light for Go Straight | 0.4118 | 0.4459 | 0.4514 | 0.4566 | 0.4629 |
Red Light for Left Turn | 0.5363 | 0.5326 | 0.5274 | 0.5303 | 0.5418 |
Green Light for Right Turn | 0.4496 | 0.4209 | 0.4072 | 0.4072 | 0.4649 |
Go Straight Slot | 0.8550 | 0.8580 | 0.8567 | 0.8544 | 0.8573 |
Strictly No Parking | 0.9137 | 0.9071 | 0.9076 | 0.9070 | 0.9178 |
No Trucks | 0.8557 | 0.8574 | 0.8573 | 0.8596 | 0.8605 |
No Left or Right Turn | 0.8221 | 0.8706 | 0.8702 | 0.8674 | 0.8726 |
Weight Limit 40 Tons | 0.8193 | 0.8191 | 0.8028 | 0.8140 | 0.8221 |
School Crossing Ahead | 0.8434 | 0.8436 | 0.8457 | 0.8391 | 0.8439 |
Pedestrian Crossing Ahead | 0.8503 | 0.8578 | 0.8569 | 0.8563 | 0.8574 |
All Classes of Signs | 0.8101 | 0.8115 | 0.8095 | 0.8059 | 0.8117 |