Illumination-Aware Cross-Modality Differential Fusion Multispectral Pedestrian Detection
Abstract
1. Introduction
- We explore a new mechanism that adaptively computes the weights of the two modalities according to the illumination level of each scene.
- We propose the IACMDF algorithm, which combines the illumination information of a scene with the differential information between the two modalities of that scene.
- Experimental results show that the proposed method is competitive and outperforms the baseline methods.
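The adaptive-weighting idea in the first two points can be sketched as follows. This is a minimal NumPy illustration, not the paper's exact formulation: the function names and the residual-style mixing (`illumination_weights`, `differential_fusion`) are hypothetical, and the illumination score is assumed to be a scalar day/night probability produced by a separate sub-network.

```python
import numpy as np

def illumination_weights(day_prob):
    """Map a day/night probability from an illumination sub-network to
    fusion weights: bright scenes favour RGB, dark scenes favour thermal."""
    w_rgb = day_prob
    w_thermal = 1.0 - day_prob
    return w_rgb, w_thermal

def differential_fusion(f_rgb, f_thermal, day_prob):
    """Hypothetical sketch of illumination-aware differential fusion:
    each stream is complemented by the illumination-weighted difference
    of the other, then the two enhanced streams are averaged."""
    w_rgb, w_thermal = illumination_weights(day_prob)
    f_rgb_enh = f_rgb + w_thermal * (f_thermal - f_rgb)   # pull RGB toward thermal at night
    f_th_enh = f_thermal + w_rgb * (f_rgb - f_thermal)    # pull thermal toward RGB by day
    return 0.5 * (f_rgb_enh + f_th_enh)
```

With this toy mixing, `day_prob = 1.0` reduces the output to the RGB features and `day_prob = 0.0` to the thermal features, which is the qualitative behaviour the contribution describes.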
2. Related Work
2.1. Multispectral Pedestrian Detection
2.2. Illumination Awareness in the Fusion Model
3. Methods
3.1. Overall Architecture
3.2. IACMDF Module
3.3. Illumination Intensity Classification Sub-Network
4. Experiments
4.1. Dataset and Metrics
4.2. Implementation Details
4.3. Comparison Experiments
4.4. Ablation Study
5. Conclusions
Author Contributions
Funding
Data Availability Statement
Conflicts of Interest
References
- Li, C.; Guo, C.; Han, L.; Jiang, J.; Cheng, M.M.; Gu, J.; Loy, C.C. Low-Light Image and Video Enhancement Using Deep Learning: A Survey. IEEE Trans. Pattern Anal. Mach. Intell. 2022, 44, 9396–9416.
- Lore, K.G.; Akintayo, A.; Sarkar, S. LLNet: A Deep Autoencoder Approach to Natural Low-light Image Enhancement. Pattern Recognit. 2017, 61, 650–662.
- Chen, Z.; Liang, Y.; Du, M. Attention-based Broad Self-guided Network for Low-light Image Enhancement. In Proceedings of the 26th International Conference on Pattern Recognition (ICPR), Montreal, QC, Canada, 21–25 August 2022; pp. 31–38.
- Wu, W.; Weng, J.; Zhang, P.; Wang, X.; Yang, W.; Jiang, J. URetinex-Net: Retinex-Based Deep Unfolding Network for Low-Light Image Enhancement. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), New Orleans, LA, USA, 18–24 June 2022; pp. 5901–5910.
- Wagner, J.; Fischer, V.; Herman, M.; Behnke, S. Multispectral Pedestrian Detection using Deep Fusion Convolutional Neural Networks. In Proceedings of the European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning, Bruges, Belgium, 27–29 April 2016; Volume 587, pp. 509–514.
- Liu, T.; Lam, K.M.; Zhao, R.; Qiu, G. Deep Cross-Modal Representation Learning and Distillation for Illumination-Invariant Pedestrian Detection. IEEE Trans. Circuits Syst. Video Technol. 2022, 32, 315–329.
- Dasgupta, K.; Das, A.; Das, S.; Bhattacharya, U.; Yogamani, S. Spatio-contextual deep network-based multimodal pedestrian detection for autonomous driving. IEEE Trans. Intell. Transp. Syst. 2022, 23, 15940–15950.
- Xu, D.; Ouyang, W.; Ricci, E.; Wang, X.; Sebe, N. Learning cross-modal deep representations for robust pedestrian detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 5363–5371.
- Park, K.; Kim, S.; Sohn, K. Unified multi-spectral pedestrian detection based on probabilistic fusion networks. Pattern Recognit. 2018, 80, 143–155.
- Dai, X.; Yuan, X.; Wei, X. TIRNet: Object detection in thermal infrared images for autonomous driving. Appl. Intell. 2021, 51, 1244–1261.
- Cao, Y.; Luo, X.; Yang, J.; Cao, Y.; Yang, M.Y. Locality guided cross-modal feature aggregation and pixel-level fusion for multispectral pedestrian detection. Inf. Fusion 2022, 88, 1–11.
- Liu, J.; Zhang, S.; Wang, S.; Metaxas, D.N. Multispectral deep neural networks for pedestrian detection. arXiv 2016, arXiv:1611.02644.
- Song, X.; Gao, S.; Chen, C. A multispectral feature fusion network for robust pedestrian detection. Alex. Eng. J. 2021, 60, 73–85.
- Yan, C.; Zhang, H.; Li, X.; Yang, Y.; Yuan, D. Cross-modality complementary information fusion for multispectral pedestrian detection. Neural Comput. Appl. 2023, 35, 10361–10386.
- Chen, Y.; Xie, H.; Shin, H. Multi-layer fusion techniques using a CNN for multispectral pedestrian detection. IET Comput. Vis. 2018, 12, 1179–1187.
- Li, C.; Song, D.; Tong, R.; Tang, M. Illumination-aware faster R-CNN for robust multispectral pedestrian detection. Pattern Recognit. 2019, 85, 161–171.
- Zhang, H.; Fromont, E.; Lefevre, S.; Avignon, B. Multispectral fusion for object detection with cyclic fuse-and-refine blocks. In Proceedings of the 2020 IEEE International Conference on Image Processing (ICIP), Abu Dhabi, United Arab Emirates, 25–28 October 2020; IEEE: New York, NY, USA, 2020; pp. 276–280.
- Wolpert, A.; Teutsch, M.; Sarfraz, M.S.; Stiefelhagen, R. Anchor-free small-scale multispectral pedestrian detection. arXiv 2020, arXiv:2008.08418.
- Pei, D.; Jing, M.; Liu, H.; Sun, F.; Jiang, L. A fast RetinaNet fusion framework for multi-spectral pedestrian detection. Infrared Phys. Technol. 2020, 105, 103178.
- Cao, Z.; Yang, H.; Zhao, J.; Guo, S.; Li, L. Attention fusion for one-stage multispectral pedestrian detection. Sensors 2021, 21, 4184.
- Konig, D.; Adam, M.; Jarvers, C.; Layher, G.; Neumann, H.; Teutsch, M. Fully convolutional region proposal networks for multispectral person detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA, 21–26 July 2017; pp. 49–56.
- Zheng, Y.; Izzat, I.H.; Ziaee, S. GFD-SSD: Gated fusion double SSD for multispectral pedestrian detection. arXiv 2019, arXiv:1903.06999.
- Hu, J.; Shen, L.; Sun, G. Squeeze-and-excitation networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Salt Lake City, UT, USA, 18–23 June 2018; pp. 7132–7141.
- Hwang, S.; Park, J.; Kim, N.; Choi, Y.; So Kweon, I. Multispectral pedestrian detection: Benchmark dataset and baseline. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Boston, MA, USA, 7–12 June 2015; pp. 1037–1045.
- Dollár, P.; Appel, R.; Belongie, S.; Perona, P. Fast feature pyramids for object detection. IEEE Trans. Pattern Anal. Mach. Intell. 2014, 36, 1532–1545.
- Li, C.; Song, D.; Tong, R.; Tang, M. Multispectral pedestrian detection via simultaneous detection and segmentation. arXiv 2018, arXiv:1808.04818.
- Ding, L.; Wang, Y.; Laganiere, R.; Huang, D.; Fu, S. Convolutional neural networks for multispectral pedestrian detection. Signal Process. Image Commun. 2020, 82, 115764.
- Deng, Q.; Tian, W.; Huang, Y.; Xiong, L.; Bi, X. Pedestrian Detection by Fusion of RGB and Infrared Images in Low-Light Environment. In Proceedings of the 2021 IEEE 24th International Conference on Information Fusion (FUSION), Sun City, South Africa, 1–4 November 2021; IEEE: New York, NY, USA, 2021; pp. 1–8.
- Yang, X.; Qian, Y.; Zhu, H.; Wang, C.; Yang, M. BAANet: Learning bi-directional adaptive attention gates for multispectral pedestrian detection. In Proceedings of the 2022 International Conference on Robotics and Automation (ICRA), Philadelphia, PA, USA, 23–27 May 2022; IEEE: New York, NY, USA, 2022; pp. 2920–2926.
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Computer Vision—ECCV 2016, Proceedings of the European Conference on Computer Vision, Amsterdam, The Netherlands, 11–14 October 2016; Leibe, B., Matas, J., Sebe, N., Welling, M., Eds.; Springer International Publishing: Cham, Switzerland, 2016; pp. 21–37.
- Zhang, L.; Liu, Z.; Zhang, S.; Yang, X.; Qiao, H.; Huang, K.; Hussain, A. Cross-modality interactive attention network for multispectral pedestrian detection. Inf. Fusion 2019, 50, 20–29.
- Kim, J.; Park, I.; Kim, S. A Fusion Framework for Multi-Spectral Pedestrian Detection using EfficientDet. In Proceedings of the 2021 21st International Conference on Control, Automation and Systems (ICCAS), Jeju, Republic of Korea, 12–15 October 2021; IEEE: New York, NY, USA, 2021; pp. 1111–1113.
- Zhang, H.; Fromont, E.; Lefèvre, S.; Avignon, B. Guided attentive feature fusion for multispectral pedestrian detection. In Proceedings of the IEEE/CVF Winter Conference on Applications of Computer Vision, Waikoloa, HI, USA, 3–8 January 2021; pp. 72–80.
- Kim, J.; Kim, H.; Kim, T.; Kim, N.; Choi, Y. MLPD: Multi-label pedestrian detector in multispectral domain. IEEE Robot. Autom. Lett. 2021, 6, 7846–7853.
- Kim, J.U.; Park, S.; Ro, Y.M. Uncertainty-guided cross-modal learning for robust multispectral pedestrian detection. IEEE Trans. Circuits Syst. Video Technol. 2021, 32, 1510–1523.
- Zhang, L.; Liu, Z.; Zhu, X.; Song, Z.; Yang, X.; Lei, Z.; Qiao, H. Weakly Aligned Feature Fusion for Multimodal Object Detection. IEEE Trans. Neural Netw. Learn. Syst. 2021, 1–15.
- Shojaiee, F.; Baleghi, Y. Pedestrian head direction estimation using weight generation function for fusion of visible and thermal feature vectors. Optik 2022, 254, 168688.
- Roszyk, K.; Nowicki, M.R.; Skrzypczyński, P. Adopting the YOLOv4 architecture for low-latency multispectral pedestrian detection in autonomous driving. Sensors 2022, 22, 1082.
- Yang, Y.; Xu, K.; Wang, K. Cascaded information enhancement and cross-modal attention feature fusion for multispectral pedestrian detection. Front. Phys. 2023, 11, 1121311.
- Guan, D.; Luo, X.; Cao, Y.; Yang, J.; Cao, Y.; Vosselman, G.; Yang, M.Y. Unsupervised Domain Adaptation for Multispectral Pedestrian Detection. In Proceedings of the 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Long Beach, CA, USA, 16–17 June 2019; pp. 434–443.
- Zhou, K.; Chen, L.; Cao, X. Improving multispectral pedestrian detection by addressing modality imbalance problems. In Computer Vision—ECCV 2020: 16th European Conference, Glasgow, UK, 23–28 August 2020; Part XVIII; Springer: Cham, Switzerland, 2020; pp. 787–803.
- Zhuang, Y.; Pu, Z.; Hu, J.; Wang, Y. Illumination and temperature-aware multispectral networks for edge-computing-enabled pedestrian detection. IEEE Trans. Netw. Sci. Eng. 2021, 9, 1282–1295.
- Tang, L.; Yuan, J.; Zhang, H.; Jiang, X.; Ma, J. PIAFusion: A progressive infrared and visible image fusion network based on illumination aware. Inf. Fusion 2022, 83–84, 79–92.
- González, A.; Fang, Z.; Socarras, Y.; Serrat, J.; Vázquez, D.; Xu, J.; López, A.M. Pedestrian Detection at Day/Night Time with Visible and FIR Cameras: A Comparison. Sensors 2016, 16, 820.
- Jia, X.; Zhu, C.; Li, M.; Tang, W.; Zhou, W. LLVIP: A Visible-infrared Paired Dataset for Low-light Vision. In Proceedings of the 2021 IEEE/CVF International Conference on Computer Vision Workshops (ICCVW), Montreal, BC, Canada, 11–17 October 2021; pp. 3489–3497.
- Dollar, P.; Wojek, C.; Schiele, B.; Perona, P. Pedestrian Detection: An Evaluation of the State of the Art. IEEE Trans. Pattern Anal. Mach. Intell. 2012, 34, 743–761.
- Fang, Q.; Han, D.; Wang, Z. Cross-Modality Fusion Transformer for Multispectral Object Detection. arXiv 2021.
Miss rate (IoU = 0.5):

| Method | Backbone | All (%) | Day (%) | Night (%) | Platform | Speed (s) |
|---|---|---|---|---|---|---|
| ACF [24] | - | 47.32 | 42.57 | 56.17 | MATLAB | 2.73 |
| Halfway fusion [12] | VGG16 | 25.75 | 24.88 | 26.59 | TITAN X | 0.43 |
| IAF R-CNN [16] | VGG16 | 15.73 | 14.55 | 18.26 | TITAN X | 0.25 |
| CIAN [31] | VGG16 | 14.12 | 14.77 | 11.13 | 1080 Ti | 0.07 |
| MSDS-RCNN [26] | VGG16 | 11.34 | 10.53 | 12.94 | TITAN X | 0.22 |
| AR-CNN [36] | VGG16 | 9.34 | 9.94 | 8.38 | 1080 Ti | 0.12 |
| MBNet [41] | ResNet50 | 8.13 | 8.28 | 7.86 | 1080 Ti | 0.07 |
| MLPD [34] | VGG16 | 7.58 | 7.95 | 6.95 | 2080 Ti | 0.012 |
| IACMDF (ours) | ResNet50 | 6.93 | 7.36 | 6.11 | P100 | 0.048 |
| IACMDF (ours) | VGG16 | 6.17 | 7.06 | 4.33 | P100 | 0.034 |
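The miss-rate figures reported here are, following Dollár et al., log-average miss rates over the FPPI range [10^-2, 10^0]. A minimal sketch of the metric, assuming the detection curve is given as parallel FPPI/miss-rate arrays sorted by increasing FPPI (the helper name and fallback behaviour are illustrative, not the paper's evaluation code):

```python
import numpy as np

def log_average_miss_rate(fppi, miss_rate):
    """Log-average miss rate: geometric mean of the miss rate sampled at
    nine FPPI points evenly spaced in log-space over [1e-2, 1e0]."""
    ref_points = np.logspace(-2.0, 0.0, num=9)
    sampled = []
    for ref in ref_points:
        inds = np.flatnonzero(fppi <= ref)
        # miss rate at the largest FPPI not exceeding the reference point;
        # fall back to the worst observed value if the curve starts above it
        sampled.append(miss_rate[inds[-1]] if inds.size else miss_rate.max())
    # geometric mean (log-average), clipped away from zero for the log
    return float(np.exp(np.mean(np.log(np.maximum(sampled, 1e-10)))))
```

The geometric mean is used because miss-rate curves are conventionally plotted on log axes, so it weights all nine reference points equally in log-space.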
Miss rate (%) by pedestrian scale and occlusion level:

| Method | Scale: Near | Scale: Medium | Scale: Far | Occlusion: None | Occlusion: Partial | Occlusion: Heavy |
|---|---|---|---|---|---|---|
| ACF [24] | 28.74 | 53.67 | 88.20 | 62.94 | 81.40 | 88.08 |
| Halfway fusion [12] | 8.13 | 30.34 | 75.70 | 43.13 | 65.21 | 74.36 |
| IAF R-CNN [16] | 0.96 | 25.54 | 77.84 | 40.17 | 48.40 | 69.76 |
| CIAN [31] | 3.71 | 19.04 | 55.82 | 30.31 | 41.57 | 62.48 |
| MSDS-RCNN [26] | 1.29 | 16.19 | 63.73 | 29.86 | 38.71 | 63.37 |
| AR-CNN [36] | 0.00 | 16.08 | 69.00 | 31.40 | 38.63 | 55.73 |
| MBNet [41] | 0.00 | 16.07 | 55.99 | 27.74 | 35.43 | 59.14 |
| MLPD [34] | 0.00 | 15.89 | 57.45 | 26.46 | 35.34 | 60.28 |
| IACMDF (ours) | 0.00 | 15.15 | 55.16 | 24.97 | 33.22 | 53.88 |
| Method | mAP50 (%) |
|---|---|
| ACF [24] | 40.00 |
| Halfway fusion [12] | 57.24 |
| IAF R-CNN [16] | 56.62 |
| CIAN [31] | 69.00 |
| MSDS-RCNN [26] | 67.19 |
| AR-CNN [36] | 75.31 |
| MBNet [41] | 75.93 |
| IACMDF (ours) | 76.22 |
Miss rate (%) with grey + thermal input:

| Method | All | Day | Night |
|---|---|---|---|
| MACF [9] | 69.71 | 72.63 | 65.43 |
| Halfway fusion [9] | 31.99 | 36.29 | 26.29 |
| Park et al. [9] | 26.29 | 28.67 | 23.48 |
| AR-CNN [36] | 22.1 | 24.7 | 18.1 |
| MBNet [41] | 21.1 | 24.7 | 13.5 |
| MLPD [34] | 21.33 | 24.18 | 17.97 |
| IACMDF (ours) | 21.15 | 25.06 | 13.31 |
| Method | Modality | mAP50 (%) | mAP75 (%) | mAP (%) |
|---|---|---|---|---|
| SSD [47] | RGB | 82.6 | 31.8 | 39.8 |
| SSD [47] | Thermal | 90.2 | 57.9 | 53.5 |
| YOLOv3 [45] | RGB | 85.9 | 37.9 | 43.3 |
| YOLOv3 [45] | Thermal | 89.7 | 53.4 | 52.8 |
| YOLOv5 [45] | RGB | 90.8 | 51.9 | 50.0 |
| YOLOv5 [45] | Thermal | 94.6 | 72.2 | 61.9 |
| CFT [47] | RGB + Thermal | 97.5 | 72.9 | 63.6 |
| IACMDF (ours) | RGB + Thermal | 97.3 | 74.1 | 65.1 |
Ablation study, miss rate (%):

| IA | CMDF | SE | All | Day | Night |
|---|---|---|---|---|---|
| - | - | - | 8.19 | 8.45 | 7.01 |
| - | - | ✓ | 7.88 | 7.95 | 6.49 |
| - | ✓ | ✓ | 6.53 | 7.18 | 5.07 |
| ✓ | ✓ | ✓ | 6.17 | 7.06 | 4.33 |
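The SE column in the ablation refers to squeeze-and-excitation channel attention (Hu et al., cited above). A minimal NumPy sketch of that block, with assumed weight shapes for a reduction ratio r (the function name and plain-matrix parameterisation are illustrative, not the paper's implementation):

```python
import numpy as np

def se_block(features, w1, w2):
    """Squeeze-and-excitation channel attention (Hu et al.), NumPy sketch.
    features: (C, H, W) feature map; w1: (C//r, C) and w2: (C, C//r) are
    the bottleneck FC weights for an assumed reduction ratio r."""
    # squeeze: global average pooling over the spatial dimensions
    z = features.mean(axis=(1, 2))              # (C,)
    # excitation: FC -> ReLU -> FC -> sigmoid
    s = np.maximum(w1 @ z, 0.0)                 # (C//r,)
    gate = 1.0 / (1.0 + np.exp(-(w2 @ s)))      # (C,) channel gates in (0, 1)
    # scale: reweight each channel of the input feature map
    return features * gate[:, None, None]
```

The ablation suggests this channel reweighting alone lowers the all-condition miss rate from 8.19% to 7.88% before the CMDF and IA components are added.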
Share and Cite
Wang, C.; Qian, J.; Wang, J.; Chen, Y. Illumination-Aware Cross-Modality Differential Fusion Multispectral Pedestrian Detection. Electronics 2023, 12, 3576. https://doi.org/10.3390/electronics12173576