Vehicle Target Recognition Method Based on Visible and Infrared Image Fusion Using Bayesian Inference
Abstract
1. Introduction
2. Related Work
- (1) To better extract target features from visible and infrared images, information fusion based on IOU judgment is used: real target features are obtained by comparing the IOU against a set threshold.
- (2) With the IOU judgment results as parameters for Bayesian inference, vehicle target recognition is achieved, and the feasibility of the method is verified through experiments.
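The two contributions above can be sketched together: a binary IOU judgment against a threshold, followed by a Bayesian posterior update conditioned on that judgment. The threshold, prior, and likelihood values below are illustrative assumptions, not the paper's tuned parameters.

```python
def iou(a, b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def bayes_posterior(prior, p_evid_target, p_evid_clutter, evidence):
    """Posterior P(target | evidence) from a binary IOU judgment.

    p_evid_target  = P(IOU above threshold | real target)
    p_evid_clutter = P(IOU above threshold | clutter)
    """
    if evidence:   # IOU exceeded the threshold
        num = p_evid_target * prior
        den = num + p_evid_clutter * (1.0 - prior)
    else:          # IOU fell below the threshold
        num = (1.0 - p_evid_target) * prior
        den = num + (1.0 - p_evid_clutter) * (1.0 - prior)
    return num / den

# Hypothetical example: a visible-light box against an infrared box.
vis_box, ir_box = (10, 10, 50, 50), (15, 12, 55, 48)
overlap = iou(vis_box, ir_box) > 0.5        # assumed threshold
posterior = bayes_posterior(prior=0.5, p_evid_target=0.9,
                            p_evid_clutter=0.2, evidence=overlap)
```

When the two sensors agree (high IOU), the posterior rises above the prior; a missed overlap pushes it down, which is the decision-level behavior the fusion relies on.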
3. Visible Light and Infrared Image Feature Fusion Algorithm
3.1. Visible Light Image Recognition Algorithm
Algorithm 1 The SR Algorithm | |
Step 1: | The visible light image I(x) is transformed into frequency domain space through the Fourier transform F. |
Step 2: | The amplitude spectrum A(f), phase spectrum P(f), and logarithmic amplitude spectrum L(f) in the frequency domain are calculated, which are A(f) = |F[I(x)]|, P(f) = arg(F[I(x)]), L(f) = log(A(f)). |
Step 3: | The spectral residual is calculated, which is R(f) = L(f) − h_n(f) ∗ L(f), where h_n(f) is an n × n mean filter and ∗ denotes convolution. |
Step 4: | The saliency map is obtained, which is S(x) = g(x) ∗ |F⁻¹[exp(R(f) + iP(f))]|², where g(x) is a Gaussian filter for smoothing. |
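The four steps of Algorithm 1 can be sketched in NumPy as follows. The 3 × 3 mean-filter size is an assumption, and the final Gaussian smoothing g(x) is omitted for brevity.

```python
import numpy as np

def box_mean(a, k=3):
    """k x k mean filter with edge padding (the h_n(f) term)."""
    p = k // 2
    padded = np.pad(a, p, mode="edge")
    out = np.zeros_like(a)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (k * k)

def spectral_residual_saliency(image):
    # Step 1: Fourier transform into the frequency domain.
    f = np.fft.fft2(image.astype(np.float64))
    # Step 2: amplitude, phase, and log-amplitude spectra.
    amplitude = np.abs(f)
    phase = np.angle(f)
    log_amp = np.log(amplitude + 1e-12)   # small epsilon avoids log(0)
    # Step 3: spectral residual R(f) = L(f) - h_n(f) * L(f).
    residual = log_amp - box_mean(log_amp)
    # Step 4: back to the spatial domain; squared magnitude is the saliency.
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return saliency
```

The residual keeps only the part of the log spectrum that deviates from its local average, which is what makes statistically unusual (salient) regions stand out after the inverse transform.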
Algorithm 2 The SR EdgeBox Algorithm | |
Step 1: | The N candidate boxes B₁, B₂, …, B_N are entered. |
Step 2: | All candidate boxes are placed in a queue. The first candidate box B₁ is selected as the benchmark, and it is checked in sequence whether B₁ overlaps each of the other boxes B_i. If two candidate boxes overlap, the minimum rectangle surrounding both is calculated and replaces B₁, while the current box B_i is deleted; the comparison then continues between the new B₁ and the remaining boxes. |
Step 3: | Once the overlap between B₁ and all other candidate boxes has been calculated and the corresponding minimum bounding rectangles obtained, B₁ is appended to the end of the queue and the original B₁ is removed, so that B₂ becomes B₁, B₃ becomes B₂, and so on. Step 2 is then repeated, N times in total. |
Step 4: | The final possible target boxes are outputted. |
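A minimal sketch of the merging loop in Algorithm 2, assuming boxes are given as (x1, y1, x2, y2) tuples; the EdgeBox candidate scoring itself is not reproduced here.

```python
def overlaps(a, b):
    """True if two axis-aligned boxes (x1, y1, x2, y2) intersect."""
    return not (a[2] <= b[0] or b[2] <= a[0] or a[3] <= b[1] or b[3] <= a[1])

def enclosing(a, b):
    """Minimum rectangle surrounding both boxes."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

def merge_candidate_boxes(boxes):
    """Repeatedly take the first box as benchmark, absorb every box it
    overlaps into the minimum bounding rectangle, and cycle through the
    queue until no further merges occur."""
    queue = list(boxes)
    changed = True
    while changed:
        changed = False
        result = []
        while queue:
            base = queue.pop(0)
            remaining = []
            for b in queue:
                if overlaps(base, b):
                    base = enclosing(base, b)   # replace benchmark, drop b
                    changed = True
                else:
                    remaining.append(b)
            queue = remaining
            result.append(base)
        queue = result
    return queue
```

Iterating until no merge occurs handles chains of boxes where a merged rectangle newly overlaps a box it previously missed.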
Algorithm 3 Segmentation Threshold Algorithm | |
Step 1: | The image is defined as a matrix of size M × N, that is, the M × N pixels in the image, with pixel values ranging between [0, 255]. |
Step 2: | The segmentation threshold between target and background is set to T, the proportion of pixels belonging to the target is set to ω₀, and the average grayscale of the target to μ₀; the proportion of background pixel points is set to ω₁, and the average background grayscale is set to μ₁, which are ω₀ = N₀/(M × N), ω₁ = N₁/(M × N), and ω₀ + ω₁ = 1. |
Step 3: | The total average grayscale of the image is denoted as μ, and the inter-class variance is denoted as σ², which are μ = ω₀μ₀ + ω₁μ₁, σ² = ω₀(μ₀ − μ)² + ω₁(μ₁ − μ)² = ω₀ω₁(μ₀ − μ₁)². |
Step 4: | The number of pixels in the image with a grayscale value less than threshold T is denoted as N₀, and the number of pixels with a grayscale value greater than or equal to T is denoted as N₁, which is N₀ + N₁ = M × N. The optimal segmentation threshold is the T that maximizes σ². |
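Algorithm 3 is the classical Otsu between-class-variance search; the steps above can be sketched for 8-bit grayscale values as an exhaustive scan over candidate thresholds.

```python
import numpy as np

def otsu_threshold(image):
    """Return the threshold T maximizing sigma^2 = w0*w1*(mu0 - mu1)^2."""
    hist = np.bincount(image.ravel(), minlength=256).astype(np.float64)
    prob = hist / hist.sum()
    levels = np.arange(256)
    best_t, best_var = 0, -1.0
    for t in range(1, 256):
        w0 = prob[:t].sum()          # proportion with grayscale < T
        w1 = 1.0 - w0                # proportion with grayscale >= T
        if w0 == 0.0 or w1 == 0.0:
            continue                 # one class empty: variance undefined
        mu0 = (levels[:t] * prob[:t]).sum() / w0
        mu1 = (levels[t:] * prob[t:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t
```

For a strongly bimodal histogram the search lands between the two modes, which is the property the segmentation step depends on.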
3.2. Infrared Image Feature Extraction
3.3. Information Fusion Based on IOU Judgment
4. Fusion Image Target Recognition Based on Bayesian Inference Method
5. Experimental Analysis
6. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
References
- Li, M.; Zhang, T.; Cui, W. Research of infrared small pedestrian target detection based on YOLOv3. Infrared Technol. 2020, 42, 176–181.
- Ma, J.; Ma, Y.; Li, C. Infrared and visible image fusion methods and applications: A survey. Inf. Fusion 2019, 45, 153–178.
- Han, J.; Bhanu, B. Fusion of color and infrared video for moving human detection. Pattern Recognit. 2007, 40, 1771–1784.
- Chen, C.; Meng, X.; Shao, F.; Fu, R. Infrared and visible image fusion method based on multiscale low-rank decomposition. Acta Opt. Sin. 2020, 40, 72–80.
- He, N.; Shi, S. Research on visible image object detection algorithm based on infrared features fusion. Microelectron. Comput. 2023, 40, 29–36.
- Wang, N.; Zhou, M.; Du, Q. An infrared visible image fusion and target recognition method. J. Air Space Early Warn. Res. 2019, 33, 328–332.
- Liu, J.; Yi, G.; Huang, D. Object detection in visible light and infrared images based on feature fusion. Laser Infrared 2023, 53, 394–401.
- Li, H.; Zhang, X. Flight parameter calculation method of multi-projectiles using temporal and spatial information constraint. Def. Technol. 2023, 19, 63–75.
- Bai, Y.; Hou, Z.; Liu, X.; Ma, S.; Yu, W.; Pu, L. An object detection algorithm based on decision-level fusion of visible light image and infrared image. J. Air Force Eng. Univ. 2020, 21, 53–59+100.
- Zhou, H.; Hou, J.; Wu, W.; Zhang, Y.; Wu, Y.; Ma, J. Infrared and visible image fusion based on semantic segmentation. J. Comput. Res. Dev. 2021, 58, 436–443.
- Zhang, Y.; Wu, X.; Li, H.; Xu, T. Infrared image and visible image fusion algorithm based on unsupervised deep learning. J. Nanjing Norm. Univ. (Eng. Technol. Ed.) 2023, 23, 1–9.
- Ning, D.; Zheng, S. An object detection algorithm based on decision-level fusion of visible and infrared images. Infrared Technol. 2023, 45, 282–291.
- Ding, Q.; Qi, H.; Zhao, J.; Li, J. Research on infrared and visible image fusion algorithm based on target enhancement. Laser Infrared 2023, 53, 457–463.
- Luo, W.; Liu, M.; Li, L.; Wang, C. Research on region matching of infrared and visible images based on vision detection. Laser J. 2023, 44, 186–190.
- Li, Y.; Wang, Y.; Yan, Y.; Hu, M.; Liu, B.; Chen, P. Infrared and visible images fusion from different views based on saliency detection. Laser Infrared 2021, 51, 465–470.
- Zhu, Y.; Gao, L. Infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy set. J. Northwestern Polytech. Univ. 2021, 39, 930–936.
- Han, L.; Yao, J.; Wang, K. Visible and infrared image fusion by preserving gradients and contours. J. Comput. Appl. 2023.
- Wang, T.; Luo, X.; Zhang, Z. Infrared and visible image fusion based on self-attention learning. Infrared Technol. 2023, 45, 171–177.
- Ren, Y.; Zhang, J. Infrared and visible image fusion based on NSST multi-scale entropy. J. Ordnance Equip. Eng. 2022, 43, 278–285.
- Sun, Y.; Wang, R.; Zhang, Q.; Lin, R. A cross-modality person re-identification method for visible-infrared images. J. Beijing Univ. Aeronaut. Astronaut. 2022.
- Huang, Y.; Mei, L.; Wang, Y.; He, P.; Lian, B.; Wang, Y. Research on UAV detection based on infrared and visible image fusion. Comput. Knowl. Technol. 2022, 18, 1–8.
- Hao, Y.; Cao, Z.; Bai, F.; Sun, H.; Wang, X.; Qin, J. Research on infrared visible image fusion and target recognition algorithm based on region of interest mask convolution neural network. Acta Photonica Sin. 2021, 50, 84–98.
- Shen, Y.; Chen, X.; Yuan, Y.; Wang, L.; Zhang, H. Infrared and visible image fusion based on significant matrix and neural network. Laser Optoelectron. Prog. 2020, 57, 76–86.
- Zhao, G.; Fu, Y.; Chen, Y. A method for tracking object in infrared and visible image based on multiple features. Acta Armamentarii 2011, 32, 445–451.
- Tang, C.; Ling, Y.; Yang, H.; Yang, X.; Tong, W. Decision-level fusion tracking for infrared and visible spectra based on deep learning. Laser Optoelectron. Prog. 2019, 56, 217–224.
- Wang, F.; Song, Y.; Zhao, Y.; Yang, X.; Zhang, Z.S. IR saliency detection based on a GCF-SB visual attention framework. Aerosp. Control. Appl. 2020, 46, 28–36.
- Yang, J.; Li, Z. Infrared dim small target detection algorithm based on Bayesian estimation. Foreign Electron. Meas. Technol. 2021, 40, 19–23.
- Shen, Y.; Jin, T.; Dan, J. Semi-supervised infrared image object detection algorithm based on key points. Laser Optoelectron. Prog. 2023, 59, 1–18.
- Lan, Y.; Yang, L. Application research of infrared image target tracking in intelligent network vehicle. Laser J. 2019, 40, 60–64.
- Miao, X.; Wang, C. Single frame infrared (IR) dim small target detection based on improved Sobel operator. Opto-Electron. Eng. 2016, 43, 119–125.
| | The Vehicle Image Is Complete | The Vehicle Image Is Incomplete |
|---|---|---|
| Recognition probability in this paper | 0.77 | 0.74 |
| Recognition probability of Reference [24] | 0.68 | 0.65 |
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Wu, J.; Zhang, X. Vehicle Target Recognition Method Based on Visible and Infrared Image Fusion Using Bayesian Inference. Appl. Sci. 2023, 13, 8334. https://doi.org/10.3390/app13148334