Research on Axle Type Recognition Technology for Under-Vehicle Panorama Images Based on Enhanced ORB and YOLOv11
Abstract
Highlights
- Innovation of portable area array acquisition equipment: Independently developed a panoramic image collection and identification system for vehicle chassis, which directly obtains panoramic images of the underside of vehicles through a lateral area array camera and mirror structure. The system can be deployed within 20 min, with no need to embed hardware in the road surface.
- ORB feature matching + FeatureBooster feature enhancement + YOLOv11n image detection scheme achieves breakthrough accuracy: In the identification of vehicle axle types, the model achieved a precision of 0.98, a recall of 0.99, and an mAP@50 of 0.989 ± 0.010, demonstrating superior performance compared to traditional methods.
- Addressing toll dispute pain points: Accurately distinguishing between drive axles and driven axles, providing reliable visual evidence for toll booth entrances that charge based on axle types.
- Empowering real-time traffic management: The processing result for a single vehicle can be output within 1.5 s after the vehicle passes (with 99% accuracy), which can help toll booths quickly identify vehicle types and confirm charging standards on-site.
1. Introduction
2. Related Work
2.1. Image Stitching Algorithms
2.2. Object Detection Algorithms
2.3. Research Gap and Contributions
3. Methods
3.1. Equipment Development
- Accurate identification and acquisition of axle type information are essential for axle-based charging. For industry management and equipment performance testing departments, clear, intuitive, and easily distinguishable evidence is vital to resolving potential disputes. Therefore, this article designs an imaging system that generates panoramic vehicle-underside images through an image stitching algorithm, providing direct visual evidence for axle-based toll collection. Moreover, it supplies high-quality, realistic data to support subsequent axle feature recognition models in practical application scenarios.
- The installation of embedded axle-type identification devices, a process involving positioning, trenching, embedding, sealing, and backfilling within the lane, typically requires no less than one day to complete. During this period, traffic flow is interrupted, significantly impacting toll station operations. To facilitate rapid deployment and minimize disruption, the equipment proposed in this article is designed to be surface-mounted rather than embedded, achieving operational readiness in under 20 min. This approach substantially reduces both installation complexity and cost by avoiding extensive roadwork. Nevertheless, a key challenge in non-embedded systems is the limited field of view resulting from the low clearance of vehicle chassis. To overcome this limitation, the developed system employs a horizontally oriented camera combined with a reflective mirror, expanding the effective viewing range.
- At toll station entrances, especially in high-traffic scenarios, vehicles often queue in close proximity, which complicates the task of distinguishing individual vehicles at the acquisition device and increases the risk of multiple adjacent vehicles being misidentified as a single entity. Therefore, a laser vehicle separator is deployed to reliably separate successive vehicles.
3.2. Image Processing Algorithm
3.2.1. Image Feature Enhancement
- Some areas may be too dark or too bright, making it difficult to capture detailed information.
- The presence of salt-and-pepper noise or other random noise in the images can affect the accurate detection of feature points.
- Insufficient overall contrast can lead to key details being blurred, making it challenging to differentiate between various structural features.
- Utilizes median filtering to remove salt-and-pepper noise from the input images while smoothing them and preserving edge information.
- Employs Contrast-Limited Adaptive Histogram Equalization (CLAHE) to enhance local contrast through tile-wise histogram equalization, reducing the impact of lighting on image quality while avoiding noise amplification. In its standard form, each tile's histogram is clipped at a preset limit and the excess redistributed before equalization, giving the mapping $T(k) = \frac{L-1}{MN}\sum_{j=0}^{k} h_{\mathrm{clip}}(j)$, where $L$ is the number of gray levels, $M \times N$ is the tile size in pixels, and $h_{\mathrm{clip}}$ is the clipped histogram.
- Utilizes gamma correction for non-linear adjustment of image brightness, improving the visibility of details in the dark regions of the chassis images. The standard gamma correction is $I_{\mathrm{out}} = 255\,(I_{\mathrm{in}}/255)^{\gamma}$, where $\gamma < 1$ brightens dark regions. A minimal OpenCV sketch of this three-step pipeline follows this list.
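Below is a minimal OpenCV sketch of the three-step enhancement pipeline described above, assuming standard `cv2` functions; the kernel size, clip limit, tile grid, and gamma value are illustrative placeholders, not the settings reported in this study.

```python
import cv2
import numpy as np

def enhance_chassis_image(img_bgr: np.ndarray,
                          median_ksize: int = 3,
                          clip_limit: float = 2.0,
                          tile_grid: tuple = (8, 8),
                          gamma: float = 0.67) -> np.ndarray:
    """Denoise, locally equalize, and brighten a chassis image."""
    # 1. Median filtering: suppresses salt-and-pepper noise, keeps edges.
    denoised = cv2.medianBlur(img_bgr, median_ksize)

    # 2. CLAHE on the luminance channel only, so colors are unaffected.
    lab = cv2.cvtColor(denoised, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    equalized = cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)),
                             cv2.COLOR_LAB2BGR)

    # 3. Gamma correction via a lookup table: I_out = 255*(I_in/255)^gamma,
    #    with gamma < 1 brightening the dark underbody regions.
    lut = (np.power(np.arange(256) / 255.0, gamma) * 255).astype(np.uint8)
    return cv2.LUT(equalized, lut)
```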
3.2.2. Vehicle Chassis Image Generation
The chassis image generation pipeline consists of three steps (a pair-wise sketch of steps 1 and 3 follows this list):
1. Image Feature Matching
2. Image Filtering Based on Video Keyframes
3. Image Stitching
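As a reference for steps 1 and 3, here is a minimal sketch of pair-wise ORB matching with RANSAC-based homography estimation and stitching, assuming standard OpenCV APIs. It deliberately omits the keyframe filtering step and the FeatureBooster descriptor enhancement; the ratio-test threshold and RANSAC reprojection tolerance are illustrative assumptions.

```python
import cv2
import numpy as np

def stitch_pair(img1: np.ndarray, img2: np.ndarray,
                n_features: int = 1000) -> np.ndarray:
    """Match ORB features between two consecutive frames and stitch img2 onto img1."""
    orb = cv2.ORB_create(nfeatures=n_features)
    kp1, des1 = orb.detectAndCompute(img1, None)
    kp2, des2 = orb.detectAndCompute(img2, None)

    # Brute-force Hamming matching suits binary ORB descriptors;
    # Lowe's ratio test discards ambiguous correspondences.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
    pairs = matcher.knnMatch(des1, des2, k=2)
    good = [m for m, n in (p for p in pairs if len(p) == 2)
            if m.distance < 0.75 * n.distance]
    if len(good) < 4:
        raise RuntimeError("Too few matches to estimate a homography")

    # RANSAC rejects residual mismatches before estimating the homography
    # that maps img2 into img1's coordinate frame.
    src = np.float32([kp2[m.trainIdx].pt for m in good]).reshape(-1, 1, 2)
    dst = np.float32([kp1[m.queryIdx].pt for m in good]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)

    # Warp img2 and overlay img1 to form the running panorama.
    h1, w1 = img1.shape[:2]
    h2, w2 = img2.shape[:2]
    pano = cv2.warpPerspective(img2, H, (w1 + w2, max(h1, h2)))
    pano[:h1, :w1] = img1
    return pano
```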
3.2.3. Axle Type Feature Recognition Algorithm Based on YOLOv11
4. Results
4.1. FeatureBooster Fine-Tuning for Linear Scan Data
4.2. Model Comparison on ORB Baseline Dataset
- On the dataset constructed from images stitched solely by the ORB algorithm, the detection accuracies (mAP@50 and mAP@50:95) of the various YOLO versions were generally similar, with YOLOv11n achieving the best performance (mAP@50: 0.916 ± 0.012, mAP@50:95: 0.633 ± 0.015). Notably, YOLOv11n also achieved the highest Precision (0.93) and Recall (0.89) among the YOLO-based models, reflecting its enhanced ability to accurately detect target objects while reducing false negatives, which is consistent with its mAP performance.
- Notably, when the dataset was built from images stitched by ORBE + FB-FT and the YOLOv11n model was used for detection, performance improved markedly (mAP@50: 0.989 ± 0.010, mAP@50:95: 0.780 ± 0.012). This leap is further underscored by the Precision (0.98) and Recall (0.99) achieved by the ORBE + FB-FT + YOLOv11n model. These values indicate that FB enhances image feature matching, yielding a clearer and more complete dataset; as a result, the detection model exhibits fewer false positives and false negatives.
- The confusion matrix (Figure 11) revealed specific error patterns: two drive axles were misclassified as driven axles, while six driven axles were misclassified as drive axles; in addition, two driven axles went undetected (false negatives). Examination of these errors suggests that geometric distortion and compression in the chassis images, caused by high vehicle speeds during acquisition, are the main contributing factors. Elevated speeds reduce the spatial resolution of the image sequence, causing loss of detail and obscuring discriminative features. This compression particularly blurs the geometric and structural details that differentiate drive axles from driven axles, amplifying confusion between the two classes; a compressed axle may also resemble the vehicle frame and go unrecognized entirely. A minimal sketch of such an evaluation is shown below.
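For readers reproducing metrics of this kind, the sketch below uses the Ultralytics API, which reports precision, recall, mAP@50, and mAP@50:95 and saves a confusion-matrix plot during validation. The dataset YAML path, split name, and training hyperparameters are placeholders, not the configuration used in this study.

```python
from ultralytics import YOLO

# Train YOLOv11n on a stitched-chassis dataset described by a YAML file
# (hypothetical path; epochs and image size are illustrative).
model = YOLO("yolo11n.pt")
model.train(data="axle_dataset.yaml", epochs=100, imgsz=640)

# Validation reports the metrics discussed above and writes
# confusion_matrix.png to the run directory.
metrics = model.val(data="axle_dataset.yaml", split="test")
print(f"P={metrics.box.mp:.3f}  R={metrics.box.mr:.3f}  "
      f"mAP@50={metrics.box.map50:.3f}  mAP@50:95={metrics.box.map:.3f}")
```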
4.3. Robustness Analysis in Nighttime Scenarios
- Under low-light conditions at night, the detection accuracy of the YOLOv11n model based on pure ORB-stitched images decreases (relative to the overall test set, mAP@50 drops from 0.916 to 0.853, a decline of 6.3 percentage points; mAP@50:95 falls from 0.633 to 0.499, a decline of 13.4 percentage points). This indicates that insufficient lighting degrades the feature extraction and matching quality of the ORB algorithm, producing stitched images with more noise, blur, or mismatched regions, which in turn degrades the performance of the downstream detection model. Nevertheless, compared with SSD and Faster R-CNN, YOLOv11n offers superior speed and accuracy, better meeting the precision and real-time requirements of on-site vehicle chassis image recognition.
- In stark contrast, the YOLOv11n model using ORBE + FB-FT stitched images maintains high detection accuracy and stability in nighttime scenarios (P: 0.98, R: 0.99, mAP@50: 0.977 ± 0.011, mAP@50:95: 0.743 ± 0.012). Compared with the pure ORB approach under the same nighttime conditions, P and R improved by 0.12 and 0.16, while mAP@50 and mAP@50:95 improved by 12.4 and 24.4 percentage points, respectively. This demonstrates that the FB module effectively enhances image features, improving the robustness of the ORB algorithm in challenging environments such as low illumination.
4.4. On-Site Real-Time Performance Validation
5. Discussion
6. Conclusions
In summary, the main contributions of this study are as follows:
- Enhanced feature matching through fine-tuning: After domain-specific fine-tuning of the FB on area scan data, the system achieved an average of 151 ± 20 feature matches, outperforming both the pre-trained FB enhancement (133 ± 18) and the baseline ORB on enhanced images (ORBE: 112 ± 21). This fine-tuning further optimized the model's adaptability to vehicle chassis imagery, contributing to more stable and accurate stitching results.
- Feature enhancement: Incorporating the FB module into the ORB pipeline substantially improves the accuracy (overall P increased to 0.98, R to 0.99, mAP@50 to 0.989 ± 0.010, mAP@50:95 to 0.780 ± 0.012) and robustness (nighttime P maintained at 0.98, R at 0.99, mAP@50 at 0.977 ± 0.011, mAP@50:95 at 0.743 ± 0.012) of the subsequent YOLOv11n model in vehicle chassis target detection tasks.
- Effectiveness of algorithm combination: The proposed ORBE + FB-FT + YOLOv11n scheme effectively overcomes the instability of traditional ORB feature extraction under low-light conditions, achieving high-quality image stitching and high-precision, real-time axle recognition of vehicle chassis images in complex field environments.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Appendix A
(The Illustration and Axle Type Example columns of the original table contained schematic images, which are omitted here; rows repeating the same axle counts correspond to different axle arrangements.)

Total Number of Axles | Vehicle Model | Driven Axle Count | Drive Axle Count | Weight Limit (Tons)
---|---|---|---|---
2-Axle | Goods Vehicle | 1 | 1 | 18
3-Axle | Centre-axle Trailer Combination | 2 | 1 | 27
3-Axle | Articulated Vehicle | 2 | 1 |
3-Axle | Goods Vehicle | 1 | 2 | 25
3-Axle | Goods Vehicle | 2 | 1 |
4-Axle | Centre-axle Trailer Combination | 3 | 1 | 36
4-Axle | Centre-axle Trailer Combination | 2 | 2 | 35
4-Axle | Articulated Vehicle | 3 | 1 | 36
4-Axle | Full Trailer Train | 3 | 1 |
4-Axle | Goods Vehicle | 2 | 2 | 31
5-Axle | Centre-axle Trailer Combination | 3 | 2 | 43
5-Axle | Centre-axle Trailer Combination | 4 | 1 |
5-Axle | Articulated Vehicle | 3 | 2 |
5-Axle | Articulated Vehicle | 4 | 1 |
5-Axle | Articulated Vehicle | 4 | 1 | 42
5-Axle | Full Trailer Train | 3 | 2 | 43
5-Axle | Full Trailer Train | 4 | 1 |
6-Axle | Centre-axle Trailer Combination | 4 | 2 | 49
6-Axle | Centre-axle Trailer Combination | 5 | 1 | 46
6-Axle | Centre-axle Trailer Combination | 4 | 2 | 49
6-Axle | Centre-axle Trailer Combination | 5 | 1 | 46
6-Axle | Articulated Vehicle | 4 | 2 | 49
6-Axle | Articulated Vehicle | 5 | 1 | 46
6-Axle | Articulated Vehicle | 5 | 1 | 46
6-Axle | Full Trailer Train | 4 | 2 | 49
6-Axle | Full Trailer Train | 5 | 1 | 46
References
- Wang, X.; Zhang, Z.; Li, X.; Yuan, G. Research on the Damage Mechanics Model of Asphalt Pavement Based on Asphalt Pavement Potential Damage Index. Sci. Adv. Mater. 2024, 16, 63–75. [Google Scholar] [CrossRef]
- Shen, K.; Wang, H. Impact of Wide-Base Tire on Flexible Pavement Responses: Coupling Effects of Multiaxle and Dynamic Loading. J. Transp. Eng. Part B Pavements 2025, 151, 04024057. [Google Scholar] [CrossRef]
- JT/T 489-2019; Classification of Vehicle Types for Toll Road Fees. Ministry of Transport of the People’s Republic of China: Beijing, China, 2019.
- GB 1589-2016; External Dimensions, Axle Loads, and Mass Limits of Motor Vehicles, Trailers, and Road Trains. General Administration of Quality Supervision, Inspection and Quarantine of the People’s Republic of China. National Standardization Administration of China; China Standards Press: Beijing, China, 2016.
- The State Council of the People’s Republic of China. Administration Regulations on Road Transport of Over-dimensional and Overweight Vehicles. 2021. Available online: https://www.gov.cn/zhengce/zhengceku/2021-08/26/content_5633469.htm (accessed on 18 March 2025).
- Sivakoti, K. Vehicle Detection and Classification for Toll Collection Using YOLOv11 and Ensemble OCR. arXiv 2024, arXiv:2412.12191. [Google Scholar] [CrossRef]
- Marszalek, Z.; Zeglen, T.; Sroka, R.; Gajda, J. Inductive Loop Axle Detector based on Resistance and Reactance Vehicle Magnetic Profiles. Sensors 2018, 18, 2376. [Google Scholar] [CrossRef]
- Avelar, R.E.; Petersen, S.; Lindheimer, T.; Ashraf, S.; Minge, E. Methods for Estimating Axle Factors and Axle Classes from Vehicle Length Data. Transp. Res. Rec. J. Transp. Res. Board 2018, 2672, 110–121. [Google Scholar] [CrossRef]
- Zhang, J.X.; Zhang, J.; Dai, Z.C. Vehicle Classification System Based on Pressure Sensor Array. J. Highw. Traffic Technol. 2006, 23, 5. [Google Scholar] [CrossRef]
- Hu, Q. Research on Automatic Vehicle Recognition System Using Lidar. Traffic World 2017, 34, 2. [Google Scholar]
- Wu, Z.; Xu, D.H. Construction and Application of Vehicle Side Image Stitching System. China Transp. Informatiz. 2023, 9, 96–99. [Google Scholar] [CrossRef]
- Lowe, D.G. Distinctive Image Features from Scale-Invariant Keypoints. Int. J. Comput. Vis. 2004, 60, 91–110. [Google Scholar] [CrossRef]
- Bay, H.; Ess, A.; Tuytelaars, T.; Van Gool, L. SURF: Speeded Up Robust Features. Comput. Vis. Image Underst. 2008, 110, 346–359. [Google Scholar] [CrossRef]
- Rublee, E.; Rabaud, V.; Konolige, K.; Bradski, G. ORB: An efficient alternative to SIFT or SURF. In Proceedings of the IEEE International Conference on Computer Vision, ICCV 2011, Barcelona, Spain, 6–13 November 2011. [Google Scholar] [CrossRef]
- Alcantarilla, P.F.; Bartoli, A.; Davison, A.J. KAZE Features. In Computer Vision—ECCV 2012; Springer: Berlin/Heidelberg, Germany, 2012; pp. 214–227. [Google Scholar] [CrossRef]
- Detone, D.; Malisiewicz, T.; Rabinovich, A. SuperPoint: Self-Supervised Interest Point Detection and Description. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition Workshops, Salt Lake City, UT, USA, 18–22 June 2018. [Google Scholar] [CrossRef]
- Adel, E.; Elmogy, M.; Elbakry, H.M. Image Stitching System Based on ORB Feature-Based Technique and Compensation Blending. Int. J. Adv. Comput. Sci. Appl. 2015, 6, 55–62. [Google Scholar] [CrossRef]
- Zhang, K.; Huo, J.; Wang, S.; Zhang, X.; Feng, Y. Quantitative Assessment of Spacecraft Damage in Large-Size Inspections. Front. Inf. Technol. Electron. Eng. 2022, 23, 542–555. [Google Scholar] [CrossRef]
- Luo, X.; Wei, Z.; Jin, Y.; Wang, X.; Lin, P.; Wei, X.; Zhou, W. Fast Automatic Registration of UAV Images via Bidirectional Matching. Sensors 2023, 23, 8566. [Google Scholar] [CrossRef]
- Li, R. ORB Image Feature Extraction Algorithm Based on Fuzzy Control. Proc. SPIE 2024, 13230, 14. [Google Scholar] [CrossRef]
- Zhao, Y.; Su, J. Improved PCB image stitching algorithm based on enhanced ORB. In Proceedings of the Fourth International Conference on Signal Image Processing and Communication (ICSIPC 2024), Xi’an, China, 17–19 May 2024; SPIE: Bellingham, WA, USA, 2024; Volume 13253, p. 132530G. [Google Scholar] [CrossRef]
- Chen, L.; You, S.; Chen, K.; Chen, J.; Cheng, Z. A novel ORB feature-based real-time panoramic video stitching algorithm for robotic embedded devices. In Proceedings of the 2025 IEEE International Conference on Real-time Computing and Robotics (RCAR), Toyama, Japan, 1–6 June 2025; pp. 612–617. [Google Scholar] [CrossRef]
- Mallegowda, M.; Viswanath, N.G.; Polepalli, N.; Ganga, N. Improving vehicle perception through image stitching: A serial and parallel evaluation. In Proceedings of the 2025 4th OPJU International Technology Conference (OTCON) on Smart Computing for Innovation and Advancement in Industry 5.0, Raigarh, India, 9–11 April 2025. [Google Scholar] [CrossRef]
- Ren, S.; He, K.; Girshick, R.; Sun, J. Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks. IEEE Trans. Pattern Anal. Mach. Intell. 2017, 39, 1137–1149. [Google Scholar] [CrossRef]
- He, K.; Gkioxari, G.; Dollár, P.; Girshick, R. Mask R-CNN. IEEE Trans. Pattern Anal. Mach. Intell. 2020, 42, 386–397. [Google Scholar] [CrossRef]
- Liu, W.; Anguelov, D.; Erhan, D.; Szegedy, C.; Reed, S.; Fu, C.Y.; Berg, A.C. SSD: Single Shot MultiBox Detector. In Proceedings of the European Conference on Computer Vision (ECCV), Amsterdam, The Netherlands, 11–14 October 2016; pp. 21–37. [Google Scholar] [CrossRef]
- Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A.N.; Kaiser, Ł.; Polosukhin, I. Attention Is All You Need. arXiv 2017, arXiv:1706.03762. [Google Scholar] [CrossRef]
- Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Las Vegas, NV, USA, 27–30 June 2016. [Google Scholar] [CrossRef]
- Ye, F.; Yuan, M.; Luo, C.; Li, S.; Pan, D.; Wang, W.; Cao, F.; Chen, D. Enhanced YOLO and Scanning Portal System for Vehicle Component Detection. Sensors 2025, 25, 4809. [Google Scholar] [CrossRef] [PubMed]
- Mo, J.; Wu, G.; Li, R. An Enhanced YOLOv11-based Algorithm for Vehicle and Pedestrian Detection in Complex Traffic Scenarios. In Proceedings of the 2025 IEEE 6th International Seminar on Artificial Intelligence, Networking and Information Technology (AINIT), Shenzhen, China, 11–13 April 2025. [Google Scholar] [CrossRef]
- Zhao, D.; Cheng, Y.; Mao, S. Improved Algorithm for Vehicle Bottom Safety Detection Based on YOLOv8n: PSP-YOLO. Appl. Sci. 2024, 14, 11257. [Google Scholar] [CrossRef]
- Almujally, N.A.; Qureshi, A.M.; Alazeb, A.; Rahman, H.; Sadiq, T.; Alonazi, M.; Algarni, A.; Jalal, A. A Novel Framework for Vehicle Detection and Tracking in Night Ware Surveillance Systems. IEEE Access 2024, 12, 11. [Google Scholar] [CrossRef]
- Raza, N.; Ahmad, M.; Habib, M.A. Assessment of Efficient and Cost-Effective Vehicle Detection in Foggy Weather. In Proceedings of the 2024 18th International Conference on Open Source Systems and Technologies (ICOSST), Lahore, Pakistan, 17–18 December 2024; pp. 1–6. [Google Scholar] [CrossRef]
- Zhang, Q.; Guo, W.; Lin, M. LLD-YOLO: A Multi-Module Network for Robust Vehicle Detection in Low-Light Conditions. Signal Image Video Process. 2025, 19, 271. [Google Scholar] [CrossRef]
- Pravesh, R.; Sahana, B.C. Robust Firearm Detection in Low-Light Surveillance Conditions Using YOLOv11 with Image Enhancement. Int. J. Saf. Secur. Eng. 2025, 15, 797. [Google Scholar] [CrossRef]
- He, L.H.; Zhou, Y.Z.; Liu, L.; Cao, W.; Ma, J.H. Research on object detection and recognition in remote sensing images based on YOLOv11. Sci. Rep. 2025, 15, 14032. [Google Scholar] [CrossRef]
- Nguyen, B.A.; Kha, M.B.; Dao, D.M.; Nguyen, H.K.; Nguyen, M.D.; Nguyen, T.V.; Rathnayake, N.; Hoshino, Y.; Dang, T.L. UFR-GAN: A lightweight multi-degradation image restoration model. Pattern Recognit. Lett. 2025, 197, 282–287. [Google Scholar] [CrossRef]
- Chen, L.; Deng, H.; Liu, G.; Law, R.; Li, D.; Wu, E.Q.; Zhu, L. Retinex-guided illumination recovery and progressive feature adaptation for real-world nighttime UAV-based vehicle detection. Expert Syst. Appl. 2025, 297, 129476. [Google Scholar] [CrossRef]
- Chen, F.X. Research on Vehicle Axle Type Recognition Algorithm Based on Improved YOLOv5. Ph.D. Thesis, Chang’an University, Xi’an, China, 2024. [Google Scholar] [CrossRef]
- Li, C. Research on Vehicle Axle Identification Technology Based on Object Detection. Ph.D. Thesis, Chang’an University, Xi’an, China, 2024. [Google Scholar] [CrossRef]
- Zhang, X. Research on Vehicle Axle Measurement System Based on Machine Vision. Ph.D. Thesis, Chang’an University, Xi’an, China, 2024. [Google Scholar] [CrossRef]
- Wang, Z.J. Research on Vehicle Chassis Contour Reconstruction and Passability Analysis Method Based on LiDAR. Ph.D. Thesis, Beijing University of Technology, Beijing, China, 2022. [Google Scholar]
- Standards for Identifying Overloading of Road Freight Vehicles. Available online: https://xxgk.mot.gov.cn/2020/jigou/glj/202006/t20200623_3312494.html (accessed on 18 August 2016).
- Wang, X.; Liu, Z.; Hu, Y.; Xi, W.; Yu, W.; Zou, D. FeatureBooster: Boosting Feature Descriptors with a Lightweight Neural Network. In Proceedings of the 2023 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), Vancouver, BC, Canada, 18–22 June 2023; pp. 7630–7639. [Google Scholar] [CrossRef]
- Zhang, Y.Y.; Yin, Q.H.; Jing, G.Q.; Yan, L.X.; Wang, X.X. Imaging Measurement Method for Vehicle Chassis under Non-Uniform Conditions. J. Metrol. 2024, 45, 178–185. [Google Scholar]
- Fischler, M.A.; Bolles, R.C. Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography. Commun. ACM 1981, 24, 381–395. [Google Scholar] [CrossRef]
- Zhu, Z.H.; Lu, Z.X.; Guo, Y.; Gao, Z. Image Recognition Method for Workpieces Based on Improved ORB-FLANN Algorithm. Electron. Sci. Technol. 2024, 37, 55. [Google Scholar] [CrossRef]
Models | Features | Matches | Average Feature Matching Time (ms)
---|---|---|---
ORBR (ORB on raw images) | 281 ± 31 | 60 ± 37 | 5.79
ORBE (ORB on enhanced images) | 422 ± 21 | 112 ± 21 | 7.11
ORBE + FB-PT (pre-trained) | 422 ± 21 | 133 ± 18 | 10.06
ORBE + FB-FT (fine-tuned) | 422 ± 21 | 151 ± 20 | 9.97
Models | P | R | mAP@50 | mAP@50:95 | FPS
---|---|---|---|---|---
ORBE + YOLOv5s | 0.82 | 0.81 | 0.837 ± 0.015 | 0.478 ± 0.018 | 121
ORBE + YOLOv6n | 0.81 | 0.77 | 0.866 ± 0.013 | 0.504 ± 0.016 | 130
ORBE + YOLOv7-tiny | 0.77 | 0.72 | 0.834 ± 0.016 | 0.454 ± 0.019 | 103
ORBE + YOLOv8n | 0.91 | 0.84 | 0.893 ± 0.012 | 0.564 ± 0.014 | 119
ORBE + YOLOv9t | 0.81 | 0.77 | 0.825 ± 0.017 | 0.491 ± 0.018 | 125
ORBE + YOLOv10n | 0.91 | 0.85 | 0.908 ± 0.011 | 0.594 ± 0.013 | 133
ORBE + YOLOv11n | 0.93 | 0.89 | 0.916 ± 0.012 | 0.633 ± 0.015 | 142
ORBE + FB-FT + YOLOv11n | 0.98 | 0.99 | 0.989 ± 0.010 | 0.780 ± 0.012 | 140
Models | P | R | mAP@50 | mAP@50:95 | FPS
---|---|---|---|---|---
ORBE + FB-FT + YOLOv8n | 0.95 | 0.96 | 0.953 ± 0.014 | 0.682 ± 0.015 | 116
ORBE + FB-FT + YOLOv10n | 0.93 | 0.92 | 0.926 ± 0.015 | 0.641 ± 0.017 | 129
ORBE + FB-FT + YOLOv11n | 0.98 | 0.99 | 0.989 ± 0.010 | 0.780 ± 0.012 | 140
Models | P | R | mAP@50 | mAP@50:95 | FPS
---|---|---|---|---|---
ORBE + SSD | 0.76 | 0.71 | 0.817 ± 0.018 | 0.412 ± 0.021 | 62
ORBE + Faster R-CNN | 0.79 | 0.76 | 0.848 ± 0.014 | 0.465 ± 0.018 | 54
ORBE + YOLOv11n | 0.86 | 0.83 | 0.853 ± 0.017 | 0.499 ± 0.019 | 138
ORBE + FB-FT + YOLOv11n | 0.98 | 0.99 | 0.977 ± 0.011 | 0.743 ± 0.012 | 137