Evaluation of Camera Recognition Performance under Blockage Using Virtual Test Drive Toolchain
Abstract
1. Introduction
2. Research Equipment and Materials
2.1. Blockage-Based Camera Recognition Performance Evaluation Device
2.2. Interface between the Camera to Be Evaluated and the Evaluation System
3. Research Conditions and Methods
3.1. Scenario Construction
3.2. Blockage Environment Implementation
3.3. Data Characteristics
4. Results and Discussion
4.1. Analysis of the Importance of the Major Variables
4.2. Analysis of the Effects of the Object and Dust Colors on Object Recognition
4.3. Effects of Blockage on Different Objects
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Conflicts of Interest
Maker | Camera | Radar | LiDAR | Total |
---|---|---|---|---|
Tesla (Austin, TX, USA) | 8 | 0 | 0 | 8 |
AutoX (Shenzhen, China) | 8 | 6 | 1 | 15 |
Pony.ai (Fremont, CA, USA) | 7 | 4 | 4 | 15 |
Baidu (Beijing, China) | 6 | 4 | 5 | 15 |
Waymo (San Francisco, CA, USA) | 8 | 6 | 4 | 18 |
Mobileye (New York, NY, USA) | 11 | 6 | 3 | 20 |
Aptiv (Hanover, Ireland) | 2 | 10 | 9 | 21 |
Sony (Tokyo, Japan) | 16 | 5 | 4 | 25 |
NVIDIA (Santa Clara, CA, USA) | 14 | 9 | 3 | 26 |
Cruise (San Francisco, CA, USA) | 14 | 21 | 5 | 40 |
Item | Specification
---|---
Manufacturer | Techways (Yongin-si, Gyeonggi-do, Republic of Korea)
Dimensions (W × D × H) | 1800 mm × 1600 mm × 2000 mm
Driving display | 65″ UHD (3840 × 2160), 120 fps
System display | 24″ internal status monitoring, 24″ device control
Camera | Autonomous a2z
Camera position precision control unit | Pitch, yaw, roll, X, Y, Z
Camera controller | Camera ECU, I/O controller
Power | AC power control, DC 12 V / 24 V sub-power
Object Type | Subtype | Symbol | No. of Objects
---|---|---|---
Pedestrian | Adult | P-1 | 6
 | | P-2 |
 | | P-3 |
 | Child | P-4 |
 | Adult | P-5 |
 | | P-6 |
Vehicle | | V-1 | 4
 | | V-2 |
 | | V-3 |
 | | V-4 |
Cyclist | Motorcycle | C-1 | 3
 | Bicycle | C-2 |
 | Motorcycle | C-3 |
Animal | | A-1 | 1
Traffic signal | | S-1 | 1
Total no. of objects | | | 15
Blockage Color_Object Color | N | Mean | Standard Deviation | Standard Error | 95% CI (Low) | 95% CI (High) | Min | Max
---|---|---|---|---|---|---|---|---
Black_Dark | 855 | 47.3610 | 19.88092 | 0.67991 | 46.0265 | 48.6954 | 1.65 | 72.83 |
Black_Light | 866 | 50.6915 | 19.43979 | 0.66059 | 49.3950 | 51.9881 | 0.93 | 72.83 |
Gray_Dark | 580 | 30.2051 | 18.53614 | 0.76967 | 28.6934 | 31.7168 | 0.93 | 70.02 |
Gray_Light | 593 | 46.6915 | 22.88584 | 0.93981 | 44.8457 | 48.5372 | 0.93 | 72.83 |
Yellow_Dark | 574 | 37.4648 | 18.88618 | 0.78829 | 35.9165 | 39.0131 | 1.33 | 70.02 |
Yellow_Light | 704 | 44.0406 | 24.03756 | 0.90595 | 42.2619 | 45.8193 | 1.05 | 72.83 |
Total | 4172 | 43.6503 | 21.73887 | 0.33656 | 42.9904 | 44.3101 | 0.93 | 72.83 |
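The descriptive statistics in the table above (N, mean, standard deviation, standard error, and a 95% confidence interval per blockage/object color condition) can be reproduced with a short script. This is a minimal sketch; the samples below are hypothetical placeholders, not the study's measurements.

```python
import numpy as np
from scipy import stats

def describe(sample, confidence=0.95):
    """Return N, mean, SD, SE, and a t-based confidence interval,
    matching the columns of the descriptive-statistics table."""
    x = np.asarray(sample, dtype=float)
    n = x.size
    mean = x.mean()
    sd = x.std(ddof=1)                    # sample standard deviation
    se = sd / np.sqrt(n)                  # standard error of the mean
    t_crit = stats.t.ppf(0.5 + confidence / 2, df=n - 1)
    return {
        "N": n, "mean": mean, "sd": sd, "se": se,
        "ci_low": mean - t_crit * se, "ci_high": mean + t_crit * se,
        "min": x.min(), "max": x.max(),
    }

# Hypothetical recognition-distance samples (m), one per condition.
rng = np.random.default_rng(0)
groups = {
    "Black_Dark": rng.normal(47.4, 19.9, 855),
    "Gray_Dark": rng.normal(30.2, 18.5, 580),
}
for name, sample in groups.items():
    d = describe(sample)
    print(f"{name}: N={d['N']} mean={d['mean']:.2f} "
          f"95% CI=({d['ci_low']:.2f}, {d['ci_high']:.2f})")
```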
Method | Statistic | df (Between Groups) | df (Within Groups) | Significance (p)
---|---|---|---|---
Welch | 103.595 | 5 | 1858.401 | <0.001 |
Brown–Forsythe | 87.159 | 5 | 3825.313 | <0.001 |
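Welch's test (like the Brown–Forsythe variant reported above) relaxes the equal-variance assumption of classical one-way ANOVA, which matters here because the condition groups have clearly different spreads. As a sanity check, Welch's F statistic can be computed directly from the standard Welch (1951) form; the sample data below are hypothetical.

```python
import numpy as np
from scipy import stats

def welch_anova(*groups):
    """Welch's one-way ANOVA for groups with unequal variances.
    Returns (F, df_between, df_within, p)."""
    k = len(groups)
    n = np.array([len(g) for g in groups], dtype=float)
    m = np.array([np.mean(g) for g in groups])
    v = np.array([np.var(g, ddof=1) for g in groups])
    w = n / v                              # precision weights n_i / s_i^2
    grand = np.sum(w * m) / np.sum(w)      # weighted grand mean
    a = np.sum(w * (m - grand) ** 2) / (k - 1)
    h = np.sum((1 - w / np.sum(w)) ** 2 / (n - 1))
    b = 1 + 2 * (k - 2) / (k ** 2 - 1) * h
    df2 = (k ** 2 - 1) / (3 * h)
    f_stat = a / b
    p = stats.f.sf(f_stat, k - 1, df2)
    return f_stat, k - 1, df2, p

# Hypothetical samples standing in for three of the color conditions.
rng = np.random.default_rng(1)
g1 = rng.normal(47, 20, 200)
g2 = rng.normal(30, 18, 150)
g3 = rng.normal(44, 24, 180)
f_stat, df1, df2, p = welch_anova(g1, g2, g3)
print(f"Welch F={f_stat:.2f}, df=({df1}, {df2:.1f}), p={p:.4g}")
```

For two groups this reduces to Welch's t-test (F = t²), which makes a convenient cross-check against `scipy.stats.ttest_ind(..., equal_var=False)`.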
(I) Blockage Color_Object Color | (J) Blockage Color_Object Color | Mean Difference (I − J) | Standard Error | Significance (p) | 95% CI (Low) | 95% CI (High)
---|---|---|---|---|---|---
Black_Dark | Black_Light | −3.33056 * | 0.94798 | 0.006 | −6.0351 | −0.6260
Black_Dark | Gray_Dark | 17.15586 * | 1.02697 | <0.001 | 14.2249 | 20.0868 |
Black_Dark | Gray_Light | 0.66948 | 1.15997 | 0.993 | −2.6416 | 3.9806 |
Black_Dark | Yellow_Dark | 9.89612 * | 1.04100 | <0.001 | 6.9250 | 12.8672 |
Black_Dark | Yellow_Light | 3.32037 * | 1.13271 | 0.040 | 0.0879 | 6.5529 |
Black_Light | Gray_Dark | 20.48642 * | 1.01428 | <0.001 | 17.5916 | 23.3812 |
Black_Light | Gray_Light | 4.00004 * | 1.14875 | 0.007 | 0.7208 | 7.2793 |
Black_Light | Yellow_Dark | 13.22668 * | 1.02849 | <0.001 | 10.2912 | 16.1621 |
Black_Light | Yellow_Light | 6.65092 * | 1.12122 | <0.001 | 3.4512 | 9.8507 |
Gray_Dark | Gray_Light | −16.48638 * | 1.21476 | <0.001 | −19.9540 | −13.0187
Gray_Dark | Yellow_Dark | −7.25974 * | 1.10173 | <0.001 | −10.4046 | −4.1148
Gray_Dark | Yellow_Light | −13.83549 * | 1.18876 | <0.001 | −17.2283 | −10.4427
Gray_Light | Yellow_Dark | 9.22664 * | 1.22664 | <0.001 | 5.7251 | 12.7282 |
Gray_Light | Yellow_Light | 2.65088 | 1.30537 | 0.325 | −1.0747 | 6.3765
Yellow_Dark | Yellow_Light | −6.57575 * | 1.20090 | <0.001 | −10.0032 | −3.1483

* The mean difference is significant at the 0.05 level.
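The pairwise-comparison table above follows the SPSS output convention (asterisks marking mean differences significant at the 0.05 level). This section does not restate which post-hoc procedure produced it; Games-Howell is the usual companion to Welch's ANOVA under unequal variances, so the sketch below assumes that procedure, and the samples are hypothetical.

```python
import numpy as np
from scipy import stats

def games_howell(x, y, k, alpha=0.05):
    """Games-Howell comparison of one pair drawn from k groups.
    Returns (mean difference, p-value, CI low, CI high)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    nx, ny = x.size, y.size
    vx, vy = x.var(ddof=1) / nx, y.var(ddof=1) / ny
    diff = x.mean() - y.mean()
    se = np.sqrt((vx + vy) / 2)            # studentized-range scaling
    # Welch-Satterthwaite degrees of freedom for this pair
    df = (vx + vy) ** 2 / (vx ** 2 / (nx - 1) + vy ** 2 / (ny - 1))
    q = abs(diff) / se
    p = stats.studentized_range.sf(q, k, df)
    q_crit = stats.studentized_range.ppf(1 - alpha, k, df)
    return diff, p, diff - q_crit * se, diff + q_crit * se

# Hypothetical samples standing in for two of the six conditions.
rng = np.random.default_rng(3)
black_dark = rng.normal(47.4, 19.9, 200)
gray_dark = rng.normal(30.2, 18.5, 150)
diff, p, lo, hi = games_howell(black_dark, gray_dark, k=6)
print(f"diff={diff:.2f}, p={p:.4g}, 95% CI=({lo:.2f}, {hi:.2f})")
```

Passing `k=6` keeps the studentized-range critical value adjusted for all six condition groups even though only one pair is compared here.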
© 2023 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license (https://creativecommons.org/licenses/by/4.0/).
Share and Cite
Son, S.; Lee, W.; Jung, H.; Lee, J.; Kim, C.; Lee, H.; Park, H.; Lee, H.; Jang, J.; Cho, S.; et al. Evaluation of Camera Recognition Performance under Blockage Using Virtual Test Drive Toolchain. Sensors 2023, 23, 8027. https://doi.org/10.3390/s23198027