Estimation of the Closest In-Path Vehicle by Low-Channel LiDAR and Camera Sensor Fusion for Autonomous Vehicles
Abstract
1. Introduction
- We proposed an empirical projection method that maps the LiDAR points onto the 2D image and used the intersection over union (IoU) to fuse the two sensors’ data. The proposed method performed well even with a low-channel LiDAR (see the projection sketch after this list).
- We proposed a bird’s-eye-view (BEV) estimation and closest in-path vehicle (CIPV) calculation method for objects (e.g., cars) using vision and low-channel LiDAR data.
- We validated our method in a real-world situation through the AEB test and showed that the CIPV obtained by the proposed method improved the performance of AEB.
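For context, the sketch below shows the standard pinhole projection that a LiDAR-to-image mapping of this kind builds on; the extrinsic transform `T_cam_lidar`, the intrinsic matrix `K`, and the near-plane cutoff are illustrative assumptions, not the empirical calibration used in the paper.

```python
import numpy as np

def project_lidar_to_image(points_lidar, T_cam_lidar, K):
    """Project 3D LiDAR points (N x 3, LiDAR frame) onto the image plane.

    T_cam_lidar : 4 x 4 extrinsic transform from the LiDAR frame to the camera frame.
    K           : 3 x 3 camera intrinsic matrix.
    Returns pixel coordinates (M x 2) and the indices of the points in front of the camera.
    """
    # Homogeneous coordinates, then transform into the camera frame.
    pts_h = np.hstack([points_lidar, np.ones((points_lidar.shape[0], 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Keep only points in front of the image plane (0.1 m cutoff is arbitrary).
    in_front = pts_cam[:, 2] > 0.1
    pts_cam = pts_cam[in_front]

    # Pinhole projection: u = fx * X / Z + cx, v = fy * Y / Z + cy.
    uv = (K @ pts_cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, np.flatnonzero(in_front)
```

Projected points that land inside a detected 2D box can then be associated with that box, which is what makes an IoU-based comparison of LiDAR and vision detections possible.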
2. Materials and Methods
2.1. Test Environment
2.1.1. Sensors’ Description
2.1.2. Proving Ground for the Experiments
2.2. LiDAR Object Tracking
2.2.1. Point Cloud Segmentation and Tracking
2.2.2. Distance Accuracy from Tracked Data
2.3. Vision Object Tracking
2.3.1. Object Detection
2.3.2. Distance Estimation with Regression
2.3.3. Object Tracking
Algorithm 1 Simple tracking algorithm.
Input: list of objects (O). Output: list of objects with distance information (O).
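Algorithm 1 is given here only by its interface, so the following is a minimal sketch of one common way to implement such a tracker: greedy IoU association of the current detections with the previous frame’s objects, so that track IDs (and with them, distance history) carry over. The field names, the threshold, and the greedy matching strategy are assumptions for illustration.

```python
def iou(a, b):
    """Intersection over union of two boxes given as [x1, y1, x2, y2]."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def track(prev_objects, detections, next_id, iou_threshold=0.3):
    """Greedily match current detections to the previous frame's objects by IoU.

    prev_objects / detections: lists of dicts with a 'bbox_2d' field.
    Matched detections inherit the previous object's 'track_id';
    unmatched detections start a new track.
    Returns the updated detection list and the next free track ID.
    """
    used = set()
    for det in detections:
        best, best_iou = None, iou_threshold
        for i, prev in enumerate(prev_objects):
            if i in used:
                continue
            score = iou(det["bbox_2d"], prev["bbox_2d"])
            if score > best_iou:
                best, best_iou = i, score
        if best is not None:
            used.add(best)
            det["track_id"] = prev_objects[best]["track_id"]
        else:
            det["track_id"] = next_id
            next_id += 1
    return detections, next_id
```

Because the track ID persists across frames, per-object distance estimates can be filtered or smoothed over time rather than taken from a single detection.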
2.4. Object 3D Coordinate Estimation
2.4.1. The Alignment Method of the Camera and LiDAR Point Cloud
2.4.2. Transforming the Image Pixel to the BEV
2.5. Fusion of LiDAR and Vision
2.5.1. Fusion of the Camera and LiDAR Tracking Data with the IoU
Algorithm 2 Fusion algorithm.
Input: object data tracked by LiDAR (L) and object data tracked by vision (V). Output: FusionData.
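A minimal sketch of how Algorithm 2 can be realized, assuming every tracked object carries a 2D bounding box in image coordinates (LiDAR boxes obtained by projecting the cluster into the image): LiDAR and vision objects are matched greedily by IoU; matched pairs become “VL” entries that keep the LiDAR geometry and the vision class, and unmatched objects pass through as “L” or “V”. Field names, the threshold, and the greedy strategy are assumptions for illustration.

```python
def iou(a, b):
    """IoU of two [x1, y1, x2, y2] boxes (same helper as in the tracking sketch)."""
    inter = (max(0.0, min(a[2], b[2]) - max(a[0], b[0])) *
             max(0.0, min(a[3], b[3]) - max(a[1], b[1])))
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def fuse(lidar_objs, vision_objs, iou_threshold=0.5):
    """Greedy IoU matching of LiDAR-tracked and vision-tracked objects."""
    fused, used = [], set()
    for lo in lidar_objs:
        best, best_iou = None, iou_threshold
        for j, vo in enumerate(vision_objs):
            if j in used:
                continue
            score = iou(lo["bbox_2d"], vo["bbox_2d"])
            if score > best_iou:
                best, best_iou = j, score
        if best is not None:
            used.add(best)
            merged = dict(lo)                                     # keep LiDAR BEV, distance, velocity, ...
            merged["type_id"] = vision_objs[best].get("type_id")  # keep the vision class
            merged["source"] = "VL"
            fused.append(merged)
        else:
            fused.append({**lo, "source": "L"})
    fused += [{**vo, "source": "V"}
              for j, vo in enumerate(vision_objs) if j not in used]
    return fused
```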
2.5.2. Result of Fusion Data
2.6. ACC
2.6.1. Implementation of ACC
2.6.2. AEB Test
- Response to a stationary vehicle in front: car-to-car rear stationary (CCRs)
- Response to a slower-moving vehicle in front: car-to-car rear moving (CCRm)
- Response to a decelerating vehicle in front: car-to-car rear braking (CCRb)
Algorithm 3 ACC algorithm.
Input: fusion data (O) and the vehicle’s current velocity. Output: the vehicle’s desired velocity.
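Algorithm 3 is likewise given only by its interface, so the following is a minimal sketch of a constant time-gap ACC policy: pick the CIPV from the fused object list, then adjust the desired velocity with proportional feedback on the gap error and the relative velocity. The gains, time gap, standstill gap, and field names are illustrative assumptions, not the controller actually used in the paper.

```python
def select_cipv(fusion_objects):
    """Return the closest in-path vehicle from the fused object list, if any."""
    in_path = [o for o in fusion_objects if o.get("in_path")]
    return min(in_path, key=lambda o: o["distance"]) if in_path else None

def acc_desired_velocity(fusion_objects, ego_velocity, set_speed=15.0,
                         time_gap=1.8, standstill_gap=5.0, k_gap=0.5, k_vel=1.0):
    """Constant time-gap spacing policy with proportional feedback (SI units)."""
    cipv = select_cipv(fusion_objects)
    if cipv is None:                       # no in-path vehicle: cruise at the set speed
        return set_speed
    desired_gap = standstill_gap + time_gap * ego_velocity
    gap_error = cipv["distance"] - desired_gap          # positive when following too far back
    lead_velocity = cipv.get("velocity")
    rel_velocity = 0.0 if lead_velocity is None else lead_velocity - ego_velocity
    desired = ego_velocity + k_gap * gap_error + k_vel * rel_velocity
    return max(0.0, min(desired, set_speed))            # never exceed the driver-set speed
```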
3. Experimental Results
3.1. Qualitative Evaluation
3.2. CIPV
3.2.1. Scenario
3.2.2. The CIPV Result of the Scenario
3.3. ACC
3.3.1. Scenario
3.3.2. The Result of the Scenario
- CCRs
- CCRm
- CCRb
4. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Conflicts of Interest
Appendix A
References
- Meyer, G.P.; Charland, J.; Hegde, D.; Laddha, A.; Vallespi-Gonzalez, C. Sensor fusion for joint 3D object detection and semantic segmentation. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
- Huang, L.; Barth, M. A novel multi-planar LIDAR and computer vision calibration procedure using 2D patterns for automated navigation. In Proceedings of the 2009 IEEE Intelligent Vehicles Symposium, Xi’an, China, 3–5 June 2009; pp. 117–122.
- Zhou, L.; Deng, Z. Extrinsic calibration of a camera and a lidar based on decoupling the rotation from the translation. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 642–648.
- García-Moreno, A.; González-Barbosa, J.; Ornelas-Rodriguez, F.J.; Hurtado-Ramos, J.; Primo-Fuentes, M.N. LIDAR and panoramic camera extrinsic calibration approach using a pattern plane. In Proceedings of the Mexican Conference on Pattern Recognition, Queretaro, Mexico, 26–29 June 2013.
- Park, Y.; Yun, S.M.; Won, C.; Cho, K.; Um, K.; Sim, S. Calibration between color camera and 3D LIDAR instruments with a polygonal planar board. Sensors 2014, 14, 5333–5353.
- Geiger, A.; Moosmann, F.; Car, Ö.; Schuster, B. Automatic camera and range sensor calibration using a single shot. In Proceedings of the 2012 IEEE International Conference on Robotics and Automation, Saint Paul, MN, USA, 14–18 May 2012; pp. 3936–3943.
- Vahidi, A.; Eskandarian, A. Research advances in intelligent collision avoidance and adaptive cruise control. IEEE Trans. Intell. Transp. Syst. 2003, 4, 143–153.
- Ang, K.H.; Chong, G.; Li, Y. PID control system analysis, design, and technology. IEEE Trans. Control Syst. Technol. 2005, 13, 559–576.
- Qin, S.J.; Badgwell, T.A. A survey of industrial model predictive control technology. Control Eng. Pract. 2003, 11, 733–764.
- Eker, I.; Torun, Y. Fuzzy logic control to be conventional method. Energy Convers. Manag. 2006, 47, 377–394.
- He, Y.; Ciuffo, B.; Zhou, Q.; Makridis, M.; Mattas, K.; Li, J.; Li, Z.; Yan, F.; Xu, H. Adaptive cruise control strategies implemented on experimental vehicles: A review. IFAC-PapersOnLine 2019, 52, 21–27.
- ISO. Intelligent Transport Systems—Adaptive Cruise Control Systems—Performance Requirements and Test Procedures; ISO: Geneva, Switzerland, 2018.
- Park, H.S.; Kim, D.J.; Kang, C.M.; Kee, S.C.; Chung, C.C. Object detection in adaptive cruise control using multi-class support vector machine. In Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), Yokohama, Japan, 16–19 October 2017; pp. 1–6.
- LidarPerception. Available online: https://github.com/LidarPerception (accessed on 5 December 2020).
- Zermas, D.; Izzat, I.; Papanikolopoulos, N. Fast segmentation of 3D point clouds: A paradigm on LiDAR data for autonomous vehicle applications. In Proceedings of the 2017 IEEE International Conference on Robotics and Automation (ICRA), Singapore, 29 May–3 June 2017; pp. 5067–5073.
- Euclidean Cluster Extraction. Available online: https://github.com/PointCloudLibrary/pcl/blob/pcl-1.11.1/doc/tutorials/content/cluster_extraction.rst#id1 (accessed on 5 December 2020).
- Rusu, R.B. Semantic 3D object maps for everyday manipulation in human living environments. KI-Künstliche Intell. 2010, 24, 345–348.
- Himmelsbach, M.; Wuensche, H.J. Tracking and classification of arbitrary objects with bottom-up/top-down detection. In Proceedings of the 2012 IEEE Intelligent Vehicles Symposium, Madrid, Spain, 3–7 June 2012; pp. 577–582.
- Redmon, J.; Farhadi, A. YOLOv3: An incremental improvement. arXiv 2018, arXiv:1804.02767.
- Lee, M.H.; Park, H.G.; Lee, S.H.; Yoon, K.S.; Lee, K.S. An adaptive cruise control system for autonomous vehicles. Int. J. Precis. Eng. Manuf. 2013, 14, 373–380.
- Magdici, S.; Althoff, M. Adaptive cruise control with safety guarantees for autonomous vehicles. IFAC-PapersOnLine 2017, 50, 5774–5781.
- Euro NCAP. Euro NCAP AEB Car-to-Car Test Protocol v3.0.2. Available online: https://www.euroncap.com/en/for-engineers/protocols/safety-assist/ (accessed on 30 April 2021).
- Hou, Y.; Ma, Z.; Liu, C.; Loy, C.C. Learning lightweight lane detection CNNs by self attention distillation. In Proceedings of the IEEE/CVF International Conference on Computer Vision, Seoul, Korea, 27 October–2 November 2019; pp. 1013–1021.
| Sensor | Product Name | Specification |
|---|---|---|
| LiDAR | Velodyne Puck (previously VLP-16) | 16 channels, measurement range up to 100 m, 10 fps |
| Camera | Logitech StreamCam | FoV 78°, 720p resolution, 60 fps |
| GPS | RTK GNSS GPS (MRP-2000) | resolution 0.010 m, 10 fps |
| Data Type | Only Vision | LiDAR + Vision | Only LiDAR |
|---|---|---|---|
| Name | “V” | “VL” | “L” |
| 2D bounding box | [x1, y1, x2, y2] | Choose the LiDAR data | [x1, y1, x2, y2] |
| Bird’s-eye view | [x1, y1, x2, y2] | Choose the LiDAR data | [x1, y1, x2, y2, x3, y3, x4, y4] |
| Object’s closest point | [x, y, z] | Choose the LiDAR data | [x, y, z] |
| Distance | meters | Choose the LiDAR data | meters |
| Velocity | - | Choose the LiDAR data | meters per second |
| In-path | 0 or 1 | Choose the LiDAR data | 0 or 1 |
| Moving state | - | Choose the LiDAR data | 0 or 1 |
| Type ID | Result of YOLOv3 | Choose the vision data | - |
| Time to collision | - | Choose the LiDAR data | seconds |
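For clarity, the fusion data in the table above can be mirrored by a small container type; the sketch below is one possible layout with hypothetical field names, not the data structure actually used in the implementation.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FusionObject:
    """One entry of the fused object list; field names are illustrative."""
    source: str                     # "V" (vision only), "VL" (fused), "L" (LiDAR only)
    bbox_2d: List[float]            # [x1, y1, x2, y2] in image coordinates
    bev: List[float]                # 2 corners from vision, 4 corners (8 values) from LiDAR
    closest_point: List[float]      # [x, y, z] of the object's nearest point
    distance: float                 # meters
    in_path: bool                   # lies in the ego path
    velocity: Optional[float] = None            # m/s, LiDAR tracking only
    moving: Optional[bool] = None                # moving state, LiDAR tracking only
    type_id: Optional[int] = None                # YOLOv3 class, vision only
    time_to_collision: Optional[float] = None    # seconds, LiDAR tracking only
```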
| Type of Path | Type of Fusion Data | Distance Error Min ↓ | Distance Error Max ↓ | Distance Error MAE ↓ | Lateral Error Min ↓ | Lateral Error Max ↓ | Lateral Error MAE ↓ | Longitudinal Error Min ↓ | Longitudinal Error Max ↓ | Longitudinal Error MAE ↓ |
|---|---|---|---|---|---|---|---|---|---|---|
| Scenario (a) | vision + LiDAR | 0.0010 | 0.4999 | 0.2767 | 0.0286 | 1.3587 | 0.3764 | 0.0025 | 1.3587 | 0.1625 |
| Scenario (a) | only vision | 0.0077 | 0.4990 | 0.2464 | 0.0704 | 2.9954 | 0.4567 | 0.0024 | 2.9954 | 0.1902 |
| Scenario (b) | vision + LiDAR | 0.0031 | 0.0957 | 0.0498 | 0.6957 | 1.1594 | 0.9437 | 0.0001 | 1.1594 | 0.0697 |
| Scenario (b) | only vision | 0.0045 | 0.4957 | 0.2424 | 0.1050 | 3.4802 | 0.8706 | 0.0011 | 3.4802 | 0.2029 |
| Scenario (c) | vision + LiDAR | 0.0033 | 0.0967 | 0.0532 | 0.1697 | 6.4679 | 3.0378 | 0.0309 | 6.4649 | 0.1857 |
| Scenario (c) | only vision | 0.0041 | 0.4821 | 0.2432 | 0.0572 | 5.4662 | 1.4343 | 0.0034 | 5.4662 | 0.2386 |