Editorial

Machine Learning and Embedded Computing in Advanced Driver Assistance Systems (ADAS)

John E. Ball and Bo Tang
Electrical and Computer Engineering, Mississippi State University, 406 Hardy Road, Mississippi State, MS 39762, USA
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Electronics 2019, 8(7), 748; https://doi.org/10.3390/electronics8070748
Submission received: 25 June 2019 / Accepted: 26 June 2019 / Published: 2 July 2019

1. Introduction

Advanced driver assistance systems (ADAS) are rapidly being developed for autonomous vehicles. Two driving factors enabling these efforts are machine learning and embedded computing. Advanced machine learning algorithms allow an ADAS to detect objects, obstacles, other vehicles, pedestrians, and lanes, and also enable the estimation of object trajectories and intents (e.g., predicting that a nearby car will change lanes). The Special Issue [1] contains 18 high-quality papers covering a diverse set of focus areas in ADAS:
  • Communications: [2,3];
  • Object detection and tracking: [4,5,6,7,8,9,10];
  • Sensor modeling and simulation: [11,12];
  • Decision-making: [13,14];
  • New datasets: [9,10,15,16];
  • Driver monitoring: [17];
  • New applied hardware for ADAS: [9,18,19].
Some papers fit into multiple categories (e.g., [9,10]). It is also worth noting that three papers were selected as feature papers for the Special Issue:
  • “Performance Comparison of Geobroadcast Strategies for Winding Roads” by Talavera et al. [2];
  • “LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System” by Wei et al. [4]; and
  • “A New Dataset and Performance Evaluation of a Region-Based CNN for Urban Object Detection” by Dominguez-Sanchez et al. [15].

2. The Present Special Issue

2.1. Communications

Vehicle-to-everything (V2X) communication, whether with another vehicle (X = V) or with infrastructure (X = I), is a very important part of ADAS because it can improve vehicle safety and alert the autonomous system to potentially dangerous situations.
A V2X communication module was implemented and validated on a sharp curve of a winding road where poor visibility creates a safety risk [2]. A combination of cooperative systems is proposed to supply the vehicle with a wider range of information than on-board sensors currently provide, helping support systems transition from Society of Automotive Engineers (SAE) automation levels 2 and 3 to level 4 [3].

2.2. Object Detection and Tracking

Object tracking is a critical component in ADAS applications. Objects must be detected and tracked for obstacle avoidance, collision detection, and path planning, among other tasks.
A kernel-based multiple instance learning (MIL) tracker was developed that is computationally fast and robust to partial occlusions, pose variations, and illumination changes [5]. To help combat partial occlusions, a tracking-by-detection framework called the discriminative correlation filter bank (DCFB) uses multiple discriminative correlation filters, corresponding to different target sub-regions and the global region patch, and combines and optimizes their correlation outputs in the frequency domain; it is shown to produce good results compared with state-of-the-art trackers [6].
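For readers new to correlation-filter tracking, the sketch below trains a minimal single-channel filter in the frequency domain, in the spirit of the MOSSE-style filters that underlie discriminative correlation filter trackers. It is a generic illustration, not the DCFB formulation of [6]; the patch, Gaussian response, and regularization value are illustrative choices.

```python
import numpy as np

def train_correlation_filter(patch, target_response, lam=1e-2):
    """Solve for a single-channel correlation filter in the frequency domain.

    patch           : 2D grayscale template of the tracked region
    target_response : 2D array (same shape), desired correlation output,
                      typically a Gaussian peaked on the target centre
    lam             : regularization term to avoid division by zero
    """
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    # Closed-form ridge-regression solution: H* = G . conj(F) / (F . conj(F) + lam)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def correlate(H_conj, new_patch):
    """Apply the learned filter to a new patch; the response peak locates the target."""
    response = np.real(np.fft.ifft2(H_conj * np.fft.fft2(new_patch)))
    return np.unravel_index(np.argmax(response), response.shape)

# Toy usage: a Gaussian desired response centred on a random 64x64 patch
h, w = 64, 64
yy, xx = np.mgrid[0:h, 0:w]
g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * 3.0 ** 2))
patch = np.random.rand(h, w)
H = train_correlation_filter(patch, g)
print(correlate(H, patch))  # peak near (32, 32)
```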
Object detection is also a critical component. A LiDAR's 3D point cloud was categorized into drivable and non-drivable regions, and an expectation-maximization method was utilized to detect parallel lines and update the 3D line parameters in real time, which allowed the generation of accurate lane-level maps of two complex urban routes [7]. To improve multi-object detection, a detection framework denoted the adaptive perceive single-shot multi-box detector (AP-SSD) is proposed, in which custom multi-shape Gabor filters improve low-level feature extraction, a bottleneck long short-term memory (LSTM) network refines and propagates the feature mapping between frames, and a dynamic region amplification network framework works with these components to achieve better detection results when small objects, multiple objects, cluttered backgrounds, and large-area occlusions are present in the scene [8]. To improve the quality and lower the cost of blind spot detection, a camera-based deep learning method using a lightweight and computationally efficient neural network is proposed; camera-based methods are much more cost-effective than a dedicated radar for this application. In addition, a dataset with more than 10,000 labeled images was generated using a blind spot view camera mounted on a test vehicle [9].
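Since [8] builds its low-level stage on custom Gabor filters, a brief sketch of generating a generic Gabor kernel may help readers unfamiliar with them. The parameterization below is a standard textbook form, not the multi-shape filters proposed in the paper.

```python
import numpy as np

def gabor_kernel(size=21, sigma=4.0, theta=0.0, wavelength=10.0, gamma=0.5):
    """Generate a real-valued Gabor kernel.

    size       : kernel width/height in pixels (odd)
    sigma      : standard deviation of the Gaussian envelope
    theta      : orientation of the stripes in radians
    wavelength : wavelength of the sinusoidal carrier
    gamma      : spatial aspect ratio (ellipticity of the envelope)
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation
    x_t = x * np.cos(theta) + y * np.sin(theta)
    y_t = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(x_t ** 2 + (gamma * y_t) ** 2) / (2 * sigma ** 2))
    carrier = np.cos(2 * np.pi * x_t / wavelength)
    return envelope * carrier

# A small bank of orientations, e.g., for edge/texture responses on a grayscale image
bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
```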
Sensor fusion is a third important area in ADAS because individual sensors have different strengths, and most experts agree that fusion is required to achieve the best performance. Camera and LiDAR fusion was utilized to make object detection more robust in [4].

2.3. Sensor Modeling and Simulation

Many modern machine learning algorithms require significant amounts of training data, which may not be available or may be too expensive and time-consuming to collect.
To aid in LiDAR-based algorithm development, a real-time physics-based LiDAR simulator for densely vegetated environments, including an improved statistical model for the range distribution of LiDAR returns in grass, was developed and validated [11]. A mathematical model was developed for the performance degradation of LiDAR as a function of rain rate; this model was used to quantitatively evaluate how rain influences a LiDAR-based obstacle-detection system [12].
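To make the idea of rain-dependent LiDAR degradation concrete, the toy sketch below applies Beer-Lambert attenuation with an extinction coefficient that grows with rain rate and flags returns that fall below a detection threshold. The functional form and coefficients are hypothetical illustrations and are not the model developed in [12].

```python
import numpy as np

def attenuated_intensity(i0, range_m, rain_rate_mmh, a=0.01, b=0.6):
    """Hypothetical rain attenuation of a LiDAR return (not the model of [12]).

    i0            : clear-weather return intensity
    range_m       : one-way range to the target in metres
    rain_rate_mmh : rain rate in mm/h
    a, b          : illustrative coefficients of a power-law extinction term
    """
    alpha = a * rain_rate_mmh ** b           # assumed extinction coefficient, 1/km
    km_two_way = 2.0 * range_m / 1000.0      # the pulse travels out and back
    return i0 * np.exp(-alpha * km_two_way)  # Beer-Lambert attenuation

# A return fails detection once attenuation pushes it below the receiver threshold
detected = attenuated_intensity(1.0, range_m=80.0, rain_rate_mmh=25.0) > 0.5
```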

2.4. Decision-Making

Decision systems in ADAS are complicated. They must combine information derived from sensor data, proprioceptive data from the vehicle, and data from other sources (e.g., V2X communications).
To investigate crash severity prediction in emergency decisions, several support vector machine (SVM)-based decision models were analyzed to estimate crash injury severity for braking, turning, and combined braking-plus-turning actions [14]. Ethical and legal issues in decision systems were also analyzed: a Takagi-Sugeno (T-S) fuzzy neural network was developed that incorporates ethical and legal factors into the driving decision-making model for emergency situations evoked by red-light-running behaviors [13].
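For readers who want a starting point, the following is a minimal SVM classification sketch in the spirit of [14]. The features (speed, braking deceleration, steering angle), the synthetic labels, and the toy severity rule are hypothetical placeholders, not the paper's crash data or model.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical features: [initial speed (km/h), braking deceleration (m/s^2),
# steering angle (deg)]; labels: 0 = minor, 1 = moderate, 2 = severe injury.
rng = np.random.default_rng(0)
X = rng.uniform([20, 0, 0], [120, 9, 30], size=(200, 3))
y = np.digitize(X[:, 0] - 5 * X[:, 1], bins=[30, 70])  # toy severity rule

model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
model.fit(X, y)
print(model.predict([[90.0, 6.0, 10.0]]))  # predicted severity class for one scenario
```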

2.5. New Datasets

A critical aspect of developing and testing deep learning systems is the availability of high-quality datasets for algorithm training and testing.
A new dataset that includes the essential urban objects was collected, including weakly annotated data for training and testing weakly supervised learning techniques. Furthermore, a faster region-based convolutional neural network (Faster R-CNN) was evaluated on this dataset, and a new R-CNN-plus-tracking technique was developed and evaluated to accelerate real-time urban object detection [15]. A blind spot detection dataset is introduced in [9]; refer to Section 2.2 for more information, as this paper belongs in both categories. A new benchmark dataset named Pano-RSOD was created for 360° panoramic road scene object detection. The dataset contains vehicles, pedestrians, traffic signs, guiding arrows, small objects, and imagery from diverse road scenes. Furthermore, the usefulness of the dataset was demonstrated by training state-of-the-art deep learning algorithms for object detection in panoramic imagery [16].

2.6. Driver Monitoring

As ADAS levels are not yet at full autonomy (level 5), driver monitoring is critical to safety.
To investigate robust and distinguishable patterns of heart rate variability, signals from wearable electrocardiogram (ECG) or photoplethysmogram (PPG) sensors were used to generate recurrence plots, which were then analyzed by a convolutional neural network (CNN) to detect drowsy drivers. The proposed method showed significant improvement over conventional models [17].
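The recurrence-plot representation used in [17] is straightforward to reproduce in outline: time-delay embed a 1D physiological series and threshold the pairwise distances between embedded states. The sketch below is a generic construction with illustrative parameters, not the authors' exact preprocessing.

```python
import numpy as np

def recurrence_plot(signal, eps=0.1, dim=3, delay=2):
    """Binary recurrence plot of a 1D signal after time-delay embedding.

    signal : 1D array (e.g., an R-R interval or PPG-derived heart-rate series)
    eps    : distance threshold that defines a "recurrence"
    dim    : embedding dimension
    delay  : embedding delay in samples
    """
    n = len(signal) - (dim - 1) * delay
    # Time-delay embedding: each row is a short delayed window of the signal
    embedded = np.stack([signal[i:i + n] for i in range(0, dim * delay, delay)], axis=1)
    # Pairwise Euclidean distances between embedded states
    dists = np.linalg.norm(embedded[:, None, :] - embedded[None, :, :], axis=-1)
    return (dists <= eps).astype(np.uint8)

# Toy usage on a synthetic quasi-periodic "heart-rate" series
t = np.linspace(0, 10, 400)
hr = 0.8 + 0.05 * np.sin(2 * np.pi * 1.1 * t)
rp = recurrence_plot(hr, eps=0.02)  # the resulting image can be fed to a CNN
```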

2.7. New Applied Hardware for ADAS

An algorithm based on the mathematical p-norm (specifically, the infinity norm) was developed that improves both the traction power and the trajectory smoothness of joystick-controlled two-wheeled vehicles, such as tanks and wheelchairs [18].
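As a rough illustration of norm-based joystick control, the sketch below maps joystick deflection to left/right wheel commands for a differential-drive vehicle and rescales the pair by its p-norm (infinity norm by default) so neither command saturates. It is a generic sketch, not the exact algorithm of [18].

```python
def joystick_to_wheels(x, y, p=float("inf")):
    """Map joystick deflection (x: turn, y: forward) to left/right wheel commands.

    A simple differential-drive mixer; the wheel pair is rescaled by its p-norm
    (infinity norm by default) so neither command exceeds the actuator limit of 1.
    Generic sketch under assumed conventions, not the algorithm of [18].
    """
    left, right = y + x, y - x
    if p == float("inf"):
        norm = max(abs(left), abs(right))
    else:
        norm = (abs(left) ** p + abs(right) ** p) ** (1.0 / p)
    if norm > 1.0:  # only rescale when a command would saturate
        left, right = left / norm, right / norm
    return left, right

# Full forward stick with a slight right turn stays within actuator limits
print(joystick_to_wheels(0.3, 1.0))  # -> approximately (1.0, 0.538)
```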
To address the challenging problem of torque vectoring on multi-electric-motor vehicles for enhanced vehicle dynamics, a neural network is proposed that performs batch predictions for real-time optimization on a parallel embedded platform with a GPU and an FPGA. This work will help others conducting research in this technical area [19].

3. Concluding Remarks

The Guest Editors were pleased with the quality and breadth of the accepted papers. We were also delighted to have three papers with high-quality and very useful new datasets [9,15,16]. Looking to the future, we believe the research presented in this Special Issue will promote further study in the area of ADAS.

Author Contributions

The authors worked together and contributed equally during the editorial process of this Special Issue.

Funding

This research received no external funding.

Acknowledgments

The Guest Editors thank all of the authors for their excellent contributions to this Special Issue. We also thank the reviewers for their dedication and suggestions to improve each of the papers. We finally thank the Editorial Board of MDPI’s Electronics for allowing us to be Guest Editors for this Special Issue, and to the Electronics Editorial Office for their guidance, dedication, and support.

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Electronics Special Issue: Machine Learning and Embedded Computing in Advanced Driver Assistance Systems (ADAS), 2019. Available online: https://www.mdpi.com/journal/electronics/special_issues/ML_EmbeddedComputing_ADAS (accessed on 25 June 2019).
  2. Talavera, E.; Anaya, J.J.; Gómez, O.; Jiménez, F.; Naranjo, J.E. Performance Comparison of Geobroadcast Strategies for Winding Roads. Electronics 2018, 7, 32. [Google Scholar] [CrossRef]
  3. Jiménez, F.; Naranjo, J.E.; Sánchez, S.; Serradilla, F.; Pérez, E.; Hernández, M.J.; Ruiz, T. Communications and Driver Monitoring Aids for Fostering SAE Level-4 Road Vehicles Automation. Electronics 2018, 7, 228. [Google Scholar] [CrossRef]
  4. Wei, P.; Cagle, L.; Reza, T.; Ball, J.; Gafford, J. LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System. Electronics 2018, 7, 84. [Google Scholar] [CrossRef]
  5. Han, T.; Wang, L.; Wen, B. The Kernel Based Multiple Instances Learning Algorithm for Object Tracking. Electronics 2018, 7, 97. [Google Scholar] [CrossRef]
  6. Wei, J.; Liu, F. Coupled-Region Visual Tracking Formulation Based on a Discriminative Correlation Filter Bank. Electronics 2018, 7, 244. [Google Scholar] [CrossRef]
  7. Jung, J.; Bae, S.H. Real-Time Road Lane Detection in Urban Areas Using LiDAR Data. Electronics 2018, 7, 276. [Google Scholar] [CrossRef]
  8. Wang, X.; Hua, X.; Xiao, F.; Li, Y.; Hu, X.; Sun, P. Multi-Object Detection in Traffic Scenes Based on Improved SSD. Electronics 2018, 7, 302. [Google Scholar] [CrossRef]
  9. Zhao, Y.; Bai, L.; Lyu, Y.; Huang, X. Camera-Based Blind Spot Detection with a General Purpose Lightweight Neural Network. Electronics 2019, 8, 233. [Google Scholar] [CrossRef]
  10. Xu, Y.; Wang, H.; Liu, X.; He, H.R.; Gu, Q.; Sun, W. Learning to See the Hidden Part of the Vehicle in the Autopilot Scene. Electronics 2019, 8, 331. [Google Scholar] [CrossRef]
  11. Goodin, C.; Doude, M.; Hudson, C.R.; Carruth, D.W. Enabling Off-Road Autonomous Navigation-Simulation of LIDAR in Dense Vegetation. Electronics 2018, 7, 154. [Google Scholar] [CrossRef]
  12. Goodin, C.; Carruth, D.; Doude, M.; Hudson, C. Predicting the Influence of Rain on LIDAR in ADAS. Electronics 2019, 8, 89. [Google Scholar] [CrossRef]
  13. Li, S.; Zhang, J.; Wang, S.; Li, P.; Liao, Y. Ethical and Legal Dilemma of Autonomous Vehicles: Study on Driving Decision-Making Model under the Emergency Situations of Red Light-Running Behaviors. Electronics 2018, 7, 264. [Google Scholar] [CrossRef]
  14. Liao, Y.; Zhang, J.; Wang, S.; Li, S.; Han, J. Study on Crash Injury Severity Prediction of Autonomous Vehicles for Different Emergency Decisions Based on Support Vector Machine Model. Electronics 2018, 7, 381. [Google Scholar] [CrossRef]
  15. Dominguez-Sanchez, A.; Cazorla, M.; Orts-Escolano, S. A New Dataset and Performance Evaluation of a Region-Based CNN for Urban Object Detection. Electronics 2018, 7, 301. [Google Scholar] [CrossRef]
  16. Li, Y.; Tong, G.; Gao, H.; Wang, Y.; Zhang, L.; Chen, H. Pano-RSOD: A Dataset and Benchmark for Panoramic Road Scene Object Detection. Electronics 2019, 8, 329. [Google Scholar] [CrossRef]
  17. Lee, H.; Lee, J.; Shin, M. Using Wearable ECG/PPG Sensors for Driver Drowsiness Detection Based on Distinguishable Pattern of Recurrence Plots. Electronics 2019, 8, 192. [Google Scholar] [CrossRef]
  18. Said, A.; Davizón, Y.; Soto, R.; Félix-Herrán, C.; Hernández-Santos, C.; Espino-Román, P. An Infinite-Norm Algorithm for Joystick Kinematic Control of Two-Wheeled Vehicles. Electronics 2018, 7, 164. [Google Scholar] [CrossRef]
  19. Dendaluce Jahnke, M.; Cosco, F.; Novickis, R.; Pérez Rastelli, J.; Gomez-Garay, V. Efficient Neural Network Implementations on Parallel Embedded Platforms Applied to Real-Time Torque-Vectoring Optimization Using Predictions for Multi-Motor Electric Vehicles. Electronics 2019, 8, 250. [Google Scholar] [CrossRef]
