Machine Learning and Embedded Computing in Advanced Driver Assistance Systems (ADAS)



Introduction
Advanced driver assistance systems (ADAS) are rapidly being developed for autonomous vehicles. Two driving factors enabling these efforts are machine learning and embedded computing. Advanced machine learning algorithms allow an ADAS to detect objects, obstacles, other vehicles, pedestrians, and lanes, and also enable the estimation of object trajectories and intents (e.g., this car will change lanes ahead). The Special Issue [1] contains 18 high-quality papers covering a diversity of focus areas in ADAS, organized into the categories described in the sections below.
Some papers fit into multiple categories (e.g., [9,10]). It is also worth noting that three papers were selected as feature papers for the Special Issue.

Communications
Vehicle-to-everything (V2X) communication, whether vehicle-to-vehicle (V2V) or vehicle-to-infrastructure (V2I), is a very important part of ADAS, because these communications can improve vehicle safety and alert the autonomous system to potentially dangerous situations.
A V2X communication module was implemented and validated on a sharp curve of a winding road where poor visibility creates a safety risk [2]. A combination of cooperative systems is proposed to offer the vehicle a wider range of information than on-board sensors currently provide, helping support systems transition from Society of Automotive Engineers (SAE) levels 2 and 3 to level 4 [3].

Object Detection and Tracking
Object tracking is a critical component in ADAS applications. Objects must be detected and tracked for obstacle avoidance, collision detection, and path planning, to name a few uses.
A kernel-based multiple instance learning (MIL) tracker was developed that is computationally fast and robust to partial occlusions, pose variations, and illumination changes [5]. To help combat partial object occlusions, a tracking-by-detection framework called the discriminative correlation filter bank (DCFB) was developed; it uses multiple discriminative correlation filters, corresponding to different target sub-regions and global region patches, and combines and optimizes their final correlation output in the frequency domain. It is shown to produce good results compared with state-of-the-art trackers [6].
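To give a flavor of frequency-domain correlation filtering, the following is a minimal single-channel MOSSE-style sketch; it is our own simplified illustration (all function names are invented), not the DCFB method of [6]:

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    # Solve for the correlation filter in the frequency domain:
    # H* = (G . conj(F)) / (F . conj(F) + lambda), elementwise.
    F = np.fft.fft2(patch)
    G = np.fft.fft2(target_response)
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def correlate(h_conj, new_patch):
    # Correlation response; the peak location estimates the target translation.
    return np.real(np.fft.ifft2(h_conj * np.fft.fft2(new_patch)))
```

A DCFB-style tracker would maintain several such filters, one per sub-region plus a global patch, and fuse their responses before locating the peak.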
Object detection is also a critical component. A LiDAR's 3D point cloud was categorized into drivable and non-drivable regions, and an expectation-maximization method was utilized to detect parallel lines and update the 3D line parameters in real time, which allowed the generation of accurate lane-level maps of two complex urban routes [7]. To improve multi-object detection, a detection framework denoted the adaptive perceive-single shot multi-box detector (AP-SSD) is proposed, in which custom multi-shape Gabor filters improve low-level object detection, a bottleneck long short-term memory (LSTM) network refines and propagates the feature mapping between frames, and a dynamic region amplification network framework works with them to achieve better detection results when small objects, multiple objects, cluttered backgrounds, and large-area occlusions are present in the scene [8]. To improve the quality and lower the cost of blind spot detection, a camera-based deep learning method is proposed using a lightweight and computationally efficient neural network; camera-based methods are much more cost-effective than a dedicated radar in this application. In addition, a dataset with more than 10,000 labeled images was generated using a blind spot view camera mounted on a test vehicle [9].
Sensor fusion is a third important area in ADAS units because sensors have different strengths, and most experts agree that fusion is required to achieve the best performance. Camera and LiDAR fusion was utilized to make object detection more robust in [4].

Sensor Modeling and Simulation
Many modern machine learning algorithms require significant amounts of training data, which may not be available or may be too expensive and time-consuming to collect.
To aid LiDAR-based algorithm development, a real-time physics-based LiDAR simulator for densely vegetated environments, including an improved statistical model for the range distribution of LiDAR returns in grass, was developed and validated [11]. A mathematical model was developed for the performance degradation of LiDAR as a function of rain rate; this model was used to quantitatively evaluate how rain influences a LiDAR-based obstacle-detection system [12].
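The rain-degradation model in [12] is specific to that work; as a generic, hedged illustration of how rain-induced signal loss is often modeled, here is a Beer-Lambert-style sketch with invented coefficients (not the paper's model):

```python
import math

def received_power(p0, range_m, rain_rate_mmh, k=0.01, alpha=0.6):
    # Two-way atmospheric extinction: the extinction coefficient grows with
    # rain rate (mm/h) via a power law, and received power decays
    # exponentially with range. k and alpha are invented for illustration.
    ext = k * rain_rate_mmh ** alpha        # extinction coefficient, 1/m
    return p0 * math.exp(-2.0 * ext * range_m)
```

With any such model, the maximum detection range shrinks as rain rate rises, which is the effect the obstacle-detection evaluation quantifies.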

Decision-Making
Decision systems in ADAS are complicated. They require information analyzed from sensor data, proprioceptive data from the vehicle, and data from other sources (e.g., V2X communications).
To investigate crash severity prediction in emergency decisions, several support vector machine (SVM)-based decision models were analyzed to estimate crash severity for braking, turning, and braking-plus-turning actions [14]. Ethical and legal issues in decision systems were analyzed using a T-S fuzzy neural network that incorporates ethical and legal factors into the driving decision-making model under emergency situations evoked by red-light-running behaviors [13].
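To illustrate the general shape of an SVM-based severity classifier, here is a toy linear SVM trained by hinge-loss subgradient descent on synthetic maneuver features; the features, labels, and thresholds are invented for illustration and are not the models or data of [14]:

```python
import numpy as np

def train_linear_svm(X, y, lam=1e-2, lr=0.1, epochs=500):
    """Hinge-loss linear SVM via full-batch subgradient descent; y in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        viol = y * (X @ w + b) < 1                     # margin violators
        w -= lr * (lam * w - (y[viol, None] * X[viol]).sum(0) / n)
        b -= lr * (-y[viol].sum() / n)
    return w, b

# Invented synthetic "emergency maneuver" features: speed, deceleration, steering.
rng = np.random.default_rng(42)
n = 400
speed = rng.uniform(20, 120, n)   # km/h
decel = rng.uniform(2, 9, n)      # m/s^2
steer = rng.uniform(0, 30, n)     # degrees
X = np.column_stack([speed, decel, steer])
y = np.where(0.05 * speed + 0.2 * steer - 0.5 * decel > 3.5, 1, -1)  # toy label

Xs = (X - X.mean(0)) / X.std(0)   # standardize for stable convergence
w, b = train_linear_svm(Xs, y)
accuracy = np.mean(np.sign(Xs @ w + b) == y)
```

A real crash-severity model would of course use kernel SVMs and recorded crash data rather than a linearly generated toy label.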

New Datasets
A critical aspect of developing and testing deep learning systems is the availability of high-quality datasets for algorithm training and testing.
A new dataset which includes all of the essential urban objects was collected, including weakly annotated data for training and testing weakly supervised learning techniques. Furthermore, a Faster Region-based Convolutional Neural Network (Faster R-CNN) was evaluated using this dataset, and a new Faster R-CNN plus tracking technique was developed and evaluated to accelerate real-time urban object detection [15]. A blind spot detection dataset is introduced in [9]; refer to Section 2.2 for more information, as this paper belongs in both categories. A new benchmark dataset named Pano-RSOD was created for 360° panoramic road scene object detection. The dataset contains vehicles, pedestrians, traffic signs and guiding arrows, small objects, and imagery from diverse road scenes. Furthermore, the usefulness of the dataset was demonstrated by training state-of-the-art deep learning algorithms for object detection in panoramic imagery [16].

Driver Monitoring
As ADAS levels are not yet at full autonomy (level 5), driver monitoring is critical to safety. To investigate robust and distinguishable patterns of heart rate variability, wearable electrocardiogram (ECG) or photoplethysmogram (PPG) sensors were utilized to generate recurrence plots, which were then analyzed by a convolutional neural network (CNN) to detect drowsy drivers. The proposed method showed significant improvement over conventional models [17].
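A recurrence plot itself is simple to compute; the following is a minimal sketch applied to a short toy RR-interval series with an invented threshold, not the preprocessing pipeline of [17]:

```python
import numpy as np

def recurrence_plot(signal, eps):
    # Binary recurrence matrix: R[i, j] = 1 when samples i and j
    # differ by less than eps; the matrix is symmetric with a unit diagonal.
    dist = np.abs(signal[:, None] - signal[None, :])
    return (dist < eps).astype(np.uint8)

# Toy RR-interval series (seconds); the resulting 2D image is what a CNN would classify.
rr = np.array([0.80, 0.82, 0.81, 1.02, 1.01, 0.79])
rp = recurrence_plot(rr, eps=0.05)
```

In practice a time-delay embedding is usually applied to the heart rate series first and the threshold is tuned, but the idea is the same: recurring physiological states show up as texture in the plot.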

New Applied Hardware for ADAS
An algorithm based on the mathematical p-norm was developed which improved both the traction power and the trajectory smoothness of joystick-controlled two-wheeled vehicles, such as tanks and wheelchairs [18].
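The details of the p-norm algorithm are given in [18]; purely as a hypothetical sketch of how a p-norm can shape a joystick-to-wheel mapping, consider the following simplified illustration (our own invention, not the paper's algorithm):

```python
import numpy as np

def joystick_to_wheels(x, y, p=3.0):
    """Map a joystick deflection (x, y) in [-1, 1]^2 to left/right wheel speeds.

    Hypothetical sketch: the deflection is rescaled by the ratio of its
    max-norm to its p-norm, so diagonal deflections keep more traction as p grows.
    """
    v = np.array([x, y], dtype=float)
    pn = np.sum(np.abs(v) ** p) ** (1.0 / p)
    if pn == 0.0:
        return 0.0, 0.0
    scale = np.max(np.abs(v)) / pn      # in (0, 1]; equals 1 on the axes
    turn, fwd = v * scale               # x = turning command, y = forward command
    left = float(np.clip(fwd + turn, -1.0, 1.0))   # differential-drive mixing
    right = float(np.clip(fwd - turn, -1.0, 1.0))
    return left, right
```

Varying p trades off how aggressively combined forward-plus-turn commands are attenuated, which is one way such a mapping can smooth trajectories.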
To address issues in the challenging area of torque vectoring on multiple-electric-motor vehicles for enhanced vehicle dynamics, a neural network is proposed that makes batch predictions for real-time optimization on a parallel embedded platform with a GPU and an FPGA. This work will help others conducting research in this technical area [19].

Concluding Remarks
The Guest Editors were pleased with the quality and breadth of the accepted papers. We were also delighted to have three papers with high-quality and very useful new datasets [9,15,16]. Looking to the future, we believe the research presented in this Special Issue will promote further study in the area of ADAS.