Special Issue "Machine Learning and Embedded Computing in Advanced Driver Assistance Systems (ADAS)"

A special issue of Electronics (ISSN 2079-9292). This special issue belongs to the section "Electrical and Autonomous Vehicles".

Deadline for manuscript submissions: closed (31 December 2018).

Special Issue Editors

Guest Editor
Dr. John Ball

Department of Electrical and Computer Engineering, Mississippi State University, 406 Hardy Road, 216 Simrall Hall, Mississippi State, MS 39762, USA
Phone: 662 325 4169
Fax: 662 325 2298
Interests: advanced driver assistance systems (ADAS); scene understanding; sensor processing (radar, LiDAR, camera, hyperspectral, thermal); machine learning; digital image and signal processing
Guest Editor
Dr. Bo Tang

Department of Electrical and Computer Engineering, Mississippi State University, 216 Simrall Bldg., 406 Hardy Rd., Box 9571, Mississippi State, MS 39762, USA
Interests: statistical machine learning; data mining; adaptive control; deep learning; cybersecurity

Special Issue Information

Dear Colleagues,

Advanced Driver Assistance Systems (ADAS) are being integrated into more and more vehicles. They offer enhanced safety (collision avoidance, obstacle detection, automatic braking), driver assistance (lane keeping, route following, adaptive cruise control), and more. Fully autonomous vehicles are not yet commercially available, and much research is being conducted in these areas. Three main factors are driving this revolution: (1) the availability of inexpensive sensors such as cameras, LiDARs, and automotive radars; (2) advanced machine learning methods such as deep learning; and (3) inexpensive, highly capable computing platforms that can handle large amounts of data and processing, utilizing both CPUs and GPUs.

This Special Issue aims to cover the most recent advances in autonomous and automated vehicles of all kinds (commercial, industrial), including their interaction with other vehicles, road users, and infrastructure. Novel theoretical approaches and practical applications of all aspects of ADAS are welcome, as are reviews and surveys of the state of the art. Topics of interest include, but are not limited to, the following:

  • Deep learning and machine learning in ADAS systems
  • Intelligent navigation and localization
  • Scene understanding (e.g., driver intent, pedestrian intent, etc.)
  • Obstacle detection, classification, and avoidance
  • Pedestrian and bicyclist detection, classification, and avoidance
  • Vehicle detection and avoidance
  • Animal detection, classification, and avoidance
  • Object tracking
  • Road traffic sign detection and classification
  • Autonomous parking
  • Multi-sensor data processing and data fusion
  • Collision avoidance algorithms
  • Actuation systems for autonomous vehicles
  • Vehicle-to-vehicle and vehicle-to-infrastructure communication
  • Advanced vehicle control systems
  • Optimal maneuver algorithms
  • Real-time embedded control systems
  • Computing platforms for running complex ADAS software in real time
  • Perception in challenging conditions
  • Dynamic path planning algorithms
Dr. John E. Ball
Dr. Bo Tang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Electronics is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning and machine learning in ADAS systems
  • Intelligent navigation and localization
  • Scene understanding (e.g., driver intent, pedestrian intent, etc.)
  • Obstacle detection, classification, and avoidance
  • Pedestrian and bicyclist detection, classification, and avoidance
  • Vehicle detection and avoidance
  • Animal detection, classification, and avoidance
  • Object tracking
  • Road traffic sign detection and classification
  • Autonomous parking
  • Multi-sensor data processing and data fusion
  • Collision avoidance algorithms
  • Actuation systems for autonomous vehicles
  • Vehicle-to-vehicle and vehicle-to-infrastructure communication
  • Advanced vehicle control systems
  • Optimal maneuver algorithms
  • Real-time embedded control systems
  • Computing platforms for running complex ADAS software in real time
  • Perception in challenging conditions
  • Dynamic path planning algorithms

Published Papers (19 papers)


Editorial

Jump to: Research

Open Access Editorial
Machine Learning and Embedded Computing in Advanced Driver Assistance Systems (ADAS)
Electronics 2019, 8(7), 748; https://doi.org/10.3390/electronics8070748
Received: 25 June 2019 / Accepted: 26 June 2019 / Published: 2 July 2019
Abstract
Advanced driver assistance systems (ADAS) are rapidly being developed for autonomous vehicles [...] Full article

Research

Jump to: Editorial

Open Access Article
Learning to See the Hidden Part of the Vehicle in the Autopilot Scene
Electronics 2019, 8(3), 331; https://doi.org/10.3390/electronics8030331
Received: 31 December 2018 / Revised: 25 February 2019 / Accepted: 11 March 2019 / Published: 18 March 2019
Cited by 1
Abstract
Recent advances in deep learning have shown exciting promise in low-level artificial intelligence tasks such as image classification, speech recognition, object detection, and semantic segmentation. Artificial intelligence has also made important contributions to autopilot systems, a complex high-level intelligence task, but real autopilot scenes are quite complicated. The first autopilot accident occurred in 2016 and resulted in a fatal crash in which the white side of a vehicle appeared similar to a brightly lit sky. The root of the problem is that the autopilot vision system cannot identify a part of a vehicle when that part resembles the background. We propose a deep-learning-based method called DIDA to see the hidden part. DIDA cascades the following steps: object detection, scaling, image inpainting assuming a hidden part beside the car, object re-detection from the inpainted image, zooming back to the original size, and setting an alarm region by comparing the two detected regions. DIDA was tested in a similar scene and achieved exciting results. The method solves the aforementioned problem using only optical signals. Additionally, the vehicle dataset captured in Xi’an, China can be used in subsequent research. Full article
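The cascade the abstract enumerates can be sketched in a few lines of NumPy. This is a hedged illustration only: the detector here is a trivial brightness-threshold stand-in and the inpainting stage is passed in as a callback, whereas the actual DIDA system uses trained deep networks for both; all function names and parameters below are assumptions, not the authors' implementation.

```python
import numpy as np

def detect(img, thresh=0.5):
    """Stand-in detector: bounding box (x0, y0, x1, y1) of bright pixels."""
    ys, xs = np.where(img > thresh)
    if xs.size == 0:
        return None
    return (int(xs.min()), int(ys.min()), int(xs.max()), int(ys.max()))

def dida_alarm_region(img, inpaint_fn, scale=2):
    """Sketch of the DIDA cascade: detect, scale up, inpaint a hypothesised
    hidden part beside the vehicle, re-detect, zoom back, compare regions."""
    box_before = detect(img)                          # 1. object detection
    big = np.kron(img, np.ones((scale, scale)))       # 2. crude upscaling
    inpainted = inpaint_fn(big)                       # 3. inpaint the hidden part
    box_big = detect(inpainted)                       # 4. re-detection
    box_after = tuple(v // scale for v in box_big)    # 5. zoom back to original size
    return box_before, box_after                      # 6. difference => alarm region
```

Comparing `box_before` and `box_after` then yields the alarm region: wherever the re-detected box extends beyond the original detection, a hidden vehicle part is assumed.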

Open Access Article
Pano-RSOD: A Dataset and Benchmark for Panoramic Road Scene Object Detection
Electronics 2019, 8(3), 329; https://doi.org/10.3390/electronics8030329
Received: 18 February 2019 / Revised: 4 March 2019 / Accepted: 11 March 2019 / Published: 18 March 2019
Cited by 1
Abstract
Panoramic images have a wide range of applications in many fields thanks to their ability to perceive all-round information. Object detection based on panoramic images has certain advantages in environment perception due to the characteristics of panoramic images, e.g., a larger perspective. In recent years, deep learning methods have achieved remarkable results in image classification and object detection, but their performance depends on large amounts of training data; a good training dataset is therefore a prerequisite for achieving better recognition results. Thus, we construct a benchmark named Pano-RSOD for panoramic road scene object detection. Pano-RSOD contains vehicles, pedestrians, traffic signs and guiding arrows, labelled by bounding boxes in the images. Different from traditional object detection datasets, Pano-RSOD contains more objects per panoramic image, and its high-resolution images offer 360-degree environmental perception, more annotations, more small objects and diverse road scenes. State-of-the-art deep learning algorithms were trained on Pano-RSOD for object detection, demonstrating that Pano-RSOD is a useful benchmark and provides a better panoramic-image training dataset for object detection tasks, especially for small and deformed objects. Full article

Open Access Article
Efficient Neural Network Implementations on Parallel Embedded Platforms Applied to Real-Time Torque-Vectoring Optimization Using Predictions for Multi-Motor Electric Vehicles
Electronics 2019, 8(2), 250; https://doi.org/10.3390/electronics8020250
Received: 31 December 2018 / Revised: 5 February 2019 / Accepted: 13 February 2019 / Published: 22 February 2019
Cited by 1
Abstract
The combination of machine learning and heterogeneous embedded platforms enables new potential for developing sophisticated control concepts applicable to the field of vehicle dynamics and ADAS. This interdisciplinary work provides enabler solutions, ultimately implementing fast predictions using neural networks (NNs) on field programmable gate arrays (FPGAs) and graphical processing units (GPUs), while applying them to a challenging application: torque vectoring on a multi-electric-motor vehicle for enhanced vehicle dynamics. The foundation motivating this work is provided by discussing multiple domains of the technological context as well as the constraints related to the automotive field, which contrast with the attractiveness of exploiting the capabilities of new embedded platforms to apply advanced control algorithms to complex control problems. In this particular case, we target enhanced vehicle dynamics on a multi-motor electric vehicle, benefiting from the greater degrees of freedom and controllability offered by such powertrains. Considering the constraints of the application and the implications of the selected multivariable optimization challenge, we propose an NN to provide batch predictions for real-time optimization. This leads to the major contribution of this work: efficient NN implementations on two intrinsically parallel embedded platforms, a GPU and an FPGA, following an analysis of the theoretical and practical implications of their different operating paradigms, in order to efficiently harness their computing potential while gaining insight into their peculiarities. The achieved results exceed expectations and provide a representative illustration of the strengths and weaknesses of each kind of platform. Consequently, having shown the applicability of the proposed solutions, this work contributes valuable enablers for further developments following similar fundamental principles. Full article

Open Access Article
Camera-Based Blind Spot Detection with a General Purpose Lightweight Neural Network
Electronics 2019, 8(2), 233; https://doi.org/10.3390/electronics8020233
Received: 1 January 2019 / Revised: 1 February 2019 / Accepted: 13 February 2019 / Published: 19 February 2019
Cited by 1
Abstract
Blind spot detection is an important feature of Advanced Driver Assistance Systems (ADAS). In this paper, we provide a camera-based deep learning method that accurately detects other vehicles in the blind spot, replacing the traditional, higher-cost solution using radars. The recent breakthrough of deep learning algorithms shows extraordinary performance when applied to many computer vision tasks. Many new convolutional neural network (CNN) structures have been proposed, and most of the networks are very deep in order to achieve state-of-the-art performance on benchmarks. However, blind spot detection, as a real-time embedded system application, requires high-speed processing and low computational complexity. Here, we propose a novel method that recasts blind spot detection as an image classification task. Subsequently, a series of experiments is conducted to design an efficient neural network by comparing some of the latest deep learning models. Furthermore, we create a dataset with more than 10,000 labeled images using the blind spot view camera mounted on a test vehicle. Finally, we train the proposed deep learning model and evaluate its performance on the dataset. Full article

Open Access Article
Using Wearable ECG/PPG Sensors for Driver Drowsiness Detection Based on Distinguishable Pattern of Recurrence Plots
Electronics 2019, 8(2), 192; https://doi.org/10.3390/electronics8020192
Received: 30 December 2018 / Revised: 27 January 2019 / Accepted: 1 February 2019 / Published: 7 February 2019
Cited by 1
Abstract
This paper aims to investigate the robust and distinguishable pattern of heart rate variability (HRV) signals, acquired from wearable electrocardiogram (ECG) or photoplethysmogram (PPG) sensors, for driver drowsiness detection. As wearable sensors are so vulnerable to slight movement, they often produce more noise in signals. Thus, from noisy HRV signals, we need to find good traits that differentiate well between drowsy and awake states. To this end, we explored three types of recurrence plots (RPs) generated from the R–R intervals (RRIs) of heartbeats: Bin-RP, Cont-RP, and ReLU-RP. Here Bin-RP is a binary recurrence plot, Cont-RP is a continuous recurrence plot, and ReLU-RP is a thresholded recurrence plot obtained by filtering Cont-RP with a modified rectified linear unit (ReLU) function. By utilizing each of these RPs as input features to a convolutional neural network (CNN), we examined their usefulness for drowsy/awake classification. For experiments, we collected RRIs at drowsy and awake conditions with an ECG sensor of the Polar H7 strap and a PPG sensor of the Microsoft (MS) band 2 in a virtual driving environment. The results showed that ReLU-RP is the most distinct and reliable pattern for drowsiness detection, regardless of sensor types (i.e., ECG or PPG). In particular, the ReLU-RP based CNN models showed their superiority to other conventional models, providing approximately 6–17% better accuracy for ECG and 4–14% for PPG in drowsy/awake classification. Full article
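The three plot types the abstract describes can be illustrated with a minimal NumPy sketch: a recurrence plot is built from pairwise distances between R-R interval samples, either thresholded into a binary plot (Bin-RP), left as raw distances (Cont-RP), or filtered with a thresholded ReLU (ReLU-RP). The function names, the distance measure, and the threshold values below are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def recurrence_plot(rri, eps=0.05, binary=True):
    """Recurrence plot from R-R intervals (seconds): pairwise absolute
    distances, optionally thresholded into a binary plot.
    binary=True  -> Bin-RP; binary=False -> Cont-RP (raw distances)."""
    d = np.abs(rri[:, None] - rri[None, :])
    return (d <= eps).astype(float) if binary else d

def relu_rp(cont_rp, thresh=0.05):
    """ReLU-RP: suppress small distances with a thresholded (modified) ReLU,
    keeping only the larger, more discriminative excursions."""
    return np.where(cont_rp > thresh, cont_rp, 0.0)
```

Any of the three resulting matrices can then be fed to a CNN as a 2-D input feature, which is how the paper compares their usefulness for drowsy/awake classification.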

Open Access Article
Predicting the Influence of Rain on LIDAR in ADAS
Electronics 2019, 8(1), 89; https://doi.org/10.3390/electronics8010089
Received: 19 December 2018 / Revised: 9 January 2019 / Accepted: 10 January 2019 / Published: 15 January 2019
Cited by 4
Abstract
While it is well known that rain may influence the performance of automotive LIDAR sensors commonly used in ADAS applications, there is a lack of quantitative analysis of this effect. In particular, there is very little published work on physically-based simulation of the influence of rain on terrestrial LIDAR performance. Additionally, there have been few quantitative studies on how rain-rate influences ADAS performance. In this work, we develop a mathematical model for the performance degradation of LIDAR as a function of rain-rate and incorporate this model into a simulation of an obstacle-detection system to show how it can be used to quantitatively predict the influence of rain on ADAS that use LIDAR. Full article
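The paper's fitted model is not reproduced here, but the general shape of such a degradation model can be sketched with two standard ingredients: a power-law extinction coefficient in the rain rate and two-way Beer-Lambert attenuation of the return signal. The coefficients a and b below are illustrative placeholders, not the paper's values.

```python
import math

def rain_extinction(rain_rate_mmh, a=0.01, b=0.6):
    """Power-law extinction coefficient (1/m) as a function of rain rate
    (mm/h). The constants a and b are illustrative, not fitted values."""
    return a * rain_rate_mmh ** b

def received_power_fraction(range_m, alpha):
    """Two-way Beer-Lambert attenuation of a LIDAR return over the round
    trip to a target at range_m metres."""
    return math.exp(-2.0 * alpha * range_m)
```

Feeding the attenuated return power into a detection threshold then gives the reduced maximum detection range, which is the quantity an obstacle-detection simulation would use to predict ADAS performance at a given rain rate.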

Open Access Article
Study on Crash Injury Severity Prediction of Autonomous Vehicles for Different Emergency Decisions Based on Support Vector Machine Model
Electronics 2018, 7(12), 381; https://doi.org/10.3390/electronics7120381
Received: 24 October 2018 / Revised: 28 November 2018 / Accepted: 29 November 2018 / Published: 3 December 2018
Cited by 2
Abstract
Motor vehicle crashes remain a leading cause of life and property loss to society. Autonomous vehicles can mitigate these losses by making appropriate emergency decisions, and a crash injury severity prediction model is the basis for autonomous vehicles to make decisions in emergency situations. In this paper, based on the support vector machine (SVM) model and NASS/GES crash data, three SVM crash injury severity prediction models (B-SVM, T-SVM, and BT-SVM), corresponding to braking, turning, and braking + turning respectively, are established. The vehicle relative speed (REL_SPEED) and the gross vehicle weight rating (GVWR) are introduced as impact indicators of the prediction models. Secondly, ordered logit (OL) and back-propagation neural network (BPNN) models are established to validate the accuracy of the SVM models; the results show that the SVM models perform best among the three. Next, the impact of REL_SPEED and GVWR on injury severity is analyzed quantitatively by sensitivity analysis; the results demonstrate that increases in REL_SPEED and GVWR make vehicle crashes more serious. Finally, the same crash samples under normal road and environmental conditions are input into the B-SVM, T-SVM, and BT-SVM models respectively, and the outputs are compared and analyzed. The results show that, with other conditions being the same, as REL_SPEED increases from the low (0–20 mph) to the middle (20–45 mph) and then to the high range (45–75 mph), the best emergency decision with the minimum crash injury severity gradually transitions from braking to turning and then to braking + turning. Full article

Open Access Article
Multi-Object Detection in Traffic Scenes Based on Improved SSD
Electronics 2018, 7(11), 302; https://doi.org/10.3390/electronics7110302
Received: 9 September 2018 / Revised: 1 November 2018 / Accepted: 2 November 2018 / Published: 6 November 2018
Cited by 3
Abstract
To address the problem that existing deep-learning object detection algorithms can hardly balance accuracy and speed for multi-object detection in complex, wide traffic scenes, we improve the SSD (Single Shot MultiBox Detector) framework and propose a new detection framework, AP-SSD (Adaptive Perceive SSD). We design a feature-extraction convolution kernel library composed of multi-shape Gabor and color Gabor kernels, then train and screen the optimal feature-extraction kernels to replace the low-level convolution kernels of the original network, improving detection accuracy. We then combine the single-image detection framework with convolutional long short-term memory (LSTM) networks, using a Bottleneck-LSTM memory layer to refine and propagate feature maps between frames. This realizes the temporal association of frame-level information, reduces computational cost, succeeds in tracking and identifying targets affected by strong interference in video, and reduces the missed-alarm and false-alarm rates through an adaptive threshold strategy. Moreover, we design a dynamic region amplification network to improve the detection and recognition accuracy of low-resolution small objects. Experiments on AP-SSD show that the new algorithm achieves better detection results with small objects, multiple objects, cluttered backgrounds and large-area occlusion, giving the algorithm good prospects for engineering application. Full article

Open Access Feature Paper Article
A New Dataset and Performance Evaluation of a Region-Based CNN for Urban Object Detection
Electronics 2018, 7(11), 301; https://doi.org/10.3390/electronics7110301
Received: 13 September 2018 / Revised: 23 October 2018 / Accepted: 31 October 2018 / Published: 6 November 2018
Cited by 3
Abstract
In recent years, we have seen a large growth in the number of applications which use deep learning-based object detectors. Autonomous driving assistance systems (ADAS) are one of the areas where they have the most impact. This work presents a novel study evaluating a state-of-the-art technique for urban object detection and localization. In particular, we investigated the performance of the Faster R-CNN method to detect and localize urban objects in a variety of outdoor urban videos involving pedestrians, cars, bicycles and other objects moving in the scene (urban driving). We propose a new dataset that is used for benchmarking the accuracy of a real-time object detector (Faster R-CNN). Part of the data was collected using an HD camera mounted on a vehicle. Furthermore, some of the data is weakly annotated so it can be used for testing weakly supervised learning techniques. There already exist urban object datasets, but none of them include all the essential urban objects. We carried out extensive experiments demonstrating the effectiveness of the baseline approach. Additionally, we propose an R-CNN plus tracking technique to accelerate the process of real-time urban object detection. Full article

Open Access Article
Real-Time Road Lane Detection in Urban Areas Using LiDAR Data
Electronics 2018, 7(11), 276; https://doi.org/10.3390/electronics7110276
Received: 6 September 2018 / Revised: 14 October 2018 / Accepted: 24 October 2018 / Published: 26 October 2018
Cited by 2
Abstract
The generation of digital maps with lane-level resolution is rapidly becoming a necessity, as semi- or fully-autonomous driving vehicles are now commercially available. In this paper, we present a practical real-time working prototype for road lane detection using LiDAR data, which can be further extended to automatic lane-level map generation. Conventional lane detection methods are limited to simple road conditions and are not suitable for complex urban roads with various road signs on the ground. Given a 3D point cloud scanned by a 3D LiDAR sensor, we categorized the points of the drivable region and distinguished the points of the road signs on the ground. Then, we developed an expectation-maximization method to detect parallel lines and update the 3D line parameters in real time, as the probe vehicle equipped with the LiDAR sensor moved forward. The detected and recorded line parameters were integrated to build a lane-level digital map with the help of a GPS/INS sensor. The proposed system was tested to generate accurate lane-level maps of two complex urban routes. The experimental results showed that the proposed system was fast and practical in terms of effectively detecting road lines and generating lane-level maps. Full article

Open Access Article
Ethical and Legal Dilemma of Autonomous Vehicles: Study on Driving Decision-Making Model under the Emergency Situations of Red Light-Running Behaviors
Electronics 2018, 7(10), 264; https://doi.org/10.3390/electronics7100264
Received: 7 September 2018 / Revised: 15 October 2018 / Accepted: 19 October 2018 / Published: 22 October 2018
Cited by 1
Abstract
Autonomous vehicles (AVs) are supposed to identify obstacles automatically and constantly form appropriate emergency strategies to ensure driving safety and improve traffic efficiency. However, not all collisions are avoidable, and AVs must make difficult decisions involving ethical and legal factors under emergency situations. In this paper, ethical and legal factors are introduced into a driving decision-making (DDM) model for emergency situations evoked by red-light-running behaviors. In this specific situation, 16 vehicle-road-environment factors are considered as impact indicators of DDM, in particular the duration of the red light (RL), the type of abnormal target (AT-T), the number of abnormal targets (AT-N) and the state of the abnormal target (AT-S), which represent the legal and ethical components. Through principal component analysis, seven indicators are selected as input variables of the model, and the feasible decisions, braking + going straight, braking + turning left, and braking + turning right, are taken as the output variable. Finally, the DDM is established with a T-S fuzzy neural network (TSFNN), whose accuracy is verified against a back-propagation neural network (BPNN); the TSFNN shows better performance. Full article

Open Access Article
Coupled-Region Visual Tracking Formulation Based on a Discriminative Correlation Filter Bank
Electronics 2018, 7(10), 244; https://doi.org/10.3390/electronics7100244
Received: 22 August 2018 / Revised: 1 October 2018 / Accepted: 6 October 2018 / Published: 11 October 2018
Cited by 1
Abstract
The visual tracking algorithm based on discriminative correlation filter (DCF) has shown excellent performance in recent years, especially as the higher tracking speed meets the real-time requirement of object tracking. However, when the target is partially occluded, the traditional single discriminative correlation filter will not be able to effectively learn information reliability, resulting in tracker drift and even failure. To address this issue, this paper proposes a novel tracking-by-detection framework, which uses multiple discriminative correlation filters called discriminative correlation filter bank (DCFB), corresponding to different target sub-regions and global region patches to combine and optimize the final correlation output in the frequency domain. In tracking, the sub-region patches are zero-padded to the same size as the global target region, which can effectively avoid noise aliasing during correlation operation, thereby improving the robustness of the discriminative correlation filter. Considering that the sub-region target motion model is constrained by the global target region, adding the global region appearance model to our framework will completely preserve the intrinsic structure of the target, thus effectively utilizing the discriminative information of the visible sub-region to mitigate tracker drift when partial occlusion occurs. In addition, an adaptive scale estimation scheme is incorporated into our algorithm to make the tracker more robust against potential challenging attributes. The experimental results from the OTB-2015 and VOT-2015 datasets demonstrate that our method performs favorably compared with several state-of-the-art trackers. Full article
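The building block of such a filter bank, a single-channel ridge-regularised correlation filter trained and applied in the frequency domain, fits in a few lines of NumPy. This is a generic MOSSE-style sketch under stated assumptions, not the paper's multi-region DCFB formulation; the function names and the regularisation value are illustrative.

```python
import numpy as np

def train_dcf(patch, target, lam=1e-3):
    """Closed-form single-channel correlation filter in the frequency
    domain: H = conj(P) * T / (|P|^2 + lam), i.e. ridge regression that
    maps the training patch onto the desired response map `target`."""
    P = np.fft.fft2(patch)
    T = np.fft.fft2(target)
    return np.conj(P) * T / (P * np.conj(P) + lam)

def response(filt, patch):
    """Correlation response map for a new patch; the peak of the map
    gives the estimated target location."""
    return np.real(np.fft.ifft2(filt * np.fft.fft2(patch)))
```

A DCF bank in the spirit of the paper would train one such filter per sub-region patch (zero-padded to the global region size) plus one for the global region, and combine their frequency-domain outputs before locating the peak.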

Open Access Article
Communications and Driver Monitoring Aids for Fostering SAE Level-4 Road Vehicles Automation
Electronics 2018, 7(10), 228; https://doi.org/10.3390/electronics7100228
Received: 3 August 2018 / Revised: 20 September 2018 / Accepted: 27 September 2018 / Published: 2 October 2018
Cited by 1 | PDF Full-text (3402 KB) | HTML Full-text | XML Full-text
Abstract
Road vehicles increasingly include assistance systems that perform tasks to make driving easier, safer, and more efficient. However, the automated vehicles currently on the market do not exceed SAE level 2 and only in some cases reach level 3. The qualitative and technological leap needed to reach level 4 is significant, and numerous uncertainties remain: greater knowledge of the environment is needed for better decision making, and the role of the driver changes substantially. This paper proposes combining cooperative systems with automated driving to offer the vehicle a wider range of information than on-board sensors currently provide. This includes the actual deployment of a cooperative corridor on a highway. It also takes into account that in some circumstances or scenarios, whether pre-set or detected by on-board sensors or prior communications, the vehicle must hand control back to the driver, who may have been performing other tasks completely unrelated to supervising the driving. It is thus necessary to assess the driver's condition with regard to retaking control and to provide assistance for a safe transition. Full article

Open Access Article
An Infinite-Norm Algorithm for Joystick Kinematic Control of Two-Wheeled Vehicles
Electronics 2018, 7(9), 164; https://doi.org/10.3390/electronics7090164
Received: 25 July 2018 / Revised: 20 August 2018 / Accepted: 23 August 2018 / Published: 27 August 2018
Cited by 1 | PDF Full-text (23556 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose an algorithm based on the mathematical p-norm, applied to improve both the traction power and the trajectory smoothness of joystick-controlled two-wheeled vehicles. This algorithm can theoretically supply 100% of the available power to each of the actuators when the infinity-norm is used, i.e., when the p-norm tends to infinity. Furthermore, a geometrical model based on the radius of curvature has been developed to track the effect of the proposed algorithm on the vehicle's trajectory. The findings of this work contribute to kinematic control and path-planning algorithms for vehicles actuated by two wheels, such as tanks and electric wheelchairs, which are of vital importance to the security and health industries. Computer simulations and experiments with a real robot are performed to verify the results. Full article
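The benefit of the infinity-norm can be seen in a minimal differential-drive mixing sketch. The mixing formula and function below are a generic illustration under stated assumptions, not the authors' exact formulation:

```python
def joystick_to_wheels(x, y, p=float("inf")):
    """Map joystick deflection (x: turn, y: forward), each in [-1, 1],
    to left/right wheel commands scaled by a p-norm.

    With p -> infinity (the max-norm), the dominant wheel can receive
    100% of the available power, improving traction; smaller p values
    trade peak power for smoother trajectories.
    """
    left, right = y + x, y - x              # classic differential mix
    if left == 0.0 and right == 0.0:
        return 0.0, 0.0
    if p == float("inf"):
        norm = max(abs(left), abs(right))   # infinity-norm
    else:
        norm = (abs(left) ** p + abs(right) ** p) ** (1.0 / p)
    # rescale so the wheel vector's p-norm equals the joystick magnitude
    target = max(abs(x), abs(y))
    return left / norm * target, right / norm * target

# full forward deflection: both wheels at 100% under the infinity-norm
print(joystick_to_wheels(0.0, 1.0))   # -> (1.0, 1.0)
```

With the Euclidean norm (p = 2) the same full-forward input would cap each wheel at about 71% of available power, which is precisely the loss the infinity-norm avoids.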

Open Access Article
Enabling Off-Road Autonomous Navigation: Simulation of LIDAR in Dense Vegetation
Electronics 2018, 7(9), 154; https://doi.org/10.3390/electronics7090154
Received: 18 July 2018 / Revised: 9 August 2018 / Accepted: 16 August 2018 / Published: 21 August 2018
Cited by 5 | PDF Full-text (3664 KB) | HTML Full-text | XML Full-text
Abstract
Machine learning techniques have accelerated the development of autonomous navigation algorithms in recent years, especially algorithms for on-road autonomous navigation. However, off-road navigation in unstructured environments continues to challenge autonomous ground vehicles. Many off-road navigation systems rely on LIDAR to sense and classify the environment, but LIDAR sensors often fail to distinguish navigable vegetation from non-navigable solid obstacles. While other areas of autonomy have benefited from the use of simulation, no real-time LIDAR simulator has accounted for LIDAR–vegetation interaction. In this work, we outline the development of a real-time, physics-based LIDAR simulator for densely vegetated environments that can be used in the development of LIDAR processing algorithms for off-road autonomous navigation. We present a multi-step qualitative validation of the simulator, which includes the development of an improved statistical model for the range distribution of LIDAR returns in grass. As a demonstration of the simulator's capability, we show an example of the simulator being used to evaluate autonomous navigation through vegetation. The results demonstrate the potential for using the simulation in the development and testing of algorithms for autonomous off-road navigation. Full article
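The idea of a statistical range model for returns in grass can be sketched as a simple generative sampler. This toy model (exponential penetration depth beyond the grass edge, fixed pass-through probability) is purely illustrative; the paper develops its own improved distribution:

```python
import random

def sample_grass_return(d_edge, mean_penetration=0.15,
                        p_pass=0.05, d_ground=None):
    """Sample one simulated LIDAR range return in grass (illustrative).

    d_edge           : range (m) at which the beam first meets the grass
    mean_penetration : mean extra depth (m) the beam penetrates into grass
    p_pass           : probability the beam passes through to solid ground
    d_ground         : range (m) of the ground behind the grass, if any
    """
    if d_ground is not None and random.random() < p_pass:
        return d_ground   # beam slipped through the vegetation to the ground
    # otherwise the return comes from inside the grass volume
    return d_edge + random.expovariate(1.0 / mean_penetration)
```

Sampling many returns per beam direction yields the bimodal range histogram (grass surface plus occasional ground hits) that a simulator must reproduce to be useful for testing classification algorithms.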

Open Access Article
The Kernel Based Multiple Instances Learning Algorithm for Object Tracking
Electronics 2018, 7(6), 97; https://doi.org/10.3390/electronics7060097
Received: 24 April 2018 / Revised: 7 June 2018 / Accepted: 13 June 2018 / Published: 16 June 2018
Cited by 2 | PDF Full-text (1992 KB) | HTML Full-text | XML Full-text
Abstract
To realize real-time object tracking in complex environments, a kernel-based MIL (KMIL) algorithm is proposed. KMIL employs the Gaussian kernel function in place of the inner product used in the weighted MIL (WMIL) algorithm. The method avoids computing the positive and negative likelihoods many times, which results in a much faster tracker. To track objects with different motions, the search areas for cropping instances are varied according to the object's size. Furthermore, an adaptive classifier-updating strategy is presented to handle occlusion, pose variation, and illumination changes. A similarity-score range is defined with respect to two given thresholds and the similarity score from the second frame. The learning rate is then set to a small value when the similarity score falls outside this range; otherwise, a large learning rate is used. Finally, we compare its performance with that of state-of-the-art algorithms on several classical videos. The experimental results show that the presented KMIL algorithm is faster and more robust to partial occlusion, pose variation, and illumination changes. Full article
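The adaptive update strategy above reduces to a small gate on the similarity score. The thresholds and learning rates in this sketch are illustrative placeholders, not the paper's tuned values:

```python
def adaptive_learning_rate(score, ref_score, t_low, t_high,
                           small_lr=0.05, big_lr=0.85):
    """Choose the classifier update rate from the current similarity score.

    A similarity-score range [ref_score - t_low, ref_score + t_high] is
    built around the reference score taken from the second frame. Inside
    the range the appearance model updates quickly; outside it (a likely
    occlusion or appearance change) the model updates slowly so that the
    tracker does not learn the occluder.
    """
    if ref_score - t_low <= score <= ref_score + t_high:
        return big_lr     # confident match: learn fast
    return small_lr       # probable occlusion or drift: learn slowly
```

The small out-of-range rate is what protects the classifier during partial occlusion: the corrupted frames contribute little to the model, so the tracker can re-lock when the target reappears.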

Open Access Feature Paper Article
LiDAR and Camera Detection Fusion in a Real-Time Industrial Multi-Sensor Collision Avoidance System
Electronics 2018, 7(6), 84; https://doi.org/10.3390/electronics7060084
Received: 14 May 2018 / Revised: 24 May 2018 / Accepted: 26 May 2018 / Published: 30 May 2018
Cited by 5 | PDF Full-text (4521 KB) | HTML Full-text | XML Full-text
Abstract
Collision avoidance is a critical task in many applications, such as ADAS (advanced driver-assistance systems), industrial automation, and robotics. In an industrial automation setting, certain areas should be off limits to an automated vehicle for the protection of people and high-valued assets. These areas can be quarantined by mapping (e.g., GPS) or via beacons that delineate a no-entry area. We propose a delineation method in which the industrial vehicle uses a LiDAR (Light Detection and Ranging) sensor and a single color camera to detect passive beacons, and model-predictive control to stop the vehicle from entering a restricted space. The beacons are standard orange traffic cones with a highly reflective vertical pole attached. The LiDAR can readily detect these beacons, but suffers from false positives due to other reflective surfaces such as worker safety vests. Herein, we put forth a method for reducing false positives from the LiDAR by detecting the beacons in the camera imagery via a deep-learning method and validating the detections using a neural-network-learned projection from the camera space to the LiDAR space. Experimental data collected at Mississippi State University's Center for Advanced Vehicular Systems (CAVS) show the effectiveness of the proposed system in retaining true detections while mitigating false positives. Full article
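The validation step amounts to gating each LiDAR candidate against camera detections projected into the LiDAR frame. This sketch is generic gating logic under stated assumptions (any callable projection, a simple distance threshold), not the authors' exact pipeline:

```python
def validate_lidar_detection(lidar_xy, camera_boxes, project, max_dist=1.5):
    """Keep a LiDAR beacon candidate only if some camera detection,
    projected into the LiDAR ground plane, lands nearby.

    lidar_xy     : (x, y) beacon candidate from the LiDAR (m)
    camera_boxes : image-space detections from the camera detector
    project      : learned camera-to-LiDAR projection (e.g., a small
                   neural network); here, any callable box -> (x, y)
    max_dist     : gating radius (m), an illustrative value
    """
    for box in camera_boxes:
        px, py = project(box)
        gap = ((px - lidar_xy[0]) ** 2 + (py - lidar_xy[1]) ** 2) ** 0.5
        if gap <= max_dist:
            return True    # corroborated by the camera: keep
    return False           # no nearby camera detection: reject as false positive
```

A reflective safety vest that triggers the LiDAR but is not classified as a beacon by the camera network fails the gate and is discarded, which is exactly the false-positive mode the abstract targets.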

Open Access Feature Paper Article
Performance Comparison of Geobroadcast Strategies for Winding Roads
Electronics 2018, 7(3), 32; https://doi.org/10.3390/electronics7030032
Received: 16 December 2017 / Revised: 22 February 2018 / Accepted: 1 March 2018 / Published: 3 March 2018
Cited by 2 | PDF Full-text (6512 KB) | HTML Full-text | XML Full-text
Abstract
Vehicle-to-X (V2X) communications allow real-time information sharing between vehicles and Roadside Units (RSUs). These technologies improve road safety and can be used in combination with other systems. Advanced Driver Assistance Systems (ADAS) are one example and can be used along with V2X communications to improve performance and enable Cooperative Systems. A key element of vehicular communications is that the information transmitted through the network is always linked to GPS positions for origin and destination (the GeoNetworking protocol) in order to adjust the data broadcast to the needs of the dynamic road environment. In this paper, we present the implementation and development of the Institute for Automobile Research (INSIA) V2X communication modules, which follow the European vehicular networking standards, on a sharp curve of a winding road where poor visibility poses a risk to the safety of road users. The technology chosen to support these communications is ETSI ITS-G5, which can enable specific services that support GeoNetworking protocols, specifically the Geobroadcast (GBC) algorithm. These functionalities have been implemented and validated in a real environment in order to demonstrate the performance of the communication devices in real V2V (Vehicle-to-Vehicle) and V2I (Vehicle-to-Infrastructure) situations. GBC messages are also compared under two different emission-area configurations. A comparison with and without RSU modules in critical areas of the road, given prior knowledge of the road cartography, has also been made. Full article
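At its core, Geobroadcast delivers a message to every station inside a geographic emission area. A minimal sketch of the circular-area membership test follows; it uses a flat-earth distance approximation adequate over a few hundred metres, whereas ETSI GeoNetworking also defines rectangular and elliptical areas:

```python
import math

def in_emission_area(pos, center, radius_m):
    """Decide whether a GBC message applies at GPS position `pos`
    for a circular emission area around `center` of radius `radius_m`.

    pos, center : (latitude, longitude) in degrees
    radius_m    : emission-area radius in metres
    """
    lat1, lon1 = map(math.radians, pos)
    lat2, lon2 = map(math.radians, center)
    # equirectangular approximation of great-circle distance
    x = (lon2 - lon1) * math.cos((lat1 + lat2) / 2.0)
    y = lat2 - lat1
    dist = 6371000.0 * math.hypot(x, y)
    return dist <= radius_m
```

Widening or narrowing the emission area changes which vehicles relay and act on the warning, which is the trade-off the two emission-area configurations in the paper compare.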

Electronics EISSN 2079-9292 Published by MDPI AG, Basel, Switzerland