Deep Learning Based Sensing Technologies for Autonomous Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Sensor Networks".

Deadline for manuscript submissions: closed (15 February 2019) | Viewed by 32737

Special Issue Editors

Prof. Jongeun Choi
Machine Learning and Control Systems Laboratory (MLCS), School of Mechanical Engineering, Yonsei University, 50 Yonsei-ro, Seodaemun-gu, Seoul 03722, Korea
Interests: machine learning and control systems; mobile sensor networks; autonomous vehicles and robots; biomedical engineering

Prof. Kyogu Lee
Music and Audio Research Group, Graduate School of Convergence Science and Technology, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul 08826, Korea
Interests: machine learning and signal processing applied to audio and music

Special Issue Information

Dear Colleagues,

We are witnessing the era of self-driving vehicles and autonomous robots: autonomous vehicles and robots are becoming indispensable parts of everyday life. At the heart of a vehicle with a high degree of autonomy are the various sensing algorithms and devices woven into its platform, providing perception capabilities. Recent advances in such deep learning sensing technologies are bringing the autonomous vehicle era into reality; however, many intriguing research problems remain, along with concerns about robustness under uncertainty. Deep neural network based sensor fusion techniques that fuse different modalities for different tasks, such as self-driving, intention learning, situation awareness, and risk assessment, are becoming more important. Additionally, intelligent or active sensing, with autonomous decision-making and control, has gained wide attention. This Special Issue focuses on such sensing technologies for autonomous vehicles and robots, with an emphasis on deep learning based sensing algorithms. The topics of interest include, but are not limited to:

  • Deep learning based perception algorithms for autonomous vehicles and robots
  • Deep learning based sensor fusion for multimodal sensors
  • Sensing algorithms for intention learning, situation awareness, and risk assessment
  • Emerging sensor technologies for autonomous vehicles and robots
  • Bayesian algorithms and Gaussian process regression for sensor fusion
  • V2V/V2X technologies for inter-vehicle sensor fusion
  • Deep learning based end-to-end control for autonomous vehicles and robots

Prof. Joon-Sang Park
Prof. Jongeun Choi
Prof. Kyogu Lee
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Deep learning
  • Machine perception
  • Sensor fusion
  • Robotics
  • Autonomous vehicles

Published Papers (4 papers)


Research

20 pages, 1137 KiB  
Article
Sensor Data Acquisition and Multimodal Sensor Fusion for Human Activity Recognition Using Deep Learning
by Seungeun Chung, Jiyoun Lim, Kyoung Ju Noh, Gague Kim and Hyuntae Jeong
Sensors 2019, 19(7), 1716; https://doi.org/10.3390/s19071716 - 10 Apr 2019
Cited by 122 | Viewed by 13212
Abstract
In this paper, we perform a systematic study about the on-body sensor positioning and data acquisition details for Human Activity Recognition (HAR) systems. We build a testbed that consists of eight body-worn Inertial Measurement Units (IMU) sensors and an Android mobile device for activity data collection. We develop a Long Short-Term Memory (LSTM) network framework to support training of a deep learning model on human activity data, which is acquired in both real-world and controlled environments. From the experiment results, we identify that activity data with sampling rate as low as 10 Hz from four sensors at both sides of wrists, right ankle, and waist is sufficient in recognizing Activities of Daily Living (ADLs) including eating and driving activity. We adopt a two-level ensemble model to combine class-probabilities of multiple sensor modalities, and demonstrate that a classifier-level sensor fusion technique can improve the classification performance. By analyzing the accuracy of each sensor on different types of activity, we elaborate custom weights for multimodal sensor fusion that reflect the characteristic of individual activities.
(This article belongs to the Special Issue Deep Learning Based Sensing Technologies for Autonomous Vehicles)
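For readers unfamiliar with classifier-level (late) sensor fusion, the sketch below illustrates the core idea from the abstract: each modality's classifier emits class probabilities, which are combined with per-sensor, per-activity weights before the final decision. All names, shapes, and weight values here are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def fuse_probabilities(probs_per_sensor, sensor_class_weights):
    """Classifier-level (late) fusion of per-sensor class probabilities.

    probs_per_sensor: (S, C) array, one softmax row per sensor.
    sensor_class_weights: (S, C) array of per-sensor, per-class weights.
    """
    probs = np.asarray(probs_per_sensor, dtype=float)
    weights = np.asarray(sensor_class_weights, dtype=float)
    weighted = probs * weights        # emphasize sensors that are reliable per class
    fused = weighted.sum(axis=0)      # combine evidence across sensors
    return fused / fused.sum()        # renormalize to a probability distribution

# Toy example: 3 sensors (wrist, ankle, waist) and 4 activity classes.
probs = [[0.70, 0.10, 0.10, 0.10],   # wrist IMU
         [0.20, 0.50, 0.20, 0.10],   # ankle IMU
         [0.25, 0.25, 0.40, 0.10]]   # waist IMU
weights = np.ones((3, 4))
weights[1, 1] = 2.0                  # e.g., trust the ankle sensor more for walking

fused = fuse_probabilities(probs, weights)
print("fused:", fused, "-> predicted class", int(fused.argmax()))
```

Custom per-activity weights like these are what the authors derive from their per-sensor accuracy analysis; a uniform weight matrix reduces the scheme to plain probability averaging.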

15 pages, 806 KiB  
Article
Traffic Light Recognition Based on Binary Semantic Segmentation Network
by Hyun-Koo Kim, Kook-Yeol Yoo, Ju H. Park and Ho-Youl Jung
Sensors 2019, 19(7), 1700; https://doi.org/10.3390/s19071700 - 10 Apr 2019
Cited by 16 | Viewed by 4531
Abstract
A traffic light recognition system is a very important building block in an advanced driving assistance system and an autonomous vehicle system. In this paper, we propose a two-staged deep-learning-based traffic light recognition method that consists of a pixel-wise semantic segmentation technique and a novel fully convolutional network. For candidate detection, we employ a binary-semantic segmentation network that is suitable for detecting small objects such as traffic lights. Connected components labeling with an eight-connected neighborhood is applied to obtain bounding boxes of candidate regions, instead of the computationally demanding region proposal and regression processes of conventional methods. A fully convolutional network including a convolution layer with three filters of (1 × 1) at the beginning is designed and implemented for traffic light classification, as traffic lights have only a set number of colors. The simulation results show that the proposed traffic light recognition method outperforms the conventional two-staged object detection method in terms of recognition performance, and remarkably reduces the computational complexity and hardware requirements. This framework can be a useful network design guideline for the detection and recognition of small objects, including traffic lights.
(This article belongs to the Special Issue Deep Learning Based Sensing Technologies for Autonomous Vehicles)
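The candidate-detection step above replaces region proposals with connected-components labeling on the binary segmentation mask. A minimal sketch using OpenCV's connectedComponentsWithStats with an eight-connected neighborhood follows; the toy mask and area threshold are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def mask_to_boxes(mask, min_area=4):
    """Turn a binary segmentation mask (uint8, 0/255) into bounding boxes."""
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask, connectivity=8)
    boxes = []
    for i in range(1, n):            # label 0 is the background component
        x, y, w, h, area = stats[i]  # [LEFT, TOP, WIDTH, HEIGHT, AREA]
        if area >= min_area:         # drop tiny speckles from the mask
            boxes.append((int(x), int(y), int(w), int(h)))
    return boxes

# Toy mask with two blobs standing in for traffic-light candidates.
mask = np.zeros((64, 64), dtype=np.uint8)
mask[10:14, 20:23] = 255
mask[40:46, 50:53] = 255
print(mask_to_boxes(mask))           # [(20, 10, 3, 4), (50, 40, 3, 6)]
```

Compared with region-proposal-and-regression pipelines, this step is a single linear pass over the mask, which is where much of the reported reduction in computational cost comes from.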

20 pages, 8441 KiB  
Article
A Fast Learning Method for Accurate and Robust Lane Detection Using Two-Stage Feature Extraction with YOLO v3
by Xiang Zhang, Wei Yang, Xiaolin Tang and Jie Liu
Sensors 2018, 18(12), 4308; https://doi.org/10.3390/s18124308 - 06 Dec 2018
Cited by 70 | Viewed by 11041
Abstract
To improve the accuracy of lane detection in complex scenarios, an adaptive lane feature learning algorithm which can automatically learn the features of a lane in various scenarios is proposed. First, a two-stage learning network based on the YOLO v3 (You Only Look Once, v3) is constructed. The structural parameters of the YOLO v3 algorithm are modified to make it more suitable for lane detection. To improve the training efficiency, a method for automatic generation of the lane label images in a simple scenario, which provides label data for the training of the first-stage network, is proposed. Then, an adaptive edge detection algorithm based on the Canny operator is used to relocate the lane detected by the first-stage model. Furthermore, the unrecognized lanes are shielded to avoid interference in subsequent model training. Then, the images processed by the above method are used as label data for the training of the second-stage model. The experiment was carried out on the KITTI and Caltech datasets, and the results showed that the accuracy and speed of the second-stage model reached a high level.
(This article belongs to the Special Issue Deep Learning Based Sensing Technologies for Autonomous Vehicles)
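The relocation step above refines first-stage detections with an adaptive edge detector. The sketch below shows one common way to adapt the Canny thresholds to the local intensity statistics of each detected region (a median-based rule); the rule, the sigma value, and all names are assumptions for illustration, not the paper's exact formulation.

```python
import cv2
import numpy as np

def relocate_lane(gray, box, sigma=0.33):
    """Refine a first-stage lane box using Canny edges with adaptive thresholds.

    gray: grayscale frame; box: (x, y, w, h) from the detection stage.
    Returns a tightened box in frame coordinates, or None if no edges remain
    (such regions would be shielded from second-stage training).
    """
    x, y, w, h = box
    roi = gray[y:y + h, x:x + w]
    med = float(np.median(roi))
    lo = int(max(0, (1.0 - sigma) * med))    # adaptive lower threshold
    hi = int(min(255, (1.0 + sigma) * med))  # adaptive upper threshold
    edges = cv2.Canny(roi, lo, hi)
    ys, xs = np.nonzero(edges)
    if xs.size == 0:
        return None
    return (x + int(xs.min()), y + int(ys.min()),
            int(xs.max() - xs.min()) + 1, int(ys.max() - ys.min()) + 1)

frame = cv2.imread("road.png", cv2.IMREAD_GRAYSCALE)  # any test frame
if frame is not None:
    print(relocate_lane(frame, (100, 200, 160, 80)))
```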

20 pages, 3013 KiB  
Article
Fully Bayesian Prediction Algorithms for Mobile Robotic Sensors under Uncertain Localization Using Gaussian Markov Random Fields
by Mahdi Jadaliha, Jinho Jeong, Yunfei Xu, Jongeun Choi and Junghoon Kim
Sensors 2018, 18(9), 2866; https://doi.org/10.3390/s18092866 - 30 Aug 2018
Cited by 2 | Viewed by 3149
Abstract
In this paper, we present algorithms for predicting a spatio-temporal random field measured by mobile robotic sensors under uncertainties in localization and measurements. The spatio-temporal field of interest is modeled by a sum of a time-varying mean function and a Gaussian Markov random field (GMRF) with unknown hyperparameters. We first derive the exact Bayesian solution to the problem of computing the predictive inference of the random field, taking into account observations, uncertain hyperparameters, measurement noise, and uncertain localization in a fully Bayesian point of view. We show that the exact solution for uncertain localization is not scalable as the number of observations increases. To cope with this exponentially increasing complexity and to be usable for mobile sensor networks with limited resources, we propose a scalable approximation with a controllable trade-off between approximation error and complexity to the exact solution. The effectiveness of the proposed algorithms is demonstrated by simulation and experimental results.
(This article belongs to the Special Issue Deep Learning Based Sensing Technologies for Autonomous Vehicles)
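As background for the GMRF machinery above, the sketch below computes the predictive (posterior) mean of a field with a sparse precision matrix under known hyperparameters and exact localization; the paper's contribution, omitted here, is integrating over uncertain hyperparameters and sensor positions in a fully Bayesian yet scalable way. The grid, precision structure, and noise level are illustrative assumptions.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Field model: z ~ N(0, Q^{-1}) with sparse GMRF precision Q;
# observations y = A z + e, with e ~ N(0, sigma2 * I).
n = 20                                  # 1-D grid for simplicity
kappa = 0.5                             # illustrative hyperparameter
Q = sp.diags([-1.0, 2.0 + kappa, -1.0], [-1, 0, 1], shape=(n, n)).tocsc()

rng = np.random.default_rng(0)
obs_idx = rng.choice(n, size=8, replace=False)      # sensor sampling locations
A = sp.csc_matrix((np.ones(8), (np.arange(8), obs_idx)), shape=(8, n))
sigma2 = 0.1                                        # measurement noise variance
y = rng.normal(size=8)                              # stand-in measurements

# Gaussian linear model posterior: Q_post = Q + A^T A / sigma2,
# mu_post = Q_post^{-1} A^T y / sigma2 (zero prior mean for brevity).
Q_post = (Q + (A.T @ A) / sigma2).tocsc()
mu_post = spsolve(Q_post, A.T @ y / sigma2)         # predictive mean of the field
print("posterior mean at observed sites:", mu_post[obs_idx])
```

The sparsity of Q_post is what keeps this solve cheap; the scalability problem the paper addresses arises once localization uncertainty forces marginalization over many candidate sampling positions.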
