Special Issue "Perception Sensors for Road Applications"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 1 October 2019.

Special Issue Editor

Guest Editor
Dr. Felipe Jimenez

University Institute for Automobile Research (INSIA), Technical University of Madrid, Campus Sur UPM, Carretera de Valencia km 7, 28031 Madrid, Spain
Phone: +34 913365317
Fax: +34 913365302
Interests: intelligent transport systems, advanced driver assistance systems, vehicle positioning, GNSS, inertial sensors, digital maps, vehicle dynamics, driver monitoring, vehicle perception, connected vehicles, cooperative services, autonomous vehicles

Special Issue Information

Dear Colleagues,

New driver assistance systems and autonomous driving applications place ever greater demands on the perception systems of road vehicles, which must make robust decisions while avoiding both false positives and false negatives.

Many technologies can be used to meet these demands, both on the vehicle and in the infrastructure. On the vehicle side, technologies such as LiDAR and computer vision are the basis for increasing levels of automation, although their deployment is also revealing problems that arise in real scenarios and that must be solved to keep improving the safety and efficiency of road traffic.

Given the limitations of each individual technology, it is common to resort to sensor fusion, combining sensors of the same type as well as sensors of different types.

Additionally, the data used for decision-making do not come from on-board sensors alone: wireless communication with the outside world extends the vehicle's electronic horizon. Likewise, precise positioning on detailed digital maps provides additional information that can be very useful for interpreting the environment.

Sensors also monitor the driver, in order to assess his or her ability to perform the driving task safely.

In all of these areas, it is crucial to study the limitations of each solution and sensor, and to establish tools that alleviate those limitations through improvements in hardware or software. To this end, the specifications required of the sensors must be defined, and specific methods must be developed to validate those specifications for both individual sensors and complete systems.

Finally, state-of-the-art studies on the evolution of perception sensors and their impact on the evolution of road transport are also welcome.

In conclusion, this Special Issue aims to bring together innovative developments in areas related to sensors in vehicles and in infrastructure, including, but not limited to:

  • Environment perception
  • LiDAR
  • Computer vision
  • Radar
  • Vehicle dynamics sensors
  • Driver surveillance
  • Infrastructure sensors
  • New assistance systems based on perception sensors
  • Sensor fusion techniques for autonomous systems
  • Interaction between autonomous systems and the driver
  • Decision algorithms for autonomous actions
  • Cooperation between autonomous vehicles and infrastructure
  • Sensor requirements
  • State-of-the-art reviews of perception sensors and technologies

Authors are invited to contact the guest editor prior to submission if they are uncertain whether their work falls within the general scope of this Special Issue.

Dr. Felipe Jimenez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Road vehicles
  • Sensors
  • Environment perception sensors
  • Vehicle surroundings surveillance
  • Vehicle dynamics sensors
  • Driver assistance systems
  • Positioning
  • LiDAR
  • Computer vision
  • Radar
  • Sensor fusion
  • Infrastructure sensors
  • Autonomous vehicles

Published Papers (10 papers)


Research


Open Access Article
Implementation of a Potential Field-Based Decision-Making Algorithm on Autonomous Vehicles for Driving in Complex Environments
Sensors 2019, 19(15), 3318; https://doi.org/10.3390/s19153318
Received: 17 May 2019 / Revised: 19 July 2019 / Accepted: 25 July 2019 / Published: 28 July 2019
Abstract
Autonomous driving is undergoing huge developments nowadays. It is expected that its implementation will bring many benefits. Autonomous cars must deal with tasks at different levels. Although some of them are currently solved, and perception systems provide quite an accurate and complete description of the environment, high-level decisions are hard to obtain in challenging scenarios. Moreover, they must comply with safety, reliability and predictability requirements, road user acceptance, and comfort specifications. This paper presents a path planning algorithm based on potential fields. Potential models are adjusted so that their behavior is appropriate to the environment and the dynamics of the vehicle and they can face almost any unexpected scenario. The response of the system considers the road characteristics (e.g., maximum speed, lane line curvature, etc.) and the presence of obstacles and other users. The algorithm has been tested on an automated vehicle equipped with a GPS receiver, an inertial measurement unit and a computer vision system in real environments with satisfactory results.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
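
The potential-field approach lends itself to a compact illustration. The sketch below is a minimal stand-in, not the authors' implementation: a quadratic attractive potential pulls toward the goal, Khatib-style repulsive potentials push away from obstacles, and the position descends the combined gradient. The gains K_ATT and K_REP and the influence radius RHO0 are hypothetical tuning parameters.

```python
import numpy as np

K_ATT, K_REP, RHO0 = 1.0, 100.0, 5.0  # hypothetical gains / influence radius

def attractive_grad(pos, goal):
    """Gradient of the quadratic attractive potential 0.5*k*||pos-goal||^2."""
    return K_ATT * (pos - goal)

def repulsive_grad(pos, obstacles):
    """Sum of gradients of the classic Khatib repulsive potential."""
    grad = np.zeros(2)
    for obs in obstacles:
        d = np.linalg.norm(pos - obs)
        if 1e-6 < d < RHO0:  # obstacle only acts within its influence radius
            grad += K_REP * (1.0 / RHO0 - 1.0 / d) * (pos - obs) / d**3
    return grad

def plan(start, goal, obstacles, step=0.05, iters=2000, tol=0.1):
    """Gradient descent on the total potential; returns the traversed path."""
    pos = np.asarray(start, float)
    path = [pos.copy()]
    for _ in range(iters):
        grad = attractive_grad(pos, goal) + repulsive_grad(pos, obstacles)
        pos = pos - step * grad
        path.append(pos.copy())
        if np.linalg.norm(pos - goal) < tol:
            break
    return np.array(path)

path = plan(start=(0, 0), goal=(20, 10), obstacles=[np.array([8.0, 4.0])])
```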

Open Access Article
Moving Object Detection Based on Optical Flow Estimation and a Gaussian Mixture Model for Advanced Driver Assistance Systems
Sensors 2019, 19(14), 3217; https://doi.org/10.3390/s19143217
Received: 18 June 2019 / Revised: 18 July 2019 / Accepted: 19 July 2019 / Published: 22 July 2019
Abstract
Most approaches for moving object detection (MOD) based on computer vision are limited to stationary camera environments. In advanced driver assistance systems (ADAS), however, ego-motion is added to image frames owing to the use of a moving camera. This results in mixed motion in the image frames and makes it difficult to classify target objects and background. In this paper, we propose an efficient MOD algorithm that can cope with moving camera environments. In addition, we present a hardware design and implementation results for the real-time processing of the proposed algorithm. The proposed moving object detector was designed using hardware description language (HDL) and its real-time performance was evaluated using an FPGA-based test system. Experimental results demonstrate that our design achieves better detection performance than existing MOD systems. The proposed moving object detector was implemented with 13.2K logic slices, 104 DSP48s, and 163 BRAMs and can support real-time processing of 30 fps at an operating frequency of 200 MHz.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
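
As a rough illustration of the two cues named in the abstract, the OpenCV sketch below combines dense Farneback optical flow (with a crude median-flow ego-motion compensation) and a Gaussian mixture background model, keeping only pixels where both cues agree. This is an assumed software pipeline, not the authors' HDL/FPGA design; the input file name and thresholds are placeholders.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("dashcam.mp4")   # hypothetical input sequence
mog2 = cv2.createBackgroundSubtractorMOG2(detectShadows=False)
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Dense Farneback optical flow between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)
    # Crude ego-motion compensation: the median flow approximates background
    # motion induced by the moving camera; large residuals suggest objects.
    flow_mask = ((np.abs(mag - np.median(mag)) > 2.0) * 255).astype(np.uint8)
    gmm_mask = mog2.apply(frame)        # Gaussian mixture foreground mask
    moving = cv2.bitwise_and(flow_mask, gmm_mask)  # agreement of both cues
    prev_gray = gray
```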

Open Access Article
Machine Learning Techniques for Undertaking Roundabouts in Autonomous Driving
Sensors 2019, 19(10), 2386; https://doi.org/10.3390/s19102386
Received: 14 March 2019 / Revised: 16 May 2019 / Accepted: 21 May 2019 / Published: 24 May 2019
Abstract
This article presents a machine learning-based technique to build a predictive model and generate rules of action to allow autonomous vehicles to perform roundabout maneuvers. The approach consists of building a predictive model of vehicle speeds and steering angles based on collected data related to driver–vehicle interactions and other aggregated data intrinsic to the traffic environment, such as roundabout geometry and the number of lanes obtained from Open-Street-Maps and offline video processing. The study systematically generates rules of action regarding the vehicle speed and steering angle required for autonomous vehicles to achieve complete roundabout maneuvers. Supervised learning algorithms like the support vector machine, linear regression, and deep learning are used to form the predictive models.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
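
A minimal scikit-learn sketch of the kind of supervised models the abstract mentions, fitting vehicle speed from context features. The features (roundabout radius, number of lanes, entry speed) and the synthetic labels are hypothetical stand-ins for the paper's driver–vehicle and OpenStreetMap-derived data.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR
from sklearn.linear_model import LinearRegression

# Hypothetical dataset: map/geometry features -> recorded driver speed.
rng = np.random.default_rng(0)
X = rng.uniform([10.0, 1.0, 20.0], [40.0, 3.0, 60.0], size=(500, 3))
# Columns: roundabout radius (m), number of lanes, entry speed (km/h).
y_speed = 0.6 * X[:, 0] + 2.0 * X[:, 1] + rng.normal(0.0, 1.0, 500)

Xtr, Xte, ytr, yte = train_test_split(X, y_speed, random_state=0)
for model in (SVR(kernel="rbf", C=10.0), LinearRegression()):
    model.fit(Xtr, ytr)   # learn a speed "rule of action" from the data
    print(type(model).__name__, "held-out R^2:", round(model.score(Xte, yte), 3))
```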

Open Access Article
Dynamic Multi-LiDAR Based Multiple Object Detection and Tracking
Sensors 2019, 19(6), 1474; https://doi.org/10.3390/s19061474
Received: 6 March 2019 / Revised: 23 March 2019 / Accepted: 23 March 2019 / Published: 26 March 2019
Cited by 2
Abstract
Environmental perception plays an essential role in autonomous driving tasks and demands robustness in cluttered dynamic environments such as complex urban scenarios. In this paper, a robust Multiple Object Detection and Tracking (MODT) algorithm for a non-stationary base is presented, using multiple 3D LiDARs for perception. The merged LiDAR data is treated with an efficient MODT framework, considering the limitations of the vehicle-embedded computing environment. The ground classification is obtained through a grid-based method while considering a non-planar ground. Furthermore, unlike prior works, a 3D grid-based clustering technique is developed to detect objects under elevated structures. The centroid measurements obtained from the object detection are tracked using an Interactive Multiple Model-Unscented Kalman Filter-Joint Probabilistic Data Association Filter (IMM-UKF-JPDAF). IMM captures different motion patterns, UKF handles the nonlinearities of motion models, and JPDAF associates the measurements in the presence of clutter. The proposed algorithm is implemented on two slightly dissimilar platforms, giving real-time performance on embedded computers. The MOT16 performance evaluation metrics and ground truths provided by the KITTI dataset are used for evaluation and comparison with the state-of-the-art. The experimentation on platforms and comparisons with state-of-the-art techniques suggest that the proposed framework is a feasible solution for MODT tasks.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
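
One component that can be sketched compactly is the 3D grid-based clustering step: flood-filling occupied voxels with 26-connectivity groups non-ground points into objects, which also separates objects from elevated structures above them. This is an illustrative stand-in, not the paper's implementation, and the voxel size is an assumed parameter; the IMM-UKF-JPDAF tracker is too involved for a short example.

```python
import numpy as np
from collections import deque

def voxel_cluster(points, voxel=0.3):
    """Cluster 3D points by flood-filling occupied voxels (26-connectivity).

    points: (N, 3) array of non-ground LiDAR returns.
    Returns an (N,) array of integer cluster labels.
    """
    keys = np.floor(points / voxel).astype(np.int64)
    occupied = {}                       # voxel key -> point indices
    for i, k in enumerate(map(tuple, keys)):
        occupied.setdefault(k, []).append(i)

    labels = np.full(len(points), -1, dtype=int)
    neighbors = [(dx, dy, dz) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                 for dz in (-1, 0, 1) if (dx, dy, dz) != (0, 0, 0)]
    cluster = 0
    for seed in occupied:
        if labels[occupied[seed][0]] != -1:
            continue                    # voxel already claimed by a cluster
        queue = deque([seed])
        while queue:                    # BFS over adjacent occupied voxels
            v = queue.popleft()
            idx = occupied.get(v)
            if idx is None or labels[idx[0]] != -1:
                continue
            labels[idx] = cluster
            for d in neighbors:
                queue.append((v[0] + d[0], v[1] + d[1], v[2] + d[2]))
        cluster += 1
    return labels
```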

Open Access Article
Robust and Real-Time Detection and Tracking of Moving Objects with Minimum 2D LiDAR Information to Advance Autonomous Cargo Handling in Ports
Sensors 2019, 19(1), 107; https://doi.org/10.3390/s19010107
Received: 13 December 2018 / Revised: 23 December 2018 / Accepted: 25 December 2018 / Published: 29 December 2018
Cited by 3
Abstract
Detecting and tracking moving objects (DATMO) is an essential component for autonomous driving and transportation. In this paper, we present a computationally low-cost and robust DATMO system which uses as input only 2D laser rangefinder (LRF) information. Due to its low requirements both in sensor needs and computation, our DATMO algorithm is meant to be used in current Autonomous Guided Vehicles (AGVs) to improve their reliability for the cargo transportation tasks at port terminals, advancing towards the next generation of fully autonomous transportation vehicles. Our method follows a Detection plus Tracking paradigm. In the detection step we exploit the minimum information of 2D-LRFs by segmenting the elements of the scene in a model-free way and performing a fast object matching to pair segmented elements from two different scans. In this way, we easily recognize dynamic objects and thus consistently reduce the computational burden of the subsequent tracking method by a factor of two to five. We track the final dynamic objects with an improved Multiple-Hypothesis Tracking (MHT), to which special functions for filtering, confirming, holding, and deleting targets have been added. The full system is evaluated in simulated and real scenarios, producing solid results. Specifically, a simulated port environment has been developed to gather realistic data of common autonomous transportation situations such as observing an intersection, joining vehicle platoons, and perceiving overtaking maneuvers. We use different sensor configurations to demonstrate the robustness and adaptability of our approach. We additionally evaluate our system with real data collected in a port terminal in the Netherlands. We show that it is able to accomplish the vehicle following task successfully, obtaining a total system recall of more than 98% while running faster than 30 Hz.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
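
The model-free segmentation and matching idea from the detection step can be sketched as follows: a 2D scan is split at range discontinuities, and a segment is flagged as dynamic when its centroid has moved between consecutive scans. The jump and motion thresholds are assumptions, and the real system's object matching and MHT tracking are considerably more elaborate.

```python
import numpy as np

def segment_scan(ranges, angles, jump=0.5):
    """Split a 2D laser scan into segments at range discontinuities."""
    xy = np.c_[ranges * np.cos(angles), ranges * np.sin(angles)]
    breaks = np.where(np.abs(np.diff(ranges)) > jump)[0] + 1
    return [seg for seg in np.split(xy, breaks) if len(seg) >= 3]

def dynamic_segments(prev_segs, curr_segs, moved=0.3, same=2.0):
    """Flag current segments whose nearest previous centroid moved noticeably."""
    if not prev_segs:
        return []
    prev_c = np.array([s.mean(axis=0) for s in prev_segs])
    dynamic = []
    for seg in curr_segs:
        d = np.linalg.norm(prev_c - seg.mean(axis=0), axis=1).min()
        if moved < d < same:   # displaced, yet plausibly the same object
            dynamic.append(seg)
    return dynamic
```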

Open Access Article
Robust Lane-Detection Method for Low-Speed Environments
Sensors 2018, 18(12), 4274; https://doi.org/10.3390/s18124274
Received: 7 November 2018 / Revised: 28 November 2018 / Accepted: 2 December 2018 / Published: 4 December 2018
Cited by 1
Abstract
Vision-based lane-detection methods provide low-cost density information about roads for autonomous vehicles. In this paper, we propose a robust and efficient method to expand the application of these methods to cover low-speed environments. First, the reliable region near the vehicle is initialized and a series of rectangular detection regions are dynamically constructed along the road. Then, an improved symmetrical local threshold edge extraction is introduced to extract the edge points of the lane markings based on accurate marking width limitations. In order to meet real-time requirements, a novel Bresenham line voting space is proposed to improve the process of line segment detection. Combined with straight lines, polylines, and curves, the proposed geometric fitting method has the ability to adapt to various road shapes. Finally, different status vectors and Kalman filter transfer matrices are used to track the key points of the linear and nonlinear parts of the lane. The proposed method was tested on a public database and our autonomous platform. The experimental results show that the method is robust and efficient and can meet the real-time requirements of autonomous vehicles.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
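
The symmetrical local threshold for marking edges can be illustrated in a few lines of numpy: a pixel is kept as a lane-marking candidate only if it is brighter, by some margin, than the pixels half a marking-width away on both sides in its row. The marking width and brightness margin below are assumed values.

```python
import numpy as np

def symmetrical_local_threshold(gray, marking_width=12, margin=20):
    """Keep pixels brighter than both row neighbors +-marking_width/2 away.

    gray: 2D uint8 image (e.g., a cropped road region).
    Returns a binary mask of candidate lane-marking pixels.
    """
    img = gray.astype(np.int16)          # avoid uint8 wrap-around in subtraction
    half = marking_width // 2
    left = np.roll(img, half, axis=1)    # pixel half a marking-width to the left
    right = np.roll(img, -half, axis=1)  # pixel half a marking-width to the right
    mask = (img - left > margin) & (img - right > margin)
    mask[:, :half] = False               # discard wrap-around columns
    mask[:, -half:] = False
    return mask.astype(np.uint8) * 255
```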

Open Access Article
Multi-Target Detection Method Based on Variable Carrier Frequency Chirp Sequence
Sensors 2018, 18(10), 3386; https://doi.org/10.3390/s18103386
Received: 1 September 2018 / Revised: 26 September 2018 / Accepted: 5 October 2018 / Published: 10 October 2018
Cited by 2
Abstract
Continuous waveform (CW) radar is widely used in intelligent transportation systems, vehicle assisted driving, and other fields because of its simple structure, low cost and high integration. Several waveforms have been developed in recent years. The chirp sequence waveform has the ability to extract the range and velocity parameters of multiple targets. However, conventional chirp sequence waveforms suffer from the Doppler ambiguity problem. This paper proposes a new waveform that meets the requirements of practical applications: high precision and low system complexity. The new waveform consists of two chirp sequences that are interleaved with each other. Each chirp signal has the same frequency modulation, the same bandwidth and the same chirp duration. The carrier frequencies are different, with a frequency shift large enough to ensure that the Doppler frequencies for the same moving target are different. According to the sign and numerical relationship of the Doppler frequencies (possibly with frequency aliasing), the Doppler frequency ambiguity problem is solved in eight cases. Theoretical analysis and simulation results verify that the new radar waveform is capable of measuring range and radial velocity simultaneously and unambiguously, with high accuracy and resolution even in multi-target situations.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
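
For context, the sketch below simulates standard chirp-sequence processing on a single-target beat signal: an FFT across fast time yields range and an FFT across chirps yields Doppler. It illustrates the baseline processing chain that the proposed dual-carrier waveform builds on, not the paper's eight-case ambiguity resolution; all radar parameters are assumed.

```python
import numpy as np

c = 3e8
f0, B, Tc = 77e9, 150e6, 50e-6      # carrier, sweep bandwidth, chirp duration
Ns, Nc = 256, 128                   # samples per chirp, chirps per sequence
fs = Ns / Tc
R, v = 50.0, 10.0                   # simulated target range (m), velocity (m/s)

t = np.arange(Ns) / fs
f_beat = 2 * B * R / (c * Tc)       # range-dependent beat frequency
f_dopp = 2 * v * f0 / c             # Doppler frequency
# Beat signal: one row per chirp; Doppler advances the phase chirp-to-chirp.
n = np.arange(Nc)[:, None]
sig = np.exp(1j * 2 * np.pi * (f_beat * t[None, :] + f_dopp * n * Tc))

# Range FFT over fast time, then Doppler FFT over slow time.
rd = np.fft.fftshift(np.fft.fft(np.fft.fft(sig, axis=1), axis=0), axes=0)
d_idx, r_idx = np.unravel_index(np.argmax(np.abs(rd)), rd.shape)

rng_est = r_idx * fs / Ns * c * Tc / (2 * B)
vel_est = (d_idx - Nc // 2) / (Nc * Tc) * c / (2 * f0)
print(f"range ~ {rng_est:.1f} m, velocity ~ {vel_est:.1f} m/s")
```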

Open Access Article
Extended Line Map-Based Precise Vehicle Localization Using 3D LIDAR
Sensors 2018, 18(10), 3179; https://doi.org/10.3390/s18103179
Received: 13 August 2018 / Revised: 18 September 2018 / Accepted: 19 September 2018 / Published: 20 September 2018
Cited by 3
Abstract
An Extended Line Map (ELM)-based precise vehicle localization method is proposed in this paper, and is implemented using 3D Light Detection and Ranging (LIDAR). A binary occupancy grid map in which grids for road marking or vertical structures have a value of 1 and the rest have a value of 0 was created using the reflectivity and distance data of the 3D LIDAR. From the map, lines were detected using a Hough transform. After the detected lines were converted into the node and link forms, they were stored as a map. This map is called an extended line map, whose data size is extremely small (134 KB/km). The ELM-based localization is performed through correlation matching. The ELM is converted back into an occupancy grid map and matched to the map generated using the current 3D LIDAR. In this instance, a Fast Fourier Transform (FFT) was applied as the correlation matching method, and the matching time was approximately 78 ms (based on MATLAB). The experiment was carried out in the Gangnam area of Seoul, South Korea. The traveling distance was approximately 4.2 km, and the maximum traveling speed was approximately 80 km/h. As a result of localization, the root mean square (RMS) position errors for the lateral and longitudinal directions were 0.136 m and 0.223 m, respectively.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
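
The correlation-matching step can be illustrated with FFT-based circular cross-correlation between two equal-size occupancy grids, recovering the 2D offset that best aligns the current LIDAR grid with the map. The grid size and toy contents below are hypothetical.

```python
import numpy as np

def fft_correlate(map_grid, scan_grid):
    """Circular cross-correlation of two equal-size binary occupancy grids.

    Returns the (dy, dx) shift of scan_grid that best aligns it to map_grid.
    """
    F_map = np.fft.fft2(map_grid)
    F_scan = np.fft.fft2(scan_grid)
    corr = np.fft.ifft2(F_map * np.conj(F_scan)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map shifts in the upper half of the spectrum back to negative offsets.
    h, w = corr.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return dy, dx

# Toy check: shift a sparse random grid by (3, -5) and recover the offset.
rng = np.random.default_rng(1)
grid = (rng.random((256, 256)) > 0.98).astype(float)
shifted = np.roll(grid, (3, -5), axis=(0, 1))
print(fft_correlate(shifted, grid))   # -> (3, -5)
```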

Open Access Article
Multi-Timescale Drowsiness Characterization Based on a Video of a Driver’s Face
Sensors 2018, 18(9), 2801; https://doi.org/10.3390/s18092801
Received: 14 August 2018 / Revised: 22 August 2018 / Accepted: 23 August 2018 / Published: 25 August 2018
Cited by 2
Abstract
Drowsiness is a major cause of fatal accidents, in particular in transportation. It is therefore crucial to develop automatic, real-time drowsiness characterization systems designed to issue accurate and timely warnings of drowsiness to the driver. In practice, the least intrusive, physiology-based approach is to remotely monitor, via cameras, facial expressions indicative of drowsiness such as slow and long eye closures. Since the system’s decisions are based upon facial expressions in a given time window, there exists a trade-off between accuracy (best achieved with long windows, i.e., at long timescales) and responsiveness (best achieved with short windows, i.e., at short timescales). To deal with this trade-off, we develop a multi-timescale drowsiness characterization system composed of four binary drowsiness classifiers operating at four distinct timescales (5 s, 15 s, 30 s, and 60 s) and trained jointly. We introduce a multi-timescale ground truth of drowsiness, based on the reaction times (RTs) performed during standard Psychomotor Vigilance Tasks (PVTs), that strategically enables our system to characterize drowsiness with diverse trade-offs between accuracy and responsiveness. We evaluated our system on 29 subjects via leave-one-subject-out cross-validation and obtained strong results, i.e., global accuracies of 70%, 85%, 89%, and 94% for the four classifiers operating at increasing timescales, respectively.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
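
The multi-timescale idea can be sketched as one binary classifier per window length over the same per-frame eye-closure signal, summarized with PERCLOS-style features. The 30 fps rate, the feature pair, the synthetic data, and the time-based labels below are assumptions, not the paper's PVT/RT-based ground truth.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FPS = 30                          # assumed camera frame rate
TIMESCALES = (5, 15, 30, 60)      # window lengths in seconds

def window_features(closure, window_s):
    """PERCLOS-style features over non-overlapping windows.

    closure: per-frame binary eye state (1 = closed), shape (T,).
    Returns (num_windows, 2): fraction of frames closed, longest closure (s).
    """
    w = window_s * FPS
    feats = []
    for start in range(0, len(closure) - w + 1, w):
        seg = closure[start:start + w]
        edges = np.where(np.diff(np.r_[0, seg, 0]) != 0)[0]
        runs = np.diff(edges)[::2]             # lengths of closure runs
        feats.append((seg.mean(), runs.max() / FPS if len(runs) else 0.0))
    return np.array(feats)

# Toy signal: closures become more frequent in the "drowsy" second half.
rng = np.random.default_rng(2)
T = 600 * FPS
drowsy_from = T // 2
p_closed = np.where(np.arange(T) < drowsy_from, 0.05, 0.3)
closure = (rng.random(T) < p_closed).astype(int)

classifiers = {}
for ts in TIMESCALES:             # one binary classifier per timescale
    X = window_features(closure, ts)
    starts = np.arange(len(X)) * ts * FPS
    y = (starts >= drowsy_from).astype(int)    # hypothetical labels
    classifiers[ts] = LogisticRegression().fit(X, y)
```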

Review


Open Access Review
A Systematic Review of Perception System and Simulators for Autonomous Vehicles Research
Sensors 2019, 19(3), 648; https://doi.org/10.3390/s19030648
Received: 30 November 2018 / Revised: 29 January 2019 / Accepted: 31 January 2019 / Published: 5 February 2019
Cited by 1
Abstract
This paper presents a systematic review of the perception systems and simulators for autonomous vehicles (AV). The work is divided into three parts. In the first part, perception systems are categorized as environment perception systems and positioning estimation systems. The paper presents the physical fundamentals, operating principles, and electromagnetic spectrum of the most common sensors used in perception systems (ultrasonic, RADAR, LiDAR, cameras, IMU, GNSS, RTK, etc.). Furthermore, their strengths and weaknesses are shown, and their features are quantified using spider charts, allowing proper selection among sensors according to 11 features. In the second part, the main elements to be taken into account in the simulation of a perception system of an AV are presented. For this purpose, the paper describes simulators for model-based development, the main game engines that can be used for simulation, simulators from the robotics field, and lastly simulators used specifically for AV. Finally, the current state of regulations being applied in different countries around the world on issues concerning the implementation of autonomous vehicles is presented.
(This article belongs to the Special Issue Perception Sensors for Road Applications)
