Special Issue "Sensors for Autonomous Road Vehicles"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Physical Sensors".

Deadline for manuscript submissions: closed (31 October 2016)

Special Issue Editor

Guest Editor
Dr. Felipe Jimenez

University Institute for Automobile Research (INSIA), Technical University of Madrid, Campus Sur UPM, Carretera de Valencia km 7, 28031 Madrid, Spain
Phone: +34 913365317
Fax: +34 913365302
Interests: intelligent transport systems, advanced driver assistance systems, vehicle positioning, GNSS, inertial sensors, digital maps, vehicle dynamics, driver monitoring, vehicle perception, connected vehicles, cooperative services, autonomous vehicles

Special Issue Information

Dear Colleagues,

The relevance of autonomous road vehicles has increased in recent years, and numerous research groups and companies are active in the field. Vehicle and vehicle-component manufacturers are working on new developments that enhance the performance of autonomous systems in order to improve the safety, comfort, and efficiency of road transport.

When discussing autonomous vehicles, hardware and software developments can be distinguished as follows.

In the first group, different solutions can be implemented to automate the pedals, the steering wheel, and the shifter/gearbox, depending on the final requirements and the vehicle specifications. Furthermore, surveillance of the vehicle's surroundings is a key factor for successful and safe automated implementations. In this regard, as autonomous systems increase in complexity, it is crucial to have a clear, complete, and accurate view of the obstacles around the vehicle in order to define free areas in which the vehicle can move safely. This representation of the surroundings involves advanced perception systems and sensor fusion algorithms. In other cases, guidance is carried out using satellite positioning, line following, etc., and these sensors must meet strict specifications in order to guarantee stable vehicle behavior.

In the second group, based on the information retrieved by the sensors, the control unit must decide which action on the actuators should be performed. The earliest systems involved only simple actions, such as speed maintenance, but novel systems also include evasive maneuvers or complete driving automation, and the specifications for such cases have increased significantly.

It is also necessary to take into account the role played by the driver, mainly when only partial automation is considered or when control switches between manual and automatic during a trip.

Similarly, due to the rapid developments of these systems, field operational tests are being performed all over the world. In these tests, problems and lessons learned can provide useful information for future implementations.

Finally, studies of the state of the art relating to autonomous road vehicles and the sensors and actuators they use are also welcome.

In conclusion, the aim of this Special Issue is to bring together innovative developments in areas related to sensors and actuators applied to autonomous road vehicles, including, but not limited to:

  • Autonomous vehicles
  • Automation of speed
  • Automation of steering
  • Full automation
  • Partial automation
  • Vehicle surroundings surveillance
  • Sensor fusion techniques for autonomous systems
  • Interaction of autonomous systems and driver
  • Decision algorithms for autonomous actions
  • Cooperation between autonomous vehicles and infrastructure
  • New assistance systems based on automation
  • Sensor requirements for autonomous road vehicles
  • State-of-the-art review of sensors for autonomous road vehicles
  • Field operational tests of autonomous vehicles
  • Special applications of vehicle automation

Authors are invited to contact the guest editor prior to submission if they are uncertain whether their work falls within the general scope of this Special Issue.

Dr. Felipe Jimenez
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 1800 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Autonomous vehicles
  • Road vehicles
  • Sensors
  • Actuators
  • Vehicle surroundings surveillance
  • Driver assistance systems
  • Sensor fusion

Published Papers (25 papers)


Research


Open Access Article
A Review of the Bayesian Occupancy Filter
Sensors 2017, 17(2), 344; https://doi.org/10.3390/s17020344
Received: 29 October 2016 / Revised: 25 January 2017 / Accepted: 3 February 2017 / Published: 10 February 2017
Cited by 4 | PDF Full-text (854 KB) | HTML Full-text | XML Full-text
Abstract
Autonomous vehicle systems are currently the object of intense research within scientific and industrial communities; however, many problems remain to be solved. One of the most critical aspects addressed in both autonomous driving and robotics is environment perception, since it consists of the ability to understand the surroundings of the vehicle to estimate risks and make decisions on future movements. In recent years, the Bayesian Occupancy Filter (BOF) method has been developed to evaluate occupancy by tessellation of the environment. A review of the BOF and its variants is presented in this paper. Moreover, we propose a detailed taxonomy where the BOF is decomposed into five progressive layers, from the level closest to the sensor to the highest abstract level of risk assessment. In addition, we present a study of implemented use cases to provide a practical understanding of the main uses of the BOF and its taxonomy. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)
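The occupancy evaluation underlying the BOF family of methods can be illustrated with the standard log-odds Bayesian update of a single grid cell. This is a minimal sketch, not the full filter from the paper; the sensor-model probabilities `p_hit` and `p_miss` are arbitrary illustrative values:

```python
import math

def logodds(p):
    # Convert a probability to log-odds for numerically stable updates.
    return math.log(p / (1.0 - p))

def update_cell(l_prior, measurement_hit, p_hit=0.7, p_miss=0.4):
    # Bayesian update of one grid cell in log-odds form:
    # posterior log-odds = prior log-odds + log-odds of the measurement model.
    l_meas = logodds(p_hit) if measurement_hit else logodds(p_miss)
    return l_prior + l_meas

def prob(l):
    # Convert log-odds back to an occupancy probability.
    return 1.0 - 1.0 / (1.0 + math.exp(l))

# Starting from an uninformed cell (p = 0.5, log-odds 0),
# repeated "hit" measurements drive the occupancy estimate up.
l = 0.0
for _ in range(3):
    l = update_cell(l, measurement_hit=True)
```

The BOF extends this per-cell update with cell-to-cell velocity distributions over the tessellated grid, which is what the paper's layered taxonomy decomposes.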

Open Access Article
Fisheye-Based Method for GPS Localization Improvement in Unknown Semi-Obstructed Areas
Sensors 2017, 17(1), 119; https://doi.org/10.3390/s17010119
Received: 30 October 2016 / Revised: 27 December 2016 / Accepted: 4 January 2017 / Published: 17 January 2017
Cited by 11 | PDF Full-text (11586 KB) | HTML Full-text | XML Full-text
Abstract
A precise GNSS (Global Navigation Satellite System) localization is vital for autonomous road vehicles, especially in cluttered or urban environments where satellites are occluded, preventing accurate positioning. We propose to fuse GPS (Global Positioning System) data with fisheye stereovision to address this problem independently of additional data, which may be outdated, unavailable, or in need of correlation with reality. Our stereoscope is sky-facing, with 360° × 180° fisheye cameras to observe surrounding obstacles. We propose 3D modelling and plane extraction through the following steps: stereoscope self-calibration for robustness to decalibration, stereo matching that considers neighbouring epipolar curves to compute 3D points, and robust plane fitting based on the generated cartography and the Hough transform. We use these 3D data together with GPS raw data to estimate the pseudorange delay of NLOS (Non-Line-Of-Sight) reflected signals. We exploit the extracted planes to build a visibility mask for NLOS detection. A simplified 3D canyon model allows the computation of reflection pseudorange delays. Finally, the GPS position is computed using the corrected pseudoranges. In experiments on real fixed scenes, we show that the generated 3D models reach metric accuracy and that horizontal GPS positioning accuracy improves by more than 50%. The proposed procedure is effective, and the proposed NLOS detection outperforms CN0-based (Carrier-to-receiver Noise density) methods. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)
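The visibility-mask idea in the abstract can be sketched as an elevation mask per azimuth direction: a satellite is flagged NLOS when its elevation falls below the highest obstruction seen in that direction. The 10° azimuth binning and the mask values below are hypothetical, not the paper's actual data structure:

```python
def is_nlos(sat_azimuth_deg, sat_elevation_deg, mask):
    # mask: dict mapping 10-degree azimuth bins to the elevation (deg) of the
    # highest obstruction in that direction (e.g. derived from extracted planes).
    # A satellite below the obstruction elevation is considered NLOS.
    bin_ = int(sat_azimuth_deg // 10) * 10
    return sat_elevation_deg < mask.get(bin_, 0.0)

# Hypothetical mask: a tall facade towards azimuths 40-60 degrees.
mask = {40: 55.0, 50: 60.0}
blocked = is_nlos(45.0, 30.0, mask)   # below the facade: flagged NLOS
visible = is_nlos(180.0, 30.0, mask)  # open sky: line of sight
```

In the paper, the flagged satellites' pseudoranges are then corrected using the canyon reflection model rather than simply discarded.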

Open Access Article
A Machine Learning Approach to Pedestrian Detection for Autonomous Vehicles Using High-Definition 3D Range Data
Sensors 2017, 17(1), 18; https://doi.org/10.3390/s17010018
Received: 31 October 2016 / Revised: 11 December 2016 / Accepted: 15 December 2016 / Published: 23 December 2016
Cited by 12 | PDF Full-text (12514 KB) | HTML Full-text | XML Full-text
Abstract
This article describes an automated sensor-based system to detect pedestrians in an autonomous vehicle application. Although the vehicle is equipped with a broad set of sensors, the article focuses on the processing of the information generated by a Velodyne HDL-64E LIDAR sensor. The cloud of points generated by the sensor (more than 1 million points per revolution) is processed to detect pedestrians, by selecting cubic shapes and applying machine vision and machine learning algorithms to the XY, XZ, and YZ projections of the points contained in the cube. The work presents an exhaustive analysis of the performance of three different machine learning algorithms: k-Nearest Neighbours (kNN), the Naïve Bayes classifier (NBC), and the Support Vector Machine (SVM). These algorithms were trained with 1931 samples. The final performance of the method, measured in a real traffic scene containing 16 pedestrians and 469 non-pedestrian samples, shows a sensitivity of 81.2%, an accuracy of 96.2%, and a specificity of 96.8%. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)
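As one illustration of the classifiers compared in the paper, a minimal k-Nearest Neighbours majority vote might look as follows. The features (height and width of a bounding cube) and the training samples are invented for the example, not taken from the paper's dataset:

```python
from collections import Counter

def knn_classify(train, query, k=3):
    # train: list of (feature_vector, label) pairs; query: a feature vector.
    # Squared Euclidean distance, majority vote among the k nearest samples.
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b))
    nearest = sorted(train, key=lambda s: dist(s[0], query))[:k]
    votes = Counter(label for _, label in nearest)
    return votes.most_common(1)[0][0]

# Toy samples: (height m, width m) of candidate objects, purely illustrative.
train = [((1.7, 0.5), "pedestrian"), ((1.6, 0.6), "pedestrian"),
         ((0.8, 2.1), "other"), ((1.0, 2.5), "other"), ((1.8, 0.4), "pedestrian")]
label = knn_classify(train, (1.65, 0.55))
```

The paper's actual features are image descriptors computed on the three planar projections of each cube, but the voting mechanism is the same.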

Open Access Article
Recognition of Damaged Arrow-Road Markings by Visible Light Camera Sensor Based on Convolutional Neural Network
Sensors 2016, 16(12), 2160; https://doi.org/10.3390/s16122160
Received: 31 October 2016 / Revised: 7 December 2016 / Accepted: 14 December 2016 / Published: 16 December 2016
Cited by 8 | PDF Full-text (12795 KB) | HTML Full-text | XML Full-text
Abstract
Driver information displayed on road markings indicates the state of the road, traffic conditions, proximity to schools, etc. These markings are important to ensure the safety of the driver and pedestrians. They are also important inputs to the advanced driver assistance systems (ADAS) installed in many automobiles. Over time, arrow-road markings may be eroded or otherwise damaged by vehicle contact, making it difficult for the driver to identify the marking correctly. Failure to identify an arrow-road marking properly creates a dangerous situation that may result in traffic accidents or pedestrian injury. Very little research exists on the problem of automated identification of damaged arrow-road markings painted on the road. In this study, we propose a method that uses a convolutional neural network (CNN) to recognize six types of arrow-road markings, possibly damaged, by a visible light camera sensor. Experimental results with six databases (the Road Marking dataset, KITTI dataset, Málaga dataset 2009, Málaga urban dataset, Naver street view dataset, and Road/Lane Detection Evaluation 2013 dataset) show that our method outperforms conventional methods. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Localization Based on Magnetic Markers for an All-Wheel Steering Vehicle
Sensors 2016, 16(12), 2015; https://doi.org/10.3390/s16122015
Received: 11 October 2016 / Revised: 23 November 2016 / Accepted: 24 November 2016 / Published: 29 November 2016
Cited by 4 | PDF Full-text (5989 KB) | HTML Full-text | XML Full-text
Abstract
Real-time continuous localization is a key technology in the development of intelligent transportation systems. In these systems, it is very important to have accurate information about the position and heading angle of the vehicle at all times. The most widely implemented methods for positioning are the global positioning system (GPS), vision-based systems, and magnetic marker systems. Among these, the magnetic marker system is less vulnerable to indoor and outdoor environmental conditions and requires minimal maintenance expense. In this paper, we present a position estimation scheme based on magnetic markers and odometry sensors for an all-wheel-steering vehicle. The heading angle of the vehicle is determined from the position coordinates of the last two detected magnetic markers and the odometer data. The instantaneous position and heading angle of the vehicle are integrated with an extended Kalman filter to estimate the continuous position. GPS data in real-time kinematic mode were obtained to evaluate the performance of the proposed position estimation system. The test results show that the proposed localization algorithm is accurate (mean error: 3 cm; max error: 9 cm) and reliable even with unexpectedly missing or incorrect markers. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)
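The heading computation described in the abstract, taken from the last two detected marker positions, reduces to an `atan2` over the marker coordinates. The sketch below assumes a planar east-north frame and omits the odometry and EKF fusion steps that the paper adds on top:

```python
import math

def heading_from_markers(prev, curr):
    # Heading angle (rad, east = 0, counter-clockwise positive) of the
    # vehicle path through the last two detected marker positions (x, y).
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    return math.atan2(dy, dx)

# Two consecutive markers on a diagonal: heading of 45 degrees.
h = heading_from_markers((0.0, 0.0), (1.0, 1.0))
```

In the paper this instantaneous heading, together with the marker-derived position, forms the measurement that the extended Kalman filter fuses with odometry between marker detections.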

Open Access Article
Real-Time Lane Region Detection Using a Combination of Geometrical and Image Features
Sensors 2016, 16(11), 1935; https://doi.org/10.3390/s16111935
Received: 28 August 2016 / Revised: 11 November 2016 / Accepted: 11 November 2016 / Published: 17 November 2016
Cited by 8 | PDF Full-text (3865 KB) | HTML Full-text | XML Full-text
Abstract
Over the past few decades, pavement markings have played a key role in intelligent vehicle applications such as guidance, navigation, and control. However, serious issues still face the problem of lane marking detection, for example, excessive processing time and false detection due to similarities in color and edges between lane markings and other road markings (channeling lines, stop lines, crosswalks, arrows, etc.). This paper proposes a strategy to extract lane marking information taking into consideration features such as color, edge, and width, as well as the vehicle speed. Firstly, defining the region of interest is a critical task to achieve real-time performance; here, the region of interest depends on the vehicle speed. Secondly, the lane markings are detected by using a hybrid color-edge feature method along with a probabilistic method based on distance-color dependence and a hierarchical fitting model. Thirdly, the following lane marking information is extracted: the number of lane markings on both sides of the vehicle, the respective fitting model, and the centroid information of the lane. Using these parameters, the lane region is computed with a geometric road model. To validate the performance of the proposed method, it was evaluated on a set of consecutive frames. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
A Robust Method for Ego-Motion Estimation in Urban Environment Using Stereo Camera
Sensors 2016, 16(10), 1704; https://doi.org/10.3390/s16101704
Received: 12 July 2016 / Revised: 8 September 2016 / Accepted: 30 September 2016 / Published: 17 October 2016
Cited by 3 | PDF Full-text (6018 KB) | HTML Full-text | XML Full-text
Abstract
Visual odometry estimates the ego-motion of an agent (e.g., vehicle or robot) using image information and is a key component for autonomous vehicles and robotics. This paper proposes a robust and precise method for estimating 6-DoF ego-motion using a stereo rig with optical flow analysis. An objective function fitted to a set of feature points is created by establishing the mathematical relationship between optical flow, depth, and the camera ego-motion parameters through the camera's 3-dimensional motion and planar imaging model. Accordingly, the six motion parameters are computed by minimizing the objective function using the iterative Levenberg–Marquardt method. A key point for visual odometry is that the feature points selected for the computation should contain as many inliers as possible. In this work, the feature points and their optical flows are initially detected using the Kanade–Lucas–Tomasi (KLT) algorithm. Circle matching is then applied to remove the outliers caused by mismatches of the KLT algorithm. A spatial position constraint is imposed to filter out moving points from the point set detected by the KLT algorithm. The Random Sample Consensus (RANSAC) algorithm is employed to further refine the feature point set, i.e., to eliminate the effects of outliers. The remaining points are tracked to estimate the ego-motion parameters in the subsequent frames. The approach presented here is tested on real traffic videos, and the results prove the robustness and precision of the method. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)
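The RANSAC refinement step mentioned in the abstract can be illustrated on a simpler stand-in model: a pure 2D translation between matched feature points rather than the full 6-DoF motion. The point sets and tolerances below are invented for the example:

```python
import random

def ransac_translation(src, dst, iters=200, tol=0.1, seed=0):
    # Estimate a 2D translation mapping src[i] -> dst[i] while rejecting
    # mismatched pairs: sample a minimal model (one pair), count inliers,
    # keep the consensus set with the most support, then refit on it.
    rng = random.Random(seed)
    best_inliers = []
    n = len(src)
    for _ in range(iters):
        i = rng.randrange(n)  # minimal sample: a single correspondence
        tx, ty = dst[i][0] - src[i][0], dst[i][1] - src[i][1]
        inliers = [j for j in range(n)
                   if abs(dst[j][0] - src[j][0] - tx) < tol
                   and abs(dst[j][1] - src[j][1] - ty) < tol]
        if len(inliers) > len(best_inliers):
            best_inliers = inliers
    # Least-squares refit over the inliers (the mean, for a translation).
    tx = sum(dst[j][0] - src[j][0] for j in best_inliers) / len(best_inliers)
    ty = sum(dst[j][1] - src[j][1] for j in best_inliers) / len(best_inliers)
    return (tx, ty), best_inliers

src = [(0, 0), (1, 0), (0, 1), (2, 2)]
dst = [(1, 0.5), (2, 0.5), (1, 1.5), (9, 9)]  # last pair is a mismatch
(tx, ty), inliers = ransac_translation(src, dst)
```

The paper applies the same sample-score-refit loop to its 6-parameter motion model after circle matching and the spatial position constraint have removed the grossest outliers.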

Open Access Article
Performance Enhancement of Land Vehicle Positioning Using Multiple GPS Receivers in an Urban Area
Sensors 2016, 16(10), 1688; https://doi.org/10.3390/s16101688
Received: 30 May 2016 / Revised: 25 September 2016 / Accepted: 8 October 2016 / Published: 14 October 2016
Cited by 5 | PDF Full-text (6594 KB) | HTML Full-text | XML Full-text
Abstract
The Global Positioning System (GPS) is the most widely used navigation system in land vehicle applications. In urban areas, GPS suffers from insufficient signal strength, multipath propagation, and non-line-of-sight (NLOS) errors, so it becomes difficult to obtain accurate and reliable position information. In this paper, an integration algorithm for multiple receivers is proposed to enhance the positioning performance of GPS for land vehicles in urban areas. The pseudoranges of multiple receivers are integrated based on a tightly coupled approach, and erroneous measurements are detected by testing the closeness of the pseudoranges. In order to fairly compare the pseudoranges, GPS errors and the terms arising from the differences between receiver positions need to be compensated. The double-difference technique is used to eliminate GPS errors in the pseudoranges, and the geometrical distance is corrected by projecting the baseline vector between pairs of receivers. To test and analyze the proposed algorithm, an experiment with live data was performed. The positioning performance of the algorithm was compared with that of the receiver autonomous integrity monitoring (RAIM)-based integration algorithm for multiple receivers. The test results showed that the proposed algorithm yields more accurate position information in urban areas. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)
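The double-difference technique referenced in the abstract can be sketched in a few lines: differencing across receivers cancels satellite clock and orbit errors, and differencing again across satellites cancels the receiver clock offsets. The satellite IDs and pseudorange values below are hypothetical:

```python
def double_difference(rho_a, rho_b, sat_ref, sat_j):
    # rho_a, rho_b: pseudoranges (m) observed by receivers A and B,
    # keyed by satellite id. First difference (across receivers) removes
    # satellite-side errors; second difference (across satellites) removes
    # the receiver clock offsets common to all satellites of one receiver.
    single_j = rho_a[sat_j] - rho_b[sat_j]
    single_ref = rho_a[sat_ref] - rho_b[sat_ref]
    return single_j - single_ref

# Hypothetical pseudoranges with a +30 m receiver clock bias on receiver A;
# the bias appears in both single differences and cancels in the double one.
rho_a = {"G01": 20000030.0, "G07": 21000030.0}
rho_b = {"G01": 20000002.0, "G07": 21000005.0}
dd = double_difference(rho_a, rho_b, sat_ref="G01", sat_j="G07")
```

What remains in the double difference is the geometry term, which the paper corrects by projecting the baseline vector between the receivers.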

Open Access Article
Pose Self-Calibration of Stereo Vision Systems for Autonomous Vehicle Applications
Sensors 2016, 16(9), 1492; https://doi.org/10.3390/s16091492
Received: 10 June 2016 / Revised: 8 August 2016 / Accepted: 7 September 2016 / Published: 14 September 2016
Cited by 1 | PDF Full-text (3931 KB) | HTML Full-text | XML Full-text
Abstract
Intelligent systems applied to vehicles have grown very rapidly in recent years; their goal is not only to improve safety, but also to make autonomous driving possible. Many of these intelligent systems make use of computer vision to perceive the environment and act accordingly. It is of great importance to be able to estimate the pose of the vision system, because the measurement matching between the perception system (pixels) and the vehicle environment (meters) depends on the relative position between the perception system and the environment. A new method of camera pose estimation for stereo systems is presented in this paper, whose main contribution with respect to the state of the art is the estimation of the pitch angle without being affected by the roll angle. The self-calibration method is validated by comparison with relevant camera pose estimation methods, using a synthetic sequence to measure the continuous error against a ground truth. This validation is enriched by experimental results of the method in real traffic environments. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Road Lane Detection by Discriminating Dashed and Solid Road Lanes Using a Visible Light Camera Sensor
Sensors 2016, 16(8), 1313; https://doi.org/10.3390/s16081313
Received: 26 April 2016 / Revised: 29 July 2016 / Accepted: 15 August 2016 / Published: 18 August 2016
Cited by 8 | PDF Full-text (9895 KB) | HTML Full-text | XML Full-text
Abstract
With the increasing need for road lane detection used in lane departure warning systems and autonomous vehicles, many studies have been conducted to turn road lane detection into a virtual assistant to improve driving safety and reduce car accidents. Most of the previous research approaches detect the central line of a road lane and not the accurate left and right boundaries of the lane. In addition, they do not discriminate between dashed and solid lanes when detecting the road lanes. However, this discrimination is necessary for the safety of autonomous vehicles and the safety of vehicles driven by human drivers. To overcome these problems, we propose a method for road lane detection that distinguishes between dashed and solid lanes. Experimental results with the Caltech open database showed that our method outperforms conventional methods. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Robust Pedestrian Classification Based on Hierarchical Kernel Sparse Representation
Sensors 2016, 16(8), 1296; https://doi.org/10.3390/s16081296
Received: 19 May 2016 / Revised: 9 August 2016 / Accepted: 10 August 2016 / Published: 16 August 2016
Cited by 3 | PDF Full-text (1673 KB) | HTML Full-text | XML Full-text
Abstract
Vision-based pedestrian detection has become an active topic in computer vision and autonomous vehicles. It aims at detecting pedestrians appearing ahead of the vehicle using a camera so that autonomous vehicles can assess the danger and take action. Due to varied illumination and appearance, complex backgrounds, and occlusion, pedestrian detection in outdoor environments is a difficult problem. In this paper, we propose a novel hierarchical feature extraction and weighted kernel sparse representation model for pedestrian classification. Initially, hierarchical feature extraction based on a CENTRIST descriptor is used to capture discriminative structures. A max pooling operation is used to enhance invariance to varying appearance. Then, a kernel sparse representation model is proposed to fully exploit the discriminative information embedded in the hierarchical local features, and a Gaussian weight function is used as the measure to effectively handle occlusion in pedestrian images. Extensive experiments are conducted on benchmark databases, including INRIA, Daimler, an artificially generated dataset, and a real occluded dataset, demonstrating the more robust performance of the proposed method compared to state-of-the-art pedestrian classification methods. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Ontology-Based Architecture for Intelligent Transportation Systems Using a Traffic Sensor Network
Sensors 2016, 16(8), 1287; https://doi.org/10.3390/s16081287
Received: 12 April 2016 / Revised: 10 August 2016 / Accepted: 11 August 2016 / Published: 15 August 2016
Cited by 12 | PDF Full-text (3759 KB) | HTML Full-text | XML Full-text
Abstract
Intelligent transportation systems are a set of technological solutions used to improve the performance and safety of road transportation. A crucial element for the success of these systems is the exchange of information, not only between vehicles, but also among other components in the road infrastructure through different applications. One of the most important sources of information in these systems is sensors. Sensors can be located within vehicles or as part of the infrastructure, such as bridges, roads, or traffic signs, and can provide information related to weather conditions and the traffic situation, which is useful for improving the driving process. To facilitate the exchange of information between the different applications that use sensor data, a common framework of knowledge is needed to allow interoperability. In this paper, an ontology-driven architecture to improve the driving environment through a traffic sensor network is proposed. The system performs different tasks automatically to increase driver safety and comfort using the information provided by the sensors. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Efficient Lane Boundary Detection with Spatial-Temporal Knowledge Filtering
Sensors 2016, 16(8), 1276; https://doi.org/10.3390/s16081276
Received: 29 March 2016 / Revised: 25 July 2016 / Accepted: 8 August 2016 / Published: 12 August 2016
Cited by 9 | PDF Full-text (26361 KB) | HTML Full-text | XML Full-text
Abstract
Lane boundary detection technology has progressed rapidly over the past few decades. However, many challenges that often make lane detection unavailable remain to be solved. In this paper, we propose a spatial-temporal knowledge filtering model to detect lane boundaries in videos. To address the challenges of structural variation, large noise, and complex illumination, this model incorporates prior spatial-temporal knowledge with lane appearance features to jointly identify lane boundaries. The model first extracts line segments in video frames. Two novel filters, the Crossing Point Filter (CPF) and the Structure Triangle Filter (STF), are proposed to filter out noisy line segments. The two filters introduce spatial structure constraints and temporal location constraints into lane detection, which represent the spatial-temporal knowledge about lanes. A straight line or curve model determined by a state machine is used to fit the line segments and finally output the lane boundaries. We collected a challenging realistic traffic scene dataset. The experimental results on this dataset and another standard dataset demonstrate the strength of our method. The proposed method has been successfully applied to our autonomous experimental vehicle. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Vertical Corner Feature Based Precise Vehicle Localization Using 3D LIDAR in Urban Area
Sensors 2016, 16(8), 1268; https://doi.org/10.3390/s16081268
Received: 31 March 2016 / Revised: 2 August 2016 / Accepted: 4 August 2016 / Published: 10 August 2016
Cited by 13 | PDF Full-text (26314 KB) | HTML Full-text | XML Full-text
Abstract
Tall buildings are concentrated in urban areas. The outer walls of buildings rise vertically from the ground and are almost flat. Therefore, vertical corners, where vertical planes meet, are present everywhere in urban areas. These corners act as convenient landmarks, which can be extracted using a light detection and ranging (LIDAR) sensor. A vertical corner feature based precise vehicle localization method is proposed in this paper and implemented using 3D LIDAR (Velodyne HDL-32E). The vehicle motion is predicted by accumulating the pose increment output of the iterative closest point (ICP) algorithm, based on the geometric relations between the scan data of the 3D LIDAR. The vertical corner is extracted using the proposed corner extraction method. The vehicle position is then corrected by matching the prebuilt corner map with the extracted corner. The experiment was carried out in the Gangnam area of Seoul, South Korea. In the experimental results, the maximum horizontal position error is about 0.46 m and the 2D Root Mean Square (RMS) horizontal error is about 0.138 m. Full article
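The dead-reckoning step described in this abstract, accumulating relative pose increments such as those an ICP scan-matcher outputs, can be illustrated with a minimal 2D sketch (the function and values below are our own illustration, not the paper's implementation):

```python
import math

def compose(pose, increment):
    """Compose a 2D pose (x, y, heading) with a relative increment
    (dx, dy, dtheta) expressed in the current pose's frame."""
    x, y, th = pose
    dx, dy, dth = increment
    return (x + dx * math.cos(th) - dy * math.sin(th),
            y + dx * math.sin(th) + dy * math.cos(th),
            th + dth)

# Accumulate scan-to-scan increments, as an ICP scan-matcher would
# output them: 1 m forward, turn 90 degrees, then 1 m forward again.
pose = (0.0, 0.0, 0.0)
for inc in [(1.0, 0.0, math.pi / 2), (1.0, 0.0, 0.0)]:
    pose = compose(pose, inc)
# The dead-reckoned position is now approximately (1, 1).
```

In the paper, this prediction is then corrected by matching extracted vertical corners against a prebuilt corner map, which bounds the drift that pure accumulation would otherwise build up.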
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Fast Object Motion Estimation Based on Dynamic Stixels
Sensors 2016, 16(8), 1182; https://doi.org/10.3390/s16081182
Received: 22 April 2016 / Revised: 22 July 2016 / Accepted: 22 July 2016 / Published: 28 July 2016
Cited by 2 | PDF Full-text (12899 KB) | HTML Full-text | XML Full-text | Supplementary Files
Abstract
The stixel world is a simplification of the world in which obstacles are represented as vertical instances, called stixels, standing on a surface assumed to be planar. In this paper, previous approaches for stixel tracking are extended using a two-level scheme. In the first level, stixels are tracked by matching them between frames using a bipartite graph in which edges represent a matching cost function. Then, stixels are clustered into sets representing objects in the environment. These objects are matched based on the number of stixels paired inside them. Furthermore, a faster, but less accurate approach is proposed in which only the second level is used. Several configurations of our method are compared to an existing state-of-the-art approach to show how our methodology outperforms it in several areas, including an improvement in the quality of the depth reconstruction. Full article
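The frame-to-frame matching idea, pairing stixels by minimizing a cost that mixes image distance and depth difference, can be sketched as follows. This greedy toy stands in for the bipartite-graph matching actually used in the paper, and all names and numbers are illustrative:

```python
def match_stixels(prev, curr, max_cost=2.0):
    """Greedily pair stixels between two frames. A stixel is reduced
    here to (column, depth); the cost mixes column distance and
    depth difference, loosely mimicking a matching cost function."""
    pairs, used = [], set()
    for i, (c1, d1) in enumerate(prev):
        best, best_cost = None, max_cost
        for j, (c2, d2) in enumerate(curr):
            if j in used:
                continue
            cost = abs(c1 - c2) + abs(d1 - d2)
            if cost < best_cost:
                best, best_cost = j, cost
        if best is not None:
            used.add(best)
            pairs.append((i, best))
    return pairs

prev = [(11, 5.0), (40, 12.0)]             # (column, depth in m)
curr = [(12, 5.1), (41, 11.8), (80, 30.0)]
# Stixels 0 and 1 find matches; the new stixel at column 80 does not.
```

Clustering matched stixels into objects, as the paper's second level does, then amounts to grouping pairs whose members are adjacent and lie at similar depth.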
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
A Novel Line Space Voting Method for Vanishing-Point Detection of General Road Images
Sensors 2016, 16(7), 948; https://doi.org/10.3390/s16070948
Received: 16 April 2016 / Revised: 16 June 2016 / Accepted: 17 June 2016 / Published: 23 June 2016
Cited by 7 | PDF Full-text (4315 KB) | HTML Full-text | XML Full-text
Abstract
Vanishing-point detection is an important component for the visual navigation system of an autonomous mobile robot. In this paper, we present a novel line space voting method for fast vanishing-point detection. First, the line segments are detected from the road image by the line segment detector (LSD) method according to the pixel’s gradient and texture orientation computed by the Sobel operator. Then, the vanishing-point of the road is voted on by considering the points of the lines and their neighborhood spaces with weighting methods. Our algorithm is simple, fast, and easy to implement with high accuracy. It has been experimentally tested on hundreds of structured and unstructured road images. The experimental results indicate that the proposed method is effective and can meet the real-time requirements of navigation for autonomous mobile robots and unmanned ground vehicles. Full article
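The voting idea can be illustrated with a toy version in which pairwise line-segment intersections vote in coarse integer bins and the strongest bin wins; the paper additionally weights neighborhood spaces, which is omitted here, and all values are illustrative:

```python
from collections import Counter

def line_params(seg):
    """Line through two points as (a, b, c) with a*x + b*y = c."""
    (x1, y1), (x2, y2) = seg
    a, b = y2 - y1, x1 - x2
    return a, b, a * x1 + b * y1

def vanishing_point(segments):
    """Vote for the vanishing point: every pair of non-parallel
    segments casts a vote at its intersection, binned to integers."""
    votes = Counter()
    for i in range(len(segments)):
        for j in range(i + 1, len(segments)):
            a1, b1, c1 = line_params(segments[i])
            a2, b2, c2 = line_params(segments[j])
            det = a1 * b2 - a2 * b1
            if abs(det) < 1e-9:
                continue  # parallel segments cast no vote
            x = (c1 * b2 - c2 * b1) / det  # Cramer's rule
            y = (a1 * c2 - a2 * c1) / det
            votes[(round(x), round(y))] += 1
    return votes.most_common(1)[0][0]

# Three lane-like segments converging on (50, 20)
segs = [((0, 70), (100, -30)), ((100, 70), (0, -30)), ((50, 0), (50, 40))]
```

All three pairwise intersections here land in the same bin, so `vanishing_point(segs)` returns (50, 20).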
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
A Novel Multi-Sensor Environmental Perception Method Using Low-Rank Representation and a Particle Filter for Vehicle Reversing Safety
Sensors 2016, 16(6), 848; https://doi.org/10.3390/s16060848
Received: 9 March 2016 / Revised: 22 May 2016 / Accepted: 1 June 2016 / Published: 9 June 2016
Cited by 7 | PDF Full-text (21768 KB) | HTML Full-text | XML Full-text
Abstract
Environmental perception and information processing are two key steps of active safety for vehicle reversing. Single-sensor environmental perception cannot meet the need for vehicle reversing safety due to its low reliability. In this paper, we present a novel multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. The proposed system consists of four main steps, namely multi-sensor environmental perception, information fusion, target recognition and tracking using low-rank representation and a particle filter, and vehicle reversing speed control modules. First of all, the multi-sensor environmental perception module, based on a binocular-camera system and ultrasonic range finders, obtains the distance data for obstacles behind the vehicle when the vehicle is reversing. Secondly, the information fusion algorithm using an adaptive Kalman filter is used to process the data obtained with the multi-sensor environmental perception module, which greatly improves the robustness of the sensors. Then the framework of a particle filter and low-rank representation is used to track the main obstacles. The low-rank representation is used to optimize an objective particle template that has the smallest L-1 norm. Finally, the electronic throttle opening and automatic braking are controlled by the proposed vehicle reversing control strategy prior to any potential collision, making the reversing control safer and more reliable. The final system simulation and practical testing results demonstrate the validity of the proposed multi-sensor environmental perception method using low-rank representation and a particle filter for vehicle reversing safety. Full article
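The fusion step can be illustrated with the basic scalar Kalman measurement update that underlies it; the paper's filter is adaptive and multi-dimensional, so the fixed variances and readings below are purely illustrative:

```python
def kalman_update(x, p, z, r):
    """One scalar Kalman measurement update: fuse the estimate
    (mean x, variance p) with a measurement z of variance r."""
    k = p / (p + r)                     # Kalman gain
    return x + k * (z - x), (1 - k) * p

# Fuse an ultrasonic range reading (noisy) with a binocular-camera
# range reading (more precise) into one obstacle-distance estimate.
x, p = 3.0, 1.0                         # prior distance [m], variance
x, p = kalman_update(x, p, 3.4, 0.25)   # ultrasonic reading
x, p = kalman_update(x, p, 3.2, 0.04)   # stereo reading
# The fused variance ends up below that of either sensor alone.
```

An adaptive variant, as used in the paper, would additionally adjust the measurement variances online from the observed innovation sequence.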
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Accurate Vehicle Location System Using RFID, an Internet of Things Approach
Sensors 2016, 16(6), 825; https://doi.org/10.3390/s16060825
Received: 15 March 2016 / Revised: 18 May 2016 / Accepted: 24 May 2016 / Published: 4 June 2016
Cited by 25 | PDF Full-text (6076 KB) | HTML Full-text | XML Full-text
Abstract
Modern infrastructure, such as dense urban areas and underground tunnels, can effectively block all GPS signals, which implies that effective position triangulation will not be achieved. The main problem that is addressed in this project is the design and implementation of an accurate vehicle location system using radio-frequency identification (RFID) technology in combination with GPS and Global System for Mobile Communications (GSM) technology, in order to provide a solution to the limitation discussed above. In essence, autonomous vehicle tracking will be facilitated with the use of RFID technology where GPS signals are non-existent. The design of the system and the results are reflected in this paper. An extensive literature study was done on the field known as the Internet of Things, as well as various topics that covered the integration of independent technology in order to address a specific challenge. The proposed system is then designed and implemented. An RFID transponder was successfully designed and a read range of approximately 31 cm was obtained in the low frequency communication range (125 kHz to 134 kHz). The proposed system was designed, implemented, and field tested and it was found that a vehicle could be accurately located and tracked. It is also found that the antenna size of both the RFID reader unit and RFID transponder plays a critical role in the maximum communication range that can be achieved. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
A Highly Reliable and Cost-Efficient Multi-Sensor System for Land Vehicle Positioning
Sensors 2016, 16(6), 755; https://doi.org/10.3390/s16060755
Received: 4 April 2016 / Revised: 9 May 2016 / Accepted: 20 May 2016 / Published: 25 May 2016
Cited by 5 | PDF Full-text (6636 KB) | HTML Full-text | XML Full-text
Abstract
In this paper, we propose a novel positioning solution for land vehicles which is highly reliable and cost-efficient. The proposed positioning system fuses information from the MEMS-based reduced inertial sensor system (RISS) which consists of one vertical gyroscope and two horizontal accelerometers, low-cost GPS, and supplementary sensors and sources. First, pitch and roll angles are accurately estimated based on a vehicle kinematic model. Meanwhile, the negative effect of the uncertain nonlinear drift of MEMS inertial sensors is eliminated by an H∞ filter. Further, a distributed-dual-H∞ filtering (DDHF) mechanism is adopted to address the uncertain nonlinear drift of the MEMS-RISS and make full use of the supplementary sensors and sources. The DDHF is composed of a main H∞ filter (MHF) and an auxiliary H∞ filter (AHF). Finally, a generalized regression neural network (GRNN) module with good approximation capability is specially designed for the MEMS-RISS. A hybrid methodology which combines the GRNN module and the AHF is utilized to compensate for RISS position errors during GPS outages. To verify the effectiveness of the proposed solution, road-test experiments with various scenarios were performed. The experimental results illustrate that the proposed system can achieve accurate and reliable positioning for land vehicles. Full article
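The GRNN mentioned above is, at its core, a kernel-weighted average of training targets. A minimal sketch with made-up one-dimensional data follows (the paper learns a mapping to RISS position error; nothing below is from the paper):

```python
import math

def grnn_predict(train_x, train_y, x, sigma=1.0):
    """GRNN prediction: average the training targets, weighted by a
    Gaussian kernel on the distance from x to each training input."""
    weights = [math.exp(-((xi - x) ** 2) / (2 * sigma ** 2))
               for xi in train_x]
    return sum(w * y for w, y in zip(weights, train_y)) / sum(weights)

# Toy training set following y = 2x; the GRNN smoothly interpolates.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [0.0, 2.0, 4.0, 6.0]
```

Because prediction needs no iterative training, only the smoothing parameter `sigma`, a GRNN is a common choice when the module must be retrained quickly as new GPS-aided samples arrive.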
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Vehicle Detection Based on Probability Hypothesis Density Filter
Sensors 2016, 16(4), 510; https://doi.org/10.3390/s16040510
Received: 7 January 2016 / Revised: 29 March 2016 / Accepted: 31 March 2016 / Published: 9 April 2016
Cited by 3 | PDF Full-text (2303 KB) | HTML Full-text | XML Full-text
Abstract
In the past decade, vehicle detection has improved significantly. By utilizing cameras, vehicles can be detected in the Regions of Interest (ROI) in complex environments. However, vision techniques often suffer from false positives and a limited field of view. In this paper, a LiDAR based vehicle detection approach is proposed by using the Probability Hypothesis Density (PHD) filter. The proposed approach consists of two phases: the hypothesis generation phase to detect potential objects and the hypothesis verification phase to classify objects. The performance of the proposed approach is evaluated in complex scenarios, compared with the state-of-the-art. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Vision-Based Steering Control, Speed Assistance and Localization for Inner-City Vehicles
Sensors 2016, 16(3), 362; https://doi.org/10.3390/s16030362
Received: 13 November 2015 / Revised: 17 February 2016 / Accepted: 25 February 2016 / Published: 11 March 2016
Cited by 7 | PDF Full-text (7809 KB) | HTML Full-text | XML Full-text
Abstract
Autonomous route following with road vehicles has gained popularity in the last few decades. In order to provide highly automated driver assistance systems, different types and combinations of sensors have been presented in the literature. However, most of these approaches apply quite sophisticated and expensive sensors, and hence, the development of a cost-efficient solution still remains a challenging problem. This work proposes the use of a single monocular camera sensor for automatic steering control, speed assistance for the driver and localization of the vehicle on a road. Herein, we assume that the vehicle is mainly traveling along a predefined path, such as in public transport. A computer vision approach is presented to detect a line painted on the road, which defines the path to follow. Visual markers with a special design painted on the road provide information to localize the vehicle and to assist in its speed control. Furthermore, a vision-based control system, which keeps the vehicle on the predefined path under inner-city speed constraints, is also presented. Real driving tests with a commercial car on a closed circuit finally prove the applicability of the derived approach. In these tests, the car reached a maximum speed of 48 km/h and successfully traveled a distance of 7 km without human intervention or any interruption. Full article
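The line-following control idea, steering against the offset of the detected painted line, can be reduced to a toy proportional control law; the gains, limits, and names below are our illustration, not the controller from the paper:

```python
def steering_angle(lateral_offset_m, heading_error_rad,
                   k_offset=0.4, k_heading=1.0, max_angle=0.5):
    """Steer against the lateral offset from the painted line and
    the heading error, clipped to the actuator limit (radians)."""
    angle = -k_offset * lateral_offset_m - k_heading * heading_error_rad
    return max(-max_angle, min(max_angle, angle))

# e.g. steering_angle(0.5, 0.0) -> -0.2 (steer back toward the line),
# while steering_angle(10.0, 0.0) saturates at the -0.5 rad limit.
```

A real controller would also gain-schedule with speed, which is one reason the paper couples steering control with speed assistance.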
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Prediction of Military Vehicle’s Drawbar Pull Based on an Improved Relevance Vector Machine and Real Vehicle Tests
Sensors 2016, 16(3), 351; https://doi.org/10.3390/s16030351
Received: 30 November 2015 / Revised: 28 February 2016 / Accepted: 1 March 2016 / Published: 10 March 2016
Cited by 1 | PDF Full-text (6220 KB) | HTML Full-text | XML Full-text
Abstract
The scientific and effective prediction of drawbar pull is of great importance in the evaluation of military vehicle trafficability. Nevertheless, the existing prediction models have demonstrated lots of inherent limitations. In this framework, a multiple-kernel relevance vector machine model (MkRVM) including Gaussian kernel and polynomial kernel is proposed to predict drawbar pull. Nonlinear decreasing inertia weight particle swarm optimization (NDIWPSO) is employed for parameter optimization. As the relations between drawbar pull and its influencing factors have not been tested on real vehicles, a series of experimental analyses based on real vehicle test data are done to confirm the effective influencing factors. A dynamic testing system is applied to conduct field tests and gain required test data. Gaussian kernel RVM, polynomial kernel RVM, support vector machine (SVM) and generalized regression neural network (GRNN) are also used to compare with the MkRVM model. The results indicate that the MkRVM model is a preferable model in this case. Finally, the proposed novel model is compared to the traditional prediction model of drawbar pull. The results show that the MkRVM model significantly improves the prediction accuracy. A great potential of improved RVM is indicated in further research of wheel-soil interactions. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
The Local Integrity Approach for Urban Contexts: Definition and Vehicular Experimental Assessment
Sensors 2016, 16(2), 154; https://doi.org/10.3390/s16020154
Received: 26 November 2015 / Revised: 13 January 2016 / Accepted: 15 January 2016 / Published: 26 January 2016
Cited by 5 | PDF Full-text (11770 KB) | HTML Full-text | XML Full-text
Abstract
A novel cooperative integrity monitoring concept, called “local integrity”, suitable for automotive applications in urban scenarios, is discussed in this paper. The idea is to take advantage of a collaborative Vehicular Ad hoc NETwork (VANET) architecture in order to perform a spatial/temporal characterization of possible degradations of Global Navigation Satellite System (GNSS) signals. Such characterization enables the computation of the so-called “Local Protection Levels”, taking into account local impairments to the received signals. Starting from theoretical concepts, this paper describes the experimental validation by means of a measurement campaign and the real-time implementation of the algorithm on a vehicular prototype. A live demonstration in a real scenario has been successfully carried out, highlighting the effectiveness and performance of the proposed approach. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Open Access Article
Drivers’ Visual Behavior-Guided RRT Motion Planner for Autonomous On-Road Driving
Sensors 2016, 16(1), 102; https://doi.org/10.3390/s16010102
Received: 22 November 2015 / Revised: 28 December 2015 / Accepted: 8 January 2016 / Published: 15 January 2016
Cited by 11 | PDF Full-text (6261 KB) | HTML Full-text | XML Full-text
Abstract
This paper describes a real-time motion planner based on the drivers’ visual behavior-guided rapidly exploring random tree (RRT) approach, which is applicable to on-road driving of autonomous vehicles. The primary novelty is in the use of the guidance of drivers’ visual search behavior in the framework of the RRT motion planner. RRT is an incremental sampling-based method that is widely used to solve robotic motion planning problems. However, RRT is often unreliable in a number of practical applications, such as autonomous vehicles used for on-road driving, because of unnatural trajectories, useless sampling, and slow exploration. To address these problems, we present an RRT algorithm that introduces an effective guided sampling strategy based on the drivers’ visual search behavior on the road and a continuous-curvature smoothing method based on B-splines. The proposed algorithm is implemented on a real autonomous vehicle and verified against several different traffic scenarios. A large number of experimental results demonstrate that our algorithm is feasible and efficient for on-road autonomous driving. Furthermore, the comparative tests and statistical analyses illustrate that its performance is superior to that of previous algorithms. Full article
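The guided-sampling idea at the heart of such a planner can be sketched as a biased sampler: most samples are drawn near a guidance point standing in for the driver's visual search region, and the rest uniformly. All names, constants, and distributions here are our own illustration:

```python
import random

def guided_sample(bounds, guide, bias=0.7, spread=5.0, rng=random):
    """One sampling step of a guided RRT: with probability `bias`,
    sample near the guidance point; otherwise sample uniformly."""
    (xmin, xmax), (ymin, ymax) = bounds
    if rng.random() < bias:
        x = min(max(rng.gauss(guide[0], spread), xmin), xmax)
        y = min(max(rng.gauss(guide[1], spread), ymin), ymax)
    else:
        x = rng.uniform(xmin, xmax)
        y = rng.uniform(ymin, ymax)
    return x, y

# Samples cluster around the guide point, so tree growth concentrates
# along the likely driving corridor instead of exploring uniformly.
random.seed(0)
pts = [guided_sample(((0, 100), (0, 100)), (50, 50)) for _ in range(200)]
```

A full planner would connect each sampled point to its nearest tree node and, as in the paper, smooth the resulting path to continuous curvature with a B-spline.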
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Review


Open Access Review
Survey on Ranging Sensors and Cooperative Techniques for Relative Positioning of Vehicles
Sensors 2017, 17(2), 271; https://doi.org/10.3390/s17020271
Received: 31 October 2016 / Revised: 22 December 2016 / Accepted: 24 January 2017 / Published: 31 January 2017
Cited by 23 | PDF Full-text (4038 KB) | HTML Full-text | XML Full-text
Abstract
Future driver assistance systems will rely on accurate, reliable and continuous knowledge of the position of other road participants, including pedestrians, bicycles and other vehicles. The usual approach to tackle this requirement is to use on-board ranging sensors inside the vehicle. Radar, laser scanners or vision-based systems are able to detect objects in their line-of-sight. In contrast to these non-cooperative ranging sensors, cooperative approaches follow a strategy in which other road participants actively support the estimation of the relative position. The limitations of on-board ranging sensors regarding their detection range and angle of view, and their susceptibility to blockage, can be addressed by using a cooperative approach based on vehicle-to-vehicle communication. The fusion of both cooperative and non-cooperative strategies seems to offer the largest benefits regarding accuracy, availability and robustness. This survey offers the reader a comprehensive review of different techniques for vehicle relative positioning. The reader will learn the important performance indicators when it comes to relative positioning of vehicles, the different technologies that are both commercially available and currently under research, their expected performance and their intrinsic limitations. Moreover, the latest research in the area of vision-based systems for vehicle detection, as well as the latest work on GNSS-based vehicle localization and vehicular communication for relative positioning of vehicles, are reviewed. The survey also includes the research work on the fusion of cooperative and non-cooperative approaches to increase reliability and availability. Full article
(This article belongs to the Special Issue Sensors for Autonomous Road Vehicles)

Sensors EISSN 1424-8220. Published by MDPI AG, Basel, Switzerland.