
Special Issue "Intelligent Vehicles"

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: closed (31 May 2020).

Printed Edition Available!
A printed edition of this Special Issue is available here.

Special Issue Editors

Prof. Dr. David Fernández-Llorca
Website
Guest Editor
Prof. Dr. Ignacio Parra Alonso
Website
Guest Editor
Computer Engineering Department, Polytechnic School, University of Alcalá, Campus Universitario s/n, 28805 Alcalá de Henares, Madrid, Spain
Interests: vehicle localization; autonomous vehicles; driver assistance systems; imaging and image analysis
Special Issues and Collections in MDPI journals
Prof. Dr. Iván García Daza
Website
Guest Editor
Computer Engineering Department, Polytechnic School, University of Alcalá, Campus Universitario s/n, 28805 Alcalá de Henares, Madrid, Spain
Interests: accurate mapping systems based on optimal optimization algorithms; advanced driver assistance systems; assistive intelligent vehicles; driver and road user state and intent recognition; dynamic and cinematic car models; intelligent localization systems based on LiDAR odometry; intelligent navigation and localization systems based on inertial navigation systems; intelligent-vehicle-related image, radar, and LiDAR signal processing; sensor fusion systems for driverless cars
Special Issues and Collections in MDPI journals
Prof. Dr. Noelia Hernández Parra
Website
Guest Editor
Assistant Professor, Computer Engineering Department, INVETT Research Group, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
Interests: accurate indoor and outdoor global positioning; vehicle localization; autonomous vehicles; driver assistance systems; imaging and image analysis
Special Issues and Collections in MDPI journals

Special Issue Information

Dear Colleagues,

When we talk about intelligent vehicles or driverless cars, we can (almost) state that the future is now. Both industry and academia have made tremendous advances in this field over the last decade, and a considerable number of prototypes are now autonomously driving our roads. Technology and research findings are moving quickly, and the race is on to develop intelligent vehicles that enable everyone to enjoy safe, efficient, and sustainable mobility.

The capability of intelligent vehicles to sense, interpret, and fully understand the current traffic scene, as well as to infer future states and potential hazards, may be the main challenge in the driverless-car arena. Current scene understanding technologies and methodologies depend on multiple sensor systems, such as cameras (visible or infrared spectrum), radar, and LiDAR, and are based on highly complex and sophisticated algorithms, including artificial intelligence. New approaches to modeling the behavior of other road users (VRUs and drivers) are needed in order to ensure the safety of the control strategies. Robust sensing under different lighting and weather conditions is mandatory to advance towards fail-aware, fail-safe, and fail-operational systems.

The aim of this Special Issue is to contribute to the state of the art and to introduce current developments in perception and sensor technologies for intelligent vehicles. We encourage potential authors to submit original research, new developments, and substantial experimental work concerning intelligent vehicles. Surveys are also welcome.

Therefore, prospective authors are invited to submit original contributions or survey papers for review for publication in the Sensors open access journal. Topics of interest include (but are not limited to) the following:

  • Sensor technologies for driverless cars
  • Vehicle scene understanding
  • Vulnerable road users (VRUs) protection
  • Vehicle navigation and localization systems
  • Advanced driver assistance systems
  • Intelligent-vehicle-related image, radar, and LiDAR signal processing
  • Sensor and information fusion
  • Human factors and human machine interaction
  • Assistive intelligent vehicles
  • Driver and road user state and intent recognition
  • Cooperative driving
  • Sensing under different lighting and weather conditions
  • Fail-safe, fail-aware, and fail-operational systems
  • HD and accurate mapping systems

Prof. Dr. David Fernández-Llorca
Prof. Dr. Ignacio Parra Alonso
Prof. Dr. Iván García Daza
Prof. Dr. Noelia Hernández Parra
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All papers will be peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2200 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Intelligent vehicles
  • Sensors
  • Road users behavior modeling
  • Sensor and information fusion
  • Advanced driver assistance systems
  • Image, radar, and LiDAR signal processing
  • Human factors
  • Fail-safe, fail-aware, and fail-operational

Published Papers (33 papers)


Editorial

Jump to: Research

Open Access Editorial
Sensors and Sensing for Intelligent Vehicles
Sensors 2020, 20(18), 5115; https://doi.org/10.3390/s20185115 - 08 Sep 2020
Cited by 1
Abstract
Over the past decades, both industry and academia have made enormous advancements in the field of intelligent vehicles, and a considerable number of prototypes are now driving our roads, railways, air and sea autonomously. However, there is still a long way to go before widespread adoption. Among all the scientific and technical problems to be solved by intelligent vehicles, the ability to perceive, interpret, and fully understand the operational environment, as well as to infer future states and potential hazards, represents the most difficult and complex task, being probably the main bottleneck that the scientific community and industry must solve in the coming years to ensure the safe and efficient operation of the vehicles (and, therefore, their future adoption). The great complexity and the almost infinite variety of possible scenarios in which an intelligent vehicle must operate raise the problem of perception as an "endless" issue that will always be ongoing. As a humble contribution to the advancement of vehicles endowed with intelligence, we organized the Special Issue on Intelligent Vehicles. This work offers a complete analysis of all the manuscripts published, and presents the main conclusions drawn. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Research

Jump to: Editorial

Open Access Article
Fail-Aware LIDAR-Based Odometry for Autonomous Vehicles
Sensors 2020, 20(15), 4097; https://doi.org/10.3390/s20154097 - 23 Jul 2020
Cited by 1
Abstract
Autonomous driving systems are set to become a reality in transport systems and, so, maximum acceptance is being sought among users. Currently, the most advanced architectures require driver intervention when functional system failures or critical sensor operations take place, presenting problems related to driver state, distractions, fatigue, and other factors that prevent safe control. Therefore, this work presents a redundant, accurate, robust, and scalable LiDAR odometry system with fail-aware system features that can allow other systems to perform a safe stop manoeuvre without driver mediation. All odometry systems have drift error, making it difficult to use them for localisation tasks over extended periods. For this reason, the paper presents an accurate LiDAR odometry system with a fail-aware indicator. This indicator estimates a time window in which the system manages the localisation tasks appropriately. The odometry error is minimised by applying a dynamic 6-DoF model and fusing measures based on the Iterative Closest Points (ICP), environment feature extraction, and Singular Value Decomposition (SVD) methods. The obtained results are promising for two reasons: First, in the KITTI odometry data set, the proposed method ranks twelfth among LiDAR-based methods, with translation and rotation errors of 1.00% and 0.0041 deg/m, respectively. Second, the encouraging results of the fail-aware indicator demonstrate the safety of the proposed LiDAR odometry system. The results show that, in order to achieve an accurate odometry system, complex models and measurement fusion techniques must be used to improve its behaviour. Furthermore, if an odometry system is to be used for redundant localisation features, it must integrate a fail-aware indicator for use in a safe manner. Full article
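The fail-aware idea of a safe time window can be illustrated with simple arithmetic. The sketch below is a hypothetical illustration, not the paper's indicator: it only combines the abstract's 1.00% translation error figure with an assumed speed and an assumed localisation error budget.

```python
# Hypothetical sketch: estimating a fail-aware time window from odometry drift.
# The 1.00% translation error figure comes from the abstract; the speed and
# error budget below are illustrative assumptions, not values from the paper.

def fail_aware_window_s(drift_rate: float, speed_mps: float, max_error_m: float) -> float:
    """Seconds until accumulated odometry drift is expected to exceed max_error_m.

    drift_rate: fractional translation error per metre travelled (0.01 = 1%).
    speed_mps:  current vehicle speed in m/s.
    max_error_m: localisation error budget for a safe-stop manoeuvre.
    """
    error_per_second = drift_rate * speed_mps  # metres of drift accumulated per second
    return max_error_m / error_per_second

# At 20 m/s with a 1.00% translation error and a 2 m error budget:
window = fail_aware_window_s(0.01, 20.0, 2.0)  # → 10.0 seconds
```

Under these assumptions, the downstream planner would have roughly ten seconds of trustworthy dead-reckoned pose in which to complete a safe stop.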
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Real-Time Traffic Light Detection with Frequency Patterns Using a High-Speed Camera
Sensors 2020, 20(14), 4035; https://doi.org/10.3390/s20144035 - 20 Jul 2020
Cited by 1
Abstract
LEDs are widely employed in traffic lights. Because most LED traffic lights are driven by alternating current, they blink at high frequency, typically at twice the mains frequency. We propose a method to detect a traffic light from images captured by a high-speed camera that can recognize a blinking traffic light. This technique is robust under various illuminations because it detects traffic lights by extracting information from pixels blinking at a specific frequency. The method is composed of six modules, including a band-pass filter and a Kalman filter. All the modules run simultaneously to achieve real-time processing and can run at 500 fps for images with a resolution of 800 × 600. The technique was verified on an original dataset captured by a high-speed camera under different illumination conditions, such as sunset and night scenes. The recall and accuracy justify the generalization of the proposed detection system. In particular, it can detect traffic lights with a different appearance without tuning parameters and without datasets having to be learned. Full article
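The core frequency-domain idea can be sketched as follows. This is an illustration of the general technique only, under assumed names and thresholds; the paper's six-module pipeline (including its band-pass and Kalman filters) is not reproduced here.

```python
import numpy as np

# Illustrative sketch: find pixels whose intensity oscillates at the LED blink
# frequency (e.g. 100 Hz for 50 Hz mains) in a stack of high-speed frames.
# Function name and the relative-modulation threshold are assumptions.

def blinking_pixel_mask(frames: np.ndarray, fps: float, blink_hz: float,
                        rel_threshold: float = 0.2) -> np.ndarray:
    """frames: (T, H, W) grayscale stack captured at `fps`; returns a boolean mask."""
    T = frames.shape[0]
    spectrum = np.abs(np.fft.rfft(frames, axis=0))   # per-pixel temporal spectrum
    freqs = np.fft.rfftfreq(T, d=1.0 / fps)
    k = int(np.argmin(np.abs(freqs - blink_hz)))     # bin closest to the blink frequency
    dc = spectrum[0] + 1e-9                          # mean brightness; avoids divide-by-zero
    return (spectrum[k] / dc) > rel_threshold        # strong relative modulation only

# Synthetic check: one pixel blinking at 100 Hz, sampled at 500 fps for 0.5 s.
t = np.arange(250) / 500.0
frames = np.zeros((250, 4, 4))
frames[:, 1, 2] = 0.5 + 0.5 * np.sin(2 * np.pi * 100.0 * t)
mask = blinking_pixel_mask(frames, fps=500.0, blink_hz=100.0)  # only (1, 2) is True
```

Normalising by the DC bin makes the test about modulation depth rather than absolute brightness, which is what gives frequency-based detection its robustness to illumination changes.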
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
LiDAR-Based GNSS Denied Localization for Autonomous Racing Cars
Sensors 2020, 20(14), 3992; https://doi.org/10.3390/s20143992 - 17 Jul 2020
Cited by 2
Abstract
Self-driving vehicles promise to bring one of the greatest technological and social revolutions of the next decade for their potential to drastically change human mobility and goods transportation, in particular regarding efficiency and safety. Autonomous racing presents very similar technological issues while allowing for more extreme conditions in a safe human environment. While the software stack driving the racing car consists of several modules, in this paper we focus on the localization problem, which provides as output the estimated pose of the vehicle needed by the planning and control modules. When driving near the friction limits, localization accuracy is critical, as small errors can induce large errors in control due to the nonlinearities of the vehicle’s dynamic model. In this paper, we present a localization architecture for a racing car that does not rely on Global Navigation Satellite Systems (GNSS). It consists of two multi-rate Extended Kalman Filters and an extension of a state-of-the-art laser-based Monte Carlo localization approach that exploits some a priori knowledge of the environment and context. We first compare the proposed method with a solution based on a widely employed state-of-the-art implementation, outlining its strengths and limitations within our experimental scenario. The architecture is then tested both in simulation and experimentally on a full-scale autonomous electric racing car during an event of Roborace Season Alpha. The results show its robustness in avoiding the robot kidnapping problem typical of particle-filter localization methods, while providing a smooth and high-rate pose estimate. The pose error distribution depends on the car velocity, and spans on average from 0.1 m (at 60 km/h) to 1.48 m (at 200 km/h) laterally and from 1.9 m (at 100 km/h) to 4.92 m (at 200 km/h) longitudinally. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Multiple-Target Homotopic Quasi-Complete Path Planning Method for Mobile Robot Using a Piecewise Linear Approach
Sensors 2020, 20(11), 3265; https://doi.org/10.3390/s20113265 - 08 Jun 2020
Cited by 1
Abstract
The ability to plan a multiple-target path that goes through places considered important is desirable for autonomous mobile robots that perform tasks in industrial environments. This characteristic is necessary for inspection robots that monitor the critical conditions of sectors in thermal, nuclear, and hydropower plants. This ability is also useful for applications such as home service, victim rescue, museum guidance, and land mine detection. Multiple-target collision-free path planning has received little study because of the complexity it implies. Usually, this issue is relegated to second place because it is commonly solved by segmentation using the point-to-point strategy. Nevertheless, this approach performs poorly in terms of path length, due to unnecessary turns and redundant segments in the found path. In this paper, a multiple-target method based on homotopy continuation, capable of calculating a collision-free path in a single execution for complex environments, is presented. This method exhibits better speed, efficiency, and robustness compared to the original Homotopic Path Planning Method (HPPM). Among the new schemes that improve its performance are Double Spherical Tracking (DST), a dummy-obstacle scheme, and a systematic criterion for selecting the repulsion parameter. The case studies show its effectiveness in finding a solution path for office-like environments in just a few milliseconds, even with narrow corridors and hundreds of obstacles. Additionally, a comparison between the proposed method and the best-performing sampling-based planning (SBP) algorithms is presented. The results of the case studies show that the proposed method outperforms the SBP algorithms in execution time and memory and, in some cases, in path length.
Finally, to validate the feasibility of the paths calculated by the proposed planner, two simulations using the pure-pursuit controller and the differential-drive robot model contained in the Robotics System Toolbox of MATLAB are presented. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Improved LiDAR Probabilistic Localization for Autonomous Vehicles Using GNSS
Sensors 2020, 20(11), 3145; https://doi.org/10.3390/s20113145 - 02 Jun 2020
Cited by 6
Abstract
This paper proposes a method that improves autonomous vehicle localization by modifying a probabilistic laser localization approach, the Monte Carlo Localization (MCL) algorithm, enhancing the weights of the particles with Kalman-filtered Global Navigation Satellite System (GNSS) information. GNSS data are used to improve localization accuracy in places with fewer map features and to prevent the kidnapped-robot problem. In addition, laser information improves accuracy in places where the map has more features and the GNSS covariance is higher, allowing the approach to be used in scenarios that are especially difficult for GNSS, such as urban canyons. The algorithm is tested on the KITTI odometry dataset, showing that it improves localization compared with classic GNSS + Inertial Navigation System (INS) fusion and Adaptive Monte Carlo Localization (AMCL). It is also tested on the autonomous vehicle platform of the Intelligent Systems Lab (LSI) of the Universidad Carlos III de Madrid, providing qualitative results. Full article
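The weighting idea can be sketched in a few lines. This is a minimal illustration under assumptions (function name, isotropic Gaussian GNSS model, standard deviation value), not the paper's implementation: each particle's laser weight is scaled by how well the particle agrees with the Kalman-filtered GNSS fix.

```python
import math

# Minimal sketch of GNSS-enhanced particle weighting (names are assumptions):
# the laser-based weight is multiplied by an isotropic Gaussian likelihood of
# the particle's distance to a Kalman-filtered GNSS fix with std `gnss_sigma`.

def fused_weight(laser_weight: float, particle_xy: tuple, gnss_xy: tuple,
                 gnss_sigma: float) -> float:
    dx = particle_xy[0] - gnss_xy[0]
    dy = particle_xy[1] - gnss_xy[1]
    gnss_likelihood = math.exp(-(dx * dx + dy * dy) / (2.0 * gnss_sigma ** 2))
    return laser_weight * gnss_likelihood

# A particle sitting exactly on the GNSS fix keeps its full laser weight:
w = fused_weight(0.8, (10.0, 5.0), (10.0, 5.0), gnss_sigma=2.0)  # → 0.8
```

Because the Gaussian never reaches zero, GNSS only down-weights implausible particles rather than vetoing them, which is what lets the laser likelihood dominate where the map is feature-rich and the GNSS covariance is large.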
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Motion State Estimation of Target Vehicle under Unknown Time-Varying Noises Based on Improved Square-Root Cubature Kalman Filter
Sensors 2020, 20(9), 2620; https://doi.org/10.3390/s20092620 - 04 May 2020
Cited by 1
Abstract
In the advanced driver assistance system (ADAS), millimeter-wave radar is an important sensor for estimating the motion state of the target vehicle. In this paper, the estimation of the target-vehicle motion state includes two parts: the tracking of the target vehicle and the identification of the target-vehicle motion state. Under unknown time-varying noise, non-linear target-vehicle tracking suffers from low precision. Based on the square-root cubature Kalman filter (SRCKF), the Sage–Husa noise statistic estimator and the fading memory exponential weighting method are combined to derive a time-varying noise statistic estimator for non-linear systems. A method of classifying the motion state of the target vehicle based on a time window is proposed by analyzing the transfer mechanism of the motion state of the target vehicle. The results of the vehicle test show that: (1) Compared with the Sage–Husa extended Kalman filter (SH-EKF) and SRCKF algorithms, the maximum increase in the filtering accuracy of longitudinal distance using the improved square-root cubature Kalman filter (ISRCKF) algorithm is 45.53% and 59.15%, respectively, and the maximum increase in the filtering accuracy of longitudinal speed using the ISRCKF algorithm is 23.53% and 29.09%, respectively. (2) The classification and recognition results of the target-vehicle motion state are consistent with the actual target-vehicle motion state. Full article
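The fading-memory exponential weighting mentioned above can be sketched in scalar form. This shows only the standard recursive gain used in Sage–Husa-type estimators with a forgetting factor; the paper's full non-linear SRCKF derivation is not reproduced, and the names and the 0.95 forgetting factor are assumptions.

```python
# Sketch of fading-memory weighting in a Sage–Husa-style noise estimator.
# With forgetting factor b in (0, 1), recent innovations receive larger
# weights, so the estimate can track time-varying noise statistics.

def fading_gain(k: int, b: float) -> float:
    """Weight d_k = (1 - b) / (1 - b**(k + 1)); tends to (1 - b) as k grows."""
    return (1.0 - b) / (1.0 - b ** (k + 1))

def update_noise_var(r_prev: float, innovation: float, k: int, b: float = 0.95) -> float:
    """Scalar measurement-noise variance update from the latest innovation."""
    d = fading_gain(k, b)
    return (1.0 - d) * r_prev + d * innovation ** 2

# Feed a short innovation sequence through the recursion:
r = 1.0
for k, nu in enumerate([0.5, -1.2, 0.8, 2.0]):
    r = update_noise_var(r, nu, k)
```

At k = 0 the gain is exactly 1 (the first innovation fully initialises the estimate); as k grows, the gain settles at 1 - b, giving an exponentially weighted average biased toward recent data.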
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Self-Driving Car Location Estimation Based on a Particle-Aided Unscented Kalman Filter
Sensors 2020, 20(9), 2544; https://doi.org/10.3390/s20092544 - 29 Apr 2020
Cited by 8
Abstract
Localization is one of the key components in the operation of self-driving cars. Owing to the noisy global positioning system (GPS) signal and multipath routing in urban environments, a novel, practical approach is needed. In this study, a sensor fusion approach for self-driving cars was developed. To localize the vehicle position, we propose a particle-aided unscented Kalman filter (PAUKF) algorithm. The unscented Kalman filter updates the vehicle state, accounting for the vehicle motion model and non-Gaussian noise effects. The particle filter provides additional updated position measurement information based on an onboard sensor and a high-definition (HD) map. The simulations showed that our method achieves better precision and comparable stability in localization performance compared to previous approaches. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Reliable Road Scene Interpretation Based on ITOM with the Integrated Fusion of Vehicle and Lane Tracker in Dense Traffic Situation
Sensors 2020, 20(9), 2457; https://doi.org/10.3390/s20092457 - 26 Apr 2020
Cited by 1
Abstract
Lane detection and tracking in a complex road environment is one of the most important research areas in highly automated driving systems. Studies on lane detection cover a variety of difficulties, such as shadowy situations, dimmed lane painting, and obstacles that prohibit lane feature detection. There are several hard cases in which lane candidate features are not easily extracted from image frames captured by a driving vehicle. We have carefully selected typical scenarios in which the extraction of lane candidate features can be easily corrupted by road vehicles and road markers, degrading the understanding of road scenes and making decision making difficult. We introduce two main contributions to the interpretation of road scenes in dense traffic environments. First, to obtain robust road scene understanding, we have designed a novel framework combining a lane tracker method with a camera and a radar forward vehicle tracker system, which is especially useful in dense traffic situations. We have introduced an image template occupancy matching method with the integrated vehicle tracker that makes it possible to avoid extracting irrelevant lane features caused by forward target vehicles and road markers. Second, we present a robust multi-lane detection and tracking algorithm that includes adjacent lanes as well as ego lanes. We performed a comprehensive experimental evaluation with a real dataset comprising problematic road scenarios. Experimental results show that the proposed method is very reliable for multi-lane detection in the presented difficult situations. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Human-Like Lane Change Decision Model for Autonomous Vehicles that Considers the Risk Perception of Drivers in Mixed Traffic
Sensors 2020, 20(8), 2259; https://doi.org/10.3390/s20082259 - 16 Apr 2020
Cited by 3
Abstract
Determining an appropriate time to execute a lane change is a critical issue for the development of Autonomous Vehicles (AVs). However, few studies have considered the rear and front vehicle drivers' risk perception while developing a human-like lane-change decision model. This paper aims to develop a lane-change decision model for AVs and to identify a two-level threshold that conforms to a driver's perception of the ability to safely change lanes with a rear vehicle approaching fast. Based on signal detection theory and extreme-moment trials on a real highway, two thresholds of safe lane change were determined, considering the risk perception of the rear and subject vehicle drivers, respectively. The rear vehicle's Minimum Safe Deceleration (MSD) during the lane-change maneuver of the subject vehicle was selected as the lane-change safety indicator and was calculated using the proposed human-like lane-change decision model. The results showed that, compared with the driver in the front extreme-moment trial, the driver in the rear extreme-moment trial is more conservative during the lane-change process. To meet the safety expectations of the subject and rear vehicle drivers, the primary and secondary safe thresholds were determined to be 0.85 m/s2 and 1.76 m/s2, respectively. The decision model can help make AVs safer and more polite during lane changes, as it not only improves acceptance of the intelligent driving system but also further ensures the safety of the rear vehicle's driver. Full article
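A Minimum Safe Deceleration style indicator can be illustrated with basic kinematics. The sketch below is a hypothetical simplification, not the paper's formulation: it computes the constant deceleration the rear vehicle would need so that its speed drops to the merging vehicle's speed before the gap between them closes.

```python
# Hypothetical kinematic illustration of an MSD-style indicator. The paper's
# actual MSD calculation is not reproduced; names and the scenario values
# are assumptions for illustration only.

def min_safe_decel(v_rear: float, v_subject: float, gap_m: float) -> float:
    """Constant deceleration (m/s^2) the rear vehicle needs so that the
    relative closing speed reaches zero within the available gap.
    Returns 0 if the rear vehicle is not closing the gap."""
    closing = v_rear - v_subject
    if closing <= 0.0 or gap_m <= 0.0:
        return 0.0
    return closing ** 2 / (2.0 * gap_m)

# Rear car at 30 m/s, subject merges at 25 m/s with a 20 m gap:
msd = min_safe_decel(30.0, 25.0, 20.0)  # → 0.625 m/s^2
```

Comparing such a value against the paper's reported thresholds (0.85 m/s2 primary, 1.76 m/s2 secondary) is the kind of decision the model formalises: a required deceleration well below the primary threshold would indicate a lane change both drivers perceive as safe.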
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Research on a Simulation Method of the Millimeter Wave Radar Virtual Test Environment for Intelligent Driving
Sensors 2020, 20(7), 1929; https://doi.org/10.3390/s20071929 - 30 Mar 2020
Cited by 1
Abstract
This study addresses the virtual testing of intelligent driving, examines the key problems in modeling and simulating millimeter wave radar environmental clutter, and proposes a modeling and simulation method for the environmental clutter of millimeter wave radar in intelligent driving. First, based on the attributes of intelligent vehicle millimeter wave radar, the classification characteristics of the traffic environment of an intelligent vehicle and the generation mechanism of radar environmental clutter are analyzed. Next, the statistical distribution characteristics of the clutter amplitude, the distribution characteristics of the power spectrum, and the electromagnetic dielectric characteristics are analyzed. The simulation method of radar clutter under environmental conditions such as road surface, rainfall, snowfall, and fog are deduced and designed. Finally, experimental comparison results are utilized to validate the model and simulation method. Full article
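The statistical clutter-amplitude modelling mentioned above can be sketched generically. The Weibull distribution is a common choice for ground and weather clutter amplitude statistics, but note that the specific distribution, shape, and scale used below are assumptions for illustration and are not taken from the paper.

```python
import numpy as np

# Sketch of drawing radar clutter amplitude samples from an assumed Weibull
# model. Shape and scale parameters are illustrative, not the paper's fits.

rng = np.random.default_rng(0)

def clutter_amplitudes(n: int, shape: float = 1.5, scale: float = 0.8) -> np.ndarray:
    """Draw n non-negative Weibull-distributed clutter amplitude samples."""
    return scale * rng.weibull(shape, size=n)

samples = clutter_amplitudes(10_000)
# Sample mean approaches scale * Gamma(1 + 1/shape) as n grows.
```

In a virtual test environment, such synthetic amplitude draws (combined with an assumed power-spectrum shape) would be superimposed on simulated target returns to stress-test the radar processing chain under road, rain, snow, or fog clutter.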
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Intelligent Driving Assistant Based on Road Accident Risk Map Analysis and Vehicle Telemetry
Sensors 2020, 20(6), 1763; https://doi.org/10.3390/s20061763 - 22 Mar 2020
Cited by 1
Abstract
Through the application of intelligent systems in driver assistance, traveling by road has become much more comfortable and safe. This paper reports the development of an intelligent driving assistant, based on vehicle telemetry and road accident risk map analysis, whose responsibility is to alert the driver in order to avoid risky situations that may cause traffic accidents. In performance evaluations using real cars in a real environment, the on-board intelligent assistant reproduced real-time audio-visual alerts according to information obtained from both telemetry and road accident risk map analysis. As a result, an intelligent assistance agent based on fuzzy reasoning was obtained, which supported the driver correctly in real time according to the telemetry data, the vehicle environment, and the principles of secure driving practices and transportation regulations. Experimental results and conclusions emphasizing the advantages of the proposed intelligent driving assistant in improving the driving task are presented. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Model Predictive Controller Based on Online Obtaining of Softness Factor and Fusion Velocity for Automatic Train Operation
Sensors 2020, 20(6), 1719; https://doi.org/10.3390/s20061719 - 19 Mar 2020
Cited by 1
Abstract
This paper develops an improved model predictive controller based on the online obtaining of softness factor and fusion velocity for automatic train operation, to enhance the tracking control performance. Specifically, the softness factor of the improved model predictive control algorithm is not a constant; instead, an improved online adaptive adjusting method for the softness factor, based on fuzzy satisfaction with the system output value and the velocity-distance trajectory characteristic, is adopted, and an improved whale optimization algorithm is proposed to solve for the adjustable parameters. Meanwhile, the system output value for automatic train operation is not sampled by a normal speed sensor; instead, an improved online velocity sampling method for the system output value, based on a fusion velocity model and an intelligent digital torque sensor, is applied. In addition, the two improved strategies take the real-time storage and calculation capacities of the controller's core chip into account. Therefore, the proposed improved strategies (I) have good tracking precision, (II) are simple and easily implemented, and (III) can ensure that computational tasks are accomplished in real time. Finally, to verify the effectiveness of the improved model predictive controller, Matlab/Simulink simulation and hardware-in-the-loop simulation (HILS) are adopted for automatic train operation tracking control, and the simulation results indicate that the improved model predictive controller has better tracking control effectiveness than the existing traditional improved model predictive controller. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
A Repeated Game Freeway Lane Changing Model
Sensors 2020, 20(6), 1554; https://doi.org/10.3390/s20061554 - 11 Mar 2020
Cited by 4
Abstract
Lane changes are complex safety- and throughput-critical driver actions. Most lane-changing models deal with lane-changing maneuvers solely from the merging driver’s standpoint and thus ignore driver interaction. To overcome this shortcoming, we develop a game-theoretical decision-making model and validate the model using empirical merging maneuver data at a freeway on-ramp. Specifically, this paper advances our repeated game model by using updated payoff functions. Validation results using the Next Generation SIMulation (NGSIM) empirical data show that the developed game-theoretical model provides better prediction accuracy compared to previous work, giving correct predictions approximately 86% of the time. In addition, a sensitivity analysis demonstrates the rationality of the model and its sensitivity to variations in various factors. To provide evidence of the benefits of the repeated game approach, which takes into account previous decision-making results, a case study is conducted using an agent-based simulation model. The proposed repeated game model produces superior performance to a one-shot game model when simulating actual freeway merging behaviors. Finally, this lane change model, which captures the collective decision-making between human drivers, can be used to develop automated vehicle driving strategies. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Robust Traffic Light and Arrow Detection Using Digital Map with Spatial Prior Information for Automated Driving
Sensors 2020, 20(4), 1181; https://doi.org/10.3390/s20041181 - 21 Feb 2020
Cited by 1
Abstract
Traffic light recognition is an indispensable elemental technology for automated driving in urban areas. In this study, we propose an algorithm that recognizes traffic lights and arrow lights by image processing, using a digital map and the precise vehicle pose estimated by a localization module. The use of a digital map allows the determination of a region of interest in the image, which reduces computational cost and false detections. In addition, this study develops an algorithm that recognizes arrow lights using the relative positions of traffic lights as prior spatial information. This allows the recognition of distant arrow lights that are difficult even for humans to see clearly. Experiments were conducted to evaluate the recognition performance of the proposed method and to verify whether it matches the performance required for automated driving. Quantitative evaluations indicate that the proposed method achieved average F-values of 91.8% for traffic lights and 56.7% for arrow lights. It was confirmed that the arrow-light detection could recognize small arrow objects even when they were smaller than 10 pixels. The verification experiments indicate that the performance of the proposed method meets the requirements for smooth acceleration and deceleration at intersections in automated driving. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Shadow Detection in Still Road Images Using Chrominance Properties of Shadows and Spectral Power Distribution of the Illumination
Sensors 2020, 20(4), 1012; https://doi.org/10.3390/s20041012 - 13 Feb 2020
Cited by 2
Abstract
A well-known challenge in vision-based driver assistance systems is cast shadows on the road, which make fundamental tasks such as road and lane detection difficult. Because shadow detection relies on shadow features, in this paper we propose a set of new chrominance properties of shadows based on the skylight and sunlight contributions to the road surface chromaticity. Six constraints on shadowed and non-shadowed regions are derived from these properties. The chrominance properties and the associated constraints are used as shadow features in an effective shadow detection method intended to be integrated into an onboard road detection system, where the identification of cast shadows on the road is a determinant stage. Onboard systems deal with still outdoor images; thus, the approach focuses on distinguishing shadow boundaries from material changes by considering two illumination sources: sky and sun. A non-shadowed road region is illuminated by both skylight and sunlight, whereas a shadowed one is illuminated by skylight only; thus, their chromaticities differ. The shadow edge detection strategy consists of identifying image edges that separate shadowed and non-shadowed road regions. The classification is achieved by verifying whether the pixel chrominance values of the regions on both sides of an image edge satisfy the six constraints. Experiments on real traffic scenes demonstrated the effectiveness of our shadow detection system in detecting shadow edges on the road and material-change edges, outperforming previous shadow detection methods based on physical features and showing the high potential of the new chrominance properties. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
A Fail-Operational Control Architecture Approach and Dead-Reckoning Strategy in Case of Positioning Failures
Sensors 2020, 20(2), 442; https://doi.org/10.3390/s20020442 - 13 Jan 2020
Cited by 2
Abstract
Presently, in the event of a failure in Automated Driving Systems, control architectures rely on hardware redundancies rather than software solutions to assure reliability, or wait for human interaction in takeover requests to achieve a minimal risk condition. As user confidence and final acceptance of this novel technology are strongly related to enabling safe states, automated fall-back strategies must be assured as a response to failures while the system is performing a dynamic driving task. In this work, a fail-operational control architecture approach and a dead-reckoning strategy for positioning failures are developed and presented. A fail-operational system is capable of detecting failures in the last available positioning source, warning the decision stage to set up a fall-back strategy, and planning a new trajectory in real time. The surrounding objects and road borders are considered during vehicle motion control after the failure, for collision avoidance and lane-keeping purposes. A case study based on a realistic urban scenario is simulated for testing and system verification. It shows that the proposed approach always bears in mind both the passenger's safety and comfort during execution of the fall-back maneuver. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Editor's Choice Article
Vehicle Trajectory Prediction and Collision Warning via Fusion of Multisensors and Wireless Vehicular Communications
Sensors 2020, 20(1), 288; https://doi.org/10.3390/s20010288 - 04 Jan 2020
Cited by 8
Abstract
Driver inattention is one of the leading causes of traffic crashes worldwide. Providing the driver with an early warning prior to a potential collision can significantly reduce the fatalities and level of injuries associated with vehicle collisions. In order to monitor the vehicle surroundings and predict collisions, on-board sensors such as radar, lidar, and cameras are often used. However, the driving environment perception based on these sensors can be adversely affected by a number of factors such as weather and solar irradiance. In addition, potential dangers cannot be detected if the target is located outside the limited field-of-view of the sensors, or if the line of sight to the target is occluded. In this paper, we propose an approach for designing a vehicle collision warning system based on fusion of multisensors and wireless vehicular communications. A high-level fusion of radar, lidar, camera, and wireless vehicular communication data was performed to predict the trajectories of remote targets and generate an appropriate warning to the driver prior to a possible collision. We implemented and evaluated the proposed vehicle collision system in virtual driving environments, which consisted of a vehicle–vehicle collision scenario and a vehicle–pedestrian collision scenario. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available
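The trajectory-prediction step described above can be illustrated with a minimal constant-velocity sketch (the function name, horizon, and step size are illustrative assumptions, not the paper's implementation, which fuses radar, lidar, camera, and V2X data):

```python
def predict_min_gap(p_ego, v_ego, p_tgt, v_tgt, horizon=3.0, dt=0.1):
    """Propagate ego and remote target with constant velocity and return
    the minimum predicted separation (m) over the horizon; a warning would
    be issued when this falls below a safety threshold."""
    min_gap = float('inf')
    steps = int(horizon / dt) + 1
    for k in range(steps):
        t = k * dt
        ex, ey = p_ego[0] + v_ego[0] * t, p_ego[1] + v_ego[1] * t
        tx, ty = p_tgt[0] + v_tgt[0] * t, p_tgt[1] + v_tgt[1] * t
        min_gap = min(min_gap, ((ex - tx) ** 2 + (ey - ty) ** 2) ** 0.5)
    return min_gap
```

For example, two vehicles approaching head-on at 10 m/s from 30 m apart reach a zero minimum gap within a 3 s horizon, which would trigger a warning; V2X data extends this check to targets outside the sensors' field of view.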

Open Access Article
Occlusion-Free Road Segmentation Leveraging Semantics for Autonomous Vehicles
Sensors 2019, 19(21), 4711; https://doi.org/10.3390/s19214711 - 30 Oct 2019
Cited by 3
Abstract
Deep convolutional neural networks lead the trend in vision-based road detection; however, obtaining the full road area despite occlusion from monocular vision remains challenging due to the dynamic scenes in autonomous driving. Inferring the occluded road area requires a comprehensive understanding of the geometry and semantics of the visible scene. To this end, we create a small but effective dataset based on the KITTI dataset, named KITTI-OFRS (KITTI-occlusion-free road segmentation), and propose a lightweight, efficient, fully convolutional neural network called OFRSNet (occlusion-free road segmentation network) that learns to predict occluded portions of the road in the semantic domain by looking around foreground objects and at the visible road layout. In particular, a global context module is used to build the down-sampling and joint-context up-sampling blocks in our network, which improves its performance. Moreover, a spatially-weighted cross-entropy loss is designed to significantly increase the accuracy of this task. Extensive experiments on different datasets verify the effectiveness of the proposed approach, and comparisons with strong existing methods show that the proposed method outperforms the baseline models by achieving a better trade-off between accuracy and runtime, which makes our approach applicable to autonomous vehicles in real time. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available
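A spatially-weighted cross-entropy loss of the kind mentioned above can be sketched as follows (the weighting scheme and function name are illustrative, not the paper's exact formulation; typically the weight map would emphasize occluded road pixels):

```python
import numpy as np

def spatially_weighted_ce(probs, labels, weight_map):
    """Per-pixel cross-entropy scaled by a spatial weight map.

    probs: (H, W, C) softmax outputs; labels: (H, W) integer class ids;
    weight_map: (H, W) per-pixel weights. Returns the mean weighted loss.
    """
    h, w = labels.shape
    # gather the predicted probability of the true class at each pixel
    p_true = probs[np.arange(h)[:, None], np.arange(w)[None, :], labels]
    return float(np.mean(weight_map * -np.log(p_true + 1e-12)))
```

With a uniform weight map this reduces to ordinary cross-entropy; raising the weights on hard regions scales their contribution to the gradient proportionally.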

Open Access Article
Combined Edge- and Stixel-based Object Detection in 3D Point Cloud
Sensors 2019, 19(20), 4423; https://doi.org/10.3390/s19204423 - 12 Oct 2019
Cited by 4
Abstract
Environment perception is critical for feasible path planning and safe driving in autonomous vehicles. Perception devices such as cameras, LiDAR (Light Detection and Ranging), and IMUs (Inertial Measurement Units) only provide raw sensing data, with no identification of vital objects, which is insufficient for autonomous vehicles to perform safe and efficient self-driving operations. This study proposes an improved edge-oriented segmentation-based method to detect objects in the sensed three-dimensional (3D) point cloud. The method consists of three main steps: First, the bounding areas of objects are identified by edge detection and stixel estimation in the corresponding two-dimensional (2D) images taken by a stereo camera. Second, sparse 3D point clouds of the objects are reconstructed within the bounding areas. Finally, the dense point clouds of the objects are segmented by matching the sparse 3D point clouds of the objects with the whole-scene point cloud. Comparison with existing segmentation methods shows that the proposed edge-oriented segmentation method improves the precision of 3D point cloud segmentation and that objects can be segmented accurately. Meanwhile, the visualization of output data in advanced driving assistance systems (ADAS) is greatly facilitated by the decrease in computational time and in the number of points in each object's point cloud. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Vehicle Deceleration Prediction Model to Reflect Individual Driver Characteristics by Online Parameter Learning for Autonomous Regenerative Braking of Electric Vehicles
Sensors 2019, 19(19), 4171; https://doi.org/10.3390/s19194171 - 26 Sep 2019
Cited by 2
Abstract
Connected powertrain control, which uses intelligent transportation system information, has been widely researched to improve driver convenience and energy efficiency. Vehicle state prediction under decelerating driving conditions can be applied to automatic regenerative braking in electric vehicles. However, drivers can perceive the control as unnatural when regenerative braking is based on prediction results from a generic prediction model. As a result, a deceleration prediction model that represents individual driving characteristics is required to ensure a more comfortable experience with automatic regenerative braking control. Thus, in this paper, we propose a deceleration prediction model based on a parametric mathematical equation with explicit model parameters. The model is designed specifically for deceleration prediction by using a parametric equation that describes deceleration characteristics. Furthermore, the explicit model parameters are updated to individual driver characteristics using the driver's braking data from real driving situations. The proposed algorithm was integrated and validated on a real-time embedded system and then applied to a model-based regenerative control algorithm as a case study. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Deceleration Planning Algorithm Based on Classified Multi-Layer Perceptron Models for Smart Regenerative Braking of EV in Diverse Deceleration Conditions
Sensors 2019, 19(18), 4020; https://doi.org/10.3390/s19184020 - 18 Sep 2019
Cited by 3
Abstract
The smart regenerative braking system (SRS) is an autonomous version of one-pedal driving in electric vehicles. To implement SRS, a deceleration planning algorithm is necessary to generate the deceleration used in automatic regenerative control. To reduce the discomfort from the automatic regeneration, the deceleration should be similar to human driving. In this paper, a deceleration planning algorithm based on multi-layer perceptron (MLP) is proposed. The MLP models can mimic the human driving behavior by learning the driving data. In addition, the proposed deceleration planning algorithm has a classified structure to improve the planning performance in each deceleration condition. Therefore, the individual MLP models were designed according to three different deceleration conditions: car-following, speed bump, and intersection. The proposed algorithm was validated through driving simulations. Then, time to collision and similarity to human driving were analyzed. The results show that the minimum time to collision was 1.443 s and the velocity root-mean-square error (RMSE) with human driving was 0.302 m/s. Through the driving simulation, it was validated that the vehicle moves safely with desirable velocity when SRS is in operation, based on the proposed algorithm. Furthermore, the classified structure has more advantages than the integrated structure in terms of planning performance. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available
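The time-to-collision metric reported above (minimum 1.443 s) follows the standard constant-speed definition, which can be sketched as (function and parameter names are illustrative, not the paper's implementation):

```python
def time_to_collision(gap_m, v_ego, v_lead):
    """Time (s) until the ego vehicle reaches the preceding car, assuming
    both hold their current speeds (m/s).

    Returns float('inf') when the ego vehicle is not closing the gap.
    """
    closing_speed = v_ego - v_lead
    if closing_speed <= 0:
        return float('inf')
    return gap_m / closing_speed
```

For example, a 20 m gap closed at 5 m/s gives a TTC of 4 s; a planner like the one described would target decelerations that keep this value safely above its minimum.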

Open Access Article
Reinforcement Learning-Based End-to-End Parking for Automatic Parking System
Sensors 2019, 19(18), 3996; https://doi.org/10.3390/s19183996 - 16 Sep 2019
Cited by 6
Abstract
In the existing mainstream automatic parking system (APS), a parking path is first planned based on the parking slot detected by the sensors; a path tracking module then guides the vehicle along the planned path. However, since the vehicle is a nonlinear dynamic system, path tracking errors inevitably occur, leading to inclination and deviation in the final parking position. Accordingly, in this paper, a reinforcement learning-based end-to-end parking algorithm is proposed to achieve automatic parking. The vehicle continuously learns and accumulates experience from numerous parking attempts, learning the optimal steering wheel angle command for different parking slots. With this end-to-end approach, errors caused by path tracking can be avoided. Moreover, to ensure that the parking slot can be observed continuously during learning, a parking slot tracking algorithm is proposed based on the combination of vision and vehicle chassis information. Furthermore, given that the learning network output is hard to converge and easily falls into local optima during the parking process, several reinforcement learning training methods tailored to parking conditions are developed. Lastly, real-vehicle tests show that the proposed method achieves a better parking attitude than the path planning and path tracking-based method. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Multi-sensor Fusion Road Friction Coefficient Estimation During Steering with Lyapunov Method
Sensors 2019, 19(18), 3816; https://doi.org/10.3390/s19183816 - 04 Sep 2019
Cited by 3
Abstract
The road friction coefficient is a key parameter for autonomous vehicles and vehicle dynamic control. With the development of autonomous vehicles, more and more environmental perception sensors are being installed on vehicles, which means that more information can be used to estimate the road friction coefficient. In this paper, a nonlinear observer aided by vehicle lateral displacement information is proposed for estimating the road friction coefficient. First, the tire brush model is modified using tire test data to describe the tire characteristics more precisely under high-friction conditions. Then, on the basis of vehicle dynamics and a kinematic model, a nonlinear observer is designed, and the self-aligning torque of the wheel, the lateral acceleration, and the vehicle lateral displacement are used to estimate the road friction coefficient during steering. Finally, slalom tests and DLC (double lane change) tests under high-friction conditions are conducted to verify the proposed estimation algorithm. Test results showed that the proposed method performs well during steering and that the estimated road friction coefficient converges rapidly to the reference value. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Real-Time Photometric Calibrated Monocular Direct Visual SLAM
Sensors 2019, 19(16), 3604; https://doi.org/10.3390/s19163604 - 19 Aug 2019
Cited by 2
Abstract
To solve the illumination sensitivity problems of mobile ground equipment, an enhanced visual SLAM algorithm based on the sparse direct method is proposed in this paper. Firstly, the vignette and response functions of the input sequences were optimized based on the photometric formation of the camera. Secondly, the Shi–Tomasi corners of the input sequence were tracked, and optimization equations were established using the pixel tracking of sparse direct visual odometry (VO). Thirdly, the Levenberg–Marquardt (L–M) method was applied to solve the joint optimization equation, and the photometric calibration parameters in the VO were updated to realize real-time dynamic compensation of the exposure of the input sequences, which reduced the effects of light variations on the accuracy and robustness of SLAM (simultaneous localization and mapping). Finally, a Shi–Tomasi corner filtering strategy was designed to reduce the computational complexity of the proposed algorithm, and loop closure detection was realized based on oriented FAST and rotated BRIEF (ORB) features. The proposed algorithm was tested on the TUM, KITTI, and EuRoC datasets and in a real environment, and the experimental results show that the positioning and mapping performance of the proposed algorithm is promising. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Sensor Fault Detection and Signal Restoration in Intelligent Vehicles
Sensors 2019, 19(15), 3306; https://doi.org/10.3390/s19153306 - 27 Jul 2019
Cited by 3
Abstract
This paper presents fault diagnosis logic and signal restoration algorithms for vehicle motion sensors. Because the vehicle is equipped with various sensors to realize automatic operation, defects in these sensors lead to severe safety issues. Therefore, an effective and reliable fault detection and recovery system should be developed. The primary idea of the proposed fault detection system is the conversion of measured wheel speeds into vehicle central-axis information and the selection of a reference central-axis speed based on this information. The obtained results are then employed to estimate the speed on each wheel side, and these estimates are compared with measured values to identify faults and recover the faulty signal. For the fault diagnosis logic, a conditional expression with only two variables is derived to distinguish between normal and faulty states; further, an analytical redundancy structure and a simple diagnostic logic structure are presented. Finally, an off-line test is conducted using test vehicle data to validate the proposed method; it demonstrates that the proposed fault detection and signal restoration algorithm can satisfy the control performance required under each sensor failure. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available
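The residual-based detect-and-restore idea can be sketched in a few lines (here the reference speed is simply the median of the wheel speeds, a simplified stand-in for the paper's central-axis conversion; function names and the threshold are illustrative):

```python
from statistics import median

def detect_and_restore(wheel_speeds, threshold=1.0):
    """Flag wheel-speed channels that deviate from a reference speed by
    more than `threshold` (m/s) and replace them with the reference.

    Returns (fault_flags, restored_speeds)."""
    reference = median(wheel_speeds)
    faults = [abs(w - reference) > threshold for w in wheel_speeds]
    restored = [reference if f else w for w, f in zip(wheel_speeds, faults)]
    return faults, restored
```

A stuck-at-zero wheel sensor among three healthy ones is flagged and its signal replaced by the reference, so downstream controllers keep receiving a plausible speed.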

Open Access Article
Vehicle Driver Monitoring through the Statistical Process Control
Sensors 2019, 19(14), 3059; https://doi.org/10.3390/s19143059 - 11 Jul 2019
Cited by 3
Abstract
This paper proposes the use of Statistical Process Control (SPC), more specifically the Exponentially Weighted Moving Average (EWMA) method, for monitoring drivers using approaches based on the vehicle and on the driver's behavior. Based on SPC, we propose a method for lane departure detection, a method for detecting sudden driver movements, and a method combined with computer vision to detect driver fatigue. All methods consider information from sensors scattered throughout the vehicle. The results showed the efficiency of the methods in identifying and detecting unwanted driver actions such as sudden movements, lane departure, and driver fatigue. Lane departure detection obtained results of up to 76.92% (without constant speed) and 84.16% (speed maintained at ≈60). Furthermore, sudden movement detection obtained results of up to 91.66% (steering wheel) and 94.44% (brake). Driver fatigue was detected in up to 94.46% of situations. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available
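An EWMA control chart of the kind used here flags samples whose smoothed statistic leaves a control band around the driver's in-control behavior. A minimal sketch (the smoothing factor, limit width, and signal values are illustrative assumptions, not the paper's tuning):

```python
import math

def ewma_alarms(samples, mean, sigma, lam=0.2, L=3.0):
    """Return one boolean per sample: True when the EWMA statistic leaves
    the control band mean ± L·sigma·sqrt(lam/(2-lam)·(1-(1-lam)^(2t)))."""
    z = mean  # EWMA statistic starts at the in-control mean
    alarms = []
    for t, x in enumerate(samples, start=1):
        z = lam * x + (1 - lam) * z
        # time-varying control limit of the EWMA chart
        width = L * sigma * math.sqrt(lam / (2 - lam) * (1 - (1 - lam) ** (2 * t)))
        alarms.append(abs(z - mean) > width)
    return alarms

# steady steering-angle readings, then a sudden jerk of the wheel
signal = [0.1, -0.1, 0.0, 0.2, -0.2, 0.1, 3.0, 3.2, 3.1]
flags = ewma_alarms(signal, mean=0.0, sigma=0.15)
```

The small fluctuations stay inside the band, while the jerk drives the statistic out of it and raises an alarm on the remaining samples.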

Open Access Article
A Strain-Based Method to Estimate Tire Parameters for Intelligent Tires under Complex Maneuvering Operations
Sensors 2019, 19(13), 2973; https://doi.org/10.3390/s19132973 - 05 Jul 2019
Cited by 8
Abstract
The possibility of using tires as active sensors opens the door to a huge number of ways to accomplish this goal. Here, based on a tire equipped with strain sensors, also known as an intelligent tire, relevant vehicle dynamics information can be provided. The purpose of this research is to improve the strain-based methodology for intelligent tires so as to estimate all tire forces based only on deformations measured in the contact patch. First, using indoor test-rig data, an algorithm was developed to pick out the relevant features of the strain data and correlate them with tire parameters. This contact-patch information is then fed to a fuzzy logic system to estimate the tire parameters. To evaluate the reliability of the proposed estimator, the well-known simulation software CarSim was used to provide the vehicle parameters in complex maneuvers and to back up the estimation results, and the estimations were checked against the simulation results. This approach has enabled the behaviour of the intelligent tire to be tested for different maneuvers and velocities, providing key information about the tire parameters directly from the only contact that exists between the vehicle and the road. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Semantic Segmentation with Transfer Learning for Off-Road Autonomous Driving
Sensors 2019, 19(11), 2577; https://doi.org/10.3390/s19112577 - 06 Jun 2019
Cited by 13
Abstract
Since state-of-the-art deep learning algorithms demand a large training dataset, which is often unavailable in some domains, transferring knowledge from one domain to another has become a trending technique in the computer vision field. However, this transfer may not be straightforward, owing to issues such as the original network size or large differences between the source and target domains. In this paper, we perform transfer learning for semantic segmentation of off-road driving environments using a pre-trained segmentation network called DeconvNet. We explore and verify two important aspects of transfer learning. First, since the original network was very large and did not perform well for our application, we propose a smaller network, which we call the light-weight network; it is half the size of the original DeconvNet architecture. We transferred the knowledge from the pre-trained DeconvNet to our light-weight network and fine-tuned it. Second, we used synthetic datasets as an intermediate domain before training with real-world off-road driving data. Fine-tuning a model trained on a synthetic dataset that simulates the off-road driving environment yields more accurate segmentation of real-world off-road driving environments than transfer learning without a synthetic dataset, as long as the synthetic dataset is generated with real-world variations in mind. We also explore how a too-simple or too-random synthetic dataset results in negative transfer. We use the Freiburg Forest dataset as the real-world off-road driving dataset. Full article
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Multi-Stage Hough Space Calculation for Lane Markings Detection via IMU and Vision Fusion
Sensors 2019, 19(10), 2305; https://doi.org/10.3390/s19102305 - 19 May 2019
Cited by 3
Abstract
It is challenging to achieve robust lane detection from a single frame, particularly in complicated driving scenarios. A novel approach based on multiple frames is proposed in this paper, taking advantage of the fusion of vision and an Inertial Measurement Unit (IMU). The Hough space is employed as a storage medium in which lane markings can be stored and retrieved conveniently. Lane markings are detected in the following steps. First, primary line segments are extracted from a basic Hough space calculated by the Hough Transform. Second, a CNN-based classifier measures the confidence probability of each line segment, transforming the basic Hough space into a probabilistic Hough space. Third, pose information provided by the IMU is used to align previous probabilistic Hough spaces with the current one, and a filtered probabilistic Hough space is obtained by smoothing the primary probabilistic Hough space across frames. Finally, valid line segments with probability higher than 0.7 are extracted from the filtered probabilistic Hough space. The proposed approach is evaluated experimentally, and the results demonstrate satisfactory performance compared with various existing methods. Full article
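The cross-frame smoothing step can be sketched as follows: each frame contributes a confidence value per (rho, theta) cell, the aligned spaces are averaged, and cells above the 0.7 threshold survive. IMU-based alignment is abstracted to identity here, and the grid size and confidence values are assumptions for illustration only.

```python
import numpy as np

# Sketch: average a stack of aligned probabilistic Hough spaces across frames,
# then keep only cells whose smoothed confidence exceeds the threshold.

def filtered_hough(spaces, weights=None):
    """Smooth a list of aligned probabilistic Hough spaces (one per frame)."""
    stack = np.stack(spaces)
    if weights is None:
        weights = np.ones(len(spaces)) / len(spaces)   # uniform smoothing
    return np.tensordot(weights, stack, axes=1)

def valid_cells(space, threshold=0.7):
    """Return (rho, theta) indices of line segments above the threshold."""
    return np.argwhere(space > threshold)

# Three frames: a stable lane marking at cell (2, 3), flickering noise at (0, 0).
f1 = np.zeros((5, 5)); f1[2, 3] = 0.90; f1[0, 0] = 0.8
f2 = np.zeros((5, 5)); f2[2, 3] = 0.85
f3 = np.zeros((5, 5)); f3[2, 3] = 0.95

smoothed = filtered_hough([f1, f2, f3])
# The stable marking stays above 0.7; the single-frame noise is averaged out.
```

This is why the multi-frame scheme suppresses spurious single-frame detections: a cell must be consistently confident across aligned frames to pass the threshold.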
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Simulating Dynamic Driving Behavior in Simulation Test for Unmanned Vehicles via Multi-Sensor Data
Sensors 2019, 19(7), 1670; https://doi.org/10.3390/s19071670 - 08 Apr 2019
Cited by 2
Abstract
Driving behavior is the main basis for evaluating the performance of an unmanned vehicle. In simulation tests of unmanned vehicles, for the simulation results to approximate the actual results as closely as possible, the model of driving behaviors must be able to exhibit the actual motion of unmanned vehicles. We propose an automatic approach to simulating the dynamic driving behaviors of vehicles in traffic scenes represented by image sequences. The spatial topological attributes and appearance attributes of virtual vehicles are computed separately according to the geometric-consistency constraint of the sparse 3D space organized from the image sequence. To achieve this goal, we need to solve three main problems: registration of the vehicle in the 3D space of the road environment, rendering of the vehicle's image as observed from the corresponding viewpoint in the road scene, and consistency between the vehicle and the road environment. After embedding the proposed method in a scene browser, a typical traffic scene including intersections was chosen for a virtual vehicle to execute the driving tasks of lane change, overtaking, slowing down and stopping, right turn, and U-turn. The experimental results show that different driving behaviors of vehicles in a typical traffic scene can be exhibited smoothly and realistically. Our method can also be used to generate simulation data for traffic scenes that are difficult to collect. Full article
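The first sub-problem, registering a virtual vehicle in the 3D road space and observing it from a scene viewpoint, reduces in its simplest form to a pinhole projection. The intrinsics, camera pose, and vehicle anchor point below are hypothetical values for illustration, not the paper's calibration.

```python
import numpy as np

# Minimal sketch of projecting a vehicle anchor point, registered in the 3D
# road frame, into the image observed from a given camera viewpoint.

def project(point_world, K, R, t):
    """Project a 3D world point into pixel coordinates for camera (K, R, t)."""
    p_cam = R @ point_world + t          # world frame -> camera frame
    u, v, w = K @ p_cam                  # camera frame -> image plane
    return np.array([u / w, v / w])      # perspective division

K = np.array([[800.0,   0.0, 320.0],     # assumed focal length, principal point
              [  0.0, 800.0, 240.0],
              [  0.0,   0.0,   1.0]])
R = np.eye(3)                            # camera aligned with the road frame
t = np.zeros(3)

# Vehicle registered 10 m ahead, offset 1 m right and 0.5 m down.
vehicle_anchor = np.array([1.0, 0.5, 10.0])
pixel = project(vehicle_anchor, K, R, t)   # ≈ [400., 280.]
```

Consistency of the rendered vehicle with the road environment then amounts to projecting all of the vehicle's anchor points with the same (K, R, t) used to organize the sparse 3D scene.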
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Portable System for Monitoring and Controlling Driver Behavior and the Use of a Mobile Phone While Driving
Sensors 2019, 19(7), 1563; https://doi.org/10.3390/s19071563 - 31 Mar 2019
Cited by 10
Abstract
There is a pressing need for technology that controls a driver's phone while driving, preventing the driver from being distracted and thus saving the lives of drivers and passengers. Recent studies have shown that 70% of young, aware drivers regularly text while driving. Many different technologies are used to control mobile phones while driving, including electronic device control, the global positioning system (GPS), on-board diagnostics (OBD)-II-based devices, and mobile phone applications. These devices acquire vehicle information such as speed and use it to control the driver's phone, for example by preventing them from making or receiving calls above specific speed limits. The information from the devices is interfaced via Bluetooth and can then be used to control mobile phone applications. The main aim of this paper is to propose the design of a portable system for monitoring the use of a mobile phone while driving and for controlling the driver's phone, if necessary, when the vehicle exceeds a specific speed limit (>10 km/h). A paper-based self-reported questionnaire survey was carried out among 600 teenage drivers of different nationalities to assess the driving behavior of young drivers in Qatar. Finally, a mobile application was developed to monitor the driver's mobile usage, and an OBD-II-based portable system was designed to acquire data from the vehicle to identify driver behaviors with respect to phone usage, sudden lane changes, and abrupt braking/sharp speeding. This information was used in the mobile application to control the driver's mobile usage as well as to report driving behavior. The application of such a system could significantly improve driver behavior worldwide. Full article
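The speed-gated control rule described above — block phone use once the OBD-II speed reading exceeds 10 km/h, while logging risky maneuvers — can be sketched as below. The class and method names are illustrative, not the paper's actual app API.

```python
# Sketch of the speed-gated phone-control logic: the companion app blocks
# calls/texts whenever the latest OBD-II speed reading exceeds the limit,
# and logs driving-behavior events reported from the OBD-II module.

SPEED_LIMIT_KMH = 10.0   # threshold from the paper's design

class PhoneController:
    def __init__(self):
        self.calls_blocked = False
        self.events = []           # behavior log (lane changes, braking, ...)

    def on_speed(self, speed_kmh):
        # Block or release the phone based on the latest vehicle speed.
        self.calls_blocked = speed_kmh > SPEED_LIMIT_KMH

    def on_event(self, kind):
        # Record behaviors such as "sudden_lane_change" or "abrupt_braking".
        self.events.append(kind)

ctrl = PhoneController()
ctrl.on_speed(5.0)                 # below the limit: phone remains usable
below_limit_blocked = ctrl.calls_blocked
ctrl.on_speed(35.0)                # above the limit: calls/texts blocked
ctrl.on_event("abrupt_braking")
```

In the real system the `on_speed` updates would arrive over the Bluetooth link to the OBD-II module rather than from direct calls.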
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available

Open Access Article
Multi-Vehicle Tracking via Real-Time Detection Probes and a Markov Decision Process Policy
Sensors 2019, 19(6), 1309; https://doi.org/10.3390/s19061309 - 15 Mar 2019
Cited by 7
Abstract
Online multi-object tracking (MOT) has broad applications in time-critical video analysis scenarios such as advanced driver-assistance systems (ADASs) and autonomous driving. The system proposed in this paper tracks multiple vehicles in the front view of an onboard monocular camera. Vehicle detection probes are customized to generate high-precision detections, which underpin the subsequent tracking-by-detection method. A novel Siamese network with a spatial pyramid pooling (SPP) layer is applied to calculate pairwise appearance similarity. The motion model, derived from the refined bounding box, provides relative movement and aspect information. An online-learned policy treats each tracking period as a Markov decision process (MDP) to maintain long-term, robust tracking. The proposed method is validated on a moving vehicle with an onboard NVIDIA Jetson TX2 and runs at real-time speeds. Compared with other methods on the KITTI and self-collected datasets, our method achieves strong performance in terms of the "Mostly tracked", "Fragmentation", and "ID switch" metrics. Full article
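Treating each track's lifetime as an MDP can be sketched with a small state machine over {tentative, tracked, lost, removed}, driven by whether a detection was associated this frame. The paper learns its policy online; the hand-written transitions and the `MAX_LOST_FRAMES` threshold below are assumptions for illustration.

```python
# Sketch of MDP-style track management for tracking-by-detection:
# each frame, a track observes whether it was matched to a detection
# and transitions between lifecycle states accordingly.

TRANSITIONS = {
    ("tentative", True):  "tracked",   # confirmed by a detection
    ("tentative", False): "removed",   # never confirmed: discard
    ("tracked",   True):  "tracked",
    ("tracked",   False): "lost",      # e.g. occlusion
    ("lost",      True):  "tracked",   # re-identification recovers the track
    ("lost",      False): "lost",
}
MAX_LOST_FRAMES = 3                    # assumed patience before removal

def step(state, matched, lost_frames):
    """Advance one frame of the track-management MDP."""
    new_state = TRANSITIONS[(state, matched)]
    lost_frames = lost_frames + 1 if new_state == "lost" else 0
    if new_state == "lost" and lost_frames > MAX_LOST_FRAMES:
        new_state = "removed"
    return new_state, lost_frames

# A track that is confirmed, occluded for two frames, then recovered:
state, lost = "tentative", 0
for matched in [True, True, False, False, True]:
    state, lost = step(state, matched, lost)
```

Keeping occluded tracks in a "lost" state for a few frames before removal is what reduces the "Fragmentation" and "ID switch" counts reported in the abstract.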
(This article belongs to the Special Issue Intelligent Vehicles) Printed Edition available
