Advanced Sensing Techniques for Autonomous Vehicles and Advanced Driver Assistance Systems (ADAS)

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Electronic Sensors".

Deadline for manuscript submissions: closed (20 May 2023) | Viewed by 42539

Special Issue Editors


Guest Editor
Computer Engineering Department, INVETT Research Group, Universidad de Alcalá, Alcalá de Henares, Madrid, Spain
Interests: intelligent transportation systems; autonomous vehicles; control systems; driver assistance systems; artificial vision

Guest Editor
Computer Engineering Department, Polytechnic School, University of Alcalá, Campus Universitario s/n, Alcalá de Henares, 28805 Madrid, Spain
Interests: accurate mapping systems based on optimal optimization algorithms; advanced driver assistance systems; assistive intelligent vehicles; driver and road user state and intent recognition; dynamic and cinematic car models; intelligent localization systems based on LiDAR odometry; intelligent navigation and localization systems based on inertial navigation systems; intelligent-vehicle-related image, radar, and LiDAR signal processing; sensor fusion systems for driverless cars

Guest Editor
Computer Engineering Department, Universidad de Alcalá, Alcalá de Henares, 28805 Madrid, Spain
Interests: computer vision; multi-sensory systems; 3D sensing; mapping and localization; autonomous vehicles and robotics

Guest Editor
INVETT Research Group, Universidad de Alcalá, Campus Universitario, Ctra. Madrid-Barcelona km 33,600, 28805 Alcalá de Henares, Spain
Interests: intelligent vehicles and traffic technologies; intelligent vehicles; user-based autonomous vehicle design; advanced vehicle and traffic perception and modeling systems; predictive perception systems

Special Issue Information

Dear Colleagues,

Several systems are essential to autonomous vehicles, including localization, navigation, and obstacle avoidance systems. To implement all of these systems, autonomous vehicles must be equipped with a multitude of sensors (GPS, inertial measurement units (IMUs), radars, cameras, LiDARs, etc.). All of these systems require the development of techniques that extract the relevant information as efficiently as possible. This Special Issue focuses on exploring these techniques for the purpose of their application in autonomous vehicles or advanced driver assistance systems (ADAS). Topics include, but are not limited to:

  • inertial measurement units;
  • artificial vision;
  • accurate localization;
  • mapping;
  • simultaneous localization and mapping (SLAM);
  • LiDAR odometry;
  • navigation;
  • sensor fusion.

Prof. Dr. Javier Alonso Ruiz
Prof. Dr. Iván García Daza
Dr. Carlota Salinas
Dr. Rubén Izquierdo
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • Accurate localization
  • Mapping
  • LiDAR odometry
  • Navigation
  • Sensor fusion

Published Papers (15 papers)


Research

18 pages, 44271 KiB  
Article
One-Stage Brake Light Status Detection Based on YOLOv8
by Geesung Oh and Sejoon Lim
Sensors 2023, 23(17), 7436; https://doi.org/10.3390/s23177436 - 25 Aug 2023
Cited by 5 | Viewed by 2791
Abstract
Despite the advancement of advanced driver assistance systems (ADAS) and autonomous driving systems, surpassing the threshold of level 3 of driving automation remains a challenging task. Level 3 of driving automation requires assuming full responsibility for the vehicle’s actions, necessitating the acquisition of safer and more interpretable cues. To approach level 3, we propose a novel method for detecting driving vehicles and their brake light status, which is a crucial visual cue relied upon by human drivers. Our proposal consists of two main components. First, we introduce a fast and accurate one-stage brake light status detection network based on YOLOv8. Through transfer learning using a custom dataset, we enable YOLOv8 not only to detect the driving vehicle, but also to determine its brake light status. Furthermore, we present the publicly available custom dataset, which includes over 11,000 forward images along with manual annotations. We evaluate the performance of our proposed method in terms of detection accuracy and inference time on an edge device. The experimental results demonstrate high detection performance with an mAP50 (mean average precision at IoU threshold of 0.50) ranging from 0.766 to 0.793 on the test dataset, along with a short inference time of 133.30 ms on the Jetson Nano device. In conclusion, our proposed method achieves high accuracy and fast inference time in detecting brake light status. This contribution effectively improves safety, interpretability, and comfort by providing valuable input information for ADAS and autonomous driving technologies.
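A minimal sketch of the transfer-learning step described above, using the public Ultralytics YOLOv8 API; the dataset YAML, class names, and hyperparameters are placeholders rather than the authors' actual configuration.

```python
# Minimal sketch: fine-tuning YOLOv8 to detect vehicles together with brake light status.
# Assumes a dataset YAML ("brake_lights.yaml") whose classes encode the light state,
# e.g. 0: vehicle_brake_off, 1: vehicle_brake_on. These names are illustrative only.
from ultralytics import YOLO

model = YOLO("yolov8n.pt")          # start from a COCO-pretrained checkpoint
model.train(
    data="brake_lights.yaml",       # hypothetical dataset definition
    epochs=100,
    imgsz=640,
)

# Inference on a forward-facing frame: each box carries the class (brake on/off).
results = model("forward_frame.jpg")
for box in results[0].boxes:
    cls_id = int(box.cls)
    conf = float(box.conf)
    print(results[0].names[cls_id], conf, box.xyxy.tolist())
```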

25 pages, 8907 KiB  
Article
End-to-End Multimodal Sensor Dataset Collection Framework for Autonomous Vehicles
by Junyi Gu, Artjom Lind, Tek Raj Chhetri, Mauro Bellone and Raivo Sell
Sensors 2023, 23(15), 6783; https://doi.org/10.3390/s23156783 - 29 Jul 2023
Cited by 1 | Viewed by 2362
Abstract
Autonomous driving vehicles rely on sensors for the robust perception of their surroundings. Such vehicles are equipped with multiple perceptive sensors with a high level of redundancy to ensure safety and reliability in any driving condition. However, multi-sensor setups, such as camera, LiDAR, and radar systems, raise requirements related to sensor calibration and synchronization, which are the fundamental blocks of any autonomous system. On the other hand, sensor fusion and integration have become important aspects of autonomous driving research and directly determine the efficiency and accuracy of advanced functions such as object detection and path planning. Classical model-based estimation and data-driven models are two mainstream approaches to achieving such integration. Most recent research is shifting to the latter, showing high robustness in real-world applications but requiring large quantities of data to be collected, synchronized, and properly categorized. However, there are two major research gaps in existing works: (i) they lack fusion (and synchronization) of multiple sensors (camera, LiDAR, and radar); and (ii) they lack a generic, scalable, and user-friendly end-to-end implementation. To generalize the implementation of the multi-sensor perceptive system, we introduce an end-to-end generic sensor dataset collection framework that includes both hardware deploying solutions and sensor fusion algorithms. The framework prototype integrates a diverse set of sensors, such as camera, LiDAR, and radar. Furthermore, we present a universal toolbox to calibrate and synchronize three types of sensors based on their characteristics. The framework also includes the fusion algorithms, which utilize the merits of three sensors, namely, camera, LiDAR, and radar, and fuse their sensory information in a manner that is helpful for object detection and tracking research. The generality of this framework makes it applicable in any robotic or autonomous applications and suitable for quick and large-scale practical deployment.
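Temporal synchronization of sensors running at different rates is one of the building blocks the framework addresses. Below is a generic nearest-timestamp pairing sketch (not the authors' toolbox); the rates, offsets, and tolerance are illustrative.

```python
import numpy as np

def nearest_sync(ref_stamps, other_stamps, tol=0.05):
    """For each reference timestamp, return the index of the nearest timestamp in
    other_stamps, or -1 when nothing lies within tol seconds."""
    ref = np.asarray(ref_stamps, float)
    other = np.asarray(other_stamps, float)
    idx = np.clip(np.searchsorted(other, ref), 1, len(other) - 1)
    pick = np.where(np.abs(ref - other[idx - 1]) <= np.abs(ref - other[idx]), idx - 1, idx)
    ok = np.abs(other[pick] - ref) <= tol
    return np.where(ok, pick, -1)

# Example: pair every 10 Hz camera frame with the nearest 20 Hz LiDAR sweep.
cam = np.arange(0.0, 1.0, 0.10)                 # camera timestamps [s]
lidar = np.arange(0.0, 1.0, 0.05) + 0.002       # LiDAR timestamps with a small offset
match = nearest_sync(cam, lidar)
pairs = [(c, lidar[i]) for c, i in zip(cam, match) if i >= 0]
```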

18 pages, 5686 KiB  
Article
Probabilistic Semantic Mapping for Autonomous Driving in Urban Environments
by Hengyuan Zhang, Shashank Venkatramani, David Paz, Qinru Li, Hao Xiang and Henrik I. Christensen
Sensors 2023, 23(14), 6504; https://doi.org/10.3390/s23146504 - 18 Jul 2023
Cited by 1 | Viewed by 1572
Abstract
Statistical learning techniques and increased computational power have facilitated the development of self-driving car technology. However, a limiting factor has been the high expense of scaling and maintaining high-definition (HD) maps. These maps are a crucial backbone for many approaches to self-driving technology. In response to this challenge, we present an approach that fuses pre-built point cloud map data with images to automatically and accurately identify static landmarks such as roads, sidewalks, and crosswalks. Our pipeline utilizes semantic segmentation of 2D images, associates semantic labels with points in point cloud maps to pinpoint locations in the physical world, and employs a confusion matrix formulation to generate a probabilistic bird’s-eye view semantic map from semantic point clouds. The approach has been tested in an urban area with different segmentation networks to generate a semantic map with road features. The resulting map provides a rich context of the environment that is valuable for downstream tasks such as trajectory generation and intent prediction. Moreover, it has the potential to be extended to the automatic generation of HD maps for semantic features. The entire software pipeline is implemented in the robot operating system (ROS), a widely used robotics framework, and made available.
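The core idea, associating image labels with map points and accumulating per-cell class counts in a bird's-eye-view grid, can be sketched generically as follows; the camera model, grid resolution, and class set are assumptions, not the paper's configuration.

```python
import numpy as np

def project_points(points_xyz, T_cam_from_map, K):
    """Project Nx3 map points into the image; return pixel coords and a
    front-of-camera validity mask (pinhole model, no distortion)."""
    pts_h = np.hstack([points_xyz, np.ones((len(points_xyz), 1))])
    cam = (T_cam_from_map @ pts_h.T).T[:, :3]
    in_front = cam[:, 2] > 0.1
    uv = (K @ cam.T).T
    uv = uv[:, :2] / uv[:, 2:3]
    return uv, in_front

def accumulate_bev_counts(points_xyz, labels, n_classes, cell=0.2, extent=50.0):
    """Count semantic labels per BEV cell; normalizing the counts gives a
    per-cell class distribution (a simple stand-in for the confusion-matrix update)."""
    labels = np.asarray(labels, int)
    size = int(2 * extent / cell)
    counts = np.zeros((size, size, n_classes), dtype=np.int32)
    ij = ((points_xyz[:, :2] + extent) / cell).astype(int)
    ok = (ij >= 0).all(axis=1) & (ij < size).all(axis=1)
    np.add.at(counts, (ij[ok, 0], ij[ok, 1], labels[ok]), 1)
    return counts
```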

29 pages, 5944 KiB  
Article
Study on Multi-Heterogeneous Sensor Data Fusion Method Based on Millimeter-Wave Radar and Camera
by Jianyu Duan
Sensors 2023, 23(13), 6044; https://doi.org/10.3390/s23136044 - 29 Jun 2023
Cited by 1 | Viewed by 1567
Abstract
This study presents a novel multimodal heterogeneous perception cross-fusion framework for intelligent vehicles that combines data from millimeter-wave radar and camera to enhance target tracking accuracy and handle system uncertainties. The framework employs a multimodal interaction strategy to predict target motion more accurately and an improved joint probability data association method to match measurement data with targets. An adaptive root-mean-square cubature Kalman filter is used to estimate the statistical characteristics of noise under complex traffic scenarios with varying process and measurement noise. Experiments conducted on a real vehicle platform demonstrate that the proposed framework improves reliability and robustness in challenging environments. It overcomes the challenges of insufficient data fusion utilization, frequent leakage, and misjudgment of dangerous obstructions around vehicles, and inaccurate prediction of collision risks. The proposed framework has the potential to advance the state of the art in target tracking and perception for intelligent vehicles.
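As a greatly simplified stand-in for the adaptive cubature filtering and probabilistic association described above, the following sketch fuses radar and camera position measurements of a single target with a linear constant-velocity Kalman filter; all noise values are invented.

```python
import numpy as np

dt = 0.05                                    # 20 Hz update rate (assumed)
F = np.array([[1, 0, dt, 0],
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]], float)          # constant-velocity model, state [x, y, vx, vy]
H = np.array([[1, 0, 0, 0],
              [0, 1, 0, 0]], float)          # both sensors measure (x, y)
Q = np.eye(4) * 0.1                          # process noise (placeholder)
R_radar = np.diag([0.5, 2.0])                # radar: good range, poorer lateral accuracy
R_camera = np.diag([2.0, 0.3])               # camera: good lateral, poorer depth

def kf_step(x, P, z, R):
    # predict
    x = F @ x
    P = F @ P @ F.T + Q
    # update with one sensor's measurement
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4) * 10.0
x, P = kf_step(x, P, np.array([20.0, 1.2]), R_radar)    # radar detection
x, P = kf_step(x, P, np.array([19.5, 1.0]), R_camera)   # camera detection of same target
```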

22 pages, 10296 KiB  
Article
Joint Calibration of a Multimodal Sensor System for Autonomous Vehicles
by Jon Muhovič and Janez Perš
Sensors 2023, 23(12), 5676; https://doi.org/10.3390/s23125676 - 17 Jun 2023
Cited by 1 | Viewed by 1336
Abstract
Multimodal sensor systems require precise calibration if they are to be used in the field. Due to the difficulty of obtaining the corresponding features from different modalities, the calibration of such systems is an open problem. We present a systematic approach for calibrating a set of cameras with different modalities (RGB, thermal, polarization, and dual-spectrum near infrared) with regard to a LiDAR sensor using a planar calibration target. Firstly, a method for calibrating a single camera with regard to the LiDAR sensor is proposed. The method is usable with any modality, as long as the calibration pattern is detected. A methodology for establishing a parallax-aware pixel mapping between different camera modalities is then presented. Such a mapping can then be used to transfer annotations, features, and results between highly differing camera modalities to facilitate feature extraction and deep detection and segmentation methods.
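The single-camera-to-LiDAR step can be approximated with a standard PnP solve once the calibration-board corners have been located in both the image and the point cloud. The sketch below uses OpenCV; the correspondences, intrinsics, and distortion model are placeholder assumptions, not the authors' actual procedure.

```python
import numpy as np
import cv2

# 3D board corner positions expressed in the LiDAR frame (metres) and the
# matching pixel locations in the camera image (placeholder values).
corners_lidar = np.array([[2.0, -0.4, 0.3], [2.0, 0.4, 0.3],
                          [2.0, 0.4, -0.3], [2.0, -0.4, -0.3]], np.float32)
corners_px = np.array([[410, 210], [830, 215], [825, 520], [405, 515]], np.float32)

K = np.array([[900.0, 0.0, 640.0],
              [0.0, 900.0, 360.0],
              [0.0, 0.0, 1.0]])               # assumed camera intrinsics
dist = np.zeros(5)                            # assume negligible lens distortion

ok, rvec, tvec = cv2.solvePnP(corners_lidar, corners_px, K, dist)
R, _ = cv2.Rodrigues(rvec)                    # rotation LiDAR -> camera
T_cam_from_lidar = np.eye(4)
T_cam_from_lidar[:3, :3] = R
T_cam_from_lidar[:3, 3] = tvec.ravel()
```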

27 pages, 4034 KiB  
Article
Implementing Model Predictive Control and Steady-State Dynamics for Lane Detection for Automated Vehicles in a Variety of Occlusion in Clothoid-Form Roads
by Swapnil Waykole, Nirajan Shiwakoti and Peter Stasinopoulos
Sensors 2023, 23(8), 4085; https://doi.org/10.3390/s23084085 - 18 Apr 2023
Viewed by 1488
Abstract
Lane detection in driving situations is a critical module for advanced driver assistance systems (ADASs) and automated cars. Many advanced lane detection algorithms have been presented in recent years. However, most approaches rely on recognising the lane from a single or several images, which often results in poor performance when dealing with extreme scenarios such as intense shadow, severe mark degradation, severe vehicle occlusion, and so on. This paper proposes an integration of steady-state dynamic equations and a Model Predictive Control-Preview Capability (MPC-PC) strategy to find key parameters of the lane detection algorithm for automated cars while driving on clothoid-form roads (structured and unstructured roads), to tackle issues such as the poor detection accuracy of lane identification and tracking in occlusion (e.g., rain) and different light conditions (e.g., night vs. daytime). First, the MPC preview capability plan is designed and applied in order to maintain the vehicle on the target lane. Second, as an input to the lane detection method, the key parameters such as yaw angle, sideslip, and steering angle are calculated using steady-state dynamic and motion equations. The developed algorithm is tested with a primary (own) dataset and a secondary (publicly available) dataset in a simulation environment. With our proposed approach, the mean detection accuracy varies from 98.7% to 99%, and the detection time ranges from 20 to 22 ms under various driving circumstances. Comparison of our proposed algorithm’s performance with other existing approaches shows that the proposed algorithm has good comprehensive recognition performance in the different datasets, thus indicating desirable accuracy and adaptability. The suggested approach will help advance intelligent-vehicle lane identification and tracking and help to increase intelligent-vehicle driving safety.
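The yaw-rate and sideslip inputs mentioned above can, in the steady-state case, be computed from the classic linear bicycle model. The sketch below uses the textbook formulas with placeholder vehicle parameters; it is not claimed to reproduce the paper's exact equations.

```python
import numpy as np

def steady_state_bicycle(v, delta, m=1500.0, lf=1.2, lr=1.6, cf=80000.0, cr=90000.0):
    """Steady-state yaw rate [rad/s] and sideslip angle [rad] of the linear
    bicycle model for speed v [m/s] (v > 0) and road-wheel steering angle delta [rad].
    Vehicle parameters are illustrative, not taken from the paper."""
    L = lf + lr
    k_us = (m / L) * (lr / cf - lf / cr)          # understeer gradient
    yaw_rate = v * delta / (L + k_us * v ** 2)
    sideslip = (lr - m * lf * v ** 2 / (cr * L)) * yaw_rate / v
    return yaw_rate, sideslip

r, beta = steady_state_bicycle(v=20.0, delta=np.deg2rad(2.0))
```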

20 pages, 7694 KiB  
Article
A Deep Learning Framework for Accurate Vehicle Yaw Angle Estimation from a Monocular Camera Based on Part Arrangement
by Wenjun Huang, Wenbo Li, Luqi Tang, Xiaoming Zhu and Bin Zou
Sensors 2022, 22(20), 8027; https://doi.org/10.3390/s22208027 - 20 Oct 2022
Cited by 2 | Viewed by 2360
Abstract
An accurate object pose is essential to assess its state and predict its movements. In recent years, scholars have often predicted object poses by matching an image with a virtual 3D model or by regressing the six-degree-of-freedom pose of the target directly from the pixel data via deep learning methods. However, these approaches may ignore a fact that was proposed in the early days of computer vision research, i.e., that object parts are strongly represented in the object pose. In this study, we propose a novel and lightweight deep learning framework, YAEN (yaw angle estimation network), for accurate object yaw angle prediction from a monocular camera based on the arrangement of parts. YAEN uses an encoding–decoding structure for vehicle yaw angle prediction. The vehicle part arrangement information is extracted by the part-encoding network, and the yaw angle is extracted from vehicle part arrangement information by the yaw angle decoding network. Because vehicle part information is refined by the encoder, the decoding network structure is lightweight; the YAEN model has low hardware requirements and can reach a detection speed of 97 FPS on a 2070S graphics card. To improve the performance of our model, we used asymmetric convolutions and an SSE (sum of squared errors) loss function that incorporates the sign. To verify the effectiveness of this model, we constructed an accurate yaw angle dataset under real-world conditions with two vehicles equipped with high-precision positioning devices. Experimental results prove that our method can achieve satisfactory prediction performance in scenarios in which vehicles do not obscure each other, with an average prediction error of less than 3.1° and an accuracy of 96.45% for prediction errors of less than 10° in real driving scenarios.
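A common way to regress a yaw angle without the 0°/360° wrap-around problem is to predict (sin, cos) and recover the angle with atan2. The sketch below shows such a decoding head in PyTorch; it is a generic illustration, not the authors' YAEN architecture, and the feature dimensions are assumptions.

```python
import torch
import torch.nn as nn

class YawHead(nn.Module):
    """Tiny decoding head that regresses a yaw angle from a part-arrangement
    feature map by predicting (sin, cos); generic sketch, not YAEN itself."""
    def __init__(self, in_channels=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(in_channels, 128), nn.ReLU(),
            nn.Linear(128, 2),                        # (sin, cos)
        )

    def forward(self, feat):
        sc = self.net(feat)
        sc = sc / (sc.norm(dim=1, keepdim=True) + 1e-8)
        return torch.atan2(sc[:, 0], sc[:, 1])        # yaw in radians

feat = torch.randn(4, 64, 16, 16)                     # dummy part-encoding features
yaw = YawHead()(feat)
```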

15 pages, 3961 KiB  
Article
Analysis of ADAS Radars with Electronic Warfare Perspective
by Alper Cemil and Mehmet Ünlü
Sensors 2022, 22(16), 6142; https://doi.org/10.3390/s22166142 - 17 Aug 2022
Cited by 4 | Viewed by 2634
Abstract
The increasing demand for the development of autonomous driving systems makes the employment of automotive radars unavoidable. Such a motivation for the demonstration of fully autonomous vehicles brings the challenge of secure driving under high traffic jam conditions. In this paper, we present the investigation of Advanced Driver Assistance Systems (ADAS) radars from the perspective of electronic warfare (EW). Four realistic ADAS jamming scenarios have been defined. Considering these scenarios, the jamming power necessary to jam ADAS radars is calculated. The required jamming Effective Radiated Power (ERP) is −2 dBm to 40 dBm depending on the jamming scenario. These ERP values are very low and easily realizable. Moreover, the effect of the jamming on radar detection has been investigated at the radar Range Doppler Map (RDM) and 2-Dimensional Constant False Alarm Rate (2D-CFAR) levels. Furthermore, the possible jamming system requirements have been investigated. It is noted that the required jamming system will not require high-end technology. It is concluded that, for the security of automotive driving, ADAS radar manufacturers should consider intentional jamming and related Electronic Counter-Countermeasures (ECCM) features in the design of ADAS radars.
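The order of magnitude of the required jamming power can be checked with the standard radar and jamming range equations. The sketch below computes the ERP a mainbeam noise jammer needs for a chosen jamming-to-signal ratio; all parameter values are placeholders, not the scenario figures used in the paper.

```python
import numpy as np

def required_jammer_erp_dbm(pt_w, gt_db, sigma_m2, r_target_m, r_jammer_m, jsr_db=10.0):
    """ERP [dBm] a mainbeam noise jammer needs for the given jamming-to-signal
    ratio against a radar echo, from the standard radar/jamming range equations.
    Bandwidth and polarization losses are ignored; all numbers are illustrative."""
    gt = 10 ** (gt_db / 10)
    jsr = 10 ** (jsr_db / 10)
    # Echo power ~ Pt*Gt*Gr*lambda^2*sigma / ((4*pi)^3 * R^4); jammer power at the
    # radar ~ ERP*Gr*lambda^2 / ((4*pi)^2 * Rj^2).  Gr and lambda cancel in the ratio.
    erp_w = jsr * pt_w * gt * sigma_m2 * r_jammer_m ** 2 / (4 * np.pi * r_target_m ** 4)
    return 10 * np.log10(erp_w * 1e3)

# Placeholder scenario: 10 dBm radar Tx power, 25 dBi antenna, 10 m^2 car target,
# target and jammer both at 50 m from the victim radar.
print(required_jammer_erp_dbm(pt_w=0.01, gt_db=25, sigma_m2=10,
                              r_target_m=50, r_jammer_m=50))
```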

14 pages, 4890 KiB  
Article
Vehicle Detection and Tracking Using Thermal Cameras in Adverse Visibility Conditions
by Abhay Singh Bhadoriya, Vamsi Vegamoor and Sivakumar Rathinam
Sensors 2022, 22(12), 4567; https://doi.org/10.3390/s22124567 - 17 Jun 2022
Cited by 9 | Viewed by 3909
Abstract
Level 5 autonomy, as defined by the Society of Automotive Engineers, requires the vehicle to function under all weather and visibility conditions. This sensing problem becomes significantly challenging in weather conditions that include events such as sudden changes in lighting, smoke, fog, snow, and rain. No standalone sensor currently in the market can reliably perceive the environment in all conditions. While regular cameras, lidars, and radars will suffice for typical driving conditions, they may fail in some edge cases. The goal of this paper is to demonstrate that the addition of Long Wave Infrared (LWIR)/thermal cameras to the sensor stack on a self-driving vehicle can help fill this sensory gap during adverse visibility conditions. In this paper, we trained a machine learning-based image detector on thermal image data and used it for vehicle detection. For vehicle tracking, Joint Probabilistic Data Association and Multiple Hypothesis Tracking approaches were explored, in which the thermal camera information was fused with a front-facing radar. The algorithms were implemented using FLIR thermal cameras on a 2017 Lincoln MKZ operating in College Station, TX, USA. The performance of the tracking algorithm has also been validated in simulations using Unreal Engine.
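A greatly simplified stand-in for the association step (JPDA/MHT) is Mahalanobis gating followed by one-to-one assignment, sketched below; the gate threshold and track statistics are illustrative.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def gated_assignment(track_means, track_covs, detections, gate=9.21):
    """Associate detections to tracks by Mahalanobis distance with a chi-square
    gate (9.21 ~ 99% for 2 DOF); a much simpler scheme than JPDA/MHT."""
    cost = np.full((len(track_means), len(detections)), 1e6)
    for i, (m, P) in enumerate(zip(track_means, track_covs)):
        Pinv = np.linalg.inv(P)
        for j, z in enumerate(detections):
            d = np.asarray(z, float) - np.asarray(m, float)
            d2 = float(d @ Pinv @ d)
            if d2 < gate:
                cost[i, j] = d2
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]

# Example: two radar-predicted tracks, two thermal-camera detections (x, y in metres).
matches = gated_assignment([np.array([20.0, 1.0]), np.array([35.0, -2.0])],
                           [np.eye(2), np.eye(2)],
                           [np.array([34.5, -1.8]), np.array([20.4, 1.1])])
```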

27 pages, 2315 KiB  
Article
FPGA-Based Pedestrian Detection for Collision Prediction System
by Lucas Cambuim and Edna Barros
Sensors 2022, 22(12), 4421; https://doi.org/10.3390/s22124421 - 11 Jun 2022
Cited by 3 | Viewed by 2163
Abstract
Pedestrian detection (PD) systems capable of locating pedestrians over large distances and locating them faster are needed in Pedestrian Collision Prediction (PCP) systems to increase the decision-making distance. This paper proposes a performance-optimized FPGA implementation of a HOG-SVM-based PD system with support for image pyramids and detection windows of different sizes to locate near and far pedestrians. This work proposes a hardware architecture that can process one pixel per clock cycle by exploiting data and temporal parallelism using techniques such as pipelining and the spatial division of data between parallel processing units. The proposed architecture for the PD module was validated in FPGA and integrated with the stereo semi-global matching (SGM) module, also prototyped in FPGA. Processing two windows of different dimensions permitted a reduction in miss rate of at least 6% compared to a uniquely sized window detector. The performances achieved by the PD system and the PCP system in HD resolution were 100 and 66.2 frames per second (FPS), respectively. The performance improvement achieved by the PCP system with the addition of our PD module permitted an increase in decision-making distance of 3.3 m compared to a PCP system that processes at 30 FPS.
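A software reference for the HOG + linear-SVM detector that the FPGA architecture accelerates can be put together with OpenCV's pretrained people detector; the image path and detection parameters below are placeholders.

```python
import cv2

# Software reference for HOG + linear-SVM pedestrian detection; OpenCV ships a
# pretrained people detector.  The FPGA work implements this kind of pipeline in hardware.
hog = cv2.HOGDescriptor()
hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

frame = cv2.imread("road_scene.jpg")                     # placeholder input image
# scale=1.05 builds the image pyramid that lets fixed-size windows find both
# near (large) and far (small) pedestrians.
boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8), scale=1.05)
for (x, y, w, h), score in zip(boxes, weights):
    cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
```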

30 pages, 19861 KiB  
Article
Exploration-Based SLAM (e-SLAM) for the Indoor Mobile Robot Using Lidar
by Hasan Ismail, Rohit Roy, Long-Jye Sheu, Wei-Hua Chieng and Li-Chuan Tang
Sensors 2022, 22(4), 1689; https://doi.org/10.3390/s22041689 - 21 Feb 2022
Cited by 14 | Viewed by 5353
Abstract
This paper attempts to uncover one possible method for the IMR (indoor mobile robot) to perform indoor exploration associated with SLAM (simultaneous localization and mapping) using LiDAR. Specifically, the IMR is required to construct a map when it has landed on an unexplored floor of a building. We implemented the e-SLAM (exploration-based SLAM) using the coordinate transformation and the navigation prediction techniques to achieve that purpose in the engineering school building, which consists of many 100-m² labs, corridors, elevator waiting space and the lobby. We first derive the LiDAR mesh for the orthogonal walls and filter out the static furniture and dynamic humans in the same space as the IMR. Then, we define the LiDAR pose frame, including the translation and rotation, from the orthogonal walls. According to the MSC (most significant corner) obtained from the intersection of the orthogonal walls, we calculate the displacement of the IMR. The orientation of the IMR is calculated from the alignment of orthogonal walls in the consecutive LiDAR pose frames, which is also assisted by the LQE (linear quadratic estimation) method. All the computation can be done in a single-processor machine in real time. The e-SLAM technique leads to a potential for the in-house service robot to start operation without having pre-scanned LiDAR maps, which can save the installation time of the service robot. In this study, we used only the LiDAR and compared our results with the IMU to verify the consistency between the two navigation sensors in the experiments. The scenario of the experiment consists of rooms, corridors, elevators, and the lobby, which is common to most office buildings.
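One way to recover the orientation of an IMR relative to orthogonal walls is to fold the directions of consecutive scan segments modulo 90° and take the dominant bin. The sketch below illustrates that idea only; it is not the paper's full e-SLAM pipeline (MSC extraction, LQE, furniture filtering, etc.).

```python
import numpy as np

def wall_heading(scan_xy, bin_deg=1.0):
    """Estimate the sensor's heading relative to an orthogonal-wall layout from a
    2D LiDAR scan (Nx2 points in the sensor frame).  Directions of segments between
    neighbouring points are folded modulo 90 degrees and the dominant bin is returned."""
    seg = np.diff(np.asarray(scan_xy, float), axis=0)
    ang = np.degrees(np.arctan2(seg[:, 1], seg[:, 0])) % 90.0
    hist, edges = np.histogram(ang, bins=int(90 / bin_deg), range=(0.0, 90.0))
    k = np.argmax(hist)
    return 0.5 * (edges[k] + edges[k + 1])      # degrees, in [0, 90)

# Example: a wall along the x axis observed with the sensor rotated by ~10 degrees.
theta = np.radians(10.0)
wall = np.stack([np.linspace(0, 5, 100), np.zeros(100)], axis=1)
rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
print(wall_heading(wall @ rot.T))               # ~10 degrees
```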

15 pages, 11221 KiB  
Article
An Investigation into the Appropriateness of Car-Following Models in Assessing Autonomous Vehicles
by Akito Higatani and Wafaa Saleh
Sensors 2021, 21(21), 7131; https://doi.org/10.3390/s21217131 - 27 Oct 2021
Cited by 3 | Viewed by 1745
Abstract
The dramatic progress of Intelligent Transportation Systems (ITS) has made autodriving technology extensively emphasised. Various models have been developed for the aim of modelling the behaviour of autonomous vehicles and their impacts on traffic, although there is still a lot to be researched about the technology. There are three main features that need to be represented in any car-following model to enable it to model autonomous vehicles: desired time gap, collision avoidance system and sensor detection range. Most available car-following models satisfy the first feature, most do not satisfy the second, and only a few satisfy the third. Therefore, conclusions from such models must be taken cautiously. Any of these models could be considered for updating to include a collision avoidance-system module, in order to be able to model autonomous vehicles. The Helly model is a car-following model that has a simple structure and is sometimes used as the controller for Autonomous Vehicles (AVs), but it does not have a collision avoidance concept. In this paper, the Helly model, a very commonly used classic car-following model, is assessed and examined for possible updates for the purpose of using it to model autonomous vehicles more efficiently. This involves assessing the parameters of the model and investigating a possible update of the model to include a collision avoidance-system module. Two procedures have been investigated in this paper to assess the Helly model and allow for a more realistic modelling of autonomous vehicles. The first is to investigate and assess the values of the parameters of the model. The second is to modify the formula of the model to include a collision avoidance system. The results show that the performance of the modified full-range Auto Cruising Control (FACC) Helly model is superior to the other models in almost all situations and for almost all time-gap settings. Only Alexandros E. Papacharalampous’s Model (A.E.P.) controller seems to perform slightly better than the (FACC) Helly model. Therefore, it is reasonable to suggest that the (FACC) Helly model be recommended as the most accurate model to use to represent autonomous vehicles in microsimulations, and that it should be further investigated.
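For reference, the classic Helly model and the kind of collision-avoidance override discussed above can be sketched as follows; the parameter values are common textbook choices and the override is purely illustrative, not the FACC formulation evaluated in the paper.

```python
def helly_accel(dx, dv, v, c1=0.5, c2=0.125, alpha=5.0, beta=1.0):
    """Classic Helly car-following model: acceleration [m/s^2] from the gap dx [m],
    closing speed dv = v_lead - v_follow [m/s], and own speed v [m/s].
    Parameter values are textbook choices, not those of the paper."""
    desired_gap = alpha + beta * v
    return c1 * dv + c2 * (dx - desired_gap)

def helly_with_avoidance(dx, dv, v, a_max=2.0, b_max=6.0, d_min=2.0):
    """Sketch of a collision-avoidance extension: the Helly command is clamped,
    and hard braking takes over when the gap falls below a safety margin
    (illustrative only, not the paper's FACC model)."""
    a = helly_accel(dx, dv, v)
    if dx < d_min or (dv < 0 and dx < d_min + dv ** 2 / (2 * b_max)):
        a = -b_max                                  # emergency braking
    return max(-b_max, min(a_max, a))

a = helly_with_avoidance(dx=15.0, dv=-3.0, v=20.0)
```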

31 pages, 33769 KiB  
Article
Urban Intersection Classification: A Comparative Analysis
by Augusto Luis Ballardini, Álvaro Hernández Saz, Sandra Carrasco Limeros, Javier Lorenzo, Ignacio Parra Alonso, Noelia Hernández Parra, Iván García Daza and Miguel Ángel Sotelo
Sensors 2021, 21(18), 6269; https://doi.org/10.3390/s21186269 - 18 Sep 2021
Cited by 5 | Viewed by 3154
Abstract
Understanding the scene in front of a vehicle is crucial for self-driving vehicles and Advanced Driver Assistance Systems, and in urban scenarios, intersection areas are among the most critical, concentrating between 20% and 25% of road fatalities. This research presents a thorough investigation of the detection and classification of urban intersections as seen from onboard front-facing cameras. Different methodologies aimed at classifying intersection geometries have been assessed to provide a comprehensive evaluation of state-of-the-art techniques based on Deep Neural Network (DNN) approaches, including single-frame approaches and temporal integration schemes. A detailed analysis of the most popular datasets previously used for the application, together with a comparison with ad hoc recorded sequences, revealed that performance strongly depends on the field of view of the camera rather than on other characteristics or temporal-integration techniques. Due to the scarcity of training data, a new dataset was created by performing data augmentation from real-world data through a Generative Adversarial Network (GAN) to increase generalizability as well as to test the influence of data quality. Despite being in the relatively early stages, mainly due to the lack of intersection datasets oriented to the problem, an extensive experimental activity has been performed to analyze the individual performance of each proposed system.
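A single-frame baseline of the kind evaluated above can be obtained by fine-tuning a pretrained CNN on front-camera images; the backbone, class taxonomy, and training details below are assumptions for illustration, not the configurations compared in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Illustrative intersection-geometry classes; not the taxonomy used in the paper.
CLASSES = ["straight", "T_left", "T_right", "T_end", "X_cross", "Y_fork", "roundabout"]

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, len(CLASSES))   # replace the classifier head

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

# One dummy training step standing in for real front-camera frames and labels.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, len(CLASSES), (8,))
optimizer.zero_grad()
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```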

22 pages, 1531 KiB  
Article
CAPformer: Pedestrian Crossing Action Prediction Using Transformer
by Javier Lorenzo, Ignacio Parra Alonso, Rubén Izquierdo, Augusto Luis Ballardini, Álvaro Hernández Saz, David Fernández Llorca and Miguel Ángel Sotelo
Sensors 2021, 21(17), 5694; https://doi.org/10.3390/s21175694 - 24 Aug 2021
Cited by 16 | Viewed by 3845
Abstract
Anticipating pedestrian crossing behavior in urban scenarios is a challenging task for autonomous vehicles. Earlier this year, a benchmark comprising the JAAD and PIE datasets was released, in which several state-of-the-art methods have been ranked. However, most of the ranked temporal models rely on recurrent architectures. In our case, we propose, to the best of our knowledge, the first self-attention alternative, based on the transformer architecture, which has had enormous success in natural language processing (NLP) and recently in computer vision. Our architecture is composed of various branches which fuse video and kinematic data. The video branch is based on two possible architectures: RubiksNet and TimeSformer. The kinematic branch is based on different configurations of the transformer encoder. Several experiments have been performed mainly focusing on pre-processing input data, highlighting problems with two kinematic data sources: pose keypoints and ego-vehicle speed. Our proposed model's results are comparable to those of PCPA, the best-performing model in the benchmark, reaching an F1 score of nearly 0.78 against 0.77. Furthermore, by using only bounding box coordinates and image data, our model surpasses PCPA by a larger margin (F1 = 0.75 vs. F1 = 0.72). Our model has proven to be a valid alternative to recurrent architectures, providing advantages such as parallelization, whole-sequence processing, and the learning of relationships between samples that is not possible with recurrent architectures.
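A minimal kinematic-branch-style baseline, a transformer encoder over a sequence of pedestrian bounding boxes, can be sketched in PyTorch as follows; the dimensions and the use of bounding boxes only are assumptions, and this is not the actual CAPformer model.

```python
import torch
import torch.nn as nn

class BoxSequenceClassifier(nn.Module):
    """Transformer-encoder baseline for crossing prediction from a sequence of
    pedestrian bounding boxes (x, y, w, h); a generic sketch, not CAPformer."""
    def __init__(self, d_model=64, nhead=4, layers=2, seq_len=16):
        super().__init__()
        self.embed = nn.Linear(4, d_model)
        self.pos = nn.Parameter(torch.zeros(1, seq_len, d_model))   # learned positions
        enc = nn.TransformerEncoderLayer(d_model, nhead, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, layers)
        self.head = nn.Linear(d_model, 1)                           # P(crossing)

    def forward(self, boxes):                                       # boxes: (B, T, 4)
        h = self.encoder(self.embed(boxes) + self.pos)
        return torch.sigmoid(self.head(h.mean(dim=1))).squeeze(-1)

model = BoxSequenceClassifier()
p_cross = model(torch.randn(2, 16, 4))     # dummy 16-frame bounding-box tracks
```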

14 pages, 461 KiB  
Article
Configurable Sensor Model Architecture for the Development of Automated Driving Systems
by Simon Schmidt, Birgit Schlager, Stefan Muckenhuber and Rainer Stark
Sensors 2021, 21(14), 4687; https://doi.org/10.3390/s21144687 - 08 Jul 2021
Cited by 13 | Viewed by 3346
Abstract
Sensor models provide the required environmental perception information for the development and testing of automated driving systems in virtual vehicle environments. In this article, a configurable sensor model architecture is introduced. Based on methods of model-based systems engineering (MBSE) and functional decomposition, this approach supports a flexible and continuous way to use sensor models in automotive development. Modeled sensor effects, representing single-sensor properties, are combined into an overall sensor behavior. This improves reusability and enables adaptation to specific requirements of the development. Finally, a first practical application of the configurable sensor model architecture is demonstrated, using two exemplary sensor effects: the geometric field of view (FoV) and the object-dependent FoV.
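The geometric FoV effect mentioned above can be expressed as a simple filter on ground-truth object lists; the range and azimuth limits below are illustrative, and object-dependent effects would be composed on top of this in the same spirit.

```python
import numpy as np

def geometric_fov(objects_xy, max_range=150.0, fov_deg=120.0):
    """Geometric field-of-view effect: keep only ground-truth objects that lie
    inside the sensor's range and azimuth cone (sensor at origin, x forward).
    Parameter values are illustrative placeholders."""
    objects_xy = np.asarray(objects_xy, float)
    rng = np.linalg.norm(objects_xy, axis=1)
    az = np.degrees(np.arctan2(objects_xy[:, 1], objects_xy[:, 0]))
    visible = (rng <= max_range) & (np.abs(az) <= fov_deg / 2)
    return objects_xy[visible]

# Example: one object behind the sensor and one beyond max range are dropped.
print(geometric_fov([[30.0, 5.0], [-10.0, 0.0], [200.0, 0.0]]))
```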
