Sensors and Sensor Fusion Technology in Autonomous Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Vehicular Sensing".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 7614

Special Issue Editor


Dr. Stefano Quer
Guest Editor
Department of Control and Computer Engineering, Politecnico di Torino, 10129 Turin, Italy
Interests: high-performance computing; formal methods; autonomous vehicles; SIMD and SIMT architectures; algorithms for path planning and connectivity; software applications; algorithms and data structures (divide-and-conquer, optimization, estimation, etc.)

Special Issue Information

Dear Colleagues,

Autonomous (self-driving) vehicles are one of the most significant trends in research and development, with several major automotive companies, research centers, and academic institutions regularly contributing to this field. Sensors such as cameras, radars, lidars, sonars, global positioning system (GPS) receivers, inertial measurement units (IMUs), and wheel odometry collect data that are analyzed by an on-board processor and used to control the speed, steering, and brakes of the vehicle. Vehicle control systems may also use information collected by other cars and from environmental maps to make decisions.

As sensors are critical components, the fusion of their information and its proper interpretation, followed by the control of the vehicle, are paramount in autonomous driving. Researchers also understand that including more sensors in the fusion system yields higher-performing and more robust solutions, although at the price of increased cost and additional reliability concerns.

The current state of the art in this area covers several aspects, such as 3D object detection, moving object detection and tracking, occupancy grid mapping for navigation and localization in dynamic environments, and data fusion.

The main aim of this Special Issue is to collect papers reviewing the state of the art and reporting significant breakthroughs in these areas.

Dr. Stefano Quer
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the Special Issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • sensor fusion
  • multi-sensor fusion
  • autonomous vehicles
  • autonomous driving
  • intelligent vehicles
  • self-driving cars
  • perception
  • environmental reconstruction
  • data integration
  • deep learning
  • multi-view
  • obstacle detection
  • target tracking
  • camera
  • lidar
  • radar

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • Reprint: MDPI Books provides the opportunity to republish successful Special Issues in book format, both online and in print.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

19 pages, 13446 KiB  
Article
Mounting Angle Prediction for Automotive Radar Using Complex-Valued Convolutional Neural Network
by Sunghoon Moon and Younglok Kim
Sensors 2025, 25(2), 353; https://doi.org/10.3390/s25020353 - 9 Jan 2025
Viewed by 1089
Abstract
In advanced driver-assistance systems (ADASs), the misalignment of the mounting angle of the automotive radar significantly affects the accuracy of object detection and tracking, impacting system safety and performance. This paper introduces the Automotive Radar Alignment Detection Network (AutoRAD-Net), a novel model that leverages complex-valued convolutional neural network (CV-CNN) to address azimuth misalignment challenges in automotive radars. By utilizing complex-valued inputs, AutoRAD-Net effectively learns the physical properties of the radar data, enabling precise azimuth alignment. The model was trained and validated using mounting angle offsets ranging from −3° to +3° and exhibited errors no greater than 0.15° across all tested offsets. Moreover, it demonstrated reliable predictions even for unseen offsets, such as −1.7°, showcasing its generalization capability. The predicted offsets can then be used for physical radar alignment or integrated into compensation algorithms to enhance data interpretation accuracy in ADAS applications. This paper presents AutoRAD-Net as a practical solution for azimuth alignment, advancing radar reliability and performance in autonomous driving systems.
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)
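The core operation behind a CV-CNN like the one described above can be decomposed into real arithmetic: a complex convolution (a + ib) * (w_r + i·w_i) expands to (a*w_r − b*w_i) + i(a*w_i + b*w_r), i.e., four real convolutions. A minimal sketch (illustrative only, not the authors' AutoRAD-Net):

```python
import numpy as np

def conv2d_valid(x, k):
    """Real-valued 2D cross-correlation with 'valid' padding."""
    H, W = x.shape
    h, w = k.shape
    out = np.empty((H - h + 1, W - w + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + h, j:j + w] * k)
    return out

def complex_conv2d(x, k):
    """Complex conv as four real convs:
    (a+ib)*(wr+i*wi) = (a*wr - b*wi) + i*(a*wi + b*wr)."""
    a, b = x.real, x.imag
    wr, wi = k.real, k.imag
    real = conv2d_valid(a, wr) - conv2d_valid(b, wi)
    imag = conv2d_valid(a, wi) + conv2d_valid(b, wr)
    return real + 1j * imag

# Sanity check on a 1x1 "image": must match plain complex multiplication.
x = np.array([[1 + 2j]])
k = np.array([[3 - 1j]])
assert np.isclose(complex_conv2d(x, k)[0, 0], (1 + 2j) * (3 - 1j))
```

Keeping the real and imaginary channels coupled this way is what lets a CV-CNN exploit the phase information in raw radar data, which a real-valued network applied to magnitude spectra would discard.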

21 pages, 20775 KiB  
Article
Sensor Fusion Method for Object Detection and Distance Estimation in Assisted Driving Applications
by Stefano Favelli, Meng Xie and Andrea Tonoli
Sensors 2024, 24(24), 7895; https://doi.org/10.3390/s24247895 - 10 Dec 2024
Cited by 1 | Viewed by 2257
Abstract
The fusion of multiple sensors’ data in real-time is a crucial process for autonomous and assisted driving, where high-level controllers need classification of objects in the surroundings and estimation of relative positions. This paper presents an open-source framework to estimate the distance between a vehicle equipped with sensors and different road objects on its path using the fusion of data from cameras, radars, and LiDARs. The target application is an Advanced Driving Assistance System (ADAS) that benefits from the integration of the sensors’ attributes to plan the vehicle’s speed according to real-time road occupation and distance from obstacles. Based on geometrical projection, a low-level sensor fusion approach is proposed to map 3D point clouds into 2D camera images. The fusion information is used to estimate the distance of objects detected and labeled by a Yolov7 detector. The open-source pipeline implemented in ROS consists of a sensors’ calibration method, a Yolov7 detector, 3D point cloud downsampling and clustering, and finally a 3D-to-2D transformation between the reference frames. The goal of the pipeline is to perform data association and estimate the distance of the identified road objects. The accuracy and performance are evaluated in real-world urban scenarios with commercial hardware. The pipeline running on an embedded Nvidia Jetson AGX achieves good accuracy on object identification and distance estimation, running at 5 Hz. The proposed framework introduces a flexible and resource-efficient method for data association from common automotive sensors and proves to be a promising solution for enabling effective environment perception ability for assisted driving.
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)
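The low-level fusion step described above — projecting LiDAR points into the camera image with a pinhole model and reading off the range of the points that fall inside a detector's bounding box — can be sketched as follows. The intrinsic matrix, extrinsic transform, and bounding box are made-up illustrative values, not the paper's calibration:

```python
import numpy as np

def project_points(pts_lidar, K, R, t):
    """Map Nx3 LiDAR points to pixel coordinates; keep points in front of the camera."""
    pts_cam = pts_lidar @ R.T + t            # LiDAR frame -> camera frame
    pts_cam = pts_cam[pts_cam[:, 2] > 0]     # discard points behind the image plane
    uvw = pts_cam @ K.T                      # perspective projection
    uv = uvw[:, :2] / uvw[:, 2:3]            # divide by depth
    return uv, np.linalg.norm(pts_cam, axis=1)

def box_distance(uv, ranges, box):
    """Median range of the projected points inside a (u1, v1, u2, v2) box."""
    u1, v1, u2, v2 = box
    mask = (uv[:, 0] >= u1) & (uv[:, 0] <= u2) & (uv[:, 1] >= v1) & (uv[:, 1] <= v2)
    return float(np.median(ranges[mask])) if mask.any() else None

# Illustrative calibration: 700 px focal length, 640x480 image, aligned frames.
K = np.array([[700.0, 0.0, 320.0], [0.0, 700.0, 240.0], [0.0, 0.0, 1.0]])
R, t = np.eye(3), np.zeros(3)
pts = np.array([[0.0, 0.0, 10.0], [0.5, 0.2, 10.0], [0.0, 0.0, -5.0]])
uv, dists = project_points(pts, K, R, t)
print(box_distance(uv, dists, (0, 0, 640, 480)))
```

Taking the median (rather than the mean) of the in-box ranges is a common robustness trick, since a bounding box usually also catches a few background points.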

26 pages, 6782 KiB  
Article
Multi-Tracking Sensor Architectures for Reconstructing Autonomous Vehicle Crashes: An Exploratory Study
by Mohammad Mahfuzul Haque, Akbar Ghobakhlou and Ajit Narayanan
Sensors 2024, 24(13), 4194; https://doi.org/10.3390/s24134194 - 27 Jun 2024
Viewed by 1279
Abstract
With the continuous development of new sensor features and tracking algorithms for object tracking, researchers have opportunities to experiment using different combinations. However, there is no standard or agreed method for selecting an appropriate architecture for autonomous vehicle (AV) crash reconstruction using multi-sensor-based sensor fusion. This study proposes a novel simulation method for tracking performance evaluation (SMTPE) to solve this problem. The SMTPE helps select the best tracking architecture for AV crash reconstruction. This study reveals that a radar-camera-based centralized tracking architecture of multi-sensor fusion performed the best among three different architectures tested with varying sensor setups, sampling rates, and vehicle crash scenarios. We provide a brief guideline for the best practices in selecting appropriate sensor fusion and tracking architecture arrangements, which can be helpful for future vehicle crash reconstruction and other AV improvement research.
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)
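In the centralized architecture that the study found to perform best, raw detections from both sensors feed a single filter rather than being tracked separately and merged afterwards. A minimal sketch under simplifying assumptions that are not the paper's (1-D constant-velocity state, position-only radar and camera measurements stacked into one update, made-up noise levels):

```python
import numpy as np

np.random.seed(0)
dt = 0.1
F = np.array([[1.0, dt], [0.0, 1.0]])     # constant-velocity motion model
Q = 0.01 * np.eye(2)                      # process noise
H = np.array([[1.0, 0.0], [1.0, 0.0]])    # stacked rows: radar, camera
R = np.diag([0.5**2, 1.0**2])             # radar assumed more precise here

x, P = np.array([0.0, 1.0]), np.eye(2)    # state: [position, velocity]
for k in range(50):
    truth = 1.0 * (k + 1) * dt            # target moving at 1 m/s
    z = truth + np.array([np.random.randn() * 0.5,   # radar measurement
                          np.random.randn() * 1.0])  # camera measurement
    # predict
    x, P = F @ x, F @ P @ F.T + Q
    # centralized update: both sensors in one measurement vector
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
print(round(x[0], 2), round(x[1], 2))
```

Because both residuals enter the same gain computation, the filter automatically weights the radar more heavily than the noisier camera, which is the main appeal of centralized fusion over track-to-track merging.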

24 pages, 2043 KiB  
Article
UAV Path Optimization for Angle-Only Self-Localization and Target Tracking Based on the Bayesian Fisher Information Matrix
by Kutluyil Dogancay and Hatem Hmam
Sensors 2024, 24(10), 3120; https://doi.org/10.3390/s24103120 - 14 May 2024
Cited by 1 | Viewed by 1368
Abstract
In this paper, new path optimization algorithms are developed for uncrewed aerial vehicle (UAV) self-localization and target tracking, exploiting beacon (landmark) bearings and angle-of-arrival (AOA) measurements from a manoeuvring target. To account for time-varying rotations in the local UAV coordinates with respect to the global Cartesian coordinate system, the unknown orientation angle of the UAV is also estimated jointly with its location from the beacon bearings. This is critically important, as orientation errors can significantly degrade the self-localization performance. The joint self-localization and target tracking problem is formulated as a Kalman filtering problem with an augmented state vector that includes all the unknown parameters and a measurement vector of beacon bearings and target AOA measurements. This formulation encompasses applications where Global Navigation Satellite System (GNSS)-based self-localization is not available or reliable, and only beacons or landmarks can be utilized for UAV self-localization. An optimal UAV path is determined from the optimization of the Bayesian Fisher information matrix by means of A- and D-optimality criteria. The performance of this approach at different measurement noise levels is investigated. A modified closed-form projection algorithm based on a previous work is also proposed to achieve optimal UAV paths. The performance of the developed UAV path optimization algorithms is demonstrated with extensive simulation examples. Full article
(This article belongs to the Special Issue Sensors and Sensor Fusion Technology in Autonomous Vehicles)
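The D-optimality criterion mentioned above picks the measurement geometry that maximizes the determinant of the Fisher information matrix. A hedged, much-simplified sketch for a single 2-D bearings-only step (the geometry, candidate waypoints, and noise level are illustrative, not the paper's scenario): each bearing measurement contributes information only perpendicular to the line of sight, so the best next waypoint is the one offering the most orthogonal view.

```python
import numpy as np

def bearing_fim(sensor, target, sigma=np.deg2rad(1.0)):
    """FIM contribution of one angle-of-arrival measurement of a 2-D target.
    The bearing is atan2(dy, dx); its gradient w.r.t. the target position
    is (-dy, dx) / r^2, so the information lies perpendicular to the LOS."""
    d = target - sensor
    r2 = d @ d
    g = np.array([-d[1], d[0]]) / r2
    return np.outer(g, g) / sigma**2

target = np.array([100.0, 100.0])
history = [np.array([0.0, 0.0])]            # bearings already collected
prior = sum(bearing_fim(s, target) for s in history)

candidates = [np.array([10.0, 0.0]),        # nearly parallel line of sight
              np.array([0.0, 10.0]),        # also nearly parallel
              np.array([50.0, -50.0])]      # markedly different viewing angle
best = max(candidates,
           key=lambda c: np.linalg.det(prior + bearing_fim(c, target)))
print(best)
```

A single bearing yields a rank-1 FIM (det = 0): the range along the line of sight is unobservable, which is why the D-optimal choice favors the waypoint whose bearing crosses the earlier one at the widest angle.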