

Advances in Remote Sensing of Solving Challenges in Autonomous Driving and Safety Analysis

A special issue of Remote Sensing (ISSN 2072-4292). This special issue belongs to the section "Urban Remote Sensing".

Deadline for manuscript submissions: closed (15 July 2024) | Viewed by 1743

Special Issue Editors


Guest Editor
Sensing and Perception, SMART Mechatronics Research Group, Saxion University of Applied Sciences, Enschede, The Netherlands
Interests: autonomous vehicles; LIDAR/radar-based localization systems; mapping systems; SLAM technologies; eye-based human‒machine interface systems; driver monitoring systems

Guest Editor
Department of Defense System Engineering, Sejong University, Seoul 05006, Republic of Korea
Interests: control systems; signal processing; radar signals; tracking; estimation; guidance and navigation; Markov chains and simulation

Guest Editor
Secretary of ISPRS WG I/7—Mobile Mapping Technology, Interdepartmental Research Center of Geomatics (CIRGEO), University of Padua, Padua, Italy
Interests: geomatics; mobile mapping; laser scanning; photogrammetry; remote sensing; navigation; data processing; machine learning; unmanned aerial vehicles; cultural heritage

Special Issue Information

Dear Colleagues,

As safety is the prime priority and the key issue in commercializing autonomous vehicles, the main challenge in this research field has become increasing safety, and more generally system performance, under critical and unique operating conditions. Examples include precise localization on snowy or rainy roads, generating accurate large-scale maps with SLAM technologies, long-range detection of construction areas for smooth path planning, maneuvering at unprotected turns, robustly deciding whether a stationary vehicle is an obstacle or merely stopped in a traffic jam, and recognizing traffic signals in sun glare. Without robust solutions to these issues, autonomous driving will remain stuck in the demonstration loop, and the deployment of autonomous vehicles will be limited to certain operating conditions. Moreover, these problems may lead to fatal traffic accidents and considerably undermine public acceptance of autonomous vehicles on the streets.

Analyzing the causes of these problems and illustrating them clearly are the cornerstone of investigating their effects on autopilot performance and proposing the corresponding optimal solutions. Remote sensing and image processing play the main role in designing such solutions from sensory and observation data, for example by modeling how the distribution of LIDAR 3D point clouds changes in snowfall conditions, or by improving localization accuracy by matching observed environmental features against a map. Therefore, this Special Issue aims to add value to the autonomous vehicle research field by demonstrating and analyzing critical and unique problems in the mapping, localization, perception, and path-planning modules that are rarely discussed in the literature and are currently considered futuristic matters.
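As one concrete example of handling snowfall-distorted point clouds, a range-adaptive outlier filter (in the spirit of dynamic-radius outlier removal) can flag sparse returns such as falling snowflakes while keeping dense scene structure. The sketch below is purely illustrative: the function name and all parameter values are assumptions, not a method from any of the papers in this issue.

```python
import numpy as np

def snow_outlier_mask(points, base_radius=0.05, k_min=3, alpha=0.2):
    """Flag likely snow returns in an (N, 3) point cloud: a point is kept
    only if it has at least k_min neighbours within a search radius that
    grows with range (distant points are naturally sparser).
    Brute-force O(N^2) for clarity; a KD-tree would be used in practice."""
    ranges = np.linalg.norm(points[:, :2], axis=1)      # horizontal range
    radii = np.maximum(base_radius, alpha * ranges)     # range-scaled radius
    keep = np.zeros(len(points), dtype=bool)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)
        keep[i] = np.count_nonzero(d < radii[i]) - 1 >= k_min  # exclude self
    return keep
```

Dense structures (walls, vehicles) pass the neighbour count at any range, while isolated airborne returns fail it and can be dropped before mapping or localization.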

Ultimately, we hope the published papers will contribute significantly to the safety of autonomous driving and provide prominent, robust solutions.

Dr. Mohammad Aldibaja
Dr. Sufyan Ali Memon
Dr. Andrea Masiero
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles as well as short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Remote Sensing is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2700 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • autonomous vehicles
  • 3D point cloud analysis
  • path planning with unprotected turns
  • robust perception of construction areas
  • SLAM-based mapping in challenging environments
  • road pavement assessment for driving safety analysis
  • object status classification in urban traffic conditions
  • LIDAR/radar-based localization systems in adverse weather conditions
  • map quality analysis and enhancement
  • lane graph generation
  • safe integration of AI technologies into autonomous driving

Published Papers (2 papers)


Research

19 pages, 4057 KiB  
Article
Global Navigation Satellite System/Inertial Measurement Unit/Camera/HD Map Integrated Localization for Autonomous Vehicles in Challenging Urban Tunnel Scenarios
by Lu Tao, Pan Zhang, Kefu Gao and Jingnan Liu
Remote Sens. 2024, 16(12), 2230; https://doi.org/10.3390/rs16122230 - 19 Jun 2024
Viewed by 442
Abstract
Lane-level localization is critical for autonomous vehicles (AVs). However, complex urban scenarios, particularly tunnels, pose significant challenges to AVs’ localization systems. In this paper, we propose a fusion localization method that integrates multiple mass-production sensors, including Global Navigation Satellite Systems (GNSSs), Inertial Measurement Units (IMUs), cameras, and high-definition (HD) maps. Firstly, we use a novel electronic horizon module to assess GNSS integrity and concurrently load the HD map data surrounding the AVs. These map data are then transformed into a visual space to match the corresponding lane lines captured by the on-board camera using an improved BiSeNet. Consequently, the matched HD map data are used to correct our localization algorithm, which is driven by an extended Kalman filter that integrates multiple sources of information, encompassing a GNSS, an IMU, a speedometer, a camera, and HD maps. Our system is designed with redundancy to handle challenging city tunnel scenarios. To evaluate the proposed system, real-world experiments were conducted on a 36-kilometer city route that includes nine consecutive tunnels, totaling nearly 13 km and accounting for 35% of the entire route. The experimental results reveal that 99% of lateral localization errors are less than 0.29 m, and 90% of longitudinal localization errors are less than 3.25 m, ensuring reliable lane-level localization for AVs in challenging urban tunnel scenarios.
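As a rough illustration of the kind of EKF-based fusion the abstract describes, the sketch below combines a dead-reckoning prediction (speedometer plus IMU yaw rate) with a GNSS position update and a lane-matched lateral correction. This is a minimal, generic sketch, not the paper's implementation: the state layout, noise values, and helper names are all assumptions.

```python
import numpy as np

class FusionEKF:
    """Minimal 2D EKF: state [east, north, heading]. All noise values
    below are illustrative assumptions, not tuned parameters."""

    def __init__(self):
        self.x = np.zeros(3)                 # [east (m), north (m), heading (rad)]
        self.P = np.eye(3)                   # state covariance
        self.Q = np.diag([0.1, 0.1, 0.01])   # process noise (assumed)

    def predict(self, speed, yaw_rate, dt):
        """Dead-reckoning step from speedometer and IMU yaw rate."""
        th = self.x[2]
        self.x += np.array([speed * np.cos(th) * dt,
                            speed * np.sin(th) * dt,
                            yaw_rate * dt])
        # Jacobian of the motion model with respect to the state
        F = np.array([[1.0, 0.0, -speed * np.sin(th) * dt],
                      [0.0, 1.0,  speed * np.cos(th) * dt],
                      [0.0, 0.0,  1.0]])
        self.P = F @ self.P @ F.T + self.Q

    def update(self, z, H, R):
        """Generic linear measurement update."""
        y = z - H @ self.x                       # innovation
        S = H @ self.P @ H.T + R
        K = self.P @ H.T @ np.linalg.inv(S)      # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(3) - K @ H) @ self.P

    def update_gnss(self, east, north, sigma=2.0):
        """Absolute position fix; skipped when GNSS integrity fails."""
        H = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
        self.update(np.array([east, north]), H, np.eye(2) * sigma**2)

    def update_lane_lateral(self, lateral, lane_normal, sigma=0.2):
        """Lateral offset from HD-map lane-line matching, measured along
        the lane's unit normal vector in the map frame."""
        H = np.array([[lane_normal[0], lane_normal[1], 0.0]])
        self.update(np.array([lateral]), H, np.array([[sigma**2]]))
```

In a tunnel, the GNSS update would simply be suppressed when an integrity check fails, leaving dead reckoning constrained laterally by the lane-matching correction — which is consistent with the abstract's much tighter lateral than longitudinal error bounds.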

23 pages, 8941 KiB  
Article
DS-Trans: A 3D Object Detection Method Based on a Deformable Spatiotemporal Transformer for Autonomous Vehicles
by Yuan Zhu, Ruidong Xu, Chongben Tao, Hao An, Huaide Wang, Zhipeng Sun and Ke Lu
Remote Sens. 2024, 16(9), 1621; https://doi.org/10.3390/rs16091621 - 30 Apr 2024
Viewed by 836
Abstract
Facing the significant challenge of 3D object detection in complex weather conditions and road environments, existing algorithms based on single-frame point cloud data struggle to achieve desirable results. These methods typically focus on spatial relationships within a single frame, overlooking the semantic correlations and spatiotemporal continuity between consecutive frames. This leads to discontinuities and abrupt changes in the detection outcomes. To address this issue, this paper proposes a multi-frame 3D object detection algorithm based on a deformable spatiotemporal Transformer. Specifically, a deformable cross-scale Transformer module is devised, incorporating a multi-scale offset mechanism that non-uniformly samples features at different scales, enhancing the spatial information aggregation capability of the output features. Simultaneously, to address the issue of feature misalignment during multi-frame feature fusion, a deformable cross-frame Transformer module is proposed. This module incorporates independently learnable offset parameters for different frame features, enabling the model to adaptively correlate dynamic features across multiple frames and improve the temporal information utilization of the model. A proposal-aware sampling algorithm is introduced to significantly increase the foreground point recall, further optimizing the efficiency of feature extraction. The obtained multi-scale and multi-frame voxel features are subjected to an adaptive fusion weight extraction module, referred to as the proposed mixed voxel set extraction module. This module allows the model to adaptively obtain mixed features containing both spatial and temporal information. The effectiveness of the proposed algorithm is validated on the KITTI, nuScenes, and self-collected urban datasets. The proposed algorithm achieves an average precision improvement of 2.1% over the latest multi-frame-based algorithms.
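To make the offset-sampling idea behind deformable attention concrete, here is a minimal 2D sketch: a feature map is sampled bilinearly at a reference point plus learned offsets, and the samples are combined with softmax-normalized weights. This is a generic illustration of the mechanism, not the DS-Trans code; the function names and shapes are assumptions.

```python
import numpy as np

def bilinear_sample(feat, y, x):
    """Bilinearly sample a 2D feature map feat (H, W) at fractional (y, x)."""
    H, W = feat.shape
    y, x = np.clip(y, 0, H - 1), np.clip(x, 0, W - 1)
    y0, x0 = int(np.floor(y)), int(np.floor(x))
    y1, x1 = min(y0 + 1, H - 1), min(x0 + 1, W - 1)
    wy, wx = y - y0, x - x0
    return ((1 - wy) * (1 - wx) * feat[y0, x0] +
            (1 - wy) * wx       * feat[y0, x1] +
            wy * (1 - wx)       * feat[y1, x0] +
            wy * wx             * feat[y1, x1])

def deformable_aggregate(feat, ref_y, ref_x, offsets, weights):
    """Aggregate K samples around a reference point:
    out = sum_k softmax(w)_k * feat(ref + offset_k).
    In a real model, offsets and weights are predicted by the network;
    cross-frame variants learn separate offsets per frame to realign
    features before fusion."""
    w = np.exp(weights - np.max(weights))
    w = w / w.sum()
    return sum(wk * bilinear_sample(feat, ref_y + dy, ref_x + dx)
               for wk, (dy, dx) in zip(w, offsets))
```

Because the sampling locations are fractional and learned, attention is spent only on a few relevant points per query rather than the whole grid — the property that makes the cross-scale and cross-frame modules in the abstract tractable.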
