
Computer Vision and Sensor Fusion for Autonomous Vehicles

A special issue of Sensors (ISSN 1424-8220). This special issue belongs to the section "Intelligent Sensors".

Deadline for manuscript submissions: 30 June 2025 | Viewed by 740

Special Issue Editor


Dr. Yousef Alipouri
Guest Editor
1. Nutrien, Converged Tech (R&D), Saskatoon, SK, Canada
2. Faculty of Engineering, University of Alberta, Edmonton, AB, Canada
Interests: computer vision; advanced control and automation; sensor fusion; machine learning; data-driven approaches

Special Issue Information

Dear Colleagues,

The rapid advancement of autonomous vehicle technology is transforming transportation. At the heart of this innovation are sophisticated computer vision and sensor fusion systems, which are crucial to the robust and reliable operation of self-driving cars. This Special Issue of Sensors explores the latest research and developments in these fields, providing a comprehensive overview of current methodologies, challenges, and future directions.

In this Special Issue, we will examine vision-based sensor fusion methods for self-driving vehicles (a minimal illustrative fusion sketch follows the list below). Topics include, but are not limited to, the following:

  • Semantic segmentation and scene understanding;
  • Robustness of computer vision systems under varying environmental conditions;
  • SLAM (simultaneous localization and mapping) methods;
  • Machine learning and its application in autonomous driving systems;
  • Optical flow and motion analysis;
  • Advances in perception sensors (cameras, radar, lidar, etc.) and their fusion;
  • Real-world deployment and testing of vision-assisted autonomous driving vehicles;
  • Case studies on the implementation of vision-based sensor fusion in commercial and research-focused autonomous vehicles.
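
To make the scope above concrete, here is a minimal sketch of the core arithmetic behind multi-sensor fusion. It is purely illustrative (the sensor names, noise variances, and function are hypothetical, not taken from any submitted work): an inverse-variance weighted combination of a camera-derived range and a lidar range, which is the static one-dimensional core of a Kalman-filter update.

```python
def fuse_measurements(z_cam: float, var_cam: float,
                      z_lidar: float, var_lidar: float) -> tuple[float, float]:
    """Inverse-variance weighted fusion of two independent range estimates.

    Each sensor is weighted by its confidence (1 / variance); the fused
    variance is never larger than the smaller of the two input variances.
    """
    w_cam, w_lidar = 1.0 / var_cam, 1.0 / var_lidar
    z_fused = (w_cam * z_cam + w_lidar * z_lidar) / (w_cam + w_lidar)
    var_fused = 1.0 / (w_cam + w_lidar)
    return z_fused, var_fused


if __name__ == "__main__":
    # Hypothetical numbers: a noisy monocular depth estimate (var = 4.0 m^2)
    # and a precise lidar return (var = 0.04 m^2) for the same obstacle.
    z, v = fuse_measurements(z_cam=23.5, var_cam=4.0, z_lidar=24.1, var_lidar=0.04)
    print(f"fused range = {z:.2f} m, variance = {v:.3f} m^2")
```

The lidar dominates the fused estimate here precisely because its variance is two orders of magnitude smaller; full pipelines extend the same principle to state vectors and time-varying dynamics.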

Dr. Yousef Alipouri
Guest Editor

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to the website. Once registered, proceed to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • computer vision
  • sensor fusion
  • self-driving
  • SLAM
  • optical flow
  • motion detection and analysis
  • scene understanding

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found on the MDPI website.

Published Papers (1 paper)


Research

13 pages, 3587 KiB  
Article
KPMapNet: Keypoint Representation Learning for Online Vectorized High-Definition Map Construction
by Bicheng Jin, Wenyu Hao, Wenzhao Qiu and Shanmin Pang
Sensors 2025, 25(6), 1897; https://doi.org/10.3390/s25061897 - 18 Mar 2025
Viewed by 296
Abstract
Vectorized high-definition (HD) map construction is a critical task in the autonomous driving domain. The existing methods typically represent vectorized map elements with a fixed number of points, establishing robust baselines for this task. However, the inherent shape priors introduce additional shape errors, which in turn lead to error accumulation in the downstream tasks. Moreover, the subtle and sparse nature of the annotations limits detection-based frameworks in accurately extracting the relevant features, often resulting in the loss of fine structural details in the predictions. To address these challenges, this work presents KPMapNet, an end-to-end framework that redefines the ground truth training representation of vectorized map elements to achieve precise topological representations. Specifically, the conventional equidistant sampling method is modified to better preserve the geometric features of the original instances while maintaining a fixed number of points. In addition, a map mask fusion module and an enhanced hybrid attention module are incorporated to mitigate the issues introduced by the new representation. Moreover, a novel point-line matching loss function is introduced to further refine the training process. Extensive experiments on the nuScenes and Argoverse2 datasets demonstrate that KPMapNet achieves state-of-the-art performance, with 75.1 mAP on nuScenes and 74.2 mAP on Argoverse2. The visualization results further corroborate the enhanced accuracy of the map generation outcomes.
(This article belongs to the Special Issue Computer Vision and Sensor Fusion for Autonomous Vehicles)
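
The abstract's central idea, keeping a fixed number of points per map element while preserving geometric features that plain equidistant sampling would smear, can be illustrated with a short sketch. This is not KPMapNet's actual algorithm (the function name, the 30° turning-angle threshold, and the snap-to-keypoint heuristic are all assumptions for illustration); it merely shows one way a keypoint-aware resampler can differ from an equidistant one.

```python
import numpy as np

def resample_keypoint_aware(points: np.ndarray, n_out: int,
                            angle_thresh_deg: float = 30.0) -> np.ndarray:
    """Resample a 2-D polyline to exactly n_out points, pinning sharp corners.

    Plain equidistant resampling can round off corners of map elements;
    here, interior vertices whose turning angle exceeds angle_thresh_deg
    are snapped onto the nearest equidistant target so they survive.
    """
    # Arc-length parameterization of the original polyline.
    seg = np.linalg.norm(np.diff(points, axis=0), axis=1)
    arc = np.concatenate([[0.0], np.cumsum(seg)])
    total = arc[-1]

    # Keypoints: endpoints plus high-curvature interior vertices.
    key_arcs = [0.0]
    for i in range(1, len(points) - 1):
        v1, v2 = points[i] - points[i - 1], points[i + 1] - points[i]
        cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-12)
        if np.degrees(np.arccos(np.clip(cos, -1.0, 1.0))) > angle_thresh_deg:
            key_arcs.append(arc[i])
    key_arcs.append(total)

    # Start from equidistant targets, then move the nearest target onto
    # each keypoint so the corner geometry is preserved exactly.
    targets = np.linspace(0.0, total, n_out)
    for ka in key_arcs:
        targets[int(np.argmin(np.abs(targets - ka)))] = ka
    targets.sort()

    # Linear interpolation of x and y along arc length.
    x = np.interp(targets, arc, points[:, 0])
    y = np.interp(targets, arc, points[:, 1])
    return np.stack([x, y], axis=1)


if __name__ == "__main__":
    # A unit-square outline: equidistant resampling to 8 points would cut
    # the corners, while the keypoint-aware version keeps all four exactly.
    square = np.array([[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]], dtype=float)
    print(resample_keypoint_aware(square, n_out=8))
```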