Search Results (57)

Search Parameters:
Keywords = high-definition (HD) map

22 pages, 6556 KiB  
Article
Multi-Task Trajectory Prediction Using a Vehicle-Lane Disentangled Conditional Variational Autoencoder
by Haoyang Chen, Na Li, Hangguan Shan, Eryun Liu and Zhiyu Xiang
Sensors 2025, 25(14), 4505; https://doi.org/10.3390/s25144505 - 20 Jul 2025
Viewed by 400
Abstract
Trajectory prediction under multimodal information is critical for autonomous driving, necessitating the integration of dynamic vehicle states and static high-definition (HD) maps to model complex agent–scene interactions effectively. However, existing methods often employ static scene encodings and unstructured latent spaces, limiting their ability to capture evolving spatial contexts and produce diverse yet contextually coherent predictions. To tackle these challenges, we propose MS-SLV, a novel generative framework that introduces (1) a time-aware scene encoder that aligns HD map features with vehicle motion to capture evolving scene semantics and (2) a structured latent model that explicitly disentangles agent-specific intent and scene-level constraints. Additionally, we introduce an auxiliary lane prediction task to provide targeted supervision for scene understanding and improve latent variable learning. Our approach jointly predicts future trajectories and lane sequences, enabling more interpretable and scene-consistent forecasts. Extensive evaluations on the nuScenes dataset demonstrate the effectiveness of MS-SLV, achieving a 12.37% reduction in average displacement error and a 7.67% reduction in final displacement error over state-of-the-art methods. Moreover, MS-SLV significantly improves multi-modal prediction, reducing the top-5 Miss Rate (MR5) and top-10 Miss Rate (MR10) by 26% and 33%, respectively, and lowering the Off-Road Rate (ORR) by 3%, as compared with the strongest baseline in our evaluation. Full article
(This article belongs to the Special Issue AI-Driven Sensor Technologies for Next-Generation Electric Vehicles)
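The displacement-error metrics this abstract reports (ADE/FDE) compare a predicted trajectory with ground truth point-by-point; a minimal stdlib sketch of their standard definitions (the function name is illustrative, not from the paper):

```python
import math

def ade_fde(pred, gt):
    """Average and Final Displacement Error between two equal-length
    2-D trajectories, each a list of (x, y) points in meters."""
    assert len(pred) == len(gt) and pred, "trajectories must match in length"
    dists = [math.dist(p, g) for p, g in zip(pred, gt)]
    ade = sum(dists) / len(dists)   # mean point-wise error over the horizon
    fde = dists[-1]                 # error at the final time step only
    return ade, fde

# Example: a prediction offset from ground truth by a constant 0.3 m laterally
gt   = [(t * 1.0, 0.0) for t in range(6)]
pred = [(t * 1.0, 0.3) for t in range(6)]
ade, fde = ade_fde(pred, gt)
print(round(ade, 3), round(fde, 3))  # → 0.3 0.3
```

The quoted 12.37% and 7.67% reductions are relative improvements in these two quantities over the baselines.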

28 pages, 13392 KiB  
Article
Optimising Electrode Montages in Conventional Transcranial Direct Current Stimulation and High-Definition Transcranial Direct Current Stimulation of the Cerebellum for Pain Modulation
by Adelais Farnell Sharp and Alice Witney
Brain Sci. 2025, 15(4), 344; https://doi.org/10.3390/brainsci15040344 - 27 Mar 2025
Viewed by 603
Abstract
The cerebellum is involved in pain processing and is, therefore, an important target for non-invasive brain stimulation (NIBS) for analgesia. When targeting a brain region for NIBS, it can be difficult to ensure activation of only target regions. Optimal montages for cerebellar stimulation for pain modulation have not been established. This paper systematically examines cerebellar NIBS montages by comparing simulated current flow models for targeted conventional cerebellar tDCS and focused high-definition 4 × 1 HD-tDCS, to identify the most effective montage for targeting only the lobes of the cerebellum. The objective was to explore whether slight variations in electrode placement and voltage could produce confounding activations of other brain regions, as shown by the Soterix® current modelling software (Ver. 2019). A left deltoid anode paired with a right cerebellar lobe sponge electrode (3 cm lateral to the inion) produces the best targeting with conventional tDCS. For high-definition tDCS (HD-tDCS), a 4 × 1 array based on a 93-electrode EEG map, with the central electrode at PO10 and the array at O2, P8, Ex2, and Ex6, provided focal stimulation. Optimisation of NIBS must include an evaluation of electrode montages and current flow modelling to determine which structures and pathways will be impacted by the neurostimulation. This approach is essential for future cerebellar NIBS experimental design and will facilitate comparative analysis across different protocols and optimise understanding of the role of the cerebellum in pain processing. Full article
(This article belongs to the Special Issue The Role of the Cerebellum in Motor and Non-motor Behaviours)

38 pages, 3079 KiB  
Review
Building the Future of Transportation: A Comprehensive Survey on AV Perception, Localization, and Mapping
by Ashok Kumar Patil, Bhargav Punugupati, Himanshi Gupta, Niranjan S. Mayur, Srivatsa Ramesh and Prasad B. Honnavalli
Sensors 2025, 25(7), 2004; https://doi.org/10.3390/s25072004 - 23 Mar 2025
Viewed by 1194
Abstract
Autonomous vehicles (AVs) depend on perception, localization, and mapping to interpret their surroundings and navigate safely. This paper reviews existing methodologies and best practices in these domains, focusing on object detection, object tracking, localization techniques, and environmental mapping strategies. In the perception module, we analyze state-of-the-art object detection frameworks, such as You Only Look Once version 8 (YOLOv8), and object tracking algorithms like ByteTrack and BoT-SORT (Boosted SORT). We assess their real-time performance, robustness to occlusions, and suitability for complex urban environments. We examine different approaches for localization, including Light Detection and Ranging (LiDAR)-based localization, camera-based localization, and sensor fusion techniques. These methods enhance positional accuracy, particularly in scenarios where Global Positioning System (GPS) signals are unreliable or unavailable. The mapping section explores Simultaneous Localization and Mapping (SLAM) techniques and high-definition (HD) maps, discussing their role in creating detailed, real-time environmental representations that enable autonomous navigation. Additionally, we present insights from our testing, evaluating the effectiveness of different perception, localization, and mapping methods in real-world conditions. By summarizing key advancements, challenges, and practical considerations, this paper provides a reference for researchers and developers working on autonomous vehicle perception, localization, and mapping. Full article

13 pages, 3587 KiB  
Article
KPMapNet: Keypoint Representation Learning for Online Vectorized High-Definition Map Construction
by Bicheng Jin, Wenyu Hao, Wenzhao Qiu and Shanmin Pang
Sensors 2025, 25(6), 1897; https://doi.org/10.3390/s25061897 - 18 Mar 2025
Viewed by 579
Abstract
Vectorized high-definition (HD) map construction is a critical task in the autonomous driving domain. The existing methods typically represent vectorized map elements with a fixed number of points, establishing robust baselines for this task. However, the inherent shape priors introduce additional shape errors, which in turn lead to error accumulation in the downstream tasks. Moreover, the subtle and sparse nature of the annotations limits detection-based frameworks in accurately extracting the relevant features, often resulting in the loss of fine structural details in the predictions. To address these challenges, this work presents KPMapNet, an end-to-end framework that redefines the ground truth training representation of vectorized map elements to achieve precise topological representations. Specifically, the conventional equidistant sampling method is modified to better preserve the geometric features of the original instances while maintaining a fixed number of points. In addition, a map mask fusion module and an enhanced hybrid attention module are incorporated to mitigate the issues introduced by the new representation. Moreover, a novel point-line matching loss function is introduced to further refine the training process. Extensive experiments on the nuScenes and Argoverse2 datasets demonstrate that KPMapNet achieves state-of-the-art performance, with 75.1 mAP on nuScenes and 74.2 mAP on Argoverse2. The visualization results further corroborate the enhanced accuracy of the map generation outcomes. Full article
(This article belongs to the Special Issue Computer Vision and Sensor Fusion for Autonomous Vehicles)
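The "conventional equidistant sampling" that KPMapNet modifies represents each map polyline by a fixed number of points spaced equally along its arc length; a minimal stdlib sketch of that baseline (the function name is illustrative, and this is not the paper's improved variant):

```python
import math

def resample_polyline(points, n):
    """Resample a 2-D polyline to n points spaced equally by arc length:
    the conventional fixed-point map-element representation."""
    # cumulative arc length at each original vertex
    cum = [0.0]
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        cum.append(cum[-1] + math.hypot(x1 - x0, y1 - y0))
    total = cum[-1]
    out, seg = [], 0
    for i in range(n):
        target = total * i / (n - 1)       # equally spaced arc-length targets
        while seg < len(cum) - 2 and cum[seg + 1] < target:
            seg += 1
        # linear interpolation within the current segment
        span = cum[seg + 1] - cum[seg]
        t = 0.0 if span == 0 else (target - cum[seg]) / span
        (x0, y0), (x1, y1) = points[seg], points[seg + 1]
        out.append((x0 + t * (x1 - x0), y0 + t * (y1 - y0)))
    return out

# An L-shaped lane boundary resampled to 5 equidistant points; note how the
# corner vertex (2, 0) survives only because it happens to fall on a sample.
print(resample_polyline([(0, 0), (2, 0), (2, 2)], 5))
# → [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0), (2.0, 2.0)]
```

The shape errors the abstract mentions arise exactly when sharp geometric features fall between equidistant samples, which motivates the paper's modified sampling.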

23 pages, 6468 KiB  
Article
Urban Signalized Intersection Traffic State Prediction: A Spatial–Temporal Graph Model Integrating the Cell Transmission Model and Transformer
by Anran Li, Zhenlin Xu, Wenhao Li, Yanyan Chen and Yuyan Pan
Appl. Sci. 2025, 15(5), 2377; https://doi.org/10.3390/app15052377 - 23 Feb 2025
Cited by 4 | Viewed by 770
Abstract
This paper presents the Cell Transformer (CeT), which utilizes high-definition (HD) map data to predict future traffic states at signalized intersections, thereby aiding trajectory planning for autonomous vehicles. CeT employs discretized lane segments to emulate the cell transmission model, creating a cell space to forecast vehicle counts across all segments based on historical traffic data. CeT enhances prediction accuracy by distinguishing between different vehicle types by incorporating vehicle-type attributes into vehicle-state representations through multi-head attention. In this framework, cells are modeled as nodes in a directed graph, with dynamic connections representing variations in signal phases, thereby embedding spatial relationships and signal information within dynamic graphs. Temporal embeddings derived from time attributes are integrated with these graphs to generate comprehensive spatial–temporal representations. Utilizing an encoder–decoder architecture, CeT captures dependencies and correlations from past cell states to predict future traffic conditions. Validation using real traffic data from pNEUMA demonstrates that CeT significantly outperforms baseline models in two-phase signalized intersection scenarios, achieving reductions of 11.47% in Mean Absolute Error (MAE), 13.48% in Root Mean Square Error (RMSE), and an increase of 4.36% in Accuracy (ACC). In four-phase signalized intersection scenarios, CeT shows even greater effectiveness, with improvements of 13.36% in MAE, 12.93% in RMSE, and 4.78% in ACC. These results underscore CeT’s superior predictive capabilities and highlight the contributions of its core components. Full article

38 pages, 14791 KiB  
Article
Online High-Definition Map Construction for Autonomous Vehicles: A Comprehensive Survey
by Hongyu Lyu, Julie Stephany Berrio Perez, Yaoqi Huang, Kunming Li, Mao Shan and Stewart Worrall
J. Sens. Actuator Netw. 2025, 14(1), 15; https://doi.org/10.3390/jsan14010015 - 2 Feb 2025
Viewed by 3888
Abstract
High-definition (HD) maps aim to provide detailed road information with centimeter-level accuracy, essential for enabling precise navigation and safe operation of autonomous vehicles (AVs). Traditional offline construction methods involve several complex steps, such as data collection, point cloud generation, and feature extraction, but these methods are resource-intensive and struggle to keep pace with the rapidly changing road environments. In contrast, online HD map construction leverages onboard sensor data to dynamically generate local HD maps, offering a bird’s-eye view (BEV) representation of the surrounding road environment. This approach has the potential to improve adaptability to spatial and temporal changes in road conditions while enhancing cost-efficiency by reducing the dependency on frequent map updates and expensive survey fleets. This survey provides a comprehensive analysis of online HD map construction, including the task background, high-level motivations, research methodology, key advancements, existing challenges, and future trends. We systematically review the latest advancements in three key sub-tasks: map segmentation, map element detection, and lane graph construction, aiming to bridge gaps in the current literature. We also discuss existing challenges and future trends, covering standardized map representation design, multitask learning, and multi-modality fusion, while offering suggestions for potential improvements. Full article
(This article belongs to the Special Issue Advances in Intelligent Transportation Systems (ITS))

22 pages, 22974 KiB  
Article
EOR: An Enhanced Object Registration Method for Visual Images and High-Definition Maps
by Nian Hui, Zijie Jiang, Zhongliang Cai and Shen Ying
Remote Sens. 2025, 17(1), 66; https://doi.org/10.3390/rs17010066 - 27 Dec 2024
Viewed by 757
Abstract
Accurate object registration is crucial for precise localization and environment sensing in autonomous driving systems. While real-time sensors such as cameras and radar capture the local environment, high-definition (HD) maps provide a global reference frame that enhances localization accuracy and robustness, especially in complex scenarios. In this paper, we propose an innovative method called enhanced object registration (EOR) to improve the accuracy and robustness of object registration between camera images and HD maps. Our research investigates the influence of spatial distribution factors and spatial structural characteristics of objects in visual perception and HD maps on registration accuracy and robustness. We specifically focus on understanding the varying importance of different object types and the constrained dimensions of pose estimation. These factors are integrated into a nonlinear optimization model and extended Kalman filter framework. Through comprehensive experimentation on the open-source Argoverse 2 dataset, the proposed EOR demonstrates the ability to maintain high registration accuracy in lateral and elevation dimensions, improve longitudinal accuracy, and increase the probability of successful registration. These findings contribute to a deeper understanding of the relationship between sensing data and scenario understanding in object registration for vehicle localization. Full article

23 pages, 3947 KiB  
Article
Learnable Resized and Laplacian-Filtered U-Net: Better Road Marking Extraction and Classification on Sparse-Point-Cloud-Derived Imagery
by Miguel Luis Rivera Lagahit, Xin Liu, Haoyi Xiu, Taehoon Kim, Kyoung-Sook Kim and Masashi Matsuoka
Remote Sens. 2024, 16(23), 4592; https://doi.org/10.3390/rs16234592 - 6 Dec 2024
Viewed by 1172
Abstract
High-definition (HD) maps for autonomous driving rely on data from mobile mapping systems (MMS), but the high cost of MMS sensors has led researchers to explore cheaper alternatives like low-cost LiDAR sensors. While cost effective, these sensors produce sparser point clouds, leading to poor feature representation and degraded performance in deep learning techniques, such as convolutional neural networks (CNN), for tasks like road marking extraction and classification, which are essential for HD map generation. Examining common image segmentation workflows and the structure of U-Net, a CNN, reveals a source of performance loss in the succession of resizing operations, which further diminishes the already poorly represented features. Addressing this, we propose improving U-Net’s ability to extract and classify road markings from sparse-point-cloud-derived images by introducing a learnable resizer (LR) at the input stage and learnable resizer blocks (LRBs) throughout the network, thereby mitigating feature and localization degradation from resizing operations in the deep learning framework. Additionally, we incorporate Laplacian filters (LFs) to better manage activations along feature boundaries. Our analysis demonstrates significant improvements, with F1-scores increasing from below 20% to above 75%, showing the effectiveness of our approach in improving road marking extraction and classification from sparse-point-cloud-derived imagery. Full article
(This article belongs to the Special Issue Applications of Laser Scanning in Urban Environment)

26 pages, 24227 KiB  
Article
A Base-Map-Guided Global Localization Solution for Heterogeneous Robots Using a Co-View Context Descriptor
by Xuzhe Duan, Meng Wu, Chao Xiong, Qingwu Hu and Pengcheng Zhao
Remote Sens. 2024, 16(21), 4027; https://doi.org/10.3390/rs16214027 - 30 Oct 2024
Cited by 1 | Viewed by 1525
Abstract
With the continuous advancement of autonomous driving technology, an increasing number of high-definition (HD) maps have been generated and stored in geospatial databases. These HD maps can provide strong localization support for mobile robots equipped with light detection and ranging (LiDAR) sensors. However, the global localization of heterogeneous robots under complex environments remains challenging. Most of the existing point cloud global localization methods perform poorly due to the different perspective views of heterogeneous robots. Leveraging existing HD maps, this paper proposes a base-map-guided heterogeneous robots localization solution. A novel co-view context descriptor with rotational invariance is developed to represent the characteristics of heterogeneous point clouds in a unified manner. The pre-set base map is divided into virtual scans, each of which generates a candidate co-view context descriptor. These descriptors are assigned to robots before operations. By matching the query co-view context descriptors of a working robot with the assigned candidate descriptors, the coarse localization is achieved. Finally, the refined localization is done through point cloud registration. The proposed solution can be applied to both single-robot and multi-robot global localization scenarios, especially when communication is impaired. The heterogeneous datasets used for the experiments cover both indoor and outdoor scenarios, utilizing various scanning modes. The average rotation and translation errors are within 1° and 0.30 m, indicating the proposed solution can provide reliable localization support despite communication failures, even across heterogeneous robots. Full article
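The paper's co-view context descriptor is not specified in this abstract; the rotational invariance it claims is typically obtained by comparing angular-bin descriptors under all cyclic shifts, since rotating the sensor only permutes the bins. A toy sketch of that general idea (data and function are illustrative, not the paper's formulation):

```python
def best_circular_match(query, candidate):
    """Compare two angular-bin descriptors under all cyclic rotations of the
    candidate and return (best_squared_distance, shift). Taking the minimum
    over shifts makes the comparison invariant to sensor heading."""
    n = len(query)
    best = (float("inf"), 0)
    for s in range(n):
        rotated = candidate[s:] + candidate[:s]    # cyclic shift by s bins
        d = sum((a - b) ** 2 for a, b in zip(query, rotated))
        if d < best[0]:
            best = (d, s)
    return best

q = [0, 1, 5, 2]          # angular occupancy histogram of the query scan
c = [5, 2, 0, 1]          # the same scene observed rotated by two bins
print(best_circular_match(q, c))  # → (0, 2)
```

Matching a query descriptor against the candidates precomputed from the base map's virtual scans, as described above, yields the coarse localization that point cloud registration then refines.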

14 pages, 3589 KiB  
Article
Vehicle Localization Using Crowdsourced Data Collected on Urban Roads
by Soohyun Cho and Woojin Chung
Sensors 2024, 24(17), 5531; https://doi.org/10.3390/s24175531 - 27 Aug 2024
Cited by 1 | Viewed by 1187
Abstract
Vehicle localization using mounted sensors is an essential technology for various applications, including autonomous vehicles and road mapping. Achieving high positioning accuracy through the fusion of low-cost sensors is a topic of considerable interest. Recently, applications based on crowdsourced data from a large number of vehicles have received significant attention. Equipping standard vehicles with low-cost onboard sensors offers the advantage of collecting data from multiple drives over extensive road networks at a low operational cost. These vehicle trajectories and road observations can be utilized for traffic surveys, road inspections, and mapping. However, data obtained from low-cost devices are likely to be highly inaccurate. On urban roads, unlike highways, complex road structures and GNSS signal obstructions caused by buildings are common. This study proposes a reliable vehicle localization method using a large amount of crowdsourced data collected from urban roads. The proposed localization method is designed with consideration for the high inaccuracy of the data, the complexity of road structures, and the partial use of high-definition (HD) maps that account for environmental changes. The high inaccuracy of sensor data affects the reliability of localization. Therefore, the proposed method includes a reliability assessment of the localized vehicle poses. The performance of the proposed method was evaluated using data collected from buses operating in Seoul, Korea. The data used for the evaluation were collected 18 months after the creation of the HD maps. Full article

41 pages, 57635 KiB  
Article
An Object-Centric Hierarchical Pose Estimation Method Using Semantic High-Definition Maps for General Autonomous Driving
by Jeong-Won Pyo, Jun-Hyeon Choi and Tae-Yong Kuc
Sensors 2024, 24(16), 5191; https://doi.org/10.3390/s24165191 - 11 Aug 2024
Cited by 1 | Viewed by 1488
Abstract
To achieve Level 4 and above autonomous driving, a robust and stable autonomous driving system is essential to adapt to various environmental changes. This paper aims to perform vehicle pose estimation, a crucial element in forming autonomous driving systems, more universally and robustly. The prevalent method for vehicle pose estimation in autonomous driving systems relies on Real-Time Kinematic (RTK) sensor data, ensuring accurate location acquisition. However, due to the characteristics of RTK sensors, precise positioning is challenging or impossible in indoor spaces or areas with signal interference, leading to inaccurate pose estimation and hindering autonomous driving in such scenarios. This paper proposes a method to overcome these challenges by leveraging objects registered in a high-precision map. The proposed approach involves creating a semantic high-definition (HD) map with added objects, forming object-centric features, recognizing locations using these features, and accurately estimating the vehicle’s pose from the recognized location. This proposed method enhances the precision of vehicle pose estimation in environments where acquiring RTK sensor data is challenging, enabling more robust and stable autonomous driving. The paper demonstrates the proposed method’s effectiveness through simulation and real-world experiments, showcasing its capability for more precise pose estimation. Full article
(This article belongs to the Special Issue Artificial Intelligence and Smart Sensors for Autonomous Driving)

20 pages, 8876 KiB  
Article
A Comprehensive Survey on High-Definition Map Generation and Maintenance
by Kaleab Taye Asrat and Hyung-Ju Cho
ISPRS Int. J. Geo-Inf. 2024, 13(7), 232; https://doi.org/10.3390/ijgi13070232 - 1 Jul 2024
Cited by 6 | Viewed by 4888
Abstract
The automotive industry has experienced remarkable growth in recent decades, with a significant focus on advancements in autonomous driving technology. While still in its early stages, the field of autonomous driving has generated substantial research interest, fueled by the promise of achieving fully automated vehicles in the foreseeable future. High-definition (HD) maps are central to this endeavor, offering centimeter-level accuracy in mapping the environment and enabling precise localization. Unlike conventional maps, these highly detailed HD maps are critical for autonomous vehicle decision-making, ensuring safe and accurate navigation. Compiled before testing and regularly updated, HD maps meticulously capture environmental data through various methods. This study explores the vital role of HD maps in autonomous driving, delving into their creation, updating processes, and the challenges and future directions in this rapidly evolving field. Full article

19 pages, 4057 KiB  
Article
Global Navigation Satellite System/Inertial Measurement Unit/Camera/HD Map Integrated Localization for Autonomous Vehicles in Challenging Urban Tunnel Scenarios
by Lu Tao, Pan Zhang, Kefu Gao and Jingnan Liu
Remote Sens. 2024, 16(12), 2230; https://doi.org/10.3390/rs16122230 - 19 Jun 2024
Cited by 3 | Viewed by 2363
Abstract
Lane-level localization is critical for autonomous vehicles (AVs). However, complex urban scenarios, particularly tunnels, pose significant challenges to AVs’ localization systems. In this paper, we propose a fusion localization method that integrates multiple mass-production sensors, including Global Navigation Satellite Systems (GNSSs), Inertial Measurement Units (IMUs), cameras, and high-definition (HD) maps. Firstly, we use a novel electronic horizon module to assess GNSS integrity and concurrently load the HD map data surrounding the AVs. These map data are then transformed into a visual space to match the corresponding lane lines captured by the on-board camera using an improved BiSeNet. Consequently, the matched HD map data are used to correct our localization algorithm, which is driven by an extended Kalman filter that integrates multiple sources of information, encompassing GNSS, IMU, speedometer, camera, and HD maps. Our system is designed with redundancy to handle challenging city tunnel scenarios. To evaluate the proposed system, real-world experiments were conducted on a 36-kilometer city route that includes nine consecutive tunnels, totaling nearly 13 km and accounting for 35% of the entire route. The experimental results reveal that 99% of lateral localization errors are less than 0.29 m, and 90% of longitudinal localization errors are less than 3.25 m, ensuring reliable lane-level localization for AVs in challenging urban tunnel scenarios. Full article
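The extended Kalman filter at the heart of this fusion reduces, per measurement, to the standard predict/update cycle; a scalar sketch of the update step, fusing a drifted dead-reckoned lateral offset with an HD-map-matched lane-line observation (all variable names and numbers are illustrative, not from the paper):

```python
def kalman_update(x, p, z, r):
    """Scalar Kalman measurement update: fuse state estimate x (variance p)
    with measurement z (variance r); returns the corrected (x, p)."""
    k = p / (p + r)                      # Kalman gain: trust measurement more
    return x + k * (z - x), (1 - k) * p  # when p >> r, as inside a tunnel

# Inside a tunnel the IMU-propagated lateral offset drifts and its variance
# grows; a camera-to-HD-map lane-line match supplies an absolute correction.
x, p = 1.0, 4.0     # drifted lateral estimate (m), inflated uncertainty
z, r = 0.2, 0.25    # map-matched measurement (m), small measurement noise
x, p = kalman_update(x, p, z, r)
print(round(x, 3), round(p, 3))  # → 0.247 0.235
```

Note how the corrected estimate lands close to the map-based measurement and the variance shrinks, which is exactly the redundancy role the HD map plays when GNSS integrity fails.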

20 pages, 4703 KiB  
Review
A Review of Crowdsourcing Update Methods for High-Definition Maps
by Yuan Guo, Jian Zhou, Xicheng Li, Youchen Tang and Zhicheng Lv
ISPRS Int. J. Geo-Inf. 2024, 13(3), 104; https://doi.org/10.3390/ijgi13030104 - 20 Mar 2024
Cited by 3 | Viewed by 5425
Abstract
High-definition (HD) maps serve as crucial infrastructure for autonomous driving technology, facilitating vehicles in positioning, environmental perception, and motion planning without being affected by weather changes or sensor-visibility limitations. Maintaining precision and freshness in HD maps is paramount, as delayed or inaccurate information can significantly impact the safety of autonomous vehicles. Utilizing crowdsourced data for HD map updating is widely recognized as a superior method for preserving map accuracy and freshness. Although it has garnered considerable attention from researchers, there remains a lack of comprehensive exploration into the entire process of updating HD maps through crowdsourcing. For this reason, it is imperative to review and discuss crowdsourcing techniques. This paper aims to provide an overview of the overall process of crowdsourced updates, followed by a detailed examination and comparison of existing methodologies concerning the key techniques of data collection, information extraction, and change detection. Finally, this paper addresses the challenges encountered in crowdsourced updates for HD maps. Full article

19 pages, 4747 KiB  
Article
Unraveling Spatial–Temporal Patterns and Heterogeneity of On-Ramp Vehicle Merging Behavior: Evidence from the exiD Dataset
by Yiqi Wang, Yang Li, Ruijie Li, Shubo Wu and Linbo Li
Appl. Sci. 2024, 14(6), 2344; https://doi.org/10.3390/app14062344 - 11 Mar 2024
Cited by 1 | Viewed by 1717
Abstract
Understanding the spatiotemporal characteristics of merging behavior is crucial for the advancement of autonomous driving technology. This study aims to analyze on-ramp vehicle merging patterns, and investigate how various factors, such as merging scenarios and vehicle types, influence driving behavior. Initially, a framework based on a high-definition (HD) map is developed to extract trajectory information in a meticulous manner. Subsequently, eight distinct merging patterns are identified, with a thorough examination of their behavioral characteristics from both temporal and spatial perspectives. Merging behaviors are examined temporally, encompassing the sequence of events from approaching the on-ramp to completing the merge. This study specifically analyzes the target lane’s spatial characteristics, evaluates the merging distance (ratio), investigates merging speed distributions, compares merging patterns, and identifies high-risk situations. Utilizing the latest aerial dataset, exiD, which provides HD map data, the study presents novel findings. Specifically, it uncovers patterns where the following vehicle in the target lane chooses to accelerate and overtake rather than cutting in front of the merging vehicle, resulting in Time-to-Collision (TTC) values of less than 2.5 s, indicating a significantly higher risk. Moreover, the study finds that differences in merging speed, distance, and duration can be disregarded in patterns where vehicles are present both ahead and behind, or solely ahead, suggesting these patterns could be integrated for simulation to streamline analysis and model development. Additionally, the practice of truck platooning has a significant impact on vehicle merging behavior. Overall, this study enhances the understanding of merging behavior, facilitating autonomous vehicles’ ability to comprehend and adapt to merging scenarios. Furthermore, this research is significant in improving driving safety, optimizing traffic management, and enabling the effective integration of autonomous driving systems with human drivers. Full article
(This article belongs to the Section Transportation and Future Mobility)
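Time-to-Collision, the risk indicator used in the abstract above, is the current gap divided by the closing speed between two vehicles in the same lane; a minimal sketch (function name and numbers are illustrative):

```python
def time_to_collision(gap_m, v_follow, v_lead):
    """TTC between a following and a leading vehicle: gap / closing speed.
    Returns None when the gap is opening, i.e. no collision course."""
    closing = v_follow - v_lead   # positive when the follower is catching up
    if closing <= 0:
        return None
    return gap_m / closing

# A merging vehicle 20 m ahead of a follower that is closing at 8 m/s sits
# exactly at the 2.5 s threshold the study flags as high-risk.
print(time_to_collision(20.0, 30.0, 22.0))  # → 2.5
```

Lower TTC means less time for either driver to react, which is why the overtake-instead-of-yield patterns with TTC below 2.5 s are singled out as significantly riskier.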
