Proceeding Paper

ROS 2-Based Framework for Semi-Automatic Vector Map Creation in Autonomous Driving Systems †

by Abdelrahman Alabdallah, Barham Jeries Barham Farraj and Ernő Horváth *
Vehicle Industry Research Center, Széchenyi István University, H-9026 Győr, Hungary
* Author to whom correspondence should be addressed.
Presented at the Sustainable Mobility and Transportation Symposium 2025, Győr, Hungary, 16–18 October 2025.
Eng. Proc. 2025, 113(1), 13; https://doi.org/10.3390/engproc2025113013
Published: 28 October 2025

Abstract

High-definition vector maps, such as Lanelet2, are critical for autonomous driving systems, enabling precise localization, path planning, and regulatory compliance. However, creating and maintaining these maps traditionally demands labor-intensive manual annotation or resource-heavy automated pipelines. This paper presents a ROS 2-based framework for semi-automatic vector map generation, leveraging Lanelet2 primitives to streamline map creation while balancing automation with human oversight. The framework integrates multi-sensor inputs (LIDAR, GPS/IMU) within ROS 2 to extract and fuse road features such as lanes, traffic signs, and curbs. The pipeline employs modular ROS 2 nodes for tasks including NDT- and SLAM-based pose estimation and the semantic segmentation of drivable areas, which serve as a basis for Lanelet2 primitives. To promote adoption, the implementation is released as open source. This work bridges the gap between automated map generation and human expertise, advancing the practical deployment of dynamic vector maps in autonomous systems.

1. Introduction

HD maps form a fundamental component in building and deploying autonomous vehicles by providing sub-centimeter accuracy and faithfully recording road geometry, lane boundaries, and infrastructure elements. Such highly detailed attributes enable autonomous vehicles (AVs) to localize with high precision and to plan safe trajectories [1,2], both of which are essential for robust and dependable autonomous navigation. In shared urban environments where pedestrians and autonomous vehicles coexist, the accurate distinction of drivable and non-drivable areas is critical for ensuring safe navigation, minimizing collision risks [2,3], and enabling context-aware decision-making.
Although HD maps are indispensable, their creation remains a major challenge. Manual mapping, in which a human annotator works through sensor data to delineate map features, is tedious, time-consuming, and impractical for the large areas these maps must cover. Automated mapping solutions, though scalable, are hampered by sensor noise, dynamic environments, and the difficulty of accurately interpreting the semantics of complex urban scenes [4], among other issues. Such shortcomings can lead to incomplete or erroneous maps, undermining the safety and competence of AV systems.
To address these challenges, we propose a semi-automated mapping framework based on Robot Operating System 2 (ROS 2) that integrates multi-sensor fusion and human-in-the-loop validation. We fuse data from LIDAR, GPS, and IMU sensors to create HD maps in a global coordinate frame, compatible with the Lanelet2 format widely used by the AV research community. To balance automation and accuracy, the framework combines pose estimation and semantic segmentation algorithms with manual refinement tools, providing efficient and reliable map generation.
The main contributions of this manuscript are as follows:
  • Multi-Sensor Fusion: A robust pipeline is developed that fuses LIDAR, GPS, and IMU data for accurate vehicle pose estimation and map alignment.
  • Lanelet2 Compatibility: Our system converts map outputs into Lanelet2 format for integration with the existing AV software stack.
  • Semi-Automation: We propose a workflow that automates most of the map-making process while allowing targeted human corrections and validations of the results.
  • Scalability and Adaptability: The framework is designed from the ground up to operate across different vehicle platforms and urban environments, thus enabling scalable HD map generation for diverse AV applications.
Our ROS 2-based framework, which addresses the shortcomings of both manual and fully automated mapping approaches, offers a practical and viable solution for producing high-quality HD maps, thus advancing the state of autonomous vehicle navigation.

2. Methodology

2.1. System Architecture

The proposed mapping framework is built on ROS 2 Humble, leveraging its modular, real-time communication capabilities for scalable and distributed processing. The architecture consists of five main components:
  • Sensor Acquisition Node: This captures synchronized data streams from the sensors: a 64-channel Ouster OS1 LIDAR (Ouster, San Francisco, CA, USA), GNSS receivers (NovAtel PwrPak7D-E1; SwiftNav Duro Inertial), and an IMU (LORD MicroStrain 3DM-GX5-AHRS; Williston, VT, USA). Data are published onto ROS 2 topics for consumption by downstream nodes.
  • Pose Estimation Node: This implements LIO-SAM (LIDAR Inertial Odometry via Smoothing and Mapping) for real-time fusion of LIDAR and IMU data, providing accurate vehicle localization. According to our experiments, a GNSS receiver could theoretically provide the same pose information, but a SLAM-based method proved to be more accurate and robust; see Figure 1. The node publishes accurate and drift-minimized 6-DOF vehicle pose estimates at 20 Hz.
  • Semantic Segmentation Node: This utilizes a modified urban_road_filter algorithm to classify and extract road and sidewalk regions from LIDAR point clouds; see Figure 2. Results are published as a ROS 2 topic for feature extraction.
  • Feature Extraction Node: As part of the urban_road_filter ecosystem, this detects curbs and sidewalk boundaries.
  • Map Generation Node: This converts extracted geometric features into Lanelet2 [5,6,7] primitives (such as lanes and sidewalks) and provides a manual correction interface through RViz for human-in-the-loop validation.
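The topic-based wiring of these five nodes can be pictured with a minimal publish/subscribe sketch. This is a plain-Python stand-in for the ROS 2 topic layer, not the actual interface; the node and topic names here are illustrative:

```python
from collections import defaultdict

class Bus:
    """Toy in-process stand-in for the ROS 2 topic layer."""
    def __init__(self):
        self.subs = defaultdict(list)
    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)
    def publish(self, topic, msg):
        for cb in self.subs[topic]:
            cb(msg)

bus = Bus()
log = []

# Downstream nodes register callbacks on the topics they consume.
bus.subscribe("/points_raw", lambda m: log.append(("pose_estimation", m)))
bus.subscribe("/points_raw", lambda m: log.append(("segmentation", m)))
bus.subscribe("/segmented", lambda m: log.append(("feature_extraction", m)))
bus.subscribe("/features", lambda m: log.append(("map_generation", m)))

# One LIDAR frame fans out to both pose estimation and segmentation,
# then flows through feature extraction toward map generation.
bus.publish("/points_raw", "frame_0")
bus.publish("/segmented", "road+sidewalk")
bus.publish("/features", "curb_polyline")
```

The fan-out on `/points_raw` mirrors how the pose estimation and segmentation nodes consume the same sensor stream independently, which is what makes the pipeline modular and distributable.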
The system was deployed on two vehicle platforms: a Nissan Leaf and a Lexus RX450h.

2.2. SLAM Integration

The pose estimation is performed by LIO-SAM, which fuses LIDAR scans and IMU data within a factor graph optimization framework. IMU preintegration compensates for motion distortion in LIDAR scans, while LIDAR odometry extracts edge and planar features for scan-to-map registration. GPS data is integrated at 1 Hz to correct long-term drift and to anchor the trajectory in a global (UTM) coordinate frame. Key parameters include a 10 m search radius for scan matching.
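The role of the 1 Hz GPS factor can be pictured with a 1-D toy drift-correction sketch. This is only an illustration of the idea, not LIO-SAM's factor graph optimization; the function and weight are hypothetical:

```python
def fuse(odom_steps, gps_fixes, gps_weight=0.5):
    """1-D toy: integrate odometry, softly correcting toward sparse GPS.

    odom_steps: per-step displacement estimates (may carry drift)
    gps_fixes: {step_index: absolute position} sparse global fixes
    gps_weight: how strongly each fix pulls the estimate (0..1)
    """
    x = 0.0
    track = []
    for i, dx in enumerate(odom_steps):
        x += dx                       # dead-reckoning accumulates drift
        if i in gps_fixes:            # sparse absolute correction, like a GPS factor
            x += gps_weight * (gps_fixes[i] - x)
        track.append(x)
    return track

# Odometry overestimates each 1 m step by 10%; GPS fixes arrive at steps 4 and 9.
track = fuse([1.1] * 10, {4: 5.0, 9: 10.0})
```

Between fixes the estimate drifts with the odometry bias; each fix pulls it back toward the global measurement, which is the bounded-drift behavior the framework relies on for globally referenced maps.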
Semantic segmentation is conducted by the urban_road_filter node, which processes LIDAR point clouds to segment drivable surfaces. Ground plane estimation is performed using three different methods: Star-Shaped, X-Zero, and Z-Zero.
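The core thresholding idea behind such ground estimation can be sketched as follows. This is a simplified flat-ground illustration, not the published urban_road_filter implementation (the real methods additionally reason over angular LIDAR channels); the parameter values are assumptions:

```python
def classify_ground(points, sensor_height=1.8, z_tolerance=0.15):
    """Split a point cloud into ground and obstacle points by height.

    A point counts as 'ground' if its z coordinate lies within
    z_tolerance of the expected road plane, which sits sensor_height
    below the LIDAR origin.
    """
    ground, obstacles = [], []
    road_z = -sensor_height
    for x, y, z in points:
        if abs(z - road_z) <= z_tolerance:
            ground.append((x, y, z))
        else:
            obstacles.append((x, y, z))
    return ground, obstacles

# Two near-road-level points and one raised point (e.g., a curb top or wall).
cloud = [(5.0, 0.0, -1.8), (5.0, 1.0, -1.7), (6.0, 0.0, -0.5)]
ground, obstacles = classify_ground(cloud)
```

The actual methods refine this per LIDAR channel, but the output contract is the same: a labeled split of the cloud that the feature extraction node can consume.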
Lanelet2 map generation converts the extracted geometric features into the Lanelet2 XML format automatically using custom Python 3.13 scripts. Manual refinement tools allow for the interactive adjustment of lane vertices and the annotation of traffic rules (e.g., speed limits, right-of-way). The final map can be visualized and validated in RViz or Foxglove Studio.
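A minimal sketch of this conversion step is shown below: a Lanelet2 map is OSM-style XML in which two ways (the left and right bounds) reference shared nodes and are joined by a relation tagged `type=lanelet`. This stand-alone example uses the standard library only; the coordinates and ids are illustrative, not output of the actual pipeline:

```python
import xml.etree.ElementTree as ET

def lanelet_xml(left_pts, right_pts):
    """Emit a minimal Lanelet2-style OSM XML document for one lanelet."""
    osm = ET.Element("osm", version="0.6")
    nid = 0

    def add_way(pts, way_id):
        # Create one boundary linestring: nodes plus a way referencing them.
        nonlocal nid
        way = ET.SubElement(osm, "way", id=str(way_id))
        for lat, lon in pts:
            nid += 1
            ET.SubElement(osm, "node", id=str(nid), lat=str(lat), lon=str(lon))
            ET.SubElement(way, "nd", ref=str(nid))
        ET.SubElement(way, "tag", k="type", v="line_thin")
        return way_id

    # The lanelet itself is a relation with 'left' and 'right' way members.
    rel = ET.SubElement(osm, "relation", id="100")
    ET.SubElement(rel, "member", type="way", ref=str(add_way(left_pts, 10)), role="left")
    ET.SubElement(rel, "member", type="way", ref=str(add_way(right_pts, 11)), role="right")
    ET.SubElement(rel, "tag", k="type", v="lanelet")
    return ET.tostring(osm, encoding="unicode")

xml_doc = lanelet_xml([(47.693, 17.627), (47.694, 17.627)],
                      [(47.693, 17.628), (47.694, 17.628)])
```

Attribute enrichment (speed limits, regulatory elements) would be added as further tags on the relation, which is where the manual refinement interface comes in.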
The system achieves six-degree-of-freedom pose updates at 10 Hz through the tight coupling of LIO-SAM’s factor graph and GPS measurements. ROS 2 intra-process communication keeps the latency between LIO-SAM and the segmentation nodes [8] below 5 ms.

3. Discussion

3.1. Significance of Findings

The presented ROS 2-based framework demonstrates a robust and scalable solution for semi-automatic high-definition (HD) vector map creation, directly addressing the core challenges faced by both manual and fully automated mapping approaches in autonomous driving applications. By leveraging the modularity and distributed processing capabilities of ROS 2, the system effectively integrates heterogeneous sensor data—specifically LIDAR, GNSS, and IMU—into a cohesive pipeline that ensures accurate, globally referenced vehicle trajectories; see Figure 3. A key innovation of the framework is its use of LIO-SAM for SLAM-based pose estimation. This integration enables high-frequency, drift-minimized localization by tightly coupling LIDAR and IMU data within a factor graph optimization framework, while periodic GNSS corrections anchor the trajectory in a global coordinate system. The result is a mapping pipeline capable of maintaining a sub-meter absolute positioning error even in complex urban environments, as demonstrated in deployments on both the Nissan Leaf and Lexus RX450h platforms.
The semantic segmentation component, implemented via the urban_road_filter node, reliably extracts drivable areas and sidewalks from raw LIDAR point clouds. This automated segmentation serves as a robust foundation for subsequent feature extraction. Experimental results underscore the adaptability and scalability of the system. The mapping pipeline not only reduced manual annotation time compared to traditional, fully manual methods, but also maintained a high mapping fidelity across different vehicle platforms and urban scenarios. The semi-automatic workflow, which combines automated feature extraction with targeted human-in-the-loop validation, strikes an effective balance between efficiency and quality assurance. This approach is especially valuable in scenarios where fully automated systems may struggle with occlusions, sensor noise, or ambiguous road geometries.
The semi-automatic vector data generated by our ROS 2-based framework exhibited high accuracy in delineating drivable and non-drivable regions, significantly reducing the manual annotation workload. We tested our approach on data collected on our campus; see Figure 4. Results from practical deployment on different vehicle platforms, including the Nissan Leaf and Lexus RX450h, validated the scalability and adaptability of the framework. The integration of human-in-the-loop validation enhanced map fidelity, especially in complex urban environments prone to sensor noise and occlusions. However, as Figure 4 shows, removing the false-positive vector data still takes time. Manual annotation of a 1 km urban road typically requires approximately 5–10 h of human effort; our semi-automated pipeline reduced this to less than 5 h, including human corrections. This could be further reduced with more advanced algorithms.

3.2. Limitations and Future Work

First, the framework remains incomplete, with several essential modules and validations yet to be integrated. This restricts its use to controlled scenarios and hinders scalability for diverse real-world applications. Second, the system is only semi-automatic; although it reduces manual effort, human intervention is still required in several stages, such as feature labeling, the validation of geometries, and the correction of topology. This semi-manual process affects efficiency and reproducibility. Third, the current output does not include Lanelet2-specific connectivity definitions, such as routing graphs, regulatory elements, and adjacent lane relations. These aspects are crucial for downstream applications in traffic simulation, motion planning, and autonomous driving logic. Finally, the generated representation only serves as a foundation for constructing a Lanelet2 map. It lacks compliance with the full Lanelet2 specification, including precise geometry refinement, attribute enrichment (e.g., speed limits, access restrictions), and logical consistency checks.
Future development will focus on achieving fully automated pipeline integration, leveraging deep learning-based detection and classification techniques to minimize human input and possibly improve processing speed. Enhanced post-processing modules will be developed to ensure geometric and topological consistency according to Lanelet2 standards. Additionally, work is needed to extend the export functionality to generate fully connected Lanelet2 maps, including regulatory elements and routing networks. This will enable direct deployment in simulation environments and autonomous vehicle software stacks.

Author Contributions

Conceptualization, A.A. and E.H.; methodology, A.A.; software, A.A. and B.J.B.F.; resources, A.A., E.H. and B.J.B.F.; data curation, A.A.; writing—original draft preparation, E.H. and A.A. All authors have read and agreed to the published version of the manuscript.

Funding

The research was supported by the European Union within the framework of the National Laboratory for Autonomous Systems (RRF-2.3.1-21-2022-00002).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Source code and data are available at https://github.com/jkk-research/urban_road_filter (accessed on 23 May 2025).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Levinson, J.; Askeland, J.; Becker, J.; Dolson, J.; Held, D.; Kammel, S.; Kolter, J.Z.; Langer, D.; Pink, O.; Pratt, V.; et al. Towards Fully Autonomous Driving: Systems and Algorithms. In Proceedings of the IEEE Intelligent Vehicles Symposium, Baden-Baden, Germany, 5–9 June 2011. [Google Scholar]
  2. Urmson, C.; Anhalt, J.; Bagnell, J.; Baker, C.R.; Bittner, R.; Dolan, J.M.; Duggins, D.; Galatali, T.; Geyer, C.; Gowdy, J.; et al. Autonomous Driving in Urban Environments: Boss and the Urban Challenge. J. Field Robot. 2008, 25, 425–466. [Google Scholar] [CrossRef]
  3. Varga, B.; Yang, D.; Martin, M.; Hohmann, S. Cooperative Decision-Making in Shared Spaces: Making Urban Traffic Safer through Human-Machine Cooperation. In Proceedings of the 2023 IEEE 21st Jubilee International Symposium on Intelligent Systems and Informatics (SISY), Pula, Croatia, 21–23 September 2023; pp. 109–114. [Google Scholar]
  4. Pomerleau, F.; Colas, F.; Siegwart, R. Review of Point Cloud Registration Algorithms for Mobile Robotics. Found. Trends Robot. 2015, 4, 1–104. [Google Scholar] [CrossRef]
  5. Bender, P.; Ziegler, J.; Stiller, C. Lanelets: Efficient Map Representation for Autonomous Driving. In Proceedings of the IEEE Intelligent Vehicles Symposium, Dearborn, MI, USA, 8–11 June 2014. [Google Scholar]
  6. Poggenhans, F.; Pauls, J.-H.; Janosovits, J.; Orf, S.; Naumann, M.; Kuhnt, F.; Mayr, M. Lanelet2: A High-Definition Map Framework for the Future of Automated Driving. In Proceedings of the 21st IEEE International Conference on Intelligent Transportation Systems (ITSC), Maui, HI, USA, 4–7 November 2018; pp. 1672–1679. [Google Scholar]
  7. Naumann, M.; Poggenhans, F.; Kuhnt, F.; Mayr, M. Lanelet2 for nuScenes: Enabling Spatial Semantic Relationships and Diverse Map-Based Anchor Paths. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), Vancouver, BC, Canada, 18–22 June 2023; pp. 3249–3258. [Google Scholar]
  8. Behley, J.; Milioto, A.; Stachniss, C. A Benchmark for LIDAR-based Semantic Segmentation. In Proceedings of the IEEE International Conference on Robotics and Automation, Montreal, QC, Canada, 20–24 May 2019. [Google Scholar]
  9. Bai, L.; Zhang, X.; Wang, H.; Du, S. Integrating remote sensing with OpenStreetMap data for comprehensive scene understanding through multi-modal self-supervised learning. Remote Sens. Environ. 2025, 318, 114573. [Google Scholar] [CrossRef]
Figure 1. Global map generated using LIO-SAM SLAM method for Nissan Leaf around the campus.
Figure 2. Urban_road_filter segmentation process.
Figure 3. System architecture.
Figure 4. Example result of the semi-automatic vector data on the left and fitted to openstreetmap [9] as a visual representation on the right.