Proceeding Paper

Enhancing Autonomous Navigation: Real-Time LIDAR Detection of Roads and Sidewalks in ROS 2 †

by Barham Jeries Barham Farraj, Abdelrahman Alabdallah, Miklós Unger and Ernő Horváth *
Vehicle Industry Research Center, Széchenyi István University, H-9026 Győr, Hungary
* Author to whom correspondence should be addressed.
Presented at the Sustainable Mobility and Transportation Symposium 2025, Győr, Hungary, 16–18 October 2025.
Eng. Proc. 2025, 113(1), 24; https://doi.org/10.3390/engproc2025113024
Published: 31 October 2025

Abstract

Autonomous navigation in urban environments demands robust real-time detection of drivable surfaces from high-throughput LIDAR data. While the majority of current approaches rely on camera-based or multi-sensor fusion systems, this paper introduces an enhancement of our previous LIDAR-centric solution integrated within the Robot Operating System 2 (ROS 2) framework to address computational efficiency and precision challenges. We propose a parallelized algorithm suite for LIDAR-based road and sidewalk detection, achieving processing rates exceeding 20 Hz. Validation on the KITTI benchmark and our own datasets demonstrates improved accuracy in complex urban scenarios compared to traditional ground-filtering techniques. To foster reproducibility, the ROS 2-compliant implementation, datasets, and evaluation scripts are publicly released. This work underscores the potential of LIDAR sensors coupled with modern robotic frameworks to enhance perception pipelines in autonomous systems.

1. Introduction

Autonomous navigation in urban environments is a complex problem due to the multiplicity and density of dynamic agents, the variety of road types, and the unstructured infrastructure available to pedestrians. Unlike in highway scenarios, occlusions in urban settings are common, curbs may be irregular or absent, sidewalks may have varying heights, and objects may appear unexpectedly, all of which require perception systems with a very high degree of precision. Thus, to navigate safely and competently, autonomous vehicles must possess real-time scene-understanding capabilities that are computationally efficient and robust under variable environmental circumstances.
One of the important building blocks in a perception stack is the accurate and consistent detection of road and sidewalk boundaries. Classical vision-based methods perform well under ideal lighting but are susceptible to shadows, nighttime conditions, and occlusions, while powerful multi-sensor fusion frameworks carry additional complexity in synchronization and calibration. LIDAR sensors [1,2,3], in contrast, produce high-spatial-resolution measurements that are independent of ambient lighting, which makes LIDAR [4] well suited to road-surface analysis in real urban driving contexts. To further support real-world application, the system has been evaluated not only on simulated datasets but also in live testing scenarios across diverse urban layouts. These tests revealed the method’s robustness in handling irregular curbs, occluded features, and varied surface textures without relying on supplementary sensor inputs. The system’s adaptability to unseen environments reinforces its potential utility in scalable autonomous vehicle deployments, including last-mile delivery robots and autonomous urban shuttles.
The present paper builds on those previous insights and proposes a fully LIDAR-based approach to road and sidewalk detection that overcomes the shortcomings of earlier methodologies. A parallel algorithmic approach for analyzing voxelized point clouds and producing a polygonal representation is adopted within the ROS 2 framework. The system delivers accurate outputs at real-time rates greater than 20 Hz. The design is computationally inexpensive and modular, which permits its use in embedded application scenarios where resources are limited. By making use of ROS 2’s enhanced data handling and real-time features, the proposed solution aligns with the requirements of current autonomous platforms and offers the robotics research community the benefit of reproducibility.

2. Methodology

The proposed system, named urban_road_filter, is designed to detect and classify road and sidewalk regions using only raw LIDAR data. The pipeline prioritizes computational efficiency and real-time performance, with all components implemented as modular nodes in ROS 2. A key innovation lies in the use of three parallel curb detection algorithms, each operating on distinct geometric principles, to enhance detection robustness under variable urban scenarios.

2.1. System Architecture

The system accepts a ROS 2-native point cloud stream from a LIDAR sensor and performs the following processing steps (Figure 1):
  • Voxelization of the 3D space for structured data manipulation [5,6,7,8,9].
  • Region of Interest (ROI) Filtering to discard irrelevant areas based on user-defined X, Y, and Z bounds.
  • Parallel Feature Extraction using the Star-Shaped, X-Zero, and Z-Zero algorithms.
  • Curb Point Aggregation via logical disjunction of candidate outputs.
  • Polygonal road modeling to obtain a simplified representation using a lightweight boundary-fitting algorithm (Douglas–Peucker) [10].
The final output is a simplified polygon delineating drivable and non-drivable areas, which can be published as a MarkerArray in RViz2 or passed to local planners.
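To make the ROI-filtering and voxelization steps concrete, the following is a minimal sketch in Python, assuming the incoming point cloud has already been converted to an N × 3 NumPy array; the voxel size and ROI bounds are illustrative values, not the package’s defaults.

```python
import numpy as np

def roi_filter(points: np.ndarray, bounds) -> np.ndarray:
    """Keep points inside user-defined X, Y, and Z bounds (points: N x 3)."""
    (xmin, xmax), (ymin, ymax), (zmin, zmax) = bounds
    mask = (
        (points[:, 0] >= xmin) & (points[:, 0] <= xmax) &
        (points[:, 1] >= ymin) & (points[:, 1] <= ymax) &
        (points[:, 2] >= zmin) & (points[:, 2] <= zmax)
    )
    return points[mask]

def voxelize(points: np.ndarray, voxel_size: float = 0.2) -> np.ndarray:
    """Snap points to a regular grid, keeping one representative per voxel."""
    keys = np.floor(points / voxel_size).astype(np.int64)
    _, first = np.unique(keys, axis=0, return_index=True)
    return points[np.sort(first)]

# Illustrative use: 40 m ahead, 20 m to each side, a band around ground level.
cloud = np.random.uniform(-30.0, 30.0, size=(10000, 3)).astype(np.float32)
roi = roi_filter(cloud, bounds=((0.0, 40.0), (-20.0, 20.0), (-2.0, 0.5)))
structured = voxelize(roi, voxel_size=0.2)
```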

2.2. Sidewalk Detection

The first method, the Star-Shaped Search, divides the LIDAR scan into radial segments resembling a star. In each segment, it identifies curb features based on relative elevation changes and spatial density. Its cylindrical coordinate approach makes it robust to sensor tilt and surface irregularities.
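As a rough illustration of this idea rather than the package’s exact implementation, a radial-sector scan for elevation jumps could look like the sketch below; the sector count and height-jump threshold are assumed values.

```python
import numpy as np

def star_shaped_curb_candidates(points: np.ndarray,
                                n_sectors: int = 180,
                                height_jump: float = 0.08) -> np.ndarray:
    """Flag curb candidates as elevation jumps along radial sectors.

    points: N x 3 array (x, y, z) in the sensor frame.
    Returns a boolean mask over the input points.
    """
    azimuth = np.arctan2(points[:, 1], points[:, 0])   # angle around the sensor
    radius = np.hypot(points[:, 0], points[:, 1])      # planar distance
    sector = ((azimuth + np.pi) / (2.0 * np.pi) * n_sectors).astype(int) % n_sectors

    mask = np.zeros(len(points), dtype=bool)
    for s in range(n_sectors):
        idx = np.where(sector == s)[0]
        if len(idx) < 2:
            continue
        idx = idx[np.argsort(radius[idx])]             # walk outward from the sensor
        dz = np.abs(np.diff(points[idx, 2]))           # elevation change step by step
        mask[idx[1:][dz > height_jump]] = True         # a sharp step suggests a curb
    return mask
```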
The second, the X-Zero method, analyzes point continuity across horizontal LIDAR scan rings. It computes angle thresholds from triangular formations across voxels to detect the sharp elevation transitions typical of curb structures.
The third, the Z-Zero method, detects curb features by identifying vertical discontinuities across adjacent voxel columns. To quantify these discontinuities, the algorithm computes the angular deviation between neighboring vectors derived from sequential LiDAR returns. Specifically, given two adjacent vectors $\mathbf{v}_1$ and $\mathbf{v}_2$, the angle $\theta$ between them is calculated as follows:
$$\theta = \cos^{-1}\left(\frac{\mathbf{v}_1 \cdot \mathbf{v}_2}{\lVert \mathbf{v}_1 \rVert \, \lVert \mathbf{v}_2 \rVert}\right)$$
This formulation enables the detection of sharp elevation transitions, which are characteristic of sidewalk curbs or step changes in surface geometry.
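Written out directly from the formula above, the per-pair angle test is short; the clipping guard and the example vectors are illustrative additions.

```python
import numpy as np

def vector_angle(v1: np.ndarray, v2: np.ndarray) -> float:
    """Angle between two adjacent LIDAR return vectors, per the formula above."""
    cos_theta = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return float(np.arccos(np.clip(cos_theta, -1.0, 1.0)))  # clip guards rounding

# A vertical step between consecutive returns yields a larger angle than a flat one:
flat = vector_angle(np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.1, 0.0]))
curb = vector_angle(np.array([1.0, 0.0, 0.0]), np.array([1.0, 0.0, 0.15]))
assert curb > flat
```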
Finally, the outputs of all three methods are logically merged. Although some false positives may persist, they do not affect the final road polygon due to filtering based on the first valid boundary point encountered.
To strengthen the detection reliability, three geometric-based algorithms are executed in parallel and fused using a logical OR operation. The Star-Shaped method divides the scan into radial sectors and detects height jumps, making it robust to perspective and pitch variance. The X-Zero method constructs voxel-based triangles horizontally to detect discontinuities, while the Z-Zero method computes angular deviation in vertical voxel stacks. This complementary design improves detection under uneven terrain or noise. False positives—e.g., poles or pedestrians—are reduced through a filter that only selects the first boundary point per segment for final polygon construction. This technique ensures consistency in the presence of clutter and enhances real-world reliability.
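A sketch of this fusion-and-filtering stage, together with the Douglas–Peucker simplification from Section 2.1, reusing the array conventions of the earlier sketches; the sector count, tolerance, and helper names are assumptions, not the package’s actual code.

```python
import numpy as np

def fuse_candidates(star, x_zero, z_zero):
    """Logical OR over the three boolean candidate masks."""
    return star | x_zero | z_zero

def first_boundary_per_sector(points, mask, n_sectors=180):
    """Keep only the nearest curb candidate in each radial sector.

    Farther candidates (poles, pedestrians) cannot distort the polygon,
    because the boundary is fixed by the first valid point going outward.
    """
    azimuth = np.arctan2(points[:, 1], points[:, 0])
    radius = np.hypot(points[:, 0], points[:, 1])
    sector = ((azimuth + np.pi) / (2.0 * np.pi) * n_sectors).astype(int) % n_sectors
    boundary = []
    for s in range(n_sectors):
        idx = np.where(mask & (sector == s))[0]
        if len(idx):
            boundary.append(points[idx[np.argmin(radius[idx])]])
    return np.array(boundary)

def douglas_peucker(pts: np.ndarray, eps: float = 0.3) -> np.ndarray:
    """Classic recursive line simplification [10] on 2D (x, y) boundary points."""
    if len(pts) < 3:
        return pts
    start, end = pts[0], pts[-1]
    seg = end - start
    # Perpendicular distance of every point to the start-end chord.
    d = np.abs(seg[0] * (pts[:, 1] - start[1]) - seg[1] * (pts[:, 0] - start[0]))
    d = d / (np.linalg.norm(seg) + 1e-12)
    i = int(np.argmax(d))
    if d[i] <= eps:
        return np.vstack([start, end])
    # Split at the farthest point and simplify each half.
    return np.vstack([douglas_peucker(pts[:i + 1], eps)[:-1],
                      douglas_peucker(pts[i:], eps)])

if __name__ == "__main__":
    # Synthetic demo: treat all points above a small height as curb candidates.
    rng = np.random.default_rng(0)
    pts = rng.uniform(-20.0, 20.0, size=(5000, 3))
    m = fuse_candidates(pts[:, 2] > 0.10, pts[:, 2] > 0.12, pts[:, 2] > 0.15)
    polygon = douglas_peucker(first_boundary_per_sector(pts, m)[:, :2], eps=0.3)
```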

2.3. ROS 2 Integration

The system is developed using ROS 2 Humble under Ubuntu 22.04 and supports real-time visualization in RViz2 and Foxglove Studio. Parameters such as angular resolution, ROI bounds, and detection thresholds are dynamically configurable via launch files or parameter servers. The codebase is open-source and available on GitHub, allowing community contributions and benchmarking.
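As an illustration of this runtime configurability, a minimal rclpy node with dynamically updatable parameters might look as follows; the parameter names are illustrative stand-ins, not the package’s actual parameter set.

```python
import rclpy
from rclpy.node import Node
from rcl_interfaces.msg import SetParametersResult

class UrbanRoadFilterParams(Node):
    """Stand-in node that declares runtime-tunable detection parameters."""

    def __init__(self):
        super().__init__('urban_road_filter_params')
        self.declare_parameter('curb_height', 0.05)         # metres
        self.declare_parameter('roi_x_max', 40.0)           # metres ahead of the car
        self.declare_parameter('angular_resolution', 2.0)   # degrees per sector
        # React to `ros2 param set ...` at runtime without restarting the node.
        self.add_on_set_parameters_callback(self._on_update)

    def _on_update(self, params):
        for p in params:
            self.get_logger().info(f'parameter {p.name} -> {p.value}')
        return SetParametersResult(successful=True)

def main():
    rclpy.init()
    rclpy.spin(UrbanRoadFilterParams())

if __name__ == '__main__':
    main()
```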

3. Results

The proposed LIDAR-based road and sidewalk detection pipeline was implemented and tested using both synthetic and real-world point cloud datasets. The system was developed within the ROS 2 (Humble) framework and evaluated in terms of processing speed, accuracy, and robustness under urban environment conditions. All experiments were conducted using a desktop-class computing setup, and visualization was performed via RViz2 (Figure 2) and Foxglove Studio.
The system consistently achieved real-time performance, maintaining frame rates above 20 Hz, which satisfies the operational requirements of onboard autonomous vehicle applications. The detection output consisted of both classified point clouds and a simplified polygonal model of drivable areas, which was directly usable by local planners. An example of the system’s output is illustrated in Figure 2, where both the detected curb boundaries and final polygonal segmentation are visualized in a ROS 2 environment.

3.1. Evaluation

The system was validated on a mix of custom-collected and benchmark datasets, including select urban sequences from the KITTI dataset. Performance was measured across three dimensions: computational speed, robustness, and the geometric fidelity of the extracted road polygons.
  • Speed: The entire pipeline, including voxelization, feature extraction, and polygon modeling, maintained execution times below 45 ms per frame on average, thereby supporting continuous operation at over 20 Hz.
  • Robustness: The combination of the Star-Shaped, X-Zero, and Z-Zero algorithms provided complementary strengths, allowing the system to generalize across uneven curbs, occlusions, and shallow sidewalk geometries.
Qualitative results showed improved delineation of road and sidewalk boundaries, particularly in scenes with uneven pavement or poor lighting, scenarios in which camera-based systems underperform. Additionally, performance benchmarking was conducted by running the algorithm directly on the local machine without GPU acceleration. On the i7 CPU alone, the method achieved rates sufficient for real-time operation, averaging around 40 Hz in log-file-based analysis, twice the LiDAR frame rate. This confirmed that the solution is efficient enough for deployment on consumer-grade hardware, even without relying on high-end embedded computing systems. The modular architecture further enabled easy integration with other planning and localization modules in ROS 2.
Extensive validation using KITTI urban sequences and our own dataset revealed that the system achieves ~40 Hz processing without GPU support, exceeding the sensor rate and meeting real-time needs. False positives mainly appeared near vertical structures or moving entities, but their influence on the final polygon was minimized by segment-based filtering. These results confirm robustness across occlusions, variable curb heights, and sidewalk ambiguities. The polygon-based representation integrates seamlessly with ROS 2 planning and navigation frameworks.

3.2. Limitations and Future Work

Although the proposed LIDAR-based detection pipeline achieves accurate and real-time road and sidewalk segmentation, it has certain limitations that merit consideration and guide future development.
The system operates under the assumption of a relatively static urban environment. While it performs reliably in structured scenarios, dynamic scenes involving moving pedestrians, cyclists, or vehicles can introduce segmentation noise. These transient elements may be incorrectly classified as curbs or road features, especially when their motion creates elevation artifacts in the LIDAR scan. Incorporating dynamic scene understanding—through temporal filtering, motion compensation, or short-horizon tracking—could significantly improve detection stability. In particular, predictive modeling of curb and road edge continuity offers a promising approach to filtering out false detections in the presence of dynamic obstacles.
Another limitation is the need for manual parameter tuning, such as setting appropriate curb height thresholds or defining region-of-interest bounds [11,12,13]. These parameters may not generalize well across cities or deployment scenarios. Future work could investigate lightweight, adaptive methods that auto-tune based on the observed LIDAR data distribution, improving deployment scalability without relying on high computing overhead.
The current system is designed to be computationally efficient, and initial tests indicate that it performs well on desktop-class hardware. While the target platform is an automotive-grade system such as that available in the Lexus vehicle used for testing, future work should also consider power optimization and resource-aware scheduling to maintain performance under constrained conditions without excessive thermal or battery load. In summary, future directions will prioritize:
  • Enhancing robustness in dynamic environments via noise filtering and predictive edge tracking;
  • Improving parameter adaptability to reduce manual tuning;
  • Optimizing the system for efficient, reliable deployment on in-vehicle hardware.
These refinements aim to further align the system with real-world operational demands in autonomous driving applications. Our current approach is tailored for autonomous vehicle applications; however, it holds potential for future extension to domains such as environmental monitoring and high-definition map generation. Further research efforts will focus on addressing the limitations outlined in this study. As with most software projects, the presented codebase is expected to evolve over time. To support continued development and reproducibility, we have made the source code, comprehensive documentation, usage guidelines, and a code of conduct publicly available, see Data Availability Statement.
The modular design enables use in fallback or degraded perception states. Parameters like ROI size or detection thresholds can be updated in real time to widen detection sensitivity during uncertainty. Moreover, the polygon output is suitable for integration with behavior and decision-making modules in ROS 2. The absence of vision sensors and deep learning makes the system lightweight and easy to deploy, but at the cost of reduced semantic richness.
Nonetheless, this trade-off aligns with the goals of embedded, power-constrained AV systems operating in structured urban environments.

Author Contributions

Conceptualization, M.U. and E.H.; methodology idea, B.J.B.F. and E.H.; software, B.J.B.F.; data curation, B.J.B.F. and A.A.; resources, A.A., E.H. and B.J.B.F.; writing—original draft preparation, E.H., M.U. and B.J.B.F. All authors have read and agreed to the published version of the manuscript.

Funding

The publication was created in the framework of the Széchenyi István University’s VHFO/416/2023-EM_SZERZ project entitled “Preparation of digital and self-driving environmental infrastructure developments and related research to reduce carbon emissions and environmental impact” (Green Traffic Cloud).

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

Source code and data are available at https://github.com/jkk-research/urban_road_filter (accessed on 23 May 2025).

Conflicts of Interest

The authors declare no conflict of interest.

References

  1. Li, Y.; Ma, L.; Zhong, Z.; Liu, F.; Chapman, M.A.; Cao, D.; Li, J. Deep learning for LiDAR point clouds in autonomous driving: A review. IEEE Trans. Neural Netw. Learn. Syst. 2021, 32, 3412–3432. [Google Scholar] [CrossRef] [PubMed]
  2. Yang, G.; Mentasti, S.; Bersani, M.; Wang, Y.; Braghin, F.; Cheli, F. LiDAR point-cloud processing based on projection methods: A comparison. arXiv 2020, arXiv:2008.00706. [Google Scholar]
  3. Liu, H.; Wu, C.; Wang, H. Real-time object detection using LiDAR and camera fusion for autonomous driving. Sci. Rep. 2023, 13, 8942. [Google Scholar] [CrossRef] [PubMed]
  4. Royo, S.; Ballesta-Garcia, M. An overview of LiDAR imaging systems for autonomous vehicles. Appl. Sci. 2019, 9, 4093. [Google Scholar] [CrossRef]
  5. Zhou, Y.; Tuzel, O. VoxelNet: End-to-End learning for point cloud-based 3D object detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Salt Lake City, UT, USA, 18–22 June 2018; pp. 4490–4499. [Google Scholar]
  6. Huang, Z.; Huang, Y.; Zheng, Z.; Hu, H.; Chen, D. HybridPillars: Hybrid Point-Pillar Network for real-time two-stage 3D object detection. IEEE Sens. J. 2024, 24, 11853–11863. [Google Scholar] [CrossRef]
  7. Zhao, Z.; Chen, K.; Yamane, S. CBAM-Unet++: Easier to find the target with the attention module “CBAM”. In Proceedings of the 2021 IEEE Global Conference on Consumer Electronics (GCCE), Osaka, Japan, 12–15 October 2021; pp. 655–657. [Google Scholar]
  8. Das, D.; Adhikary, N.; Chaudhury, S. Sensor fusion in autonomous vehicle using LiDAR and camera sensor. In Proceedings of the IEEE Region 10 Humanitarian Technology Conference (R10-HTC), Hyderabad, India, 30 September–2 October 2022; pp. 336–341. [Google Scholar]
  9. Zhang, Y.; Wang, J.; Wang, X.; Dolan, J.M. Road-segmentation-based curb detection method for self-driving via a 3D-LiDAR sensor. IEEE Trans. Intell. Transp. Syst. 2018, 19, 3981–3991. [Google Scholar] [CrossRef]
  10. Douglas, D.H.; Peucker, T.K. Algorithms for the reduction of the number of points required to represent a digitized line or its caricature. Cartographica 1973, 10, 112–122. [Google Scholar] [CrossRef]
  11. Horváth, E.; Pozna, C.; Unger, M. Real-time LIDAR-based urban road and sidewalk detection for autonomous vehicles. Sensors 2022, 22, 10194. [Google Scholar] [CrossRef]
  12. Rezaei, M. Computer Vision for Road Safety: A System for Simultaneous Monitoring of Driver Behaviour and Road Hazards; Springer: Cham, Switzerland, 2014. [Google Scholar]
  13. Murray, R.M.; Li, Z.; Sastry, S.S. A Mathematical Introduction to Robotic Manipulation; CRC Press: Boca Raton, FL, USA, 2017. [Google Scholar]
Figure 1. Processing Pipeline for LIDAR-Based Urban Surface Segmentation Using Star-Shaped, X-Zero, and Z-Zero Methods.
Figure 2. Real-world measurement results. The left side displays the camera image for context, while the right side shows the corresponding road point cloud. The red marker indicates the boundary between the road and the sidewalk, and the green marker represents the end of the sensing range.
