S3PM: Entropy-Regularized Path Planning for Autonomous Mobile Robots in Dense 3D Point Clouds of Unstructured Environments
Abstract
1. Introduction
2. Materials and Methods
2.1. Related Work
2.2. S3PM Explanation
2.2.1. Entropy-Guided Environment Representation
- Along the ray, up to the first confident surface return, free-space evidence is accumulated;
- At the surface voxel, the evidence is split between the occupied and dynamic states in proportion to the estimated dynamic likelihood;
- Voxels never intersected by valid rays retain their unknown mass.
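As a sketch, the per-ray evidence assignment above might be implemented as follows. The four-state evidence masses, the fixed `EVIDENCE_GAIN`, and the `p_dyn` input are illustrative assumptions, not the paper's exact update rule.

```python
from collections import defaultdict

# Hypothetical per-ray evidence increment; the paper's actual gain may differ.
EVIDENCE_GAIN = 0.1

def make_map():
    # Every voxel starts with all evidence mass on "unknown".
    return defaultdict(lambda: {"free": 0.0, "occupied": 0.0,
                                "dynamic": 0.0, "unknown": 1.0})

def integrate_ray(vox_map, ray_voxels, p_dyn):
    """Add free-space evidence along the ray, then split the endpoint
    evidence between 'occupied' and 'dynamic' by the dynamic likelihood."""
    *path, endpoint = ray_voxels
    for v in path:                      # free space up to the first surface
        cell = vox_map[v]
        cell["free"] += EVIDENCE_GAIN
        cell["unknown"] = max(0.0, cell["unknown"] - EVIDENCE_GAIN)
    cell = vox_map[endpoint]            # surface voxel: occupied vs. dynamic
    cell["occupied"] += (1.0 - p_dyn) * EVIDENCE_GAIN
    cell["dynamic"] += p_dyn * EVIDENCE_GAIN
    cell["unknown"] = max(0.0, cell["unknown"] - EVIDENCE_GAIN)
    return vox_map
```

Voxels that no valid ray traverses are simply never touched, so they keep their initial unknown mass, matching the third rule above.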
2.2.2. Probabilistic Updates and the Dynamic-Aware Entropy Field
- All voxels along the ray path (excluding the endpoint) receive evidence assigned to the free state;
- At the endpoint voxel, the evidence is split between static and dynamic occupancy based on the dynamic likelihood derived in (5);
- Because moving agents are spatially unpredictable (e.g., motion blur or rapid position changes), the unknown evidence is sharply increased in the immediate neighborhood of any voxel identified as dynamic.
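A minimal sketch of the two ingredients described above: a per-voxel Shannon entropy over the state distribution, and the inflation of unknown evidence around confidently dynamic voxels. The four-state set, the dynamic threshold, the neighborhood radius, and the boost value are assumptions for illustration.

```python
import math

STATES = ("free", "occupied", "dynamic", "unknown")  # assumed state set

def voxel_entropy(masses):
    """Shannon entropy (bits) of a voxel's normalized state distribution."""
    total = sum(masses[s] for s in STATES)
    h = 0.0
    for s in STATES:
        p = masses[s] / total
        if p > 0.0:
            h -= p * math.log2(p)
    return h

def inflate_dynamic_neighbors(vox_map, voxel, radius=1, boost=0.2,
                              dyn_threshold=0.5):
    """If a voxel is confidently dynamic, raise the 'unknown' evidence in its
    immediate neighborhood; radius, boost, and threshold are illustrative."""
    cell = vox_map[voxel]
    total = sum(cell.values())
    if cell["dynamic"] / total <= dyn_threshold:
        return
    x, y, z = voxel
    for dx in range(-radius, radius + 1):
        for dy in range(-radius, radius + 1):
            for dz in range(-radius, radius + 1):
                if (dx, dy, dz) == (0, 0, 0):
                    continue
                vox_map[(x + dx, y + dy, z + dz)]["unknown"] += boost
```

This pairs naturally with a map that auto-initializes unseen voxels to all-unknown (e.g., a `defaultdict`): a uniform four-state distribution scores the maximum 2 bits of entropy, while a fully decided voxel scores 0.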
2.2.3. Navigability Classification from Entropy and Risk
2.2.4. Incremental Map Expansion and Uncertainty Reduction
2.2.5. Multi-Criteria Path Optimization with Entropy Regularization
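The section title, together with the four weight columns of the ablation table in Section 3, suggests a weighted combination of path length, obstacle clearance, expected information gain (entropy reduction), and smoothness. A minimal sketch under that assumption follows; the linear form, the reciprocal clearance penalty, and the default weights are illustrative, not the paper's exact objective.

```python
def path_cost(length_m, min_obstacle_dist_m, expected_entropy_reduction,
              smoothness_rad, w=(0.25, 0.25, 0.25, 0.25)):
    """Weighted multi-criteria path cost: shorter, safer (larger clearance),
    more informative (larger expected entropy reduction), and smoother
    candidate paths all score lower."""
    w_len, w_safe, w_info, w_smooth = w
    return (w_len * length_m
            + w_safe / max(min_obstacle_dist_m, 1e-6)  # penalize low clearance
            - w_info * expected_entropy_reduction       # reward exploration
            + w_smooth * smoothness_rad)                # penalize sharp turns
```

Raising one weight relative to the others trades off the corresponding criterion, which is the pattern the ablation rows in the results table probe.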
3. Results
3.1. Accuracy of Entropy-Regularized Representation
3.2. S3PM Efficiency
- Raspberry Pi 5 (Broadcom BCM2712 ARM Cortex-A76 CPU @ 2.4 GHz, 8 GB LPDDR4X RAM, manufactured by Sony UK Technology);
- Raspberry Pi 5 + Hailo-8 AI Kit (the same platform as above, plus 26 TOPS of NPU acceleration, manufactured by Hailo.ai);
- Desktop reference system (Intel Core i7-12700K CPU @ 3.6 GHz, 32 GB DDR5 RAM, NVIDIA RTX 3080 GPU).
4. Conclusions
- An 18–27% higher IoU for static/dynamic segmentation;
- A 0.94–0.97 AUC for motion detection;
- A 30–45% reduction in collision events;
- Sustained real-time performance of 12–15 Hz on the bare Raspberry Pi 5 and 25–30 Hz with NPU offloading;
- Memory footprint below 1 GB for 20 × 20 × 6 m rolling maps.
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
References
- Karaman, S.; Frazzoli, E. Sampling-Based Algorithms for Optimal Motion Planning. Int. J. Robot. Res. 2011, 30, 846–894.
- Hornung, A.; Wurm, K.M.; Bennewitz, M.; Stachniss, C.; Burgard, W. OctoMap: An Efficient Probabilistic 3D Mapping Framework Based on Octrees. Auton. Robot. 2013, 34, 189–206.
- Wijayathunga, L.; Rassau, A.; Chai, D. Challenges and Solutions for Autonomous Ground Robot Scene Understanding and Navigation in Unstructured Outdoor Environments: A Review. Appl. Sci. 2023, 13, 9877.
- Elfes, A. Using Occupancy Grids for Mobile Robot Perception and Navigation. Computer 1989, 22, 46–57.
- Sánchez-Ibáñez, J.R.; Pérez-del-Pulgar, C.J.; García-Cerezo, A. Path Planning for Autonomous Mobile Robots: A Review. Sensors 2021, 21, 7898.
- Thrun, S.; Burgard, W.; Fox, D. Probabilistic Robotics; MIT Press: Cambridge, MA, USA, 2005.
- Wang, E.; Huo, W.; Xu, S.; Qu, P.; Na, L. Autonomous Navigation of Indoor Wheeled Robots Based on Improved Gmapping and Improved Bidirectional A*. Discov. Robot. 2025, 1, 7.
- Faroni, M.; Pedrocchi, N.; Beschi, M. Adaptive Hybrid Local–Global Sampling for Fast Informed Sampling-Based Optimal Path Planning. Auton. Robot. 2024, 48, 6.
- Huang, Y.; Huang, S.; Wang, H.; Meng, R. 3D Path Planning and Obstacle Avoidance Algorithms for Obstacle-Overcoming Robots. arXiv 2022, arXiv:2209.00871.
- Cui, C.; Wang, Z.; Sui, J.; Zhang, Y.; Guo, C. An Improved RRT Behavioral Planning Method for Robots Based on PTM Algorithm. Sci. Rep. 2024, 14, 21776.
- Zhang, Y.; Wang, H.; Yin, M.; Wang, J.; Hua, C. Bi-AM-RRT*: A Fast and Efficient Sampling-Based Motion Planning Algorithm in Dynamic Environments. IEEE Trans. Intell. Veh. 2023, 9, 1282–1293.
- Xu, T. Recent Advances in Rapidly-Exploring Random Tree: A Review. Heliyon 2024, 10, e32451.
- Yang, L.; Li, P.; Qian, S.; Quan, H.; Miao, J.; Liu, M.; Hu, Y.; Memetimin, E. Path Planning Technique for Mobile Robots: A Review. Machines 2023, 11, 980.
- Qin, H.; Shao, S.; Wang, T.; Yu, X.; Jiang, Y.; Cao, Z. Review of Autonomous Path Planning Algorithms for Mobile Robots. Drones 2023, 7, 211.
- Abu-Jabal, N.; Baziyad, M.; Fareh, R.; Brahmi, B.; Rabie, T.; Bettayeb, M. A Comprehensive Study of Recent Path-Planning Techniques in Dynamic Environments for Autonomous Robots. Sensors 2024, 24, 8089.
- Rösmann, C.; Hoffmann, F.; Bertram, T. Integrated Online Trajectory Planning and Optimization in Distinctive Topologies. Robot. Auton. Syst. 2017, 88, 142–153.
- Roesmann, C.; Feiten, W.; Woesch, T.; Hoffmann, F.; Bertram, T. Trajectory Modification Considering Dynamic Constraints of Autonomous Robots. In Proceedings of the German Conference on Robotics (ROBOTIK 2012), Munich, Germany, 21–22 May 2012; pp. 1–6.
- Szabolcsi, R. Model Predictive Control Applied in UAV Flight Path Tracking Missions. Rev. Air Force Acad. 2019, 17, 49–62.
- Williams, G.; Drews, P.; Goldfain, B.; Rehg, J.M.; Theodorou, E.A. Aggressive Driving with Model Predictive Path Integral Control. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Stockholm, Sweden, 16–21 May 2016.
- Fox, D.; Burgard, W.; Thrun, S. The Dynamic Window Approach to Collision Avoidance. IEEE Robot. Autom. Mag. 1997, 4, 23–33.
- Rosinol, A.; Violette, A.; Abate, M.; Hughes, N.; Chang, Y.; Shi, J.; Gupta, A.; Carlone, L. Kimera: From SLAM to Spatial Perception with 3D Dynamic Scene Graphs. Int. J. Robot. Res. 2021, 40, 1510–1546.
- Tai, L.; Paolo, G.; Liu, M. Virtual-To-Real Deep Reinforcement Learning: Continuous Control of Mobile Robots for Mapless Navigation. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Brisbane, Australia, 21–25 May 2018.
- Costante, G.; Ciarfuglia, T.A. LS-VO: Learning Dense Optical Subspace for Robust Visual Odometry Estimation. IEEE Robot. Autom. Lett. 2018, 3, 1735–1742.
- Dellaert, F.; Kaess, M. Factor Graphs for Robot Perception. Found. Trends Robot. 2017, 6, 1–139.
- Ishida, H.; Tanaka, H.; Taniguchi, H.; Moriizumi, T. Mobile Robot Navigation Using Vision and Olfaction to Search for a Gas/Odor Source. In Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Sendai, Japan, 28 September–2 October 2004; IEEE: Piscataway, NJ, USA, 2005; pp. 3978–3983.
- Khan, D.; Cheng, Z.; Uchiyama, H.; Ali, S.; Asghar, M.; Kiyokawa, K. Recent Advances in Vision-Based Indoor Navigation: A Systematic Literature Review. Comput. Graph. 2022, 104, 24–45.
- Zhang, Y.; Chen, P. Path Planning of a Mobile Robot for a Dynamic Indoor Environment Based on an SAC-LSTM Algorithm. Sensors 2023, 23, 9802.
- Thomas, H.; Zhang, J.; Barfoot, T.D. The Foreseeable Future: Self-Supervised Learning to Predict Dynamic Scenes for Indoor Navigation. arXiv 2022, arXiv:2208.12602.
- Slavova, A.; Hristov, V. Mapless Navigation with Deep Reinforcement Learning in Indoor Environment. Eng. Proc. 2025, 100, 63.
- Francis, A.G.; Faust, A.; Chiang, H.-T.L.; Hsu, J.; Kew, J.C.; Fišer, M.; Lee, T.-W.E. Long-Range Indoor Navigation with PRM-RL. IEEE Trans. Robot. 2020, 36, 1115–1134.
- NVIDIA Isaac Sim. NVIDIA Corporation: Santa Clara, CA, USA, 2025. Available online: https://developer.nvidia.com/isaac/sim (accessed on 7 January 2026).






| Weight 1 | Weight 2 | Weight 3 | Weight 4 | Path Length [m] | Minimum Obstacle Distance [m] | Newly Explored Voxels [Count (×10⁵) per Keyframe] | Path Smoothness [rad] |
|---|---|---|---|---|---|---|---|
| 0.25 | 0.25 | 0.25 | 0.25 | 54.2 | 0.48 | 0.21 | 0.42 |
| 0.40 | 0.20 | 0.20 | 0.20 | 53.0 | 0.42 | 0.16 | 0.47 |
| 0.20 | 0.40 | 0.20 | 0.20 | 55.0 | 0.51 | 0.18 | 0.28 |
| 0.20 | 0.20 | 0.40 | 0.20 | 55.7 | 1.23 | 0.15 | 0.45 |
| 0.20 | 0.20 | 0.20 | 0.40 | 56.9 | 0.40 | 0.34 | 0.52 |
| 0.40 | 0.20 | 0.20 | 0.20 | 52.6 | 0.37 | 0.13 | 0.62 |
| 0.20 | 0.40 | 0.30 | 0.10 | 57.0 | 1.06 | 0.18 | 0.25 |
| 0.15 | 0.15 | 0.15 | 0.55 | 58.8 | 0.34 | 0.41 | 0.69 |
| 0.30 | 0.30 | 0.20 | 0.20 | 53.9 | 0.45 | 0.17 | 0.31 |
| Method | Map Accuracy (IoU [%]) | Map Awareness (AUC [%]) | Trajectory Accuracy (RMSE [m]) | Trajectory Safety Rate [%] |
|---|---|---|---|---|
| Default | 71.40 | 76.20 | 0.54 | 52.30 |
| Probabilistic | 82.60 (+15.69%) | 85.10 (+11.68%) | 0.41 (−24.07%) | 83.12 (+58.93%) |
| DWA | 80.20 (+12.32%) | 81.10 (+6.68%) | 0.49 (−9.26%) | 78.20 (+49.52%) |
| TEB | 83.40 (+16.81%) | 86.20 (+13.12%) | 0.35 (−35.18%) | 88.50 (+69.22%) |
| MPPI | 85.10 (+19.19%) | 88.70 (+16.40%) | 0.37 (−31.48%) | 94.10 (+79.92%) |
| S3PM (ours) | 89.80 (+25.77%) | 92.40 (+21.26%) | 0.38 (−29.63%) | 92.70 (+77.25%) |
| S3PM Step | Raspberry Pi 5 | Raspberry Pi 5 + Hailo-8 | Desktop |
|---|---|---|---|
| Optical Flow Estimation [ms] | 22.5 | 6.8 | 4.1 |
| Ego-motion Compensation [ms] | 5.7 | 3.9 | 2.6 |
| Dynamic Likelihood & Voting [ms] | 18.2 | 7.4 | 5.3 |
| Entropy Update & Regularization [ms] | 14.6 | 6.2 | 4.8 |
| Distance Transform [ms] | 9.8 | 5.3 | 3.1 |
| Path Planning [ms] | 11.4 | 5.9 | 4.2 |
| Average Runtime per Cycle [ms] | 82.2 | 35.5 | 24.1 |
| Average Frequency [Hz] | 12.2 | 28.1 | 41.5 |
| Idle Power [W] (System only) | 3.1 | 3.4 | 45.2 |
| Average Power Consumption (S3PM) [W] | 6.8 | 8.2 | 127.9 |
| Peak Power Consumption (S3PM) [W] | 9.4 | 11.6 | 210.5 |
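As a quick consistency check, the six per-step timings in the table above sum to the reported average cycle times, and the reciprocal of each total reproduces the reported update frequencies to within rounding:

```python
# Per-step runtimes [ms] from the benchmark table, per platform, in table order:
# optical flow, ego-motion compensation, dynamic likelihood & voting,
# entropy update & regularization, distance transform, path planning.
steps = {
    "Raspberry Pi 5":           [22.5, 5.7, 18.2, 14.6, 9.8, 11.4],
    "Raspberry Pi 5 + Hailo-8": [6.8, 3.9, 7.4, 6.2, 5.3, 5.9],
    "Desktop":                  [4.1, 2.6, 5.3, 4.8, 3.1, 4.2],
}

for platform, times in steps.items():
    cycle_ms = sum(times)        # total latency per S3PM cycle
    freq_hz = 1000.0 / cycle_ms  # achievable update rate
    print(f"{platform}: {cycle_ms:.1f} ms/cycle -> {freq_hz:.1f} Hz")
```

The totals come out to 82.2 ms, 35.5 ms, and 24.1 ms, matching the "Average Runtime per Cycle" row exactly; the derived frequencies agree with the reported 12.2, 28.1, and 41.5 Hz to within rounding of the per-step values.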
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
© 2026 by the authors. Licensee MDPI, Basel, Switzerland. This article is an open access article distributed under the terms and conditions of the Creative Commons Attribution (CC BY) license.
Share and Cite
Sazonov, A.; Kuchkin, O.; Cherepanska, I.; Lipnickas, A. S3PM: Entropy-Regularized Path Planning for Autonomous Mobile Robots in Dense 3D Point Clouds of Unstructured Environments. Sensors 2026, 26, 731. https://doi.org/10.3390/s26020731

