Search Results (31)

Search Parameters:
Keywords = planar SLAM

24 pages, 10964 KB  
Article
Enhancing LiDAR–IMU SLAM for Infrastructure Monitoring via Dynamic Coplanarity Constraints and Joint Observation
by Zhaosheng Feng, Jun Chen, Yaofeng Liang, Wenli Liu and Yongfeng Peng
Sensors 2025, 25(17), 5330; https://doi.org/10.3390/s25175330 - 27 Aug 2025
Viewed by 1086
Abstract
Real-time acquisition of high-precision 3D spatial information is critical for intelligent maintenance of urban infrastructure. While SLAM technology based on LiDAR–IMU sensor fusion has become a core approach for infrastructure monitoring, its accuracy remains limited by vertical pose estimation drift. To address this challenge, this paper proposes a LiDAR–IMU fusion SLAM algorithm incorporating a dynamic coplanarity constraint and a joint observation model within an improved error-state Kalman filter framework. A threshold-driven ground segmentation method is developed to robustly extract planar features in structured environments, enabling dynamic activation of ground constraints to suppress vertical drift. Extensive experiments on a self-collected long-corridor dataset and the public M2DGR dataset demonstrate that the proposed method significantly improves pose estimation accuracy. In structured environments, the method reduces z-axis endpoint errors by 85.8% compared with Fast-LIO2, achieving an average z-axis RMSE of 0.0104 m. On the M2DGR Hall04 sequence, the algorithm attains a z-axis RMSE of 0.007 m, outperforming four mainstream LiDAR-based SLAM methods. These results validate the proposed approach as an effective solution for high-precision 3D mapping in infrastructure monitoring applications. Full article
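The abstract gives no implementation details, but the core idea of a dynamically activated ground (coplanarity) constraint can be illustrated with a small sketch: when ground segmentation yields a planar patch, the distance from the sensor origin to the fitted plane is compared with the known mount height and used as an extra scalar filter update on the vertical state. The function names, the scalar-state simplification, and the noise value are illustrative assumptions, not the paper's error-state Kalman filter.

```python
import numpy as np

def fit_ground_plane(points):
    """Least-squares plane fit: unit normal n and offset d with n.p + d = 0."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    n = vt[-1]                      # direction of smallest variance = plane normal
    if n[2] < 0:                    # orient the normal upward
        n = -n
    return n, -n @ centroid

def ground_constraint_update(z_est, P_z, ground_pts, mount_height, r_meas=0.01**2):
    """Scalar Kalman update of the vertical state from the coplanarity residual."""
    n, d = fit_ground_plane(ground_pts)
    dist = n @ np.array([0.0, 0.0, z_est]) + d   # sensor origin to fitted ground plane
    r = dist - mount_height                      # residual vs. nominal mount height
    H = n[2]                                     # Jacobian d(dist)/dz
    S = H * P_z * H + r_meas
    K = P_z * H / S
    return z_est - K * r, (1.0 - K * H) * P_z
```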

28 pages, 7472 KB  
Article
Small but Mighty: A Lightweight Feature Enhancement Strategy for LiDAR Odometry in Challenging Environments
by Jiaping Chen, Kebin Jia and Zhihao Wei
Remote Sens. 2025, 17(15), 2656; https://doi.org/10.3390/rs17152656 - 31 Jul 2025
Viewed by 1199
Abstract
LiDAR-based Simultaneous Localization and Mapping (SLAM) serves as a fundamental technology for autonomous navigation. However, in complex environments, LiDAR odometry often experiences degraded localization accuracy and robustness. This paper proposes a computationally efficient enhancement strategy for LiDAR odometry, which improves system performance by reinforcing high-quality features throughout the optimization process. For non-ground features, the method employs statistical geometric analysis to identify stable points and incorporates a contribution-weighted optimization scheme to strengthen their impact in point-to-plane and point-to-line constraints. In parallel, for ground features, locally stable planar surfaces are fitted to replace discrete point correspondences, enabling more consistent point-to-plane constraint formulation during ground registration. Experimental results on the KITTI and M2DGR datasets demonstrated that the proposed method significantly improves localization accuracy and system robustness, while preserving real-time performance with minimal computational overhead. The performance gains were particularly notable in scenarios dominated by unstructured environments. Full article
(This article belongs to the Special Issue Laser Scanning in Environmental and Engineering Applications)
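As a rough illustration of the contribution-weighted constraints described above, the sketch below weights point-to-plane residuals by a planarity score computed from the eigenvalues of each feature's local covariance. The weighting formula and function names are assumptions for illustration, not the authors' exact scheme.

```python
import numpy as np

def stability_weights(neighborhoods):
    """Planarity-based weight per feature from local covariance eigenvalues."""
    w = []
    for nb in neighborhoods:                     # nb: (k, 3) array of neighbour points
        lam = np.linalg.eigvalsh(np.cov(nb.T))   # eigenvalues, ascending
        w.append((lam[1] - lam[0]) / (lam[2] + 1e-9))
    return np.asarray(w)

def weighted_point_to_plane_residuals(points, plane_normals, plane_points, weights):
    """Stacked weighted residuals r_i = w_i * n_i . (p_i - q_i)."""
    diffs = points - plane_points
    return weights * np.einsum('ij,ij->i', plane_normals, diffs)
```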

18 pages, 12540 KB  
Article
SS-LIO: Robust Tightly Coupled Solid-State LiDAR–Inertial Odometry for Indoor Degraded Environments
by Yongle Zou, Peipei Meng, Jianqiang Xiong and Xinglin Wan
Electronics 2025, 14(15), 2951; https://doi.org/10.3390/electronics14152951 - 24 Jul 2025
Viewed by 1303
Abstract
Solid-state LiDAR systems are widely recognized for their high reliability, low cost, and lightweight design, but they encounter significant challenges in SLAM tasks due to their limited field of view and uneven horizontal scanning patterns, especially in indoor environments with geometric constraints. To address these challenges, this paper proposes SS-LIO, a precise, robust, and real-time LiDAR–Inertial odometry solution designed for solid-state LiDAR systems. SS-LIO uses uncertainty propagation in LiDAR point-cloud modeling and a tightly coupled iterative extended Kalman filter to fuse LiDAR feature points with IMU data for reliable localization. It also employs voxels to encapsulate planar features for accurate map construction. Experimental results from open-source datasets and self-collected data demonstrate that SS-LIO achieves superior accuracy and robustness compared to state-of-the-art methods, with an end-to-end drift of only 0.2 m in indoor degraded scenarios. The detailed and accurate point-cloud maps generated by SS-LIO reflect the smoothness and precision of trajectory estimation, with significantly reduced drift and deviation. These outcomes highlight the effectiveness of SS-LIO in addressing the SLAM challenges posed by solid-state LiDAR systems and its capability to produce reliable maps in complex indoor settings. Full article
(This article belongs to the Special Issue Advancements in Robotics: Perception, Manipulation, and Interaction)
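The idea of encapsulating planar features in voxels can be sketched as a hash map from voxel indices to fitted planes, kept only where the points inside a voxel are sufficiently thin along one direction. Voxel size, the minimum point count, and the planarity test below are illustrative assumptions rather than SS-LIO's actual map parameters.

```python
import numpy as np
from collections import defaultdict

VOXEL = 0.5  # m, assumed voxel size

def voxel_key(p):
    return tuple(np.floor(p / VOXEL).astype(int))

def build_planar_voxel_map(points, min_pts=10, thickness_thresh=0.02):
    """Group points into voxels; keep a (normal, centroid) plane where the fit is thin."""
    buckets = defaultdict(list)
    for p in points:
        buckets[voxel_key(p)].append(p)
    voxel_map = {}
    for k, pts in buckets.items():
        pts = np.asarray(pts)
        if len(pts) < min_pts:
            continue
        c = pts.mean(axis=0)
        _, s, vt = np.linalg.svd(pts - c)
        if s[-1] / np.sqrt(len(pts)) < thickness_thresh:   # RMS thickness along the normal
            voxel_map[k] = (vt[-1], c)                      # plane normal, centroid
    return voxel_map
```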

30 pages, 14473 KB  
Article
VOX-LIO: An Effective and Robust LiDAR-Inertial Odometry System Based on Surfel Voxels
by Meijun Guo, Yonghui Liu, Yuhang Yang, Xiaohai He and Weimin Zhang
Remote Sens. 2025, 17(13), 2214; https://doi.org/10.3390/rs17132214 - 27 Jun 2025
Viewed by 2377
Abstract
Accurate and robust pose estimation is critical for simultaneous localization and mapping (SLAM), and multi-sensor fusion has demonstrated efficacy with significant potential for robotic applications. This study presents VOX-LIO, an effective LiDAR-inertial odometry system. To improve both robustness and accuracy, we propose an adaptive hash voxel-based point cloud map management method that incorporates surfel features and planarity. This method enhances the efficiency of point-to-surfel association by leveraging long-term observed surfels. It facilitates the incremental refinement of surfel features within classified surfel voxels, thereby enabling precise and efficient map updates. Furthermore, we develop a weighted fusion approach that integrates LiDAR and IMU measurements on the manifold, effectively compensating for motion distortion, particularly under high-speed LiDAR motion. We validate our system through experiments conducted on both public datasets and our mobile robot platforms. The results demonstrate that VOX-LIO outperforms the existing methods, effectively handling challenging environments while minimizing computational cost. Full article
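A minimal sketch of incrementally refined surfels and point-to-surfel association, assuming a surfel is summarized by a running mean and covariance of its points (a Welford-style update); this is a simplification for illustration, not VOX-LIO's actual surfel-voxel structure.

```python
import numpy as np

class Surfel:
    """Incrementally refined planar patch: running mean and covariance of its points."""
    def __init__(self, p):
        self.n_obs = 1
        self.mean = np.asarray(p, dtype=float).copy()
        self.M2 = np.zeros((3, 3))              # sum of outer products of deviations

    def update(self, p):
        p = np.asarray(p, dtype=float)
        self.n_obs += 1
        delta = p - self.mean
        self.mean += delta / self.n_obs
        self.M2 += np.outer(delta, p - self.mean)

    def normal(self):
        cov = self.M2 / max(self.n_obs - 1, 1)
        return np.linalg.eigh(cov)[1][:, 0]     # eigenvector of the smallest eigenvalue

def point_to_surfel_residual(p, surfel):
    """Signed distance of a point to the surfel's plane."""
    return surfel.normal() @ (np.asarray(p) - surfel.mean)
```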

26 pages, 4371 KB  
Article
A Robust Rotation-Equivariant Feature Extraction Framework for Ground Texture-Based Visual Localization
by Yuezhen Cai, Linyuan Xia, Ting On Chan, Junxia Li and Qianxia Li
Sensors 2025, 25(12), 3585; https://doi.org/10.3390/s25123585 - 6 Jun 2025
Cited by 1 | Viewed by 1099
Abstract
Ground texture-based localization leverages environment-invariant, planar-constrained features to enhance pose estimation robustness, thus offering inherent advantages for seamless localization. However, traditional feature extraction methods struggle with reliable performance under large-scale rotations and texture sparsity in the case of ground texture-based localization. This study addresses these challenges through a learning-based feature extraction framework—Ground Texture Rotation-Equivariant Keypoints and Descriptors (GT-REKD). The GT-REKD framework employs group-equivariant convolutions over the cyclic rotation group, augmented with directional attention and orientation-encoding heads, to produce dense keypoints and descriptors that are exactly invariant to 0–360° in-plane rotations. The experimental results for ground texture localization show that GT-REKD achieves 96.14% matching in pure rotation tests, 94.08% in incremental localization, and relocalization errors of 5.55° and 4.41 px (≈0.1 cm), consistently outperforming baseline methods under extreme rotations and sparse textures, highlighting its applicability to visual localization and simultaneous localization and mapping (SLAM) tasks. Full article
(This article belongs to the Section Navigation and Positioning)
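The group-equivariant idea can be illustrated at toy scale with the cyclic group C4: correlating an image with the four 90° rotations of one kernel produces a response stack that is permuted and rotated, rather than changed, when the input rotates, and a max over orientations gives a rotation-invariant score. GT-REKD uses a finer cyclic group with learned filters and attention; the code below is only a conceptual sketch.

```python
import numpy as np
from scipy.signal import correlate2d

def c4_response_stack(image, kernel):
    """Responses of one kernel under the cyclic rotation group C4, shape (4, H', W')."""
    return np.stack([correlate2d(image, np.rot90(kernel, k), mode='valid')
                     for k in range(4)])

def rotation_invariant_score(image, kernel):
    """Max-pooling over the orientation axis: invariant to 90-degree in-plane rotations."""
    return c4_response_stack(image, kernel).max(axis=0)

# toy usage with a random patch and kernel
score_map = rotation_invariant_score(np.random.rand(64, 64), np.random.rand(5, 5))
```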

20 pages, 7483 KB  
Article
An Enhanced LiDAR-Based SLAM Framework: Improving NDT Odometry with Efficient Feature Extraction and Loop Closure Detection
by Yan Ren, Zhendong Shen, Wanquan Liu and Xinyu Chen
Processes 2025, 13(1), 272; https://doi.org/10.3390/pr13010272 - 19 Jan 2025
Cited by 3 | Viewed by 2712
Abstract
Simultaneous localization and mapping (SLAM) is crucial for autonomous driving, drone navigation, and robot localization, relying on efficient point cloud registration and loop closure detection. Traditional Normal Distributions Transform (NDT) odometry frameworks provide robust solutions but struggle with real-time performance due to the high computational complexity of processing large-scale point clouds. This paper introduces an improved NDT-based LiDAR odometry framework to address these challenges. The proposed method enhances computational efficiency and registration accuracy by introducing a unified feature point cloud framework that integrates planar and edge features, enabling more accurate and efficient inter-frame matching. To further improve loop closure detection, a parallel hybrid approach combining Radius Search and Scan Context is developed, which significantly enhances robustness and accuracy. Additionally, feature-based point cloud registration is seamlessly integrated with full cloud mapping in global optimization, ensuring high-precision pose estimation and detailed environmental reconstruction. Experiments on both public datasets and real-world environments validate the effectiveness of the proposed framework. Compared with traditional NDT, our method achieves trajectory estimation accuracy increases of 35.59% and over 35%, respectively, with and without loop detection. The average registration time is reduced by 66.7%, memory usage is decreased by 23.16%, and CPU usage drops by 19.25%. These results surpass those of existing SLAM systems, such as LOAM. The proposed method demonstrates superior robustness, enabling reliable pose estimation and map construction in dynamic, complex settings. Full article
(This article belongs to the Section Manufacturing Processes and Systems)
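Scan Context, used above for loop closure, builds a polar bird's-eye descriptor per scan and compares descriptors with a column-shift-invariant distance. A compact sketch of both steps follows; the bin counts and maximum range are typical defaults rather than the paper's settings.

```python
import numpy as np

def scan_context(points, n_rings=20, n_sectors=60, max_range=80.0):
    """Polar ring/sector descriptor: maximum point height per (ring, sector) bin."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.hypot(x, y)
    theta = np.arctan2(y, x) + np.pi                         # [0, 2*pi)
    ring = np.minimum((r / max_range * n_rings).astype(int), n_rings - 1)
    sector = np.minimum((theta / (2 * np.pi) * n_sectors).astype(int), n_sectors - 1)
    desc = np.full((n_rings, n_sectors), -np.inf)
    np.maximum.at(desc, (ring, sector), z)
    desc[np.isinf(desc)] = 0.0                               # empty bins
    return desc

def scan_context_distance(d1, d2):
    """Column-shift-invariant cosine distance used to rank loop-closure candidates."""
    best = np.inf
    for s in range(d2.shape[1]):
        d2s = np.roll(d2, s, axis=1)
        num = np.einsum('ij,ij->j', d1, d2s)
        den = np.linalg.norm(d1, axis=0) * np.linalg.norm(d2s, axis=0) + 1e-9
        best = min(best, 1.0 - np.mean(num / den))
    return best
```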

36 pages, 20333 KB  
Article
Computational Fluid Dynamics Prediction of the Sea-Keeping Behavior of High-Speed Unmanned Surface Vehicles Under the Coastal Intersecting Waves
by Xiaobin Hong, Guihong Zheng, Ruimou Cai, Yuanming Chen and Guoquan Xiao
J. Mar. Sci. Eng. 2025, 13(1), 83; https://doi.org/10.3390/jmse13010083 - 5 Jan 2025
Cited by 1 | Viewed by 2384
Abstract
To better study the sea-keeping response behavior of unmanned surface vehicles (USVs) in coastal intersecting waves, a prediction is conducted using the CFD method in this paper, in which a USV shaped as a small-scale catamaran and designed for high-speed navigation is considered. The CFD method has proven effective for ship response prediction and can be applied to many forms of towing experiment simulation, including planar motion mechanism experiments. Numerical generation of regular and irregular waves in CFD can also reproduce actual wave tank work, making it equally rigorous but more efficient than physical testing. This research examines the changing trend of the encounter characteristics of USVs meeting two trains of waves with different inclination angles and wavelengths by monitoring wave profiles, pitch, heave, acceleration, slamming force, and pressure at specific locations on the USV hull. This paper first introduces the modeling method for intersecting waves in a virtual tank and verifies the wave profiles by comparing them with a theoretical solution. It then focuses on the sea-keeping motion of USVs and analyzes the complicated influences of the encounter parameters. Finally, it analyzes how the motion changes with encounter frequency and assesses the severity of the sea-keeping response through acceleration analysis. Full article
(This article belongs to the Section Ocean Engineering)
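The intersecting-wave condition studied above can be approximated, for a quick sanity check outside the CFD tank, by linearly superposing regular deep-water wave trains with different headings and lengths. The amplitudes, wavelengths, and heading angles in the snippet are placeholders, and the deep-water dispersion relation is an assumption.

```python
import numpy as np

G = 9.81  # gravitational acceleration, m/s^2

def intersecting_wave_elevation(x, y, t, amps, lengths, headings_deg):
    """Free-surface elevation from linearly superposed regular deep-water wave trains."""
    eta = 0.0
    for a, L, mu in zip(amps, lengths, np.radians(headings_deg)):
        k = 2.0 * np.pi / L                  # wave number
        omega = np.sqrt(G * k)               # deep-water dispersion relation
        eta = eta + a * np.cos(k * (x * np.cos(mu) + y * np.sin(mu)) - omega * t)
    return eta

# two wave trains crossing at 30 degrees, sampled at a fixed point over time
t = np.linspace(0.0, 20.0, 200)
eta = intersecting_wave_elevation(0.0, 0.0, t,
                                  amps=[0.5, 0.5],
                                  lengths=[30.0, 45.0],
                                  headings_deg=[0.0, 30.0])
```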

29 pages, 6572 KB  
Article
Robust Parking Space Recognition Approach Based on Tightly Coupled Polarized Lidar and Pre-Integration IMU
by Jialiang Chen, Fei Li, Xiaohui Liu and Yuelin Yuan
Appl. Sci. 2024, 14(20), 9181; https://doi.org/10.3390/app14209181 - 10 Oct 2024
Cited by 1 | Viewed by 2300
Abstract
Improving the accuracy of parking space recognition is crucial for Automated Valet Parking (AVP) in autonomous driving. In AVP, accurate free space recognition significantly impacts the safety and comfort of both the vehicles and drivers. To enhance parking space recognition and annotation in unknown environments, this paper proposes an automatic parking space annotation approach with tight coupling of Lidar and an Inertial Measurement Unit (IMU). First, the pose of the Lidar frame was tightly coupled with high-frequency IMU data to compensate for vehicle motion, reducing its impact on the pose transformation of the Lidar point cloud. Next, simultaneous localization and mapping (SLAM) was performed using the compensated Lidar frame. By extracting two-dimensional polarized edge features and planar features from the three-dimensional Lidar point cloud, a polarized Lidar odometry was constructed. The polarized Lidar odometry factor and loop closure factor were jointly optimized in iSAM2. Finally, the pitch angle of the constructed local map was evaluated to filter out ground points, and the regions of interest (ROI) were projected onto a grid map. The free space between adjacent vehicle point clouds was assessed on the grid map using convex hull detection and straight-line fitting. The experiments were conducted on both local and open datasets. The proposed method achieved an average precision and recall of 98.89% and 98.79% on the local dataset, respectively; it also achieved 97.08% and 99.40% on the nuScenes dataset. It also reduced storage usage by 48.38% while maintaining the running time. Comparative experiments on open datasets show that the proposed method can adapt to various scenarios and exhibits strong robustness. Full article
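A much-reduced sketch of the free-space test between adjacent vehicle point clouds: take the 2-D convex hull of each neighbouring cluster on the grid map and compare the gap between them with a minimum slot width. Approximating the hull-to-hull gap by the closest pair of hull vertices, and the 2.5 m default, are simplifications for illustration rather than the paper's procedure.

```python
import numpy as np
from scipy.spatial import ConvexHull

def slot_between_vehicles(cluster_a, cluster_b, min_slot_width=2.5):
    """Rough free-space test between two neighbouring vehicle clusters (2-D BEV points)."""
    ha = cluster_a[ConvexHull(cluster_a).vertices]          # hull vertices of cluster A
    hb = cluster_b[ConvexHull(cluster_b).vertices]          # hull vertices of cluster B
    gap = np.min(np.linalg.norm(ha[:, None, :] - hb[None, :, :], axis=-1))
    return gap >= min_slot_width, gap                        # candidate slot?, gap width
```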

25 pages, 4182 KB  
Article
W-VSLAM: A Visual Mapping Algorithm for Indoor Inspection Robots
by Dingji Luo, Yucan Huang, Xuchao Huang, Mingda Miao and Xueshan Gao
Sensors 2024, 24(17), 5662; https://doi.org/10.3390/s24175662 - 30 Aug 2024
Viewed by 2087
Abstract
In recent years, with the widespread application of indoor inspection robots, high-precision, robust environmental perception has become essential for robotic mapping. Addressing the issues of visual–inertial estimation inaccuracies due to redundant pose degrees of freedom and accelerometer drift during the planar motion of mobile robots in indoor environments, we propose a visual SLAM perception method that integrates wheel odometry information. First, the robot’s body pose is parameterized in SE(2) and the corresponding camera pose is parameterized in SE(3). On this basis, we derive the visual constraint residuals and their Jacobian matrices for reprojection observations using the camera projection model. We employ the concept of pre-integration to derive pose-constraint residuals and their Jacobian matrices and utilize marginalization theory to derive the relative pose residuals and their Jacobians for loop closure constraints. This approach solves the nonlinear optimization problem to obtain the optimal pose and landmark points of the ground-moving robot. A comparison with the ORBSLAM3 algorithm reveals that, in the recorded indoor environment datasets, the proposed algorithm demonstrates significantly higher perception accuracy, with root mean square error (RMSE) improvements of 89.2% in translation and 98.5% in rotation for absolute trajectory error (ATE). The overall trajectory localization accuracy ranges between 5 and 17 cm, validating the effectiveness of the proposed algorithm. These findings can be applied to preliminary mapping for the autonomous navigation of indoor mobile robots and serve as a basis for path planning based on the mapping results. Full article
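The key modelling step, parameterizing the body pose in SE(2) while the camera observes in SE(3), can be sketched as lifting the planar pose through a fixed body-to-camera extrinsic and forming the usual pinhole reprojection residual. The helper names and the pinhole model are assumptions; the paper's Jacobians, pre-integration, and marginalization are omitted.

```python
import numpy as np

def se2_to_se3(x, y, yaw):
    """Lift a planar body pose (SE(2)) to a full SE(3) homogeneous transform."""
    c, s = np.cos(yaw), np.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
    T[:3, 3] = [x, y, 0.0]
    return T

def reprojection_residual(body_pose_se2, T_body_cam, landmark_w, pixel_obs, K):
    """Pinhole reprojection residual of one landmark for a planar body pose."""
    T_w_cam = se2_to_se3(*body_pose_se2) @ T_body_cam        # world-to-camera pose chain
    p_cam = np.linalg.inv(T_w_cam) @ np.append(landmark_w, 1.0)
    uvw = K @ p_cam[:3]
    return uvw[:2] / uvw[2] - pixel_obs
```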

21 pages, 16631 KB  
Article
An Effective LiDAR-Inertial SLAM-Based Map Construction Method for Outdoor Environments
by Yanjie Liu, Chao Wang, Heng Wu and Yanlong Wei
Remote Sens. 2024, 16(16), 3099; https://doi.org/10.3390/rs16163099 - 22 Aug 2024
Cited by 2 | Viewed by 3516
Abstract
SLAM (simultaneous localization and mapping) is essential for accurate positioning and reasonable path planning in outdoor mobile robots. LiDAR SLAM is currently the dominant method for creating outdoor environment maps. However, the mainstream LiDAR SLAM algorithms have a single point cloud feature extraction process at the front end, and most of the loop closure detection at the back end is based on RNN (radius nearest neighbor). This results in low mapping accuracy and poor real-time performance. To solve this problem, we integrated the functions of point cloud segmentation and Scan Context loop closure detection based on the advanced LiDAR-inertial SLAM algorithm (LIO-SAM). First, we employed range images to extract ground points from raw LiDAR data, followed by the BFS (breadth-first search) algorithm to cluster non-ground points and downsample outliers. Then, we calculated the curvature to extract planar points from ground points and corner points from clustered segmented non-ground points. Finally, we used the Scan Context method for loop closure detection to improve back-end mapping speed and reduce odometry drift. Experimental validation with the KITTI dataset verified the advantages of the proposed method, and combined with Walking, Park, and other datasets comprehensively verified that the proposed method had good accuracy and real-time performance. Full article
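The curvature step mentioned above is essentially the LOAM-style smoothness measure: for each point of a ring, sum the offsets to its neighbours and treat high values as edge/corner candidates and low values as planar candidates. The window size and thresholds below are illustrative, not the values used in the paper.

```python
import numpy as np

def loam_curvature(scan_line, k=5):
    """LOAM-style smoothness per point of one ring (scan_line: (n, 3) ordered points)."""
    n = len(scan_line)
    curv = np.full(n, np.nan)
    for i in range(k, n - k):
        diff = scan_line[i - k:i + k + 1].sum(axis=0) - (2 * k + 1) * scan_line[i]
        curv[i] = np.dot(diff, diff) / (np.linalg.norm(scan_line[i]) + 1e-9)
    return curv

def classify_features(scan_line, edge_thresh=1.0, planar_thresh=0.1):
    """Indices of corner (high-curvature) and planar (low-curvature) candidates."""
    c = loam_curvature(scan_line)
    return np.where(c > edge_thresh)[0], np.where(c < planar_thresh)[0]
```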

25 pages, 2259 KB  
Article
RC-SLAM: Road Constrained Stereo Visual SLAM System Based on Graph Optimization
by Yuan Zhu, Hao An, Huaide Wang, Ruidong Xu, Mingzhi Wu and Ke Lu
Sensors 2024, 24(2), 536; https://doi.org/10.3390/s24020536 - 15 Jan 2024
Cited by 11 | Viewed by 2519
Abstract
Intelligent vehicles are constrained by the road, resulting in a disparity between the assumed six degrees of freedom (DoF) motion within the Visual Simultaneous Localization and Mapping (SLAM) system and the approximately planar motion of vehicles in local areas, inevitably causing additional pose estimation errors. To address this problem, a stereo Visual SLAM system with road constraints based on graph optimization is proposed, called RC-SLAM. Addressing the challenge of representing roads parametrically, a novel method is proposed to approximate local roads as discrete planes and extract the parameters of local road planes (LRPs) using homography. Unlike conventional methods, constraints between the vehicle and the LRPs are established, effectively mitigating errors arising from the assumed six-DoF motion in the system. Furthermore, to avoid the impact of depth uncertainty in road features, epipolar constraints are employed to estimate rotation by minimizing the distance between road feature points and epipolar lines, so robust rotation estimation is achieved despite depth uncertainties. Notably, a distinctive nonlinear optimization model based on graph optimization is presented, jointly optimizing the poses of the vehicle trajectory, the LRPs, and the map points. The experiments on two datasets demonstrate that the proposed system achieved more accurate estimations of vehicle trajectories by introducing constraints between the vehicle and the LRPs. The experiments on a real-world dataset further validate the effectiveness of the proposed system. Full article
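The rotation-estimation idea, minimizing the distance between road feature points and their epipolar lines, reduces to evaluating point-to-line distances under a candidate rotation R and (scale-free) translation t; the sketch below computes those distances and could feed any optimizer. The intrinsics K and the variable names are assumptions.

```python
import numpy as np

def skew(t):
    """Cross-product (skew-symmetric) matrix of a 3-vector."""
    return np.array([[0, -t[2], t[1]], [t[2], 0, -t[0]], [-t[1], t[0], 0]])

def epipolar_distances(R, t, K, pts1, pts2):
    """Distance of each point in image 2 to the epipolar line of its match in image 1."""
    E = skew(t) @ R                                     # essential matrix
    F = np.linalg.inv(K).T @ E @ np.linalg.inv(K)       # fundamental matrix
    p1 = np.hstack([pts1, np.ones((len(pts1), 1))])
    p2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = (F @ p1.T).T                                # epipolar lines in image 2
    num = np.abs(np.einsum('ij,ij->i', lines, p2))
    den = np.linalg.norm(lines[:, :2], axis=1) + 1e-12
    return num / den
```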

17 pages, 11891 KB  
Article
LFVB-BioSLAM: A Bionic SLAM System with a Light-Weight LiDAR Front End and a Bio-Inspired Visual Back End
by Ruilan Gao, Zeyu Wan, Sitong Guo, Changjian Jiang and Yu Zhang
Biomimetics 2023, 8(5), 410; https://doi.org/10.3390/biomimetics8050410 - 5 Sep 2023
Cited by 5 | Viewed by 2406
Abstract
Simultaneous localization and mapping (SLAM) is one of the crucial techniques applied in autonomous robot navigation. The majority of present popular SLAM algorithms are built within probabilistic optimization frameworks, achieving high accuracy performance at the expense of high power consumption and latency. In contrast to robots, animals are born with the capability to efficiently and robustly navigate in nature, and bionic SLAM algorithms have received increasing attention recently. Current bionic SLAM algorithms, including RatSLAM, with relatively low accuracy and robustness, tend to fail in certain challenging environments. In order to design a bionic SLAM system with a novel framework and relatively high practicality, and to facilitate the development of bionic SLAM research, in this paper we present LFVB-BioSLAM, a bionic SLAM system with a light-weight LiDAR-based front end and a bio-inspired vision-based back end. We adopt a range flow-based LiDAR odometry as the front end of the SLAM system, providing the odometry estimation for the back end, and we propose a biologically-inspired back end processing algorithm based on the monocular RGB camera, performing loop closure detection and path integration. Our method is verified through real-world experiments, and the results show that LFVB-BioSLAM outperforms RatSLAM, a vision-based bionic SLAM algorithm, and RF2O, a laser-based horizontal planar odometry algorithm, in terms of accuracy and robustness. Full article
(This article belongs to the Special Issue Design and Control of a Bio-Inspired Robot: 2nd Edition)
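The bio-inspired back end's path integration can be reduced to a dead-reckoning accumulator over egomotion cues. The sketch below assumes per-step linear speed and yaw rate as inputs, which is a simplification of the paper's visual processing, not its actual back-end model.

```python
import numpy as np

def path_integration(speeds, yaw_rates, dt):
    """Accumulate heading and position from egomotion cues (speed, yaw rate per step)."""
    x = y = theta = 0.0
    track = [(x, y, theta)]
    for v, w in zip(speeds, yaw_rates):
        theta += w * dt                      # integrate heading
        x += v * dt * np.cos(theta)          # integrate position along the heading
        y += v * dt * np.sin(theta)
        track.append((x, y, theta))
    return np.array(track)
```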

24 pages, 8459 KB  
Article
Robust Localization of Industrial Park UGV and Prior Map Maintenance
by Fanrui Luo, Zhenyu Liu, Fengshan Zou, Mingmin Liu, Yang Cheng and Xiaoyu Li
Sensors 2023, 23(15), 6987; https://doi.org/10.3390/s23156987 - 6 Aug 2023
Cited by 5 | Viewed by 2573
Abstract
The precise localization of unmanned ground vehicles (UGVs) in industrial parks without prior GPS measurements presents a significant challenge. Simultaneous localization and mapping (SLAM) techniques can address this challenge by capturing environmental features with sensors for real-time UGV localization. In order to increase the real-time localization accuracy and efficiency of UGVs and to improve the robustness of UGV odometry within industrial parks, thereby addressing issues related to UGV motion control discontinuity and odometry drift, this paper proposes a tightly coupled LiDAR-IMU odometry method based on FAST-LIO2, integrating ground constraints and a novel feature extraction method. Additionally, a novel maintenance method for prior maps is proposed. The front-end module acquires the prior pose of the UGV by combining the detection and correction of relocation with point cloud registration. Then, the proposed maintenance method is used to partition the prior maps hierarchically and maintain them in real time. At the back end, real-time localization is achieved by the proposed tightly coupled LiDAR-IMU odometry that incorporates ground constraints. Furthermore, a feature extraction method based on the bidirectional-projection plane slope difference filter is proposed, enabling efficient and accurate point cloud feature extraction for edge, planar, and ground points. Finally, the proposed method is evaluated using self-collected datasets from industrial parks and the KITTI dataset. Our experimental results demonstrate that, compared to FAST-LIO2 and FAST-LIO2 with the curvature feature extraction method, the proposed method improved the odometry accuracy by 30.19% and 48.24% on the KITTI dataset. The efficiency of odometry was improved by 56.72% and 40.06%. When leveraging prior maps, the UGV achieved centimeter-level localization accuracy. The localization accuracy of the proposed method was improved by 46.367% compared to FAST-LIO2 on self-collected datasets, and the localization efficiency was improved by 32.33%. The z-axis localization accuracy of the proposed method reached the millimeter level. The proposed prior map maintenance method reduced RAM usage by 64% compared to traditional methods. Full article
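The prior-map maintenance idea, keeping only the portion of the map near the vehicle resident in memory, can be sketched as partitioning the map into ground-plane tiles and retaining a window of tiles around the robot. The tile size, window radius, and function names are assumptions, not the paper's hierarchical scheme.

```python
import numpy as np
from collections import defaultdict

TILE = 20.0   # m, assumed tile size for partitioning the prior map

def partition_prior_map(points):
    """Split a prior point-cloud map into square ground-plane tiles keyed by index."""
    tiles = defaultdict(list)
    for p in points:
        tiles[(int(p[0] // TILE), int(p[1] // TILE))].append(p)
    return {k: np.asarray(v) for k, v in tiles.items()}

def tiles_near(tiles, robot_xy, radius=2):
    """Keep only the tiles within a window around the robot; the rest can stay on disk."""
    cx, cy = int(robot_xy[0] // TILE), int(robot_xy[1] // TILE)
    return {k: v for k, v in tiles.items()
            if abs(k[0] - cx) <= radius and abs(k[1] - cy) <= radius}
```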

31 pages, 4504 KB  
Article
Robot Navigation in Complex Workspaces Employing Harmonic Maps and Adaptive Artificial Potential Fields
by Panagiotis Vlantis, Charalampos P. Bechlioulis and Kostas J. Kyriakopoulos
Sensors 2023, 23(9), 4464; https://doi.org/10.3390/s23094464 - 3 May 2023
Cited by 7 | Viewed by 3044
Abstract
In this work, we address the single robot navigation problem within a planar and arbitrarily connected workspace. In particular, we present an algorithm that transforms any static, compact, planar workspace of arbitrary connectedness and shape to a disk, where the navigation problem can be easily solved. Our solution benefits from the fact that it only requires a fine representation of the workspace boundary (i.e., a set of points), which is easily obtained in practice via SLAM. The proposed transformation, combined with a workspace decomposition strategy that reduces the computational complexity, has been exhaustively tested and has shown excellent performance in complex workspaces. A motion control scheme is also provided for the class of non-holonomic robots with unicycle kinematics, which are commonly used in most industrial applications. Moreover, the tuning of the underlying control parameters is rather straightforward as it affects only the shape of the resulted trajectories and not the critical specifications of collision avoidance and convergence to the goal position. Finally, we validate the efficacy of the proposed navigation strategy via extensive simulations and experimental studies. Full article
(This article belongs to the Special Issue Mobile Robots: Navigation, Control and Sensing)
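Once the workspace has been mapped to a disk, navigation can be driven by a standard attractive/repulsive artificial potential field. The sketch below shows one plain (non-adaptive) gradient step with placeholder gains; it is only the textbook baseline of the adaptive scheme described above.

```python
import numpy as np

def apf_step(q, goal, obstacles, k_att=1.0, k_rep=0.5, d0=0.4, step=0.05):
    """One gradient-descent step of an attractive/repulsive artificial potential field."""
    grad = k_att * (q - goal)                                # attractive gradient
    for o in obstacles:
        d = np.linalg.norm(q - o)
        if 1e-9 < d < d0:                                    # only within influence radius d0
            grad += k_rep * (1.0 / d0 - 1.0 / d) * (q - o) / d**3   # repulsive gradient
    return q - step * grad

# drive a point from (0.8, 0) toward the disk centre while avoiding one obstacle
q = np.array([0.8, 0.0])
for _ in range(200):
    q = apf_step(q, goal=np.zeros(2), obstacles=[np.array([0.4, 0.05])])
```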

17 pages, 19083 KB  
Article
Robot Localization Using Situational Graphs (S-Graphs) and Building Architectural Plans
by Muhammad Shaheer, Hriday Bavle, Jose Luis Sanchez-Lopez and Holger Voos
Robotics 2023, 12(3), 65; https://doi.org/10.3390/robotics12030065 - 28 Apr 2023
Cited by 3 | Viewed by 3407
Abstract
This paper presents robot localization using building architectural plans and hierarchical SLAM. We extract geometric, semantic, and topological information from the architectural plans in the form of walls and rooms, and create the topological and metric-semantic layers of the Situational Graphs (S-Graphs) before navigating in the environment. When the robot navigates in the construction environment, it uses the robot odometry and 3D lidar measurements to extract planar wall surfaces. A particle filter method exploits the previously built situational graph and its available geometric, semantic, and topological information to perform global localization. We validate our approach on simulated and real datasets captured at ongoing construction sites, presenting state-of-the-art results when compared against traditional geometry-based localization techniques. Full article
(This article belongs to the Section Sensors and Control in Robotics)
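The particle-filter localization step can be sketched as weighting each pose hypothesis by how closely the current 2-D scan, transformed into the plan frame, hugs the wall segments extracted from the architectural plan. The Gaussian weighting and segment representation are assumptions for illustration, not the S-Graphs formulation.

```python
import numpy as np

def dist_to_segment(pts, a, b):
    """Point-to-segment distance for an array of 2-D points."""
    ab = b - a
    t = np.clip((pts - a) @ ab / (ab @ ab), 0.0, 1.0)
    return np.linalg.norm(pts - (a + t[:, None] * ab), axis=1)

def weight_particles(particles, scan_xy, wall_segments, sigma=0.1):
    """Weight each particle (x, y, yaw) by scan-to-wall agreement in the plan frame."""
    weights = np.zeros(len(particles))
    for i, (x, y, yaw) in enumerate(particles):
        c, s = np.cos(yaw), np.sin(yaw)
        pts = scan_xy @ np.array([[c, s], [-s, c]]) + [x, y]   # scan into plan frame
        d = np.min([dist_to_segment(pts, a, b) for a, b in wall_segments], axis=0)
        weights[i] = np.exp(-0.5 * np.sum((d / sigma) ** 2))
    w = weights.sum()
    return weights / w if w > 0 else np.full(len(particles), 1.0 / len(particles))
```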