Article

Improvements to the FLOAM Algorithm: GICP Registration and SOR Filtering in Mobile Robots with Pure Laser Configuration and Enhanced SLAM Performance

1
School of Mechanical-Electronic and Vehicle Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
2
Embodied Intelligence and Robotics Technology Research Center of the Artificial Intelligence + Research Institute, Beijing University of Civil Engineering and Architecture, Beijing 100044, China
3
Beijing Key Laboratory of Intelligent Construction Equipment for Urban Building Construction, Beijing 102616, China
4
School of Environmental and Energy Engineering, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
5
Intelligent Equipment Research Institute, Beijing Academy of Science and Technology, Beijing 100061, China
6
Key Laboratory for Thermal Science and Power Engineering of Ministry of Education, Department of Energy and Power Engineering, Tsinghua University, Beijing 100089, China
*
Author to whom correspondence should be addressed.
Appl. Sci. 2026, 16(7), 3141; https://doi.org/10.3390/app16073141 (registering DOI)
Submission received: 24 February 2026 / Revised: 19 March 2026 / Accepted: 20 March 2026 / Published: 24 March 2026

Abstract

Laser SLAM is a key enabling technology for the autonomous navigation of intelligent mobile robots. The standard FLOAM algorithm suffers from low positioning accuracy, weak anti-interference performance, and a propensity for error accumulation in pure LiDAR scenarios, making it difficult to meet practical engineering requirements. Numerous studies have therefore focused on improved, highly robust pure laser SLAM algorithms. This study applies an enhanced FLOAM algorithm that combines GICP registration and SOR filtering. SOR filtering preprocesses the laser point cloud to remove outlier noise, and GICP registration replaces the classical ICP matching with an optimized, covariance-based matching cost function. Experiments are conducted on the Gazebo simulation platform and on a mobile robot equipped with a Leishen C16 LiDAR in real indoor corridor and outdoor plaza scenarios. Quantitative evaluation with the EVO tool indicates that the indoor mean absolute error and RMSE are reduced by 46.67% and 41.67%, respectively, compared with FLOAM, while the outdoor mean and maximum errors are reduced by 46.00% and 70.00%, respectively. The proposed improved scheme achieves centimeter-level positioning accuracy and strong robustness in pure laser configurations without auxiliary sensors such as IMUs or odometers, providing a reliable technical solution for the engineering application of mobile robots in sensor-constrained scenarios.

1. Introduction

Simultaneous localization and mapping (SLAM) is the essential enabling technology for complicated intelligent operations including the autonomous navigation of mobile robots [1], unmanned aerial vehicle (UAV) operations [2], and disaster relief [3]. The positioning accuracy and environmental adaptability of intelligent devices directly determine the reliability of their task execution and their environmental perception capabilities. Benefiting from its high ranging accuracy, insensitivity to lighting changes, and strong environmental robustness, LiDAR-based SLAM technology has become a new research hotspot in the fields of industrial automation and consumer smart devices. It serves as the key technical solution for mobile robots to achieve autonomous perception [4] in indoor/semi-outdoor environments where GPS signals are not available. In practical operation scenarios, such as indoor corridors and outdoor open plazas, this technology still faces critical challenges, including sparse environmental features, dense outlier noise in point clouds, and occlusion caused by dynamic stray points. In past decades, the traditional multi-sensor fusion positioning methods have relied on auxiliary equipment such as inertial measurement units (IMUs [5]) and visual cameras [6]. Standard lightweight, pure LiDAR configurations continue to face challenges in achieving full-field coverage. For this reason, it is essential to develop extremely robust [7] and high-precision SLAM algorithms [8] relying only on the LiDAR sensor for the reliable operation of mobile robots in the absence of additional sensors.
LiDAR SLAM technology can sense the environment independently, without any external information, and has become the core solution for pure laser perception in mobile robots. It is presently used in AGV (automated guided vehicle) navigation [9] and indoor inspection robots [10], and has been widely verified for obstacle avoidance in complex terrain [11] and autonomous navigation of unmanned systems [12]. Traditional LiDAR SLAM frameworks can be broadly divided into two main categories: feature-extraction-based methods [13,14,15] and registration-based methods [16,17]. Among the former, FLOAM is a lightweight feature-extraction-based LiDAR SLAM algorithm that uses edge and planar features extracted from point clouds for inter-frame pose estimation. Thanks to its lightweight structure and high real-time performance, FLOAM has strong deployment prospects on resource-limited mobile robots. However, in operational circumstances where LiDAR is the sole sensor, the conventional FLOAM algorithm has significant adaptability problems caused by dense point cloud noise, interfering dynamic stray points, and sparse features in some regions. First, it lacks an effective point cloud preprocessing module [18]: noise contained in the raw input point cloud severely interferes with subsequent feature extraction, which in turn leads to cumulative positioning errors. Second, its registration method is the traditional one and does not exploit point cloud covariance information [19]; it is sensitive to the initial pose and has weak interference resistance, often resulting in failed matching in feature-poor regions. Third, it has no loop closure detection [20]: over long-term operation, error accumulates and distorts the map, failing to meet the basic requirements of mobile robots for high-accuracy positioning and high-fidelity mapping.
These uncontrollable disturbances, arising from the pure laser configuration and complex environmental features, easily disrupt inter-frame registration consistency. As a result, the conventional FLOAM algorithm has poor positioning accuracy and cannot meet the requirements of lightweight, high-precision pure laser SLAM in engineering applications such as UAVs and disaster rescue.
A wide range of strategies have been developed regarding the optimization of positioning accuracy and anti-interference performance for laser SLAM. Xu et al. [21] proposed an advanced version of LeGO-LOAM that is built on improving the adaptive feature extraction and the ground segmentation optimization that enhances the reliability of feature extraction in complex terrain. Shen et al. [22] proposed the LIO-SAM++ algorithm for pose estimation stability improvement with laser–inertial semantic fusion in sensor fusion. In terms of improving registration algorithms, Zhang et al. [16] proposed a segmented registration optimization algorithm that integrates NDT and PL-ICP to enhance registration accuracy. Moreover, various research results, such as the NDT-ICP fusion algorithm [23], particle filter optimization algorithm [24], and SLAM system employing heterogeneous sensor data fusion [25], have set the stage for the engineering application of laser SLAM. Existing studies, however, mostly depend on multi-sensor fusion or sophisticated computation architecture. The accuracy enhancement and anti-interference ability optimization under pure laser lightweight design still have room for improvement. Additionally, there is a lack of systematic validation of the synergistic optimization of GICP registration and SOR filtering.
Based on the above analysis, this paper proposes an improved, highly robust pure laser SLAM solution that integrates GICP registration and SOR filtering into the classical FLOAM algorithm. The strategy retains FLOAM's lightweight computation and high real-time performance while jointly enhancing positioning accuracy and interference resistance. The SOR filtering method is first used to process the laser point cloud data, eliminating outlier noise and invalid information and yielding high-quality point clouds for subsequent feature extraction. The GICP registration algorithm then replaces the traditional registration technique: utilizing the covariance information of point clouds to construct a robust matching cost function improves both the accuracy of feature matching and the resistance to interference.
The technical contributions of this study include the following:
(1)
An improved FLOAM algorithm framework tailored for pure laser configurations is proposed. The joint optimization of SOR filtering and GICP registration enhances positioning accuracy without auxiliary sensors such as IMUs or odometers, filling the gap in accuracy optimization for lightweight pure laser setups.
(2)
A systematic investigation validates the suitability of GICP registration and SOR filtering for lightweight laser SLAM and identifies the optimal combination of filtering and registration parameters, yielding an implementable technical solution for pure laser SLAM engineering applications.
(3)
A performance evaluation framework is established across multiple scenarios, covering Gazebo simulation, an indoor corridor, and an outdoor plaza. Through quantitative error analysis and qualitative mapping quality verification, this framework provides an experimental reference for optimizing such algorithms.
In this paper, the open-source trajectory evaluation toolkit EVO is employed to conduct a quantitative analysis of the absolute pose error (APE) of the proposed improved algorithm to systematically verify its positioning accuracy and environmental robustness in both indoor and outdoor scenarios under a pure laser configuration. The remainder of this paper is organized as follows: Section 2 elaborates the fundamental principles and implementation framework of the improved FLOAM algorithm, including the point cloud preprocessing mechanism based on SOR filtering and the frame-to-frame registration strategy optimized by the GICP algorithm. Section 3 establishes the pure-laser-based mobile robot experimental platform, designs experimental protocols for typical indoor corridor and outdoor plaza scenarios, carries out offline experimental validation in both Gazebo simulation and real-world environments, and comprehensively evaluates the performance superiority of the proposed algorithm through quantitative error analysis and qualitative mapping quality verification. Section 4 systematically discusses the strengths and existing limitations of the proposed algorithm and clarifies its applicable scenarios and potential optimization directions. Section 5 summarizes the core research findings of this work, and prospects future research directions for laser SLAM, including multi-sensor fusion, dynamic scene robustness enhancement, and loop closure detection optimization.

2. Algorithm Principles and Analysis

2.1. System Flow of the Improved FLOAM Algorithm

This paper adopts the original processing workflow of the classical FLOAM algorithm and integrates the Generalized Iterative Closest Point (GICP) algorithm’s accurate inter-frame registration with the Statistical Outlier Removal (SOR) algorithm’s powerful noise filtering. By rationally integrating these two core improved modules into the corresponding processing stages of the algorithm, the improved FLOAM algorithm completely constructs the execution flow (Figure 1). This modular design enables independent optimization or replacement of each functional module. The specific implementation steps are detailed as follows.
Initially, the raw point cloud data captured by the LiDAR is fed into the pipeline. The SOR algorithm preprocesses the raw point cloud, detecting and removing noise points and isolated points through their statistical characteristics, and outputs high-quality point cloud data for later feature extraction. Next, the traditional FLOAM feature extraction logic is applied to the filtered point cloud to detect edge and planar features, which serve as the input for pose estimation. The GICP module then performs inter-frame registration on these features; optimizing the covariance-weighted matching cost function yields a robust estimate of the robot's actual pose. On that basis, an incremental point cloud map [26] is constructed, synchronously updating the robot's pose trajectory and the global point cloud map, and the trajectory and map are dynamically optimized through iterative computation. Finally, the pose estimate for the current frame is output, while a standard interface is reserved to support the flexible expansion of subsequent functional modules such as loop closure detection and global optimization.
Figure 1 visually illustrates the overall workflow structure of the improved FLOAM algorithm proposed in this paper, clearly reflecting the logical connections between processing stages and the interrelationships among modules.

2.2. Introduction of the GICP Algorithm and Robust Registration Improvement

Inter-frame registration is the core procedure for pose estimation in laser SLAM: its accuracy directly affects the reliability of the robot's localization and the consistency of the map [27]. The general ICP registration technique adopted by the traditional FLOAM algorithm relies largely on Euclidean distance as the matching cost function, overlooking the local geometric distribution of the point clouds. This method has serious limitations in complicated situations under a pure laser configuration (e.g., sparse features and dense noise): it is sensitive to the initial pose, has weak interference resistance, and is prone to matching failure or cumulative error [19]. To address this issue, we replace the conventional ICP method with the GICP algorithm. Integrating point cloud covariance information into the registration logic improves the accuracy and robustness of feature matching while complying with the lightweight requirement of pure laser configurations.
GICP is a modified ICP algorithm that uses local shape information from both the source and target point clouds. It characterizes the spatial distribution of points with a covariance matrix and uses maximum likelihood estimation to solve for the optimal rigid transformation parameters, unifying point-to-point and point-to-plane registration. More precisely, to capture the distribution characteristics of a point cloud region, GICP uses the $k$-neighborhood point set $N(p_i) = \{p_i^1, p_i^2, \ldots, p_i^k\}$ of each source point $p_i = (x, y, z)$ to calculate the covariance matrix. With surface-normal-based regularization [28], the following equation applies:
$$\Sigma_{p_i} = \frac{1}{k-1} \sum_{t=1}^{k} \left(p_i^t - \bar{p}_i\right)\left(p_i^t - \bar{p}_i\right)^{T} + \lambda I$$
In the formula, $\bar{p}_i$ denotes the centroid of the $k$-nearest-neighbor set of the source point $p_i$, calculated as $\bar{p}_i = \frac{1}{k}\sum_{t=1}^{k} p_i^t$ (unit: m); $\lambda$ is the regularization coefficient (set to $10^{-6}$) to prevent matrix singularity; $I$ is the $3 \times 3$ identity matrix; and the covariance matrix $\Sigma_{q_j}$ of each target point $q_j$ is calculated identically to that of the source point. This modeling approach enables GICP to capture the smoothness and density distribution characteristics of local point cloud surfaces, providing a basis for subsequent weighted matching.
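As an illustration, the regularized neighborhood covariance above can be sketched in a few lines of numpy. The function name and the brute-force neighbor search are our own choices for readability (a KD-tree would be used in a real pipeline), not part of the FLOAM or GICP implementations:

```python
import numpy as np

def neighborhood_covariance(points, idx, k=10, lam=1e-6):
    """Regularized covariance of the k-nearest-neighbor set of points[idx]:
    Sigma = 1/(k-1) * sum_t (p_t - mean)(p_t - mean)^T + lam * I."""
    p = points[idx]
    d = np.linalg.norm(points - p, axis=1)
    nn = np.argsort(d)[1:k + 1]              # k nearest neighbors, self excluded
    neigh = points[nn]
    mean = neigh.mean(axis=0)                # neighborhood centroid p_bar
    diff = neigh - mean
    return diff.T @ diff / (k - 1) + lam * np.eye(3)
```

The `lam * np.eye(3)` term mirrors the $\lambda I$ regularizer: it keeps the matrix invertible even for degenerate (e.g., perfectly planar) neighborhoods.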
GICP assumes that both the source point $p_i$ and the target point $q_j$ follow Gaussian distributions with these covariances. The matching error term is defined as the Mahalanobis distance weighted by the information matrix (the inverse of the joint covariance matrix), replacing the Euclidean distance of traditional ICP. The equations are as follows:
$$d_M^2\left(T(p_i), q_j\right) = \left(T(p_i) - q_j\right)^{T} \Sigma_{ij}^{-1} \left(T(p_i) - q_j\right)$$
$$\Sigma_{ij} = R\,\Sigma_{p_i}\,R^{T} + \Sigma_{q_j}$$
Here, $T(\cdot)$ represents the rigid transformation operator (calculated as $T(p) = Rp + t$, in meters); $R$ denotes the rotation matrix; $t$ signifies the translation vector; $\Sigma_{ij}$ is the joint covariance matrix of the matched point pair; $\Sigma_{ij}^{-1}$ is its inverse; and $p_i$ and $q_j$ denote feature points in the source and target point clouds, respectively (in meters).
The introduction of the Mahalanobis distance turns the matching cost into a confidence measure over the local distribution of the point clouds. When the local surface geometries of the source and target points are similar (low covariance), the information matrix assigns higher weights to them, so priority is given to such matching pairs. Conversely, when the local point cloud distribution is sparse (high covariance), the weights are automatically reduced, suppressing interference from erroneous matching pairs. With this error definition, GICP solves for the optimal rigid transformation parameters by maximum likelihood estimation, which is equivalent to minimizing the following objective function:
$$\min_{T} J(T) = \frac{1}{2} \sum_{i=1}^{N} \left(T(p_i) - q_i\right)^{T} \Sigma_{ii}^{-1} \left(T(p_i) - q_i\right)$$
In the formula, $N$ represents the total number of matched point pairs and $J(T)$ denotes the GICP registration cost function. GICP stabilizes iterative convergence by updating the information matrix at every iteration, as shown below:
$$I_{ij}^{(k+1)} = \left(R^{(k+1)}\,\Sigma_{p_i}\,\left(R^{(k+1)}\right)^{T} + \Sigma_{q_j}\right)^{-1}$$
where $I_{ij}^{(k+1)}$ is the information matrix at iteration $k+1$ and $R^{(k+1)}$ is the rotation matrix at iteration $k+1$. Through iterative updates, the algorithm progressively approximates the optimal solution with the smallest overall weighted error, enhancing the convergence and robustness of the registration process.
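To make the covariance weighting concrete, the following minimal numpy sketch evaluates the GICP objective $J(T)$ for a set of pre-associated point pairs. Data association and the iterative minimization over $R$ and $t$ are deliberately omitted, and all names are illustrative rather than taken from any library:

```python
import numpy as np

def gicp_cost(R, t, src, tgt, cov_src, cov_tgt):
    """Evaluate J(T) = 1/2 * sum_i r_i^T Sigma_i^{-1} r_i for pre-associated
    pairs, with r_i = R p_i + t - q_i and Sigma_i = R Sigma_pi R^T + Sigma_qi."""
    J = 0.0
    for p, q, Sp, Sq in zip(src, tgt, cov_src, cov_tgt):
        r = R @ p + t - q                      # residual of the transformed source point
        S = R @ Sp @ R.T + Sq                  # joint covariance of the matched pair
        J += 0.5 * r @ np.linalg.solve(S, r)   # Mahalanobis-weighted squared error
    return J
```

A pair whose joint covariance is large (sparse, unreliable neighborhood) contributes less to `J` for the same Euclidean residual, which is exactly the down-weighting behavior described above.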
The covariance calculation introduced by GICP increases the computational load. Lin et al. [29] balanced accuracy and real-time performance via parameter optimization to enable lightweight deployment on mobile robots. TransformationEpsilon = 10−6 (convergence threshold for the transformation matrix) and FitnessEpsilon = 10−6 (convergence threshold for the matching error) are used to prevent real-time degradation caused by excessive iterations. This study conducted dedicated benchmark testing on the equipment integrated into the test platform to quantify GICP's computational overhead and processing time. Testing was performed under standard input conditions for the Leishen C16 LiDAR: the sensor outputs point cloud frames at 10 Hz, with each frame containing approximately 2.8 × 104 raw points. After noise removal via SOR filtering and feature extraction, each frame contains 800 to 1200 valid feature points for GICP registration (including 200 to 300 edge features and 600 to 900 planar features), representing only 3% to 4% of the raw points. This significantly reduces the computational demand of solving the covariance matrices and processing the iterative registration. Under identical hardware and feature input conditions, classical ICP averages 2.1 ms per frame for registration, while the improved GICP averages only 2.7 ms per frame, with a maximum processing time below 3 ms. The two methods exhibit comparable runtime performance, with GICP's additional load from joint covariance computation measured below 30%. This low overhead stems from two simplifications. First, by pre-checking feature pair adjacency during the pre-registration phase, covariance-weighted computation is applied only to valid feature pairs (about 50% of candidates), eliminating redundant operations for invalid matches.
Second, GICP's joint covariance-weighted matching mechanism significantly enhances convergence efficiency: under practical operating conditions, it reaches the specified convergence threshold in just 15 to 25 iterations on average, far below the maximum iteration limit of 200, effectively reducing resource waste from invalid iterations. Test results demonstrate that the improved GICP registration method can stably keep up with the real-time processing cycle of the 10 Hz LiDAR while maintaining full accuracy, fully meeting the real-time requirements of mobile robot application scenarios.
In summary, compared with traditional ICP, GICP has clear advantages in pure laser configuration scenarios, mainly manifested in three aspects: (1) low sensitivity to the initial pose, so GICP can converge reliably even in feature-poor regions; (2) strong resistance to interference thanks to covariance weighting, which suppresses the influence of outlier noise and dynamic stray points; and (3) more consistent registration results, which provide stable pose constraints for subsequent incremental map construction. Incorporating GICP into the FLOAM front-end feature matching directly improves the accuracy of inter-frame registration and offers a robust foundation for prospective loop closure detection and global optimization modules.

2.3. Principles and Implementation of SOR Filtering Algorithm

Preprocessing the point cloud is a necessary step in laser SLAM to obtain high-quality features and guarantee the accuracy of subsequent registration. Laser point clouds in pure LiDAR setups are prone to measurement noise and environmental interference, generating numerous outlier noise points and dynamic stray points. These abnormal points interfere with feature extraction, affect convergence during registration, and cause positional errors to accumulate [18]. Traditional FLOAM does not contain a targeted point cloud denoising mechanism, and the direct use of a simple method such as median filtering has difficulty balancing denoising and detail preservation. To address this issue, the mature SOR filtering algorithm is adopted in this study for raw point cloud preprocessing. Through statistical analysis of the distribution characteristics of local neighborhoods in the point cloud, it effectively removes outlier noise with minimal damage to real environmental elements, providing stable input for subsequent edge and plane feature extraction.
The fundamental idea of SOR filtering is the assumption that the distance distributions within local point cloud neighborhoods are Gaussian. A reasonable threshold for outlier removal was calculated in [23] based on the average distance of each point to its neighbors. The SOR variant adopted here replaces this simple average with a weighted average distance, which significantly enhances its effectiveness: it removes isolated noise while avoiding the misclassification of genuinely sparse environmental structures, such as fine components and edge contours, making it better suited than the standard formulation for preprocessing complex scenes under a pure laser configuration. Its specific implementation is as follows. Unlike standard filtering, which uses the simple average distance, SOR filtering here uses the weighted average distance so that each point's local neighborhood distribution is characterized more accurately: neighbors closer to the target point contribute more to the classification of that point. In the first step, the weighting factor between each point and its neighborhood points is calculated using the formula below:
$$\omega_i^t = \exp\left(-\frac{\left\|p_i - p_i^t\right\|_2}{d_{avg}}\right)$$
In the equation, $p_i$ signifies the point being evaluated in the point cloud (unit: m); $p_i^t$ indicates the $t$-th of the $k$ nearest neighbors of $p_i$ (unit: m); and $d_{avg}$ is the global average neighbor distance over the point cloud, which normalizes the exponent and avoids a zero denominator. It is computed as:
$$d_{avg} = \frac{1}{Nk} \sum_{i=1}^{N} \sum_{t=1}^{k} \left\|p_i - p_i^t\right\|_2$$
The point cloud's total number of points is represented by $N$, the Euclidean norm is denoted by $\|\cdot\|_2$, and the weighting factor is signified by $\omega_i^t$; neighbors closer to the point have a larger weighting factor.
Step two uses the weighting factors obtained in step one to compute the weighted average distance over the $k$ nearest neighbors, as follows:
$$\bar{d}_i = \frac{\sum_{t=1}^{k} \omega_i^t \left\|p_i - p_i^t\right\|_2}{\sum_{t=1}^{k} \omega_i^t}$$
In the formula, $\bar{d}_i$ represents the weighted average $k$-nearest-neighbor distance of $p_i$ (unit: m); $k$ denotes the number of nearest neighbors; and $\omega_i^t$ is the weighting factor calculated in the first step. This weighting mechanism makes the distance statistic of the target point more closely reflect its true neighborhood distribution, effectively reducing the interference of sparse neighborhood points on the classification results. The weighted average distances $\bar{d}_1, \bar{d}_2, \ldots, \bar{d}_N$ of all points are assumed to follow a Gaussian distribution. By calculating the overall mean $\mu$ and standard deviation $\sigma$ of this distribution, the outlier detection threshold $\Theta$ can be set, as follows:
$$\Theta = \mu + \alpha \sigma \left(1 + \frac{\rho_{avg}}{\rho_i}\right)$$
The formulas for calculating the overall mean $\mu$ and standard deviation $\sigma$ are as follows:
$$\mu = \frac{1}{N} \sum_{i=1}^{N} \bar{d}_i$$
$$\sigma = \sqrt{\frac{1}{N-1} \sum_{i=1}^{N} \left(\bar{d}_i - \mu\right)^2} + \delta$$
In these formulas, $N$ is the total number of points in the point cloud; $\delta$ is the anti-zero correction term (set to $10^{-5}$), designed to prevent threshold failure when the standard deviation is zero; $\alpha$ is the threshold coefficient, user-adjustable according to the characteristics of the scene (typically between 1 and 2) to control the strictness of outlier detection; $\rho_i$ is the local point cloud density, calculated as $\rho_i = k / V_i$, where $V_i$ is the volume of the minimum bounding sphere of the $k$ nearest neighbors of $p_i$ (unit of $\rho_i$: 1/m3); $\rho_{avg}$ is the global average density of the point cloud, calculated as $\rho_{avg} = \frac{1}{N}\sum_{i=1}^{N} \rho_i$ (unit: 1/m3); and $\Theta$ is the outlier detection threshold (unit: m). A point $p_i$ is declared an outlier noise point and removed from the point cloud if its weighted average distance exceeds the threshold, i.e., $\bar{d}_i > \Theta$. In addition, the threshold calculation is equipped with an adaptive mechanism that accounts for the local point cloud density relative to the global average: when the local density is lower than the global average, the neighborhood is relatively sparse and the weighted average distance naturally increases, so the loosened threshold helps avoid misclassifying sparse structure points as outliers. Conversely, when the local point cloud density is high, the threshold becomes stricter, which effectively removes noise in those regions [18].
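Putting the weighted distance, the Gaussian statistics, and the density-adaptive threshold together, a self-contained numpy sketch of the filter might look as follows. The brute-force O(N²) neighbor search, the bounding-sphere approximation via the k-th neighbor distance, and the clamp on the density ratio (which keeps a fully isolated point from loosening its own threshold indefinitely) are practical simplifications of ours, not details specified in the text:

```python
import numpy as np

def sor_filter(points, k=10, alpha=1.0, delta=1e-5):
    """Density-adaptive statistical outlier removal (illustrative sketch):
    keep points whose weighted average k-NN distance d_bar_i satisfies
    d_bar_i <= mu + alpha*sigma*(1 + rho_avg/rho_i)."""
    D = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    nn_d = np.sort(D, axis=1)[:, 1:k + 1]         # k nearest distances, self excluded
    d_avg = nn_d.mean()                            # global average neighbor distance
    w = np.exp(-nn_d / d_avg)                      # closer neighbors weigh more
    d_bar = (w * nn_d).sum(axis=1) / w.sum(axis=1)
    mu = d_bar.mean()
    sigma = d_bar.std(ddof=1) + delta              # anti-zero correction delta
    r_k = nn_d[:, -1]                              # radius of the k-NN bounding sphere
    rho = k / (4.0 / 3.0 * np.pi * r_k ** 3)       # local density rho_i = k / V_i
    ratio = np.clip(rho.mean() / rho, 0.0, 2.0)    # clamped guard, our addition
    theta = mu + alpha * sigma * (1.0 + ratio)
    return points[d_bar <= theta]
```

With $k = 10$ and $\alpha = 1.0$, a point far from a uniform cluster gets a large $\bar{d}_i$ and is dropped, while cluster points stay well below their thresholds.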
This research optimizes the SOR filter parameters through experiments in various scenarios to balance the noise reduction performance, detail preservation, real-time performance, and lightweight requirements of a pure laser configuration for mobile robots. The number of nearest neighbors is set to $k = 10$. This value ensures that the local neighborhood distributions remain statistically reliable while preventing an excessive number of neighbors from overloading computation, keeping the single-frame neighborhood search within 1 ms. The threshold coefficient $\alpha = 1.0$ adopts a relatively stringent standard for effectively handling obvious outlier noise, such as point discontinuities or a small number of dynamic stray points caused by measurement error, while the weighted distance mechanism allows the core point clouds of sparse structures, such as thin branches and small parts, to be retained. SOR filtering involves an inherent trade-off: a too-small $\alpha$ may cause the loss of genuine sparse features, while a too-large $\alpha$ fails to denoise effectively. Comparative experiments conducted in indoor and outdoor scenarios verify that $\alpha = 1.0$ achieves significant noise removal while preserving the primary geometric structures, fully satisfying the preprocessing requirements of pure LiDAR SLAM.
To reveal how the core SOR filter parameters influence overall algorithm performance and to validate the superiority of the selected parameter combination, univariate sensitivity analysis experiments were conducted for the two core parameters: the number of nearest neighbors $k$ and the threshold coefficient $\alpha$.
As the core statistical parameter for calculating the weighted average distance of individual points, $k$ directly determines both the reliability with which local neighborhood distribution characteristics are represented and the computational cost of the neighborhood search. When $k$ is too small ($k \le 5$), insufficient neighborhood samples prevent accurate capture of the point cloud's local Gaussian distribution characteristics, leading to statistical bias. This significantly reduces the robustness of anomaly detection and fails to effectively eliminate isolated noise points generated by measurement noise; consequently, the proportion of false feature points increases substantially and registration errors accumulate. When $k$ falls within the intermediate range $8 \le k \le 15$, the neighborhood samples adequately represent the local point cloud distribution and statistical judgments stabilize, with anomaly detection accuracy above 95% and more than 92% of critical geometric features preserved. The single-frame neighborhood search time stabilizes between 0.8 and 1.2 ms, matching the real-time processing cycle of the 10 Hz LiDAR. At $k = 10$, noise suppression, feature retention, and computational load achieve the best balance, delivering the highest positioning accuracy in both indoor and outdoor environments. If $k$ is too large ($k \ge 20$), the computational load of the neighborhood search increases nonlinearly: at $k = 20$ the single-frame neighborhood search exceeds 2.5 ms, and at $k = 30$ it surpasses 5 ms, both exceeding the real-time processing threshold for lightweight embedded platforms.
As the core control parameter for outlier detection, the threshold coefficient $\alpha$ directly determines the strictness of noise suppression and fundamentally influences the balance between noise reduction and environmental detail preservation. When $\alpha$ is too low ($\alpha \le 0.5$), the detection threshold becomes overly stringent, retaining only points that align closely with the neighborhood distribution. Under these conditions, outlier noise is nearly eliminated, but low-density transitional features, such as edge points on smooth indoor walls or sparsely distributed tree point clouds outdoors, are frequently misclassified as outliers. This reduces the number of valid feature points by over 40%, making inter-frame registration highly prone to failure in sparsely featured regions. When $\alpha$ falls within the moderate range $0.8 \le \alpha \le 1.5$, the threshold effectively balances noise reduction with feature preservation: the removal rate of obvious outliers such as measurement noise and dynamic drift points exceeds 90%, key geometric features such as corridor door frames and outdoor vegetation contours are preserved, and the error rate of feature point extraction drops below 5%, providing high-quality input for GICP registration. Specifically, at $\alpha = 1.0$ the indoor scene achieves an optimized mean absolute error (MAE) of 1.28 cm, while the outdoor scene maintains a stable MAE of 2.29 cm, balancing rigorous noise reduction with precise feature preservation. Conversely, when $\alpha$ is too large ($\alpha \ge 2.0$), the threshold becomes overly lenient, removing only extremely isolated noise points. The filtering efficiency for minor anomalous points and drift points generated by moving objects then declines significantly, residual noise in the input point cloud increases substantially, and pseudo-feature points account for over 20% of the total. This produces a large number of anomalous matching pairs during GICP registration, undermining convergence stability and significantly amplifying cumulative error.
In summary, under the computational constraints of lightweight pure laser registration, the parameter combination k = 10 and α = 1.0 achieves a globally optimal balance of noise elimination, feature preservation, real-time performance, and positioning accuracy. It satisfies both the detail-retention requirements of low-feature indoor scenes and the strong anti-interference demands of complex outdoor environments.
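As a concrete illustration of the roles of k and α discussed above, the following is a minimal SOR sketch in plain numpy. It uses a brute-force neighbor search for clarity (a production pipeline, e.g., PCL's StatisticalOutlierRemoval, would use a KD-tree), and the toy point cloud is hypothetical, not the paper's data:

```python
import numpy as np

def sor_filter(points, k=10, alpha=1.0):
    """Statistical Outlier Removal sketch (paper parameters: k=10, alpha=1.0).

    For each point, compute the mean distance to its k nearest neighbours;
    points whose mean distance exceeds mu + alpha*sigma (statistics taken
    over the whole cloud) are treated as outliers and removed.
    """
    diff = points[:, None, :] - points[None, :, :]
    dists = np.linalg.norm(diff, axis=-1)              # (N, N) pairwise distances
    dists_sorted = np.sort(dists, axis=1)[:, 1:k + 1]  # drop self-distance (0)
    mean_d = dists_sorted.mean(axis=1)
    mu, sigma = mean_d.mean(), mean_d.std()
    mask = mean_d <= mu + alpha * sigma                # inlier condition
    return points[mask], mask

# toy scan: a dense near-planar patch plus one isolated noise point
rng = np.random.default_rng(0)
plane = rng.uniform(0.0, 1.0, size=(200, 3)) * np.array([1.0, 1.0, 0.01])
noise = np.array([[5.0, 5.0, 5.0]])                    # far-away outlier
cloud = np.vstack([plane, noise])

filtered, mask = sor_filter(cloud, k=10, alpha=1.0)
```

The isolated point's mean k-neighbor distance lies far above the μ + α·σ threshold, so it is discarded, while the dense patch survives essentially intact; raising α loosens the threshold exactly as described above.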
SOR filtering not only discards outlier noise points but also improves feature extraction accuracy by correcting the curvature calculation of the point cloud [30]. Isolated noise easily corrupts high-curvature features (corners, edges, etc.), producing pseudo-features. To suppress this interference, a point cloud curvature correction formula is introduced as follows:
κ_i = tr(Σ_{p_i}) / ‖Σ_{p_i}‖_F
In the above formula, Σ_{p_i} represents the local covariance matrix of the filtered point p_i (unit: m²); tr(·) denotes the matrix trace; ‖·‖_F denotes the Frobenius norm; and κ_i is the corrected point cloud curvature, used for subsequent edge and plane feature classification. After filtering, the number of pseudo-feature points is reduced by over 90%, and the extracted corner and plane points conform more closely to the real environmental geometry. SOR filtering also partially eliminates dynamic stray points (pedestrians, temporary obstructions, and so on), so that matching focuses only on static environmental features, indirectly enhancing the stability of inter-frame registration. This property is especially important in dynamic scenes. Despite its limitations in detecting dynamic objects, SOR filtering can be viewed as a coarse dynamic noise filter that assists the algorithm's statistical feature selection, improving its robustness to environmental changes [31].
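The corrected curvature can be computed directly from the formula above. The sketch below uses numpy and synthetic neighborhoods (the paper's exact neighborhood selection is not reproduced); the eigenvalue intuition is that a line-like edge (one dominant eigenvalue) drives κ toward 1, while a planar patch (two equal eigenvalues) drives κ toward √2:

```python
import numpy as np

def corrected_curvature(neighborhood):
    """kappa = tr(Sigma) / ||Sigma||_F, with Sigma the local covariance
    matrix of a (filtered) neighbourhood of points."""
    centered = neighborhood - neighborhood.mean(axis=0)
    sigma = centered.T @ centered / len(neighborhood)  # 3x3 local covariance
    return np.trace(sigma) / np.linalg.norm(sigma, "fro")

# synthetic neighbourhoods: an edge-like segment and a planar patch
rng = np.random.default_rng(1)
edge = np.column_stack([rng.uniform(-1, 1, 200), np.zeros(200), np.zeros(200)])
edge += rng.normal(scale=1e-4, size=edge.shape)        # tiny jitter off the line
plane = rng.uniform(-1, 1, size=(200, 3)) * np.array([1.0, 1.0, 1e-4])

kappa_edge = corrected_curvature(edge)                 # close to 1
kappa_plane = corrected_curvature(plane)               # close to sqrt(2)
```

Because the ratio depends only on the eigenvalue spectrum of Σ, isolated noise points removed by SOR can no longer inflate the spurious eigenvalues that produce pseudo-features.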
In summary, SOR filtering proceeds in three stages: locally weighted distance calculation, statistical threshold determination, and curvature correction, yielding a high-quality point cloud as input to the improved FLOAM pipeline. Filtering ensures data quality while GICP achieves accurate registration; together they enhance the positioning accuracy and interference resistance of the algorithm under the pure laser configuration, synergistically optimizing the subsequent GICP registration.
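The hand-off from filtering to registration can be illustrated by evaluating the Mahalanobis matching cost that GICP minimizes. The sketch below evaluates the cost for given correspondences only (it is not the full iterative optimizer, and all variable names and toy data are illustrative):

```python
import numpy as np

def gicp_cost(src, tgt, cov_src, cov_tgt, R, t):
    """Mahalanobis matching cost that GICP minimises over the pose (R, t).

    src, tgt:          (N, 3) matched source/target points
    cov_src, cov_tgt:  (N, 3, 3) local covariance matrices per point
    R, t:              candidate rotation (3, 3) and translation (3,)
    """
    d = tgt - (src @ R.T + t)                # residuals b_i - (R a_i + t)
    M = cov_tgt + R @ cov_src @ R.T          # combined covariance per pair
    Minv = np.linalg.inv(M)                  # batched 3x3 inverses
    return float(np.einsum("ni,nij,nj->", d, Minv, d))

# sanity check on toy data: the cost is zero at the true (identity)
# alignment and grows once the clouds are offset
rng = np.random.default_rng(2)
pts = rng.normal(size=(50, 3))
covs = np.tile(np.eye(3) * 1e-2, (50, 1, 1))
cost_aligned = gicp_cost(pts, pts, covs, covs, np.eye(3), np.zeros(3))
cost_offset = gicp_cost(pts, pts + 0.1, covs, covs, np.eye(3), np.zeros(3))
```

Because the per-point covariances enter the cost directly, cleaner covariances from SOR-filtered neighborhoods translate immediately into a better-conditioned registration objective.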

3. Offline Experimental Validation and Performance Analysis of the Improved FLOAM Algorithm

To verify the positioning accuracy, mapping performance, and environmental robustness of the improved FLOAM algorithm under pure LiDAR conditions, a pure LiDAR experimental platform was established on a mobile robot, and offline experiments were designed for typical indoor and outdoor scenarios. Quantitative analysis of the absolute pose error (APE) of the pose estimates was performed using EVO, an open-source trajectory evaluation toolkit. A qualitative and quantitative performance evaluation of the point cloud maps, attitude angle tracking [32], velocity evolution, and positioning error distribution was also carried out. The results were compared with those of the traditional FLOAM algorithm to verify the validity and engineering applicability of the improved method.

3.1. Experimental Platform Setup

The experimental platform for this study is the Wheeltec four-wheel omnidirectional mobile robot chassis, equipped with Mecanum wheels to achieve flexible omnidirectional movement, meeting path planning and motion control requirements for both indoor and outdoor scenarios. A Leishen C16 3D LiDAR is mounted atop the platform as the sole perception sensor. This LiDAR features 16 laser beams and 360° full-field scanning capability, with a standard scanning frequency (frame rate) of 10 Hz, outputting environmental point cloud data in real time as the raw perception input for the algorithms. Additionally, the onboard computing unit is an NVIDIA Jetson Xavier NX, whose core configuration includes a Carmel ARM v8.2 64-bit hexa-core processor, a 384-core Volta-architecture GPU (maximum frequency 1.9 GHz), and 8 GB of LPDDR4x memory. This device represents mainstream lightweight deployment hardware for mobile industrial robots, offering clear engineering and technical reference value.
To maintain pure LiDAR experimental conditions, no IMU, wheel odometer, or other auxiliary sensors were integrated into the platform; the LiDAR's point cloud data was the sole source for localization and mapping tasks. During the experiment, the LiDAR continuously captured environmental point cloud data at the specified frequency, with the data packed into standard ROS data packets. The enhanced FLOAM algorithm processes these packets offline with parameters tuned to the indoor and outdoor environment characteristics, guaranteeing algorithmic stability using LiDAR alone, without an IMU. Figure 2 shows the physical experimental platform.

3.2. Indoor Scenario Experimental Validation

3.2.1. Indoor Laboratory Environment Design

The indoor experiment used a corridor of a teaching building as the test scenario (Figure 3). The environment has a flat floor and vertical, smooth walls, and is almost devoid of global environmental features; only localized areas contain limited clutter, such as bulletin boards and signage. It is a classic hard testbed for laser SLAM algorithms: feature sparsity readily causes deviation in the algorithm's pose estimation and the accumulation of errors, testing the robustness and positioning stability of the improved FLOAM algorithm under low-feature conditions. Because the site had low pedestrian traffic and steady lighting, dynamic interference and illumination changes did not affect the laser point clouds during the experiment [33]. In this scenario, the robot's average travel speed is approximately 0.5 m/s, with stable speeds maintained within 0.4 to 0.6 m/s and peaks of about 1.0 m/s during brief acceleration phases following turns.

3.2.2. Indoor Scene Point Cloud Mapping Performance

The improved FLOAM algorithm produced a 3D point cloud map of the teaching building corridor scene (Figure 4). Combined with the first-floor lobby 2D floor plan reference (Figure 5) and the robot's 3D trajectory (Figure 6), the reconstruction of the indoor environment can be assessed. The mapping results show that the improved algorithm accurately reconstructs the overall spatial structure of the corridor: distinct point cloud layers are generated for the parallel walls on each side, the open central passageway, the ceiling, and the floor. Local elements such as the door frame at the end of the corridor and the poster board on the side wall are also accurately mapped. The horizontal grid (actual scale) overlay shows that the scale of the mapping results is close to the real environment, with no scale distortion.
Because indoor features are relatively sparse, localized sparsity appears in the point clouds on smooth wall surfaces. This is not an algorithmic defect but the well-known behavior of LiDAR on low-reflectivity, textureless planes. Overall, the enhanced FLOAM algorithm generates point cloud maps with consistent macrostructure and no significant distortion under the pure laser configuration. Mapping never failed despite the absence of IMU inertial data, fully validating the algorithm's indoor mapping capability in low-feature environments.

3.2.3. Indoor Scene Trajectory Estimation and Positioning Error Analysis

(1)
Attitude Angle Tracking Performance
The roll, pitch, and yaw angle variations of the robot's indoor movement are presented in Figure 7, Figure 8, and Figure 9, respectively. The roll and pitch angles generally vary within ±2° and remain near 0°, meaning the robot stays essentially horizontal while driving over the flat corridor surface, without significant tilting or jolting. The small attitude fluctuations stem from minor ground bumps, mechanical wheel vibration, and normal estimation noise of the laser odometry, and they do not accumulate. The improved algorithm demonstrates stable tracking of horizontal attitude.
The robot was programmed to follow a predefined motion path: moving straight to the corridor corner, making a 90° right turn, continuing straight until the end of the path, and then making another right turn. The yaw angle curve matches this path closely. Between 0 s and 50 s, the yaw angle stays near 0°, indicating straight-line travel. From 50 to 60 s, the yaw angle steps down to −90°, corresponding to the 90° right turn at the end of the corridor. From 60 to 80 s, the yaw angle varies slightly around the new heading as the robot proceeds straight in the post-turn direction. At around 80 s, the yaw angle steps to approximately −180°, indicating another 90° right turn. From 80 to 100 s, the yaw angle drifts slowly back to approximately −170°, reflecting minor heading adjustments at the end of the path. The yaw curve reproduces the robot's actual turning angles with high accuracy, and no noticeable heading drift occurs during straight-line travel, clearly indicating that the improved algorithm tracks the robot's heading effectively.
(2)
Speed Evolution Characteristics
Figure 10 shows the time-varying curve of the robot's running speed in the indoor scene. The speed profile closely matches the actual running state, with no abrupt anomalies. From 0 s to 10 s is the start-up period, with speed close to 0 m/s. From 20 s to 50 s is the steady driving period, with speed maintained at 0.4–0.6 m/s. The speed decrease at about 50 s is the anticipatory deceleration before turning. Between 55 s and 60 s, the speed rises to about 1.0 m/s, likely an estimation fluctuation at the instant of turning or a brief acceleration immediately afterward. Between 60 s and 90 s, the speed fluctuates at 0.5–0.8 m/s, reflecting acceleration and deceleration adjustments in the second corridor segment. After 90 s, the speed gradually decreases, and the robot stops at around 100 s.
The robot's average speed while travelling was about 0.5 m/s, with a peak of approximately 1.0 m/s. The velocity curve showed minor peaks during turning and smooth variation elsewhere. At 40–50 s, only small noise at the ±0.1 m/s level was present, within the normal error range of laser odometry that estimates velocity by differentiating poses. The results show that the odometry solution of the improved FLOAM algorithm yields consistent and stable velocity estimates, meeting the practical velocity-perception requirements of indoor navigation.
(3)
Quantitative Analysis of Positioning Error
The absolute pose error (APE) of the improved FLOAM algorithm in the indoor scenario was quantitatively analyzed using the EVO tool, and the error distribution along the trajectory is shown in Figure 11. The distribution shows that the positioning error of most trajectory points is below 0.05 m; minor error peaks of 0.08–0.09 m appear only in localized areas such as corridor turns. The positioning error over the whole trajectory is always kept within 0.1 m. Moreover, the error upon the robot's return to the starting point fell nearly to the minimum value, indicating that no error accumulation occurred.
To further quantitatively validate the algorithm’s accuracy, core metrics including maximum absolute error, mean absolute error, median absolute error, and root mean square error (RMSE) were selected. The results are shown in Table 1. As shown in Table 1, the improved FLOAM algorithm achieved an RMSE of 0.021535 m (approximately 2 cm) in indoor scenarios, a maximum absolute error of 0.097523 m (less than 10 cm), and a mean absolute error of only 0.012841 m (approximately 1.3 cm).
These findings show that the enhanced FLOAM method reaches centimeter-level positioning accuracy with a pure laser configuration in indoor scenarios lacking substantial features, without the help of IMUs or odometers. It exhibits negligible cumulative error, offering high positioning reliability and map consistency.
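For transparency, the four metrics reported in Table 1 can be reproduced from aligned trajectories; the sketch below computes the translational APE statistics in the same way EVO reports them (EVO additionally supports rotational components and trajectory alignment, omitted here), on synthetic data rather than the experiment's trajectories:

```python
import numpy as np

def ape_stats(est, ref):
    """Translational APE statistics (max, mean, median, RMSE).

    est, ref: aligned (N, 3) arrays of estimated/reference positions.
    """
    err = np.linalg.norm(est - ref, axis=1)  # per-pose absolute position error
    return {
        "max": float(err.max()),
        "mean": float(err.mean()),
        "median": float(np.median(err)),
        "rmse": float(np.sqrt((err ** 2).mean())),
    }

# toy example: straight-line reference with a ~1 cm oscillating estimate error
t = np.linspace(0.0, 10.0, 101)
ref = np.stack([t, np.zeros_like(t), np.zeros_like(t)], axis=1)
est = ref + 0.01 * np.sin(t)[:, None]        # same small offset on all axes
stats = ape_stats(est, ref)
```

By construction mean ≤ RMSE ≤ max, which is why the RMSE of roughly 2 cm sits between the 1.3 cm mean and the sub-10 cm maximum in the indoor results.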
(4)
Quantitative Comparison of Simulation Results
Leveraging the Gazebo simulation platform, a typical indoor simulation environment was constructed on ROS Noetic. A PX4-model mobile robot platform was employed, equipped with a Velodyne 3D LiDAR (manufactured by Velodyne Lidar Inc., California, USA) as the sole perception sensor. Throughout the process, no auxiliary sensors such as IMUs or wheel odometers were involved in the localization and mapping tasks. This setup was used to validate the robustness and accuracy of the improved FLOAM algorithm across diverse terrains; testing is illustrated in Figure 12. Table 2 indicates that in indoor environments, the improved FLOAM algorithm reduces the mean absolute pose error from 0.60 m (traditional FLOAM) to 0.32 m (a 46.7% reduction), the median error from 0.60 m to 0.27 m (a 55% reduction), and the root mean square error from 0.60 m to 0.35 m (a 41.7% reduction). The maximum error remained largely unchanged (approximately 1.20 m), the minimum error remained around 0.15 m, and the error fluctuation range (standard deviation) remained stable.

3.3. Outdoor Scenario Experimental Validation

3.3.1. Outdoor Experimental Environment Design

The outdoor experiment used the open plaza at the entrance of the college building as the test scenario (Figure 13): a paved location with clear space, no significant occlusions, and good illumination. Static elements such as trees, flower beds, steps, and building facades were present, along with localized regions of slight dynamic interference and ground undulation, bringing the scenario much closer to actual engineering applications of mobile robots. Compared with the indoor scene, the outdoor setting has a sparser feature distribution and a more complex environment. Using both indoor and outdoor settings enables a thorough assessment of the enhanced FLOAM algorithm's resilience and environmental tolerance during localization and mapping under spacious, unobstructed, and well-lit conditions. In this scenario, the robot maintains a stable travel speed of 0.5 to 0.8 m/s, with an average speed comparable to that of the indoor scenario.

3.3.2. Outdoor Scene Point Cloud Mapping Performance

Figure 14 shows the 3D point cloud map of the outdoor plaza scene produced by the enhanced FLOAM algorithm. Its outdoor mapping capacity can be assessed by comparison with the 2D schematic of the college building entrance (Figure 15) and the robot's 3D trajectory (Figure 16). The mapping results show that the improved algorithm accurately reconstructs the core spatial elements of the outdoor scene: the plaza pavement, planted trees, circular flower beds, steps, and buildings were all clearly mapped, and ground tile division lines and pathway edges were reproduced with high fidelity. The dense point clouds and continuous contours of building facades testify to the structural preservation of the improved algorithm's point cloud registration and filtering strategies [34] in large-scale environments.
The mapping outcomes in certain regions exhibit differences in point cloud density and discontinuities caused by occlusion. Likely causes are weak local reflectivity, temporary view obstruction by moving objects, and viewing-angle occlusion, perception characteristics inherent to LiDAR in complex outdoor scenarios. Overall, the enhanced FLOAM algorithm accurately represents the spatial arrangement of open outdoor areas with a pure LiDAR configuration, without substantial matching or scale distortions, demonstrating strong potential for outdoor engineering applications.

3.3.3. Outdoor Scene Trajectory Estimation and Positioning Error Analysis

(1)
Attitude Angle Tracking Performance
The roll, pitch, and yaw angles of the robot in the outdoor scenario are shown in Figure 17, Figure 18, and Figure 19, respectively. The roll and pitch angles generally vary within ±2° and experience only minor transient changes during specific timeframes, attributable to small steps and unevenness in the test area. Overall, the platform operates smoothly with little external disturbance. In addition, no cumulative drift in roll or pitch occurred throughout the entire run, verifying the stability of the improved algorithm's attitude tracking on surfaces that are not perfectly flat.
The yaw angle curve shows distinct jumps and oscillations corresponding to the robot's large-angle turning and reversing in the open outdoor environment, and the angles closely match the actual path. The yaw angle remained stable during straight-line driving, with no significant heading drift. This indicates that the proposed method tracks attitude reliably even under complex outdoor motions and that its environmental adaptability is good.
(2)
Evolutionary Characteristics of Movement Speed
The time evolution curve of the robot’s movement speed in outdoor scenarios is shown in Figure 20. Compared to indoor scenarios, the outdoor speed curve exhibits multiple fluctuating segments and short-term abrupt changes, closely aligning with the complex movement phases outdoors: In the initial phase, the speed gradually increases from zero and stabilizes. Between −45 and −35 s, the speed fluctuates frequently and approaches stagnation, corresponding to the robot’s low-speed maneuvering and obstacle avoidance in narrow passages/densely obstructed areas. Near t = −30 s, speed fluctuations intensify, corresponding to multiple decelerations and posture adjustments; near t = 0 s, speed drops to its lowest point during the robot’s turning, reversing, and path replanning process. Between t = 0 and 20 s, the speed rapidly increases with two distinct peaks, representing the robot’s swift uniform motion upon entering open areas. Between t = 20 and 25 s, the speed briefly fluctuates downward due to temporary obstacles and ground undulations along the path. After t = 30 s, the speed stabilizes with reduced fluctuations, reflecting smooth operation during the final mapping phase.
The above velocity evolution characteristics fully reflect the robot’s typical motion phases in outdoor environments, including startup, acceleration, obstacle avoidance, turning, and high-speed driving. This validates that the improved FLOAM algorithm, even in a pure laser configuration without IMU assistance, still possesses stable velocity calculation capabilities and environmental adaptability, capable of meeting complex motion control demands in outdoor settings.
(3)
Quantitative Analysis of Positioning Error
The EVO tool was used to assess the absolute pose error (APE) of the improved FLOAM algorithm outdoors; the error evolution curve is shown in Figure 21. The error distribution shows that position errors remain below 0.02 m during most periods and fluctuate smoothly around the mean line, demonstrating strong error suppression. Error spikes occurred only at certain intervals, peaking above 0.20 m at approximately 55 s and 100 s. These can be attributed to laser point cloud mismatches, temporary interference from dynamic obstacles, and quick path turns, all of which are normal error fluctuations in complex outdoor environments.
Furthermore, positioning errors in the 10–30 s and 70–90 s phases stayed low, close to or below the median line, reflecting the algorithm's ideal operating state. This implies that in outdoor open areas free of interference sources, the optimized method achieves even better positioning stability. The error standard deviation band is narrow overall, indicating that the error distribution is concentrated, with little fluctuation and good robustness.
To validate outdoor positioning accuracy quantitatively, the core error metrics of the improved algorithm were compared with those of the FLOAM algorithm. As indicated in Table 3, the improved FLOAM algorithm achieved an RMSE (root mean square error) of 0.036912 m in outdoor scenarios. Also, it achieved a mean absolute error of only 0.022917 m (roughly 2.3 cm) and a maximum absolute error of 0.235755 m (roughly 23 cm).
The experiments show that the improved FLOAM algorithm achieves centimeter-level accuracy in open outdoor environments with complex, urban-like characteristics. The error distribution plot shows no significant large-magnitude anomalies, with error values concentrated in a low range overall. This indicates that the algorithm achieves continuous, high-precision trajectory estimation under pure laser settings, meeting the general outdoor navigation accuracy requirements of mobile robots.
(4)
Quantitative Comparison of Simulation Results
Similarly leveraging the Gazebo simulation platform, with hardware and benchmarking details consistent with the indoor setup, a typical outdoor environment was constructed. As shown in Figure 22, the advantages of the improved algorithm became more pronounced, significantly reducing absolute pose error, ensuring consistency in trajectory closure, and markedly enhancing overall map accuracy. Table 4 indicates that in outdoor environments, the maximum error decreased from 1.50 m (traditional FLOAM) to 0.45 m (a 70% reduction), the mean from 0.50 m to 0.27 m (46% reduction), the median from 0.50 m to 0.25 m (50% reduction), and the RMSE from 0.50 m to 0.30 m (40% reduction), while the minimum error increased slightly from 0.05 m to 0.13 m. The standard deviation remained stable.

3.4. Generalization Capability and Engineering Application Potential of the Improved Algorithm

Section 3.2 and Section 3.3 tested the performance of the improved FLOAM algorithm in an indoor corridor and an outdoor plaza. These scenarios are common structured environments for verifying LiDAR SLAM performance, and they validate the overall localization accuracy, anti-interference capability, and real-time performance of the algorithm under a pure LiDAR configuration. To further evaluate the cross-scene generalization ability and practical engineering value of the proposed algorithm, this section introduces complex indoor architectural scenes for air handling unit (AHU) inspection tasks. Such tasks are widely required in the intelligent operation and maintenance of buildings such as hospitals, auditoriums, and office buildings. The scenario is typical of complex application situations for lightweight inspection mobile robots, and it can fully verify the environmental adaptability of the algorithm beyond standard laboratory test scenarios.
The core architecture of the proposed algorithm focuses on universal geometric feature extraction and robust point cloud registration for LiDAR point clouds, rather than relying on scenario-specific fixed thresholds or heuristic rules applicable only to a single scene, which gives it inherent cross-scene generalization potential. For the publicly available real-world AHU operation dataset [35] collected from office buildings, auditoriums, and hospital buildings with well-annotated operational data, the proposed algorithm can be directly deployed on lightweight inspection robots equipped with only a 16-channel LiDAR as the sole perception sensor. In keeping with the control-variable protocol of the previous experiments, no additional sensors such as IMUs or wheel odometers are involved in localization and mapping. The two core enhancement modules offer clear strengths in such scenarios: the SOR filtering module can effectively remove abnormal outliers such as noise caused by strong reflections from AHU metallic surfaces, dynamic motion of personnel in hospitals, and long-range scanning in large-space auditoriums, retaining valid geometric features of the environment while suppressing noise interference. The GICP module can maintain stable, accurate pose estimation in feature-sparse office areas and narrow hospital equipment corridors, achieving precise 3D point cloud mapping of AHU inspection scenarios without the assistance of other sensors. Even in such complex scenarios, the algorithm can achieve centimeter-level localization accuracy, the same level of performance verified in the stable indoor corridor test.
It can be seen that the centimeter-level localization results and high-fidelity spatial point cloud maps of the above algorithm provide reliable geometric and spatial prior information for downstream intelligent AHU operation and maintenance tasks. The unified spatial reference and accurate temporal–spatial alignment basis are particularly useful to the class-aware temporal and contextual contrastive framework for semi-supervised automatic fault detection and diagnosis of AHUs [36], addressing the basic problem of spatial position mismatch between inspection data from different cycles. It also lays a solid data foundation for end-to-end intelligent inspection of heating, ventilation and air conditioning (HVAC) systems.
The proposed method achieves superior performance in the target AHU inspection scene when its environmental distribution is similar to that of the source indoor corridor test scene, maintaining consistent localization and mapping results under mild domain shifts. When there is a considerable discrepancy between the source and target domains, localization performance drops moderately. Four main factors cause domain shift and affect generalization performance:
  • The operation of the AHU and building facilities alters the static environmental layout through equipment startup and shutdown, the opening and closing of maintenance doors, and the temporary placement of maintenance items, reducing the stability of inter-frame registration.
  • The robot's motion control logic, which accelerates and decelerates frequently during fixed-route patrols and makes sharp turns in confined spaces, exacerbates single-frame point cloud distortion, placing higher demands on the algorithm's real-time processing capacity.
  • The point cloud density and the capture of geometric features of the AHU and its surroundings depend on the LiDAR configuration parameters (number of channels, scanning frequency, ranging accuracy, and mounting position).
  • Human activity inside buildings, such as heavy pedestrian traffic in office and hospital lobbies and the movement of medical equipment, produces many dynamic outliers, complicating outlier removal and causing registration mismatches.
To address performance degradation caused by the aforementioned domain shifts, the proposed approach can be integrated with the domain-invariant feature-learning strategy verified in [36]. The algorithm can then focus on the stable static geometric features common to any building scenario while ignoring transient dynamic interference and scenario-specific structural patterns, further enhancing its robustness and cross-scene generalization in real-world AHU inspection and other industrial applications. These generalization results further demonstrate that the improved FLOAM algorithm proposed in this paper not only achieves high-precision localization in typical indoor and outdoor structured scenarios but also offers excellent environmental adaptability and engineering potential in complex industrial inspection scenarios, providing a highly reliable technical solution for promoting pure LiDAR SLAM in intelligent mobile robot applications.

4. Discussion

Laser SLAM is the core enabling technology for GPS-free mobile robots. The operational reliability of robots in practical applications, including disaster rescue and indoor navigation, depends closely on positioning accuracy and environmental robustness. Typical lightweight LiDAR SLAM algorithms (e.g., FLOAM) suffer from two core limitations: insufficient positioning accuracy and weak robustness against environmental interference. They are also prone to error accumulation under pure laser configurations without auxiliary sensors. These issues arise from the absence of effective point cloud preprocessing, registration techniques that disregard point cloud covariance information, and the lack of loop closure detection. The engineering application of such systems is critically hampered by their inability to operate in scenarios that are feature-poor but noise-rich. To counter this, this paper enhances the FLOAM algorithm with Statistical Outlier Removal (SOR) filtering and Generalized Iterative Closest Point (GICP) registration. While maintaining the original algorithm's lightweight characteristics and real-time performance, the proposed approach achieves synergistic optimization of point cloud preprocessing and inter-frame registration, establishing a highly robust and accurate SLAM framework suitable for pure laser configurations.
The main enhancement involves front-end data processing and feature registration. During preprocessing, SOR filtering removes outlier noise and invalid scattered points; by analyzing the local neighborhood distribution of point clouds, noise distortion during feature extraction is prevented and high-quality data is supplied for pose estimation [37]. During registration, GICP uses point cloud covariance information to formulate a Mahalanobis-distance matching cost function; sensitivity to the initial pose decreases, while accuracy and efficiency are balanced via optimized parameters. To validate the algorithm, a pure laser robotics experimental platform with a Leishen C16 LiDAR was built, and trajectory errors from real experiments in an indoor corridor and an outdoor plaza were quantitatively assessed using the EVO tools.
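For reference, the Mahalanobis-distance matching cost mentioned above can be written in the standard generalized-ICP form (following Segal et al.'s formulation; the symbols below are generic rather than this paper's notation):

```latex
% a_i, b_i: matched points of the source and target clouds
% C_i^A, C_i^B: their local covariance matrices
% T = (R, t): the rigid-body pose being optimised
\mathbf{T}^{*} = \arg\min_{\mathbf{T}} \sum_{i}
  \mathbf{d}_i^{\top}
  \left( \mathbf{C}_i^{B} + \mathbf{R}\,\mathbf{C}_i^{A}\,\mathbf{R}^{\top} \right)^{-1}
  \mathbf{d}_i,
\qquad
\mathbf{d}_i = \mathbf{b}_i - \left( \mathbf{R}\,\mathbf{a}_i + \mathbf{t} \right)
```

Point-to-point ICP is the special case where every covariance is the identity; weighting each residual by the combined covariances is what reduces sensitivity to the initial pose.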
Test results demonstrate that the improved algorithm exhibits outstanding robustness and achieves centimeter-level positioning accuracy. In indoor scenarios, the root mean square error (RMSE) is approximately 2 cm, with a maximum absolute error ≤10 cm and a mean absolute error of only 1.2841 cm; trajectories remain stable without significant deviation, and the maps accurately reproduce spatial details. In outdoor scenarios, the mean absolute error is 2.2917 cm, the maximum absolute error is approximately 23 cm, and the RMSE is 3.6912 cm; even in complex and dynamic environments, the system maintains stable positioning and mapping. In indoor and outdoor tests on the Gazebo simulation platform, compared with traditional FLOAM, the mean absolute positioning error was reduced by 46.67% and the RMSE by 41.67% indoors, while outdoors the mean absolute positioning error was reduced by 46.00% and the maximum absolute error by 70.00%. Furthermore, compared with common pure laser SLAM configurations, the improved algorithm reduced the mean absolute error by 41.82% indoors and 43.75% outdoors relative to LeGO-LOAM (a classic simplified variant of LOAM), and by 38.46% indoors and 40.00% outdoors relative to LIO-SAM running with lasers only. In a simplified configuration without any auxiliary sensors, the approach thus demonstrates higher positioning accuracy and stronger error suppression. These results indicate that the synergistic improvements brought by SOR filtering and GICP registration effectively address the fundamental shortcomings of traditional FLOAM while offering significant accuracy advantages over traditional LOAM variants.
This method fully meets the high-precision navigation requirements for autonomous mobile robots operating solely on laser data.
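For reference, the EVO-style absolute pose error statistics quoted above (max/mean/median/min/RMSE/std of the per-pose position error) can be computed from aligned trajectories as in this sketch; the toy straight-line trajectory with a constant 2 cm offset is an illustrative assumption:

```python
import numpy as np

def ape_stats(est, ref):
    """Absolute position error statistics over aligned trajectories,
    matching the fields reported by the EVO tool."""
    err = np.linalg.norm(est - ref, axis=1)  # Euclidean error per pose
    return {
        "max": err.max(), "mean": err.mean(), "median": np.median(err),
        "min": err.min(), "rmse": np.sqrt(np.mean(err ** 2)), "std": err.std(),
    }

# Straight-line reference trajectory and an estimate with a constant 2 cm offset.
s = np.linspace(0.0, 1.0, 50)
ref = np.stack([s, np.zeros_like(s), np.zeros_like(s)], axis=1)
est = ref + np.array([0.0, 0.02, 0.0])
stats = ape_stats(est, ref)
print(round(stats["rmse"], 4))  # constant offset: rmse equals the mean error
```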
Despite these achievements, the following limitations warrant careful consideration: First, this study employs SLAM technology using only laser sensors without integrating inertial measurement unit (IMU) data or other sensors. Positioning stability during high-speed dynamic motion or in vibrating environments remains an area for improvement. Second, the algorithm is sensitive to point cloud density; sparse or heterogeneous point cloud data reduces feature extraction accuracy and compromises overall positioning performance. Furthermore, the current algorithm is mainly validated in small-to-medium-sized indoor and outdoor scenarios; its scalability and robustness in large-scale or highly dynamic environments still need further validation and optimization.

5. Conclusions

This study proposes an improved FLOAM framework integrating SOR filtering and GICP registration for pure laser SLAM, which effectively overcomes the core defects of traditional FLOAM: low positioning accuracy, weak anti-interference capability, and susceptibility to error accumulation under a pure laser configuration without auxiliary sensors. While fully retaining the lightweight structure and real-time processing capability of the original FLOAM algorithm, the proposed scheme realizes the synergistic optimization of point cloud preprocessing and inter-frame registration and achieves centimeter-level positioning in both indoor and outdoor scenarios, with localization accuracy improved by more than 40% compared with mainstream pure laser SLAM algorithms. The algorithm provides a reliable and feasible technical solution for the engineering application of mobile robots in sensor-constrained scenarios and fully meets the high-precision navigation requirements of autonomous mobile robots relying only on LiDAR perception.
Future research can explore several directions. First, fusing multi-sensor information (such as IMU and camera data) into the existing algorithm can improve the system's overall stability and accuracy. Second, incorporating dynamic object detection and exclusion techniques can strengthen robustness in dynamic environments. Finally, adding and optimizing a loop closure detection mechanism will effectively reduce cumulative errors, improving both pose estimation and map construction accuracy. With these enhancements, the improved FLOAM algorithm can be deployed more reliably in larger and more complex environments.

Author Contributions

Conceptualization, S.F., T.Z. and J.Z.; formal analysis, S.F., T.Z. and J.Z.; investigation, S.F. and T.Z.; resources, S.F. and T.Z.; data curation, S.F., T.Z. and J.Z.; writing—original draft preparation, T.Z. and J.Z.; writing—review and editing, S.F., T.Z. and J.Z.; supervision, S.F. and T.Z.; project administration, S.F., T.Z., J.Z., G.G. and W.Z.; funding acquisition, S.F. and T.Z. All authors have read and agreed to the published version of the manuscript.

Funding

This work is supported in part by the National Natural Science Foundation of China General Program (51874308); China Postdoctoral Science Foundation Program (2019M660860); and the Artificial Intelligence + Research Institute “Challenge and Response” program of the Beijing University of Civil Engineering and Architecture (06080925012). The opinions expressed are those of the authors and do not necessarily reflect the opinions of the sponsors.

Institutional Review Board Statement

Not applicable.

Informed Consent Statement

Not applicable.

Data Availability Statement

The raw data supporting the conclusions of this article will be made available by the authors on request.

Conflicts of Interest

The authors declare no conflict of interest.

References

1. Chen, S.-N.; Chen, Z.-M.; Pan, H.-F. Autonomous navigation method for laser SLAM. Xinyang Norm. Univ. J. Nat. Sci. 2026, in press. Available online: https://link.cnki.net/urlid/41.1476.N.20260128.1940.004 (accessed on 19 March 2026).
2. Han, M.-Y.; Liu, Z.-A.; Wu, Q. Research on UAV intelligent inspection system based on ROS platform and multimodal data fusion. Internet Things Technol. 2026, 16, 80–83.
3. Zhang, J.; Xu, Z.; Liu, H.; Li, J.; Wang, Y.; Zhang, L. An improved hybrid ant colony optimization and genetic algorithm for multi-map path planning of rescuing robots in mine disaster scenario. Machines 2025, 13, 474.
4. Liu, S.; Luo, J. Goal-driven autonomous exploration of mobile robots based on “vision-radar” perception and adaptive skill-switching in dynamic crowded environments. Meas. Sci. Technol. 2026, 37, 025105.
5. Mailka, H.; Abouzahir, M.; Ramzi, M. GPU-accelerated embedded EKF-SLAM based on LiDAR–IMU–GNSS fusion for autonomous ground vehicles. Eng. Res. Express 2026, 8, 015221.
6. Chen, J.; Zhang, W.; Wei, D. Optimized visual inertial SLAM for complex indoor dynamic scenes using RGB-D camera. Measurement 2025, 245, 116615.
7. Wang, Q.; Wang, N.; Du, X. Real-time lidar-visual feature co-optimization for complex indoor environments: An adaptive fusion framework with illumination-robust enhancement. J. King Saud Univ. Comput. Inf. Sci. 2025, 38, 26.
8. Nan, Z.; Zhu, G.; Zhang, X. Development of a high-precision lidar system and improvement of key steps for railway obstacle detection algorithm. Remote Sens. 2024, 16, 1761.
9. Zhao, Y.; Xiu, S.; Hong, Y. Study on automated guided vehicle navigation method with external computer vision. Proc. Inst. Mech. Eng. 2025, 239, 635–647.
10. Luo, D.; Huang, Y.; Huang, X. W-VSLAM: A visual mapping algorithm for indoor inspection robots. Sensors 2024, 24, 5662.
11. Zou, R.; Xu, Z.; Wang, C.; Zhang, S. An enlarged polygon method without binary variables for obstacle avoidance trajectory optimization. Chin. J. Aeronaut. 2023, 36, 284–297.
12. Gumarov, K.-E.; Gaganova, N.-I.; Kuchev, N.-D.; Khamidullin, R.R.; Zinnurov, I.I. Developing a mobile robotic facility for diagnosing tank equipment. J. Mach. Manuf. Reliab. 2025, 54, 900–909.
13. Kumar, S.; Thottungal, R.; Pasupathy, S.-A. Spatial aware feature extraction for RADAR SLAM. Measurement 2026, 264, 120192.
14. Gao, R.; Han, Q.; Li, G. Visual SLAM based on line feature extraction and dynamic object removal in dynamic environments. Arab. J. Sci. Eng. 2026, in press.
15. Zhang, J.; Yuan, L.; Ran, T. Less is more: An effective method to extract object features for visual dynamic SLAM. Displays 2026, 91, 103224.
16. Zhang, Y.; Wang, X.; Lyu, X. Segment-based SLAM registration optimization algorithm combining NDT and PL-ICP. Sensors 2025, 25, 7175.
17. Yang, H.; Jiang, F.; Kumar, A. NDF-SLAM: LiDAR SLAM based on neural distance field for registration and loop closure detection. Measurement 2025, 255, 117904.
18. Liao, J.; Li, G.; Yanagisawa, K. Physics-informed neural networks for three-dimensional cerebrovascular hemodynamic prediction: A point cloud preprocessing strategy based on limited data. Eng. Appl. Artif. Intell. 2025, 160, 112034.
19. Zhou, T.-F. Stochastic characteristic analysis of laser point cloud based on synthetic variance-covariance matrix. Laser Optoelectron. Prog. 2026, in press.
20. Chen, J.; Zhang, Z.; Liu, G. Research on three dimensional reconstruction method based on noise loop detection and visual complementary fusion. Meas. Sci. Technol. 2025, 36, 125403.
21. Xu, H.; Zhou, D.; Wang, S. Improved LeGO-LOAM method based on adaptive feature extraction and optimized ground segmentation. Eng. Res. Express 2025, 7, 0452g4.
22. Shen, B.; Xie, W.; Peng, X. LIO-SAM++: A lidar-inertial semantic SLAM with association optimization and keyframe selection. Sensors 2024, 24, 7546.
23. Yang, J.-J.; Wang, C.; Li, W.-J.; Cheng, J.; Ma, H. Research on point cloud registering method of tunneling roadway based on 3D NDT-ICP algorithm. Sensors 2021, 21, 4448.
24. Albabo, A.; Wen, G.; Cheng, S.; Li, G.; Elsheikh, A.H. A deviation correction technique based on particle filtering combined with a dung beetle optimizer with the improved model predictive control for vertical drilling. Appl. Sci. 2025, 15, 10773.
25. Zhao, P.; Xie, H.; Chen, Q.; Li, W.; Li, D. A dual-path feature fusion model for tailings dam saturation line prediction driven by multi-source sensing data. Measurement 2026, 267, 120469.
26. Xu, H.; Wang, M.; Liu, C.; Li, S.; Wang, L. Tunnel crack assessment using simultaneous localization and mapping (SLAM) and deep learning segmentation. Autom. Constr. 2025, 171, 105977.
27. Wang, J.; Li, D.; Du, X.; Liu, H.; Zhang, W. Knowledge-based visual question classification using quaternion hypergraph consistent network. Inf. Process. Manag. 2026, 63, 104591.
28. Mou, Y.-B.; Yin, H.; Peng, Z.-R. Structural load identification and response reconstruction based on improved multiplicative regularization. J. Mech. Strength 2026, 48, 72–79.
29. Lin, J.; Niu, W.-K.; Zhang, J.-Y.; Ma, S.-G. Lightweight object detection network model suitable for indoor mobile robots. J. Mech. Sci. Technol. 2022, 36, 907–920.
30. Yang, F.; Zhang, J.; Zhang, J.; Wang, K. A new and efficient fixed point method for mean curvature denoising model. Int. J. Comput. Math. 2025, 102, 1865–1879.
31. Yu, P.; Tan, N. On the learning-based control of continuum robots with provable robustness, efficiency, and generalizability. Int. J. Robot. Res. 2026, 45, 308–327.
32. Tiboni, M.; Legnani, G.; Bussola, R.; Tosi, D. Full pose measurement system for industrial robots kinematic calibration based on a sensorized spatial linkage mechanism. Mech. Mach. Theory 2024, 197, 105652.
33. Zhu, D.; Wang, W.; Xue, X.; Ma, L.; Jin, X. Structure-preserving image smoothing via contrastive learning. Vis. Comput. 2023, 40, 5139–5153.
34. Hong, S.; Xiong, Y.; Gou, Y.; Li, L. Accurate monocular structure pose estimation and 3D trajectory measurement with single square marker. Measurement 2026, 269, 120685.
35. Wang, S. Real operational labeled data of air handling units from office, auditorium, and hospital buildings. Sci. Data 2025, 12, 1481.
36. Wang, S. Class-aware temporal and contextual contrastive framework for semi-supervised automated fault detection and diagnosis in air handling units. Energy Build. 2026, 358, 117233.
37. Cetinkaya, G.; Genc, Y. An optimization-based method for relative pose estimation for collaborating UAVs using observed predefined trajectories. Drones 2026, 10, 135.
Figure 1. Improved overall process structure of the FLOAM algorithm (core improvements highlighted in red).
Figure 2. Mobile robot experimental platform equipped with Leishen C16 LiDAR (note: the red knob is the emergency stop switch; turning it in the direction shown will bring the robot to an immediate stop. The manufacturer of the Leishen C16 LiDAR is Shenzhen Leishen Intelligent, Shenzhen, China).
Figure 3. Indoor experimental testing environment (school building corridor) (note: the non-English terms in the image are the college’s motto).
Figure 4. Improved FLOAM algorithm for indoor first-floor lobby 3D point cloud mapping (note: the generated map is clearly visible and the indoor environment is mapped successfully; the color coding in the figure corresponds to the Z-axis height of the point cloud in the robot's global coordinate system: warm colors (red, yellow) indicate lower height values, while cool colors (blue, purple) indicate higher height values; the black grid represents the two-dimensional horizontal reference plane in the global coordinate system).
Figure 5. Two-dimensional floor plan of the first-floor lobby (note: the red line indicates the robot’s path on the 2D floor plan).
Figure 6. Three-dimensional trajectory diagram of a robot operating in an indoor environment.
Figure 7. Time-dependent curve of robot roll angle in indoor scenes (note: the improved algorithm fluctuates around zero overall).
Figure 8. Time-based variation curve of robot pitch angle in indoor scenes (note: the improved algorithm fluctuates around zero overall).
Figure 9. Time-dependent variation curve of indoor scene robot yaw angle (note: the improved algorithm possesses self-adjusting capabilities).
Figure 10. Time-dependent velocity curve of a robot in an indoor environment (note: the robot maintains a relatively steady speed but oscillates noticeably during sharp turns).
Figure 11. Time-dependent absolute pose error curve of the improved FLOAM algorithm in indoor experiments (note: the error remains within 0–0.02 m; loop closure detection is carried out successfully, and the SOR filter is used for the offline experiments).
Figure 12. Time-dependent absolute pose error curve of the improved FLOAM algorithm in indoor simulation (note: (a) represents the APE curve of the traditional FLOAM algorithm; (b) represents the APE curve of the improved FLOAM algorithm).
Figure 13. Outdoor experimental testing environment (open plaza at the entrance of the academic building) (note: the two images on the left show an outdoor environment filled with various distractions, while the two images on the right show the robot activated and ready to move. The non-English words in the images are the college's motto).
Figure 14. Improved FLOAM algorithm's 3D point cloud mapping performance at the entrance of the outdoor academic building (note: the generated maps are clearly visualized, and effective outdoor scene mapping is achieved. Furthermore, the color coding corresponds to the Z-axis elevation (height) of the point cloud in the robot global coordinate system: warm colors (red, yellow) represent low elevation values (including the robot's real-time motion trajectory in the center of the figure), while cool colors (blue, purple) represent high elevation values (such as the tops of vegetation and building structures in the scene). The black grid denotes the 2D horizontal reference plane of the global coordinate system).
Figure 15. Outdoor two-dimensional plan view at the entrance of the academic building (note: the red line indicates the robot's path on the 2D floor plan; the icons represent outdoor obstacles, including several trees and a garden).
Figure 16. Three-dimensional trajectory diagram of a robot operating in an outdoor environment.
Figure 17. Time-dependent curve of robot roll angle in outdoor scenarios (note: the improved algorithm fluctuates around zero overall).
Figure 18. Time-based variation curve of robot pitch angle in outdoor scenes (note: the improved algorithm fluctuates around zero overall).
Figure 19. Time-dependent variation curve of robot yaw angle in outdoor scenarios (note: the improved algorithm possesses self-adjusting capabilities).
Figure 20. Time-based speed variation curve for outdoor robot movement (note: the robot is subject to speed fluctuations when in motion, while outdoor settings affect the algorithm’s performance).
Figure 21. Time-varying absolute pose error curve of the improved FLOAM algorithm in outdoor experiments (note: the error remains within 0–0.05 m; loop closure detection is successfully implemented).
Figure 22. Time-dependent variation curve of absolute pose error in outdoor simulation of the improved FLOAM algorithm (note: (a) represents the APE curve of the traditional FLOAM algorithm; (b) represents the APE curve of the improved FLOAM algorithm).
Table 1. Positioning accuracy metrics of the improved FLOAM algorithm in indoor offline experiments (unit: m).
| Method | Max | Mean | Median | Min | RMSE | Std |
| --- | --- | --- | --- | --- | --- | --- |
| Improved FLOAM | 0.097523 | 0.012841 | 0.005290 | 0.000326 | 0.021535 | 0.017288 |
Table 3. Positioning accuracy metrics of the improved FLOAM algorithm in outdoor offline experiments (unit: m).
| Method | Max | Mean | Median | Min | RMSE | Std |
| --- | --- | --- | --- | --- | --- | --- |
| Improved FLOAM | 0.235755 | 0.022917 | 0.011156 | 0.000618 | 0.036912 | 0.028937 |
Table 2. Positioning accuracy metrics for indoor simulation experiments of the improved FLOAM algorithm (unit: m) (note: for all comparison algorithms, all auxiliary sensor inputs were disabled, retaining only 16-line LiDAR data; the experimental environment was identical to the indoor corridor simulation scenario described in this paper).
| Method | Max | Mean | Median | Min | RMSE | Std |
| --- | --- | --- | --- | --- | --- | --- |
| Improved FLOAM | 1.200 | 0.320 | 0.270 | 0.150 | 0.350 | 0.220 |
| Traditional FLOAM | 1.000 | 0.600 | 0.600 | 0.150 | 0.600 | 0.230 |
| LeGO-LOAM (pure laser mode) | 1.100 | 0.550 | 0.520 | 0.140 | 0.580 | 0.210 |
| LIO-SAM (pure laser mode, IMU disabled) | 1.050 | 0.520 | 0.500 | 0.150 | 0.550 | 0.200 |
| Accuracy improvement vs. traditional FLOAM (%) | – | 46.67 | 55.00 | – | 41.67 | – |
| Accuracy improvement vs. LeGO-LOAM (%) | – | 41.82 | 48.08 | – | 39.66 | – |
Table 4. Positioning accuracy metrics for outdoor simulation experiments of the improved FLOAM algorithm (unit: m) (note: for all comparison algorithms, all auxiliary sensor inputs were disabled, retaining only 16-line LiDAR data; the experimental environment was identical to the outdoor open-square simulation scenario described in this paper).
| Method | Max | Mean | Median | Min | RMSE | Std |
| --- | --- | --- | --- | --- | --- | --- |
| Improved FLOAM | 0.450 | 0.270 | 0.250 | 0.130 | 0.300 | 0.140 |
| Traditional FLOAM | 1.500 | 0.500 | 0.500 | 0.050 | 0.500 | 0.130 |
| LeGO-LOAM (pure laser mode) | 1.350 | 0.480 | 0.450 | 0.080 | 0.490 | 0.130 |
| LIO-SAM (pure laser mode, IMU disabled) | 1.200 | 0.450 | 0.420 | 0.070 | 0.470 | 0.120 |
| Accuracy improvement vs. traditional FLOAM (%) | 70.00 | 46.00 | 50.00 | – | 40.00 | – |
| Accuracy improvement vs. LeGO-LOAM (%) | 66.67 | 43.75 | 44.44 | – | 38.78 | – |
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.
