Search Results (1,487)

Search Parameters:
Keywords = SLAM

19 pages, 9525 KB  
Article
Evaluating UAV and Handheld LiDAR Point Clouds for Radiative Transfer Modeling Using a Voxel-Based Point Density Proxy
by Takumi Fujiwara, Naoko Miura, Hiroki Naito and Fumiki Hosoi
Sensors 2026, 26(2), 590; https://doi.org/10.3390/s26020590 - 15 Jan 2026
Abstract
The potential of UAV-based LiDAR (UAV-LiDAR) and handheld LiDAR scanners (HLSs) for forest radiative transfer models (RTMs) was evaluated using a Voxel-Based Point Density Proxy (VPDP) as a diagnostic tool in a Larix kaempferi forest. The coverage gap ratio (CGR) computed in the structural analysis revealed distinct behaviors: UAV-LiDAR effectively captured canopy structures (10–45% CGR), whereas HLS provided superior understory coverage but exhibited a high upper-canopy CGR (>40%). Integrating the datasets reduced the CGR to below 10%, demonstrating strong complementarity. Radiative transfer simulations correlated well with Sentinel-2 NIR reflectance, with UAV-LiDAR (r = 0.73–0.75) outperforming HLS (r = 0.64–0.69). These results highlight the critical importance of upper-canopy modeling for nadir-viewing sensors. Although integrating HLS data did not improve the correlation, owing to the dominance of upper-canopy signals, the structural analysis confirmed that fusion is essential for achieving volumetric completeness. A voxel size range of 50–100 cm was identified as effective for balancing structural detail and radiative stability. These findings provide practical guidelines for selecting and integrating LiDAR platforms in forest monitoring, emphasizing that while aerial sensors suffice for top-of-canopy reflectance, multi-platform fusion is required for full 3D structural characterization.
(This article belongs to the Special Issue Progress in LiDAR Technologies and Applications)
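The abstract does not spell out the VPDP/CGR computation; as a rough illustration only, a coverage gap ratio can be read as the fraction of empty voxels inside a crown volume of interest. A minimal sketch under that assumption (the interface and the exact gap definition are ours, not the authors'):

```python
# Illustrative sketch, not the authors' code: coverage gap ratio as the
# fraction of voxels in an axis-aligned crown box that receive no returns.
import numpy as np

def coverage_gap_ratio(points, box_lo, box_hi, voxel_size=0.5):
    """points: (N, 3) LiDAR returns; box_lo/box_hi: crown box corners (m).

    A CGR near 0 means the volume is densely covered; a high CGR flags
    occluded regions (e.g., the upper canopy for a handheld scanner).
    """
    lo, hi = np.asarray(box_lo, float), np.asarray(box_hi, float)
    dims = np.ceil((hi - lo) / voxel_size).astype(int)
    inside = np.all((points >= lo) & (points < hi), axis=1)
    idx = ((points[inside] - lo) / voxel_size).astype(int)
    # Linearize 3D voxel indices so unique() counts occupied voxels.
    lin = idx[:, 0] * dims[1] * dims[2] + idx[:, 1] * dims[2] + idx[:, 2]
    return 1.0 - len(np.unique(lin)) / int(np.prod(dims))
```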

24 pages, 39327 KB  
Article
Forest Surveying with Robotics and AI: SLAM-Based Mapping, Terrain-Aware Navigation, and Tree Parameter Estimation
by Lorenzo Scalera, Eleonora Maset, Diego Tiozzo Fasiolo, Khalid Bourr, Simone Cottiga, Andrea De Lorenzo, Giovanni Carabin, Giorgio Alberti, Alessandro Gasparetto, Fabrizio Mazzetto and Stefano Seriani
Machines 2026, 14(1), 99; https://doi.org/10.3390/machines14010099 - 14 Jan 2026
Abstract
Forest surveying and inspection face significant challenges due to unstructured environments, variable terrain conditions, and the high costs of manual data collection. Although mobile robotics and artificial intelligence offer promising solutions, reliable autonomous navigation in forests, terrain-aware path planning, and tree parameter estimation remain open challenges. In this paper, we present the results of the AI4FOREST project, which addresses these issues through three main contributions. First, we develop an autonomous mobile robot that integrates SLAM-based navigation, 3D point cloud reconstruction, and a vision-based deep learning architecture to enable tree detection and diameter estimation. This system demonstrates the feasibility of generating a digital twin of a forest while operating autonomously. Second, to overcome the limitations of classical navigation approaches on heterogeneous natural terrains, we introduce a machine learning-based surrogate model of wheel–soil interaction, trained on a large synthetic dataset derived from classical terramechanics. Compared to purely geometric planners, the proposed model enables realistic dynamics simulation and improves navigation robustness by accounting for terrain–vehicle interactions. Finally, we investigate the impact of point cloud density on the accuracy of forest parameter estimation, identifying the minimum sampling requirements needed to extract tree diameters and heights. This analysis provides guidance for balancing sensor performance, robot speed, and operational cost. Overall, the AI4FOREST project advances the state of the art in autonomous forest monitoring by jointly addressing SLAM-based mapping, terrain-aware navigation, and tree parameter estimation.
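The diameter estimation itself is done by a deep network in the paper; for orientation, a classical geometric baseline fits a circle to a thin slice of stem points at breast height. A sketch of such a least-squares (Kåsa) fit, purely illustrative:

```python
# Illustrative baseline, not the paper's method: algebraic circle fit to a
# horizontal stem slice (~1.3 m height) to estimate diameter at breast height.
import numpy as np

def stem_diameter(slice_xy):
    """slice_xy: (N, 2) stem points. Solves x^2 + y^2 = 2ax + 2by + c,
    giving center (a, b) and radius sqrt(c + a^2 + b^2)."""
    x, y = slice_xy[:, 0], slice_xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + y**2, rcond=None)
    return 2.0 * np.sqrt(c + a**2 + b**2)
```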

26 pages, 4529 KB  
Review
Key Technologies for Intelligent Operation of Plant Protection UAVs in Hilly and Mountainous Areas: Progress, Challenges, and Prospects
by Yali Zhang, Zhilei Sun, Wanhang Peng, Yeqing Lin, Xinting Li, Kangting Yan and Pengchao Chen
Agronomy 2026, 16(2), 193; https://doi.org/10.3390/agronomy16020193 - 13 Jan 2026
Abstract
Hilly and mountainous areas are important agricultural production regions globally. Their dramatic topography, dense fruit tree planting, and steep slopes severely restrict the application of traditional plant protection machinery. Pest and disease control has long relied on manual spraying, resulting in high labor intensity, low efficiency, and pesticide utilization rates of less than 30%. Plant protection UAVs, with their advantages of flexibility, high efficiency, and precise application, provide a feasible technical approach for plant protection operations in hilly and mountainous areas. However, steep slopes and dense orchard environments place higher demands on key technologies such as drone positioning and navigation, attitude control, trajectory planning, and terrain following. Achieving accurate identification and adaptive following of the undulating fruit tree canopy while maintaining a constant spraying distance to ensure uniform pesticide coverage has become a core technological bottleneck. This paper systematically reviews the key technologies and research progress of plant protection UAVs in hilly and mountainous operations, focusing on the principles, advantages, and limitations of core methods such as multi-sensor fusion positioning, intelligent SLAM navigation, nonlinear attitude control and intelligent control, three-dimensional trajectory planning, and multimodal terrain following. It also discusses the challenges currently faced by these technologies in practical applications. Finally, this paper discusses and envisions the future of plant protection UAVs in achieving intelligent, collaborative, and precise operations on steep slopes and in dense orchards, providing theoretical reference and technical support for promoting the mechanization and intelligentization of mountain agriculture.
(This article belongs to the Section Precision and Digital Agriculture)
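To make the terrain-following requirement concrete, a minimal proportional height controller that holds a constant spraying distance above the canopy might look as follows; the gains, ranges, and interface are assumptions for illustration (fielded systems add filtering and predictive look-ahead):

```python
# Toy illustration of canopy following: command a climb rate proportional
# to the error between the measured range to the canopy and the target
# spraying distance, saturated to the platform's limits.
def climb_rate_command(measured_range_m, target_range_m=3.0,
                       gain=0.8, max_rate_m_s=1.5):
    error = target_range_m - measured_range_m  # > 0 means too close
    return max(-max_rate_m_s, min(max_rate_m_s, gain * error))
```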

29 pages, 9411 KB  
Article
A Real-Time Mobile Robotic System for Crack Detection in Construction Using Two-Stage Deep Learning
by Emmanuella Ogun, Yong Ann Voeurn and Doyun Lee
Sensors 2026, 26(2), 530; https://doi.org/10.3390/s26020530 - 13 Jan 2026
Abstract
The deterioration of civil infrastructure poses a significant threat to public safety, yet conventional manual inspections remain subjective, labor-intensive, and constrained by accessibility. To address these challenges, this paper presents a real-time robotic inspection system that integrates deep learning perception and autonomous navigation. The proposed framework employs a two-stage neural network: a U-Net for initial segmentation followed by a Pix2Pix conditional generative adversarial network (GAN) that utilizes adversarial residual learning to refine boundary accuracy and suppress false positives. When deployed on an Unmanned Ground Vehicle (UGV) equipped with an RGB-D camera and LiDAR, this framework enables simultaneous automated crack detection and collision-free autonomous navigation. Evaluated on the CrackSeg9k dataset, the two-stage model achieved a mean Intersection over Union (mIoU) of 73.9 ± 0.6% and an F1-score of 76.4 ± 0.3%. Beyond benchmark testing, the robotic system was further validated through simulation, laboratory experiments, and real-world campus hallway tests, successfully detecting micro-cracks as narrow as 0.3 mm. Collectively, these results demonstrate the system’s potential for robust, autonomous, and field-deployable infrastructure inspection.
(This article belongs to the Special Issue Sensing and Control Technology of Intelligent Robots)
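The two-stage design can be pictured as a short inference pipeline; the sketch below leaves the trained `unet` and `pix2pix_gen` modules as placeholders, and the sigmoid activations and 4-channel generator input are our assumptions, not details from the paper:

```python
# Data flow of a two-stage crack detector: U-Net proposes a coarse mask,
# a Pix2Pix-style generator refines it conditioned on the input image.
import torch

@torch.no_grad()
def detect_cracks(image, unet, pix2pix_gen, threshold=0.5):
    """image: (1, 3, H, W) float tensor in [0, 1]; returns a binary mask."""
    coarse = torch.sigmoid(unet(image))                   # stage 1: proposal
    refined = pix2pix_gen(torch.cat([image, coarse], 1))  # stage 2: refinement
    return (torch.sigmoid(refined) > threshold).float()
```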

21 pages, 2930 KB  
Article
Robust Model Predictive Control with a Dynamic Look-Ahead Re-Entry Strategy for Trajectory Tracking of Differential-Drive Robots
by Diego Guffanti, Moisés Filiberto Mora Murillo, Santiago Bustamante Sanchez, Javier Oswaldo Obregón Gutiérrez, Marco Alejandro Hinojosa, Alberto Brunete, Miguel Hernando and David Álvarez
Sensors 2026, 26(2), 520; https://doi.org/10.3390/s26020520 - 13 Jan 2026
Abstract
Accurate trajectory tracking remains a central challenge in differential-drive mobile robots (DDMRs), particularly when operating under real-world conditions. Model Predictive Control (MPC) provides a powerful framework for this task, but its performance degrades when the robot deviates significantly from the nominal path. To address this limitation, robust recovery mechanisms are required to ensure stable and precise tracking. This work presents an experimental validation of an MPC controller applied to a four-wheel DDMR, whose odometry is corrected by a SLAM algorithm running in ROS 2. The MPC is formulated as a quadratic program with state and input constraints on linear (v) and angular (ω) velocities, using a prediction horizon of Np = 15 future states, adjusted to the computational resources of the onboard computer. A novel dynamic look-ahead re-entry strategy is proposed, which activates when the robot exits a predefined lateral error band (δ = 0.05 m) and interpolates a smooth reconnection trajectory based on a forward look-ahead point, ensuring gradual convergence and avoiding abrupt re-entry actions. Accuracy was evaluated through lateral and heading errors measured via geometric projection onto the nominal path, ensuring fair comparison. From these errors, RMSE, MAE, P95, and in-band percentage were computed as quantitative metrics. The framework was tested on real hardware at 50 Hz through 5 nominal experiments and 3 perturbed experiments. Perturbations consisted of externally imposed velocity commands at specific points along the path, while configuration parameters were systematically varied across trials, including the weight R, the smoothing distance Lsmooth, and activation of the re-entry strategy. In nominal conditions, the best configuration (ID 2) achieved a lateral RMSE of 0.05 m, a heading RMSE of 0.06 rad, and maintained 68.8% of the trajectory within the validation band. Under perturbations, the proposed strategy substantially improved robustness. For instance, in experiment ID 6 the robot sustained a lateral RMSE of 0.12 m and preserved 51.4% in-band, outperforming MPC without re-entry, which suffered from larger deviations and slower recoveries. The results confirm that integrating MPC with the proposed re-entry strategy enhances both accuracy and robustness in DDMR trajectory tracking. By combining predictive control with a spatially grounded recovery mechanism, the approach ensures consistent performance in challenging scenarios, underscoring its relevance for reliable mobile robot navigation in uncertain environments.
(This article belongs to the Section Sensors and Robotics)
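The abstract does not give the interpolation formula; one plausible reading of the re-entry rule is sketched below, where leaving the error band triggers an eased blend from the robot's position toward a point roughly Lsmooth ahead of its path projection. The cosine easing and the interface are assumptions:

```python
# Hypothetical sketch of a dynamic look-ahead re-entry reference generator.
import numpy as np

def reentry_reference(robot_xy, path_xy, delta=0.05, l_smooth=0.5, n=20):
    """robot_xy: (2,); path_xy: (N, 2) nominal path. Returns an (n, 2)
    reconnection trajectory, or None while inside the error band."""
    robot_xy = np.asarray(robot_xy, float)
    dists = np.linalg.norm(path_xy - robot_xy, axis=1)
    i = int(np.argmin(dists))              # projection onto the path
    if dists[i] <= delta:
        return None                        # in band: keep tracking normally
    travelled = 0.0                        # walk l_smooth of arc length ahead
    while i + 1 < len(path_xy) and travelled < l_smooth:
        travelled += np.linalg.norm(path_xy[i + 1] - path_xy[i])
        i += 1
    s = (1 - np.cos(np.linspace(0, np.pi, n))) / 2   # eased 0 -> 1 blend
    return (1 - s[:, None]) * robot_xy + s[:, None] * path_xy[i]
```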

20 pages, 2119 KB  
Article
Intelligent Logistics Sorting Technology Based on PaddleOCR and SMITE Parameter Tuning
by Zhaokun Yang, Yue Li, Lizhi Sun, Yufeng Qiu, Licun Fang, Zibin Hu and Shouna Guo
Appl. Sci. 2026, 16(2), 767; https://doi.org/10.3390/app16020767 - 12 Jan 2026
Abstract
To address the current reliance on manual labor in traditional logistics sorting operations, which leads to low sorting efficiency and high operational costs, this study presents the design of an unmanned logistics vehicle based on the Robot Operating System (ROS). To overcome bounding-box loss issues commonly encountered by mainstream video-stream image segmentation algorithms under complex conditions, the novel SMITE video image segmentation algorithm is employed to accurately extract key regions of mail items while eliminating interference. Extracted logistics information is mapped to corresponding grid points within a map constructed using Simultaneous Localization and Mapping (SLAM). The system performs global path planning with the A* heuristic graph search algorithm to determine the optimal route, autonomously navigates to the target location, and completes the sorting task via a robotic arm, while local path planning is managed using the Dijkstra algorithm. Experimental results demonstrate that the SMITE video image segmentation algorithm maintains stable and accurate segmentation under complex conditions, including object appearance variations, illumination changes, and viewpoint shifts. The PaddleOCR text recognition algorithm achieves an average recognition accuracy exceeding 98.5%, significantly outperforming traditional methods. Through the analysis of existing technologies and the design of a novel parcel-grasping control system, the feasibility of the proposed system is validated in real-world environments.
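Since the global planner is standard A*, the core loop is compact; the grid representation, 4-connectivity, and Manhattan heuristic below are generic choices rather than details from the paper:

```python
# Textbook grid A*: f = g + h with an admissible Manhattan heuristic.
import heapq

def astar(grid, start, goal):
    """grid: 2D list, 0 = free, 1 = blocked; start, goal: (row, col)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])
    open_heap = [(h(start), 0, start)]
    came_from, g_cost = {}, {start: 0}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:                       # reconstruct and return path
            path = [cur]
            while cur in came_from:
                cur = came_from[cur]
                path.append(cur)
            return path[::-1]
        if g > g_cost.get(cur, float("inf")):
            continue                          # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if (0 <= nxt[0] < rows and 0 <= nxt[1] < cols
                    and not grid[nxt[0]][nxt[1]]):
                ng = g + 1
                if ng < g_cost.get(nxt, float("inf")):
                    g_cost[nxt], came_from[nxt] = ng, cur
                    heapq.heappush(open_heap, (ng + h(nxt), ng, nxt))
    return None                               # goal unreachable
```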

23 pages, 5900 KB  
Article
Hybrid Attention Mechanism Combined with U-Net for Extracting Vascular Branching Points in Intracavitary Images
by Kaiyang Xu, Haibin Wu, Liang Yu and Xin He
Electronics 2026, 15(2), 322; https://doi.org/10.3390/electronics15020322 - 11 Jan 2026
Abstract
To address the application requirements of Visual Simultaneous Localization and Mapping (VSLAM) in intracavitary environments and the scarcity of gold-standard datasets for deep learning methods, this study proposes a hybrid attention mechanism combined with U-Net for vascular branch point extraction in endoluminal images (SuperVessel). The network is initialized via transfer learning with pre-trained SuperRetina model parameters and integrated with a vascular feature detection and matching method based on dual branch fusion and structure enhancement, generating a pseudo-gold-standard vascular branch point dataset. The framework employs a dual-decoder architecture, incorporates a dynamic up-sampling module (CBAM-Dysample) to refine local vessel features through hybrid attention mechanisms, designs a Dice-Det loss function weighted by branching features to prioritize vessel junctions, and introduces a dynamically weighted Triplet-Des loss function optimized for descriptor discrimination. Experiments on the Vivo test set demonstrate that the proposed method achieves an average Area Under Curve (AUC) of 0.760, with mean feature points, accuracy, and repeatability scores of 42,795, 0.5294, and 0.46, respectively. Compared to SuperRetina, the method maintains matching stability while exhibiting superior repeatability, feature point density, and robustness in low-texture/deformation scenarios. Ablation studies confirm the CBAM-Dysample module’s efficacy in enhancing feature expression and convergence speed, offering a robust solution for intracavitary SLAM systems.
(This article belongs to the Section Computer Science & Engineering)
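The junction-weighted loss idea can be illustrated with a small Dice variant; the constant boost on junction neighborhoods below is an assumption in the spirit of the described Dice-Det loss, not the paper's formula:

```python
# Sketch: Dice loss where pixels near vessel junctions cost more to miss.
import torch

def branch_weighted_dice(pred, target, branch_map, boost=4.0, eps=1e-6):
    """pred, target: (B, 1, H, W) probabilities/labels in [0, 1];
    branch_map: 1 on junction neighborhoods, 0 elsewhere."""
    w = 1.0 + (boost - 1.0) * branch_map
    inter = (w * pred * target).sum()
    return 1.0 - 2.0 * inter / ((w * pred).sum() + (w * target).sum() + eps)
```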

30 pages, 5328 KB  
Article
DTVIRM-Swarm: A Distributed and Tightly Integrated Visual-Inertial-UWB-Magnetic System for Anchor Free Swarm Cooperative Localization
by Xincan Luo, Xueyu Du, Shuai Yue, Yunxiao Lv, Lilian Zhang, Xiaofeng He, Wenqi Wu and Jun Mao
Drones 2026, 10(1), 49; https://doi.org/10.3390/drones10010049 - 9 Jan 2026
Abstract
Accurate Unmanned Aerial Vehicle (UAV) positioning is vital for swarm cooperation. However, this remains challenging in situations where Global Navigation Satellite System (GNSS) and other external infrastructures are unavailable. To address this challenge, we propose to use only the onboard Microelectromechanical System Inertial Measurement Unit (MIMU), magnetic sensor, monocular camera, and Ultra-Wideband (UWB) device to construct a distributed and anchor-free cooperative localization system by tightly fusing their measurements. As the onboard UWB measurements under dynamic motion conditions are noisy and discontinuous, we propose an adaptive adjustment method based on chi-squared detection to effectively filter out inconsistent and false ranging information. Moreover, we introduce the pose-only theory to model the visual measurement, which improves the efficiency and accuracy of visual–inertial processing. A sliding window Extended Kalman Filter (EKF) is constructed to tightly fuse all the measurements and is capable of working under UWB- or vision-deprived conditions. Additionally, a novel Multidimensional Scaling-MAP (MDS-MAP) initialization method fuses ranging, MIMU, and geomagnetic data to solve the non-convex optimization problem in ranging-aided Simultaneous Localization and Mapping (SLAM), ensuring fast and accurate swarm absolute pose initialization. To overcome the state consistency challenge inherent in the distributed cooperative structure, we model not only the UWB measurement noise but also the neighboring agent's position uncertainty in the measurement model. Furthermore, we incorporate the Covariance Intersection (CI) method into our UWB measurement fusion process to address the challenge of unknown correlations between state estimates from different UAVs, ensuring consistent and robust state estimation. To validate the effectiveness of the proposed methods, we have established both simulation and hardware test platforms. The proposed method is compared with state-of-the-art (SOTA) UAV localization approaches designed for GNSS-challenged environments. Extensive experiments demonstrate that our algorithm achieves superior positioning accuracy, higher computing efficiency, and better robustness. Moreover, even when vision loss causes other methods to fail, our proposed method continues to operate effectively.
(This article belongs to the Special Issue Autonomous Drone Navigation in GPS-Denied Environments)
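Chi-squared gating of ranging data is a standard construction; a sketch of how such a test could look for a single UWB range (the noise model, threshold, and interface are assumptions, not the paper's implementation):

```python
# Accept a UWB range only if its normalized innovation squared passes a
# chi-square test with 1 degree of freedom (95% gate).
import numpy as np

CHI2_1DOF_95 = 3.841

def accept_range(measured_range, pred_rel_pos, pred_cov, sigma_uwb=0.10):
    """pred_rel_pos: predicted 3D offset to the other UAV (m);
    pred_cov: its 3x3 covariance; sigma_uwb: ranging noise std (m)."""
    pred_range = np.linalg.norm(pred_rel_pos)
    H = (pred_rel_pos / pred_range).reshape(1, 3)  # d||p||/dp = p^T / ||p||
    S = H @ pred_cov @ H.T + sigma_uwb**2          # innovation variance
    nu = measured_range - pred_range               # innovation
    return nu * nu / S[0, 0] <= CHI2_1DOF_95
```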

22 pages, 416 KB  
Review
A Roadmap of Mathematical Optimization for Visual SLAM in Dynamic Environments
by Hui Zhang, Xuerong Zhao, Ruixue Luo, Ziyu Wang, Gang Wang and Kang An
Mathematics 2026, 14(2), 264; https://doi.org/10.3390/math14020264 - 9 Jan 2026
Abstract
The widespread application of robots in complex and dynamic environments demands that Visual SLAM (V-SLAM) be both robust and accurate. However, dynamic objects, varying illumination, and environmental complexity fundamentally challenge the static-world assumption underlying traditional SLAM methods. This review provides a comprehensive investigation into the mathematical foundations of V-SLAM and systematically analyzes the key optimization techniques developed for dynamic environments, with particular emphasis on advances since 2020. We begin by rigorously deriving the probabilistic formulation of V-SLAM and its basis in nonlinear optimization, unifying it under a Maximum a Posteriori (MAP) estimation framework. We then propose a taxonomy based on how dynamic elements are handled mathematically, which reflects the historical evolution from robust estimation to semantic modeling and then to deep learning. This framework provides detailed analysis of three main categories: (1) robust estimation theory-based methods for outlier rejection, elaborating on the mathematical models of M-estimators and switch variables; (2) semantic information and factor graph-based methods for explicit dynamic object modeling, deriving the joint optimization formulation for multi-object tracking and SLAM; and (3) deep learning-based end-to-end optimization methods, discussing their mathematical foundations and interpretability challenges. This paper delves into the mathematical principles, performance boundaries, and theoretical controversies underlying these approaches, concluding with a summary of future research directions informed by the latest developments in the field. The review aims to provide both a solid mathematical foundation for understanding current dynamic V-SLAM techniques and inspiration for future algorithmic innovations. By adopting a math-first perspective and organizing the field through its core optimization paradigms, this work offers a clarifying framework for both understanding and advancing dynamic V-SLAM.
(This article belongs to the Section E2: Control Theory and Mechanics)
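The MAP framework the review builds on can be stated compactly. In generic notation chosen here (state X, measurements z_k with models h_k and covariances Σ_k, robust kernel ρ such as Huber):

```latex
% MAP estimation with independent Gaussian measurement noise reduces to
% (robustified) nonlinear least squares over the SLAM state X.
\hat{X} = \arg\max_{X}\; p(X \mid Z)
        = \arg\max_{X}\; p(X) \prod_{k} p(z_k \mid X)
        = \arg\min_{X}\; \sum_{k} \rho\!\left( \lVert z_k - h_k(X) \rVert^{2}_{\Sigma_k} \right)
```

With ρ the identity this is ordinary nonlinear least squares; replacing it with an M-estimator down-weights large residuals, which is how the first category of methods suppresses dynamic outliers.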

54 pages, 8516 KB  
Review
Interdisciplinary Applications of LiDAR in Forest Studies: Advances in Sensors, Methods, and Cross-Domain Metrics
by Nadeem Fareed, Carlos Alberto Silva, Izaya Numata and Joao Paulo Flores
Remote Sens. 2026, 18(2), 219; https://doi.org/10.3390/rs18020219 - 9 Jan 2026
Abstract
Over the past two decades, Light Detection and Ranging (LiDAR) technology has evolved from early National Aeronautics and Space Administration (NASA)-led airborne laser altimetry into commercially mature systems that now underpin vegetation remote sensing across scales. Continuous advancements in laser engineering, signal processing, and complementary technologies—such as Inertial Measurement Units (IMU) and Global Navigation Satellite Systems (GNSS)—have yielded compact, cost-effective, and highly sophisticated LiDAR sensors. Concurrently, innovations in carrier platforms, including uncrewed aerial systems (UAS), mobile laser scanning (MLS), and Simultaneous Localization and Mapping (SLAM) frameworks, have expanded LiDAR's observational capacity from plot- to global-scale applications in forestry, precision agriculture, ecological monitoring, Above Ground Biomass (AGB) modeling, and wildfire science. This review synthesizes LiDAR's cross-domain capabilities for the following: (a) quantifying vegetation structure, function, and compositional dynamics; (b) recent sensor developments encompassing discrete-return (ALSD) and full-waveform (ALSFW) airborne laser scanning, photon-counting LiDAR (PCL), and emerging multispectral (MSL) and hyperspectral (HSL) LiDAR systems; and (c) state-of-the-art data processing and fusion workflows integrating optical and radar datasets. The synthesis demonstrates that many LiDAR-derived vegetation metrics are inherently transferable across domains when interpreted within a unified structural framework. The review further highlights the growing role of artificial-intelligence (AI)-driven approaches for segmentation, classification, and multitemporal analysis, enabling scalable assessments of vegetation dynamics at unprecedented spatial and temporal extents. By consolidating historical developments, current methodological advances, and emerging research directions, this review establishes a comprehensive state-of-the-art perspective on LiDAR's transformative role and future potential in monitoring and modeling Earth's vegetated ecosystems.
(This article belongs to the Special Issue Digital Modeling for Sustainable Forest Management)

14 pages, 3931 KB  
Article
Experimental Determination of Material Behavior Under Compression of a Carbon-Reinforced Epoxy Composite Boat Damaged by Slamming-like Impact
by Erkin Altunsaray, Mustafa Biçer, Haşim Fırat Karasu and Gökdeniz Neşer
Polymers 2026, 18(2), 173; https://doi.org/10.3390/polym18020173 - 8 Jan 2026
Abstract
Carbon-reinforced epoxy laminated composite (CREC) structures are increasingly utilized in high-speed marine vehicles (HSMVs) due to their high specific strength and stiffness; however, they are frequently subjected to impact loads like slamming and aggressive environmental agents during operation. This study experimentally investigates the Compression After Impact (CAI) behavior of CREC plates with varying lamination sequences under both atmospheric and accelerated aging conditions. The samples were produced using the vacuum-assisted resin infusion method with three specific orientation types: quasi-isotropic, cross-ply, and angle-ply. To simulate the marine environment, specimens were subjected to accelerated aging in a salt fog and cyclic corrosion cabin for periods of 2, 4, and 6 weeks. Before and after the aging process, low-velocity impact tests were conducted at an energy level of 30 J, after which the residual compressive strength was measured by CAI tests. At the end of the aging process, after the sixth week, the performance of the plates with different layer configurations can be summarized as follows. The quasi-isotropic Plates 1 and 2 exhibit opposite behavior: Plate 1, with an initial toughness of 23,000 mJ, improves to 27,000 mJ as it ages, whereas the corresponding values for Plate 2 are around 27,000 and 17,000 mJ. This contrast is attributed to the stacking sequences; placing 0° layers at the beginning and end of the stack appears to enhance performance under compressive load. Plates 3 and 4, with a cross-ply configuration, behave almost identically: their toughness increases from an initial 13,000 mJ to around 23,000 mJ with aging. The angle-ply Plates 5 and 6 demonstrate the highest performance, with values around 35,000 mJ and no clearly defined aging effect. Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray Spectroscopy (EDS) analyses confirmed the presence of matrix cracking, fiber breakage, and salt accumulation (Na and Ca compounds) on the aged surfaces. The study concludes that the impact of environmental aging on CRECs is not uniformly negative; while it degrades certain configurations, it can enhance the toughness and energy absorption of brittle, cross-ply structures through matrix plasticization.

18 pages, 7305 KB  
Article
SERail-SLAM: Semantic-Enhanced Railway LiDAR SLAM
by Weiwei Song, Shiqi Zheng, Xinye Dai, Xiao Wang, Yusheng Wang, Zihao Wang, Shujie Zhou, Wenlei Liu and Yidong Lou
Machines 2026, 14(1), 72; https://doi.org/10.3390/machines14010072 - 7 Jan 2026
Abstract
Reliable state estimation in railway environments presents significant challenges due to geometric degeneracy resulting from repetitive structural layouts and point cloud sparsity caused by high-speed motion. Conventional LiDAR-based SLAM systems frequently suffer from longitudinal drift and mapping artifacts when operating in such feature-scarce and dynamically complex scenarios. To address these limitations, this paper proposes SERail-SLAM, a robust semantic-enhanced multi-sensor fusion framework that tightly couples LiDAR odometry, inertial pre-integration, and GNSS constraints. Unlike traditional approaches that rely on rigid voxel grids or binary semantic masking, we introduce a Semantic-Enhanced Adaptive Voxel Map. By leveraging eigen-decomposition of local point distributions, this mapping strategy dynamically preserves fine-grained stable structures while compressing redundant planar surfaces, thereby enhancing spatial descriptiveness. Furthermore, to mitigate the impact of environmental noise and segmentation uncertainty, a confidence-aware filtering mechanism is developed. This method utilizes raw segmentation probabilities to adaptively weight input measurements, effectively distinguishing reliable landmarks from clutter. Finally, a category-weighted joint optimization scheme is implemented, where feature associations are constrained by semantic stability priors, ensuring globally consistent localization. Extensive experiments on real-world railway datasets demonstrate that the proposed system achieves superior accuracy and robustness compared to state-of-the-art geometric and semantic SLAM methods.
(This article belongs to the Special Issue Dynamic Analysis and Condition Monitoring of High-Speed Trains)
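The eigen-decomposition test behind such adaptive voxel maps is straightforward to sketch: the eigenvalue spectrum of a voxel's point covariance separates planar patches (safe to compress) from linear or fine-grained structure (worth preserving). The thresholds below are illustrative, not the paper's values:

```python
# Classify the local structure of one voxel from its covariance spectrum.
import numpy as np

def classify_voxel(points, ratio=0.01):
    """points: (N, 3) returns inside one voxel."""
    if len(points) < 5:
        return "scattered"
    l3, l2, l1 = np.linalg.eigvalsh(np.cov(points.T))  # ascending: l3<=l2<=l1
    if l1 <= 0:
        return "scattered"      # degenerate voxel (e.g., identical points)
    if l2 / l1 < ratio:
        return "linear"         # two near-zero axes: mast/catenary-like
    if l3 / l1 < ratio:
        return "planar"         # one near-zero axis: ground/wall-like
    return "scattered"
```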

15 pages, 4002 KB  
Article
LiDAR–Visual–Inertial Multi-UGV Collaborative SLAM Framework
by Hongyu Wei, Pingfan Wu, Xutong Zhang, Jianyong Zheng, Jianzheng Zhang and Kun Wei
Drones 2026, 10(1), 31; https://doi.org/10.3390/drones10010031 - 5 Jan 2026
Abstract
The collaborative execution of tasks by multiple Unmanned Ground Vehicles (UGVs) has become a development trend in the field of unmanned systems. Existing collaborative Simultaneous Localization and Mapping (C-SLAM) frameworks mainly rely on visual–inertial or LiDAR–inertial fusion; C-SLAM that combines all three sensor types (LiDAR, visual, and inertial) remains relatively uncommon, and two-sensor systems struggle to achieve robust and accurate global localization in real-world environments. To address this issue, a LiDAR–visual–inertial multi-UGV collaborative SLAM framework is proposed in this paper. The system is divided into three parts. The first part constructs a front-end odometry by integrating the raw information from the LiDAR, visual, and inertial sensors, providing accurate initial pose estimation and local mapping for each UGV. The second part exploits the similarity between the different local maps to form a global map of the environment. The third part performs global localization and mapping optimization for the multi-UGV localization system. A series of real-world experiments was conducted to verify the effectiveness of the proposed framework. Over an average trajectory length of 237 m, the framework achieves a mean Absolute Pose Error (APE) of 1.49 m and a mean Relative Pose Error (RPE) of 1.68° after global optimization. The experimental results demonstrate superior collaborative localization and mapping performance, with the mean APE reduced by 5.4% and the mean RPE reduced by 1.4% compared to other methods.
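For reference, the reported APE figure is conventionally the mean translational error between time-aligned estimated and ground-truth positions (an evo-style computation; the trajectory alignment step is omitted here, with a common frame assumed):

```python
# Mean absolute pose error over matched trajectory samples.
import numpy as np

def absolute_pose_error(est_xyz, gt_xyz):
    """est_xyz, gt_xyz: (N, 3) positions at matching timestamps (m)."""
    return float(np.mean(np.linalg.norm(est_xyz - gt_xyz, axis=1)))
```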

19 pages, 38545 KB  
Article
Improving Dynamic Visual SLAM in Robotic Environments via Angle-Based Optical Flow Analysis
by Sedat Dikici and Fikret Arı
Electronics 2026, 15(1), 223; https://doi.org/10.3390/electronics15010223 - 3 Jan 2026
Abstract
Dynamic objects present a major challenge for visual simultaneous localization and mapping (Visual SLAM), as feature measurements originating from moving regions can corrupt camera pose estimation and lead to inaccurate maps. In this paper, we propose a lightweight, semantic-free front-end enhancement for ORB-SLAM that detects and suppresses dynamic features using optical flow geometry. The key idea is to estimate a global motion direction point (MDP) from optical flow vectors and to classify feature points based on their angular consistency with the camera-induced motion field. Unlike magnitude-based flow filtering, the proposed strategy exploits the geometric consistency of optical flow with respect to a motion direction point, providing robustness not only to depth variation and camera speed changes but also to different camera motion patterns, including pure translation and pure rotation. The method is integrated into the ORB-SLAM front-end without modifying the back-end optimization or cost function. Experiments on public dynamic-scene datasets demonstrate that the proposed approach reduces absolute trajectory error by up to approximately 45% compared to baseline ORB-SLAM, while maintaining real-time performance on a CPU-only platform. These results indicate that reliable dynamic feature suppression can be achieved without semantic priors or deep learning models.
(This article belongs to the Section Computer Science & Engineering)
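The angular-consistency test can be sketched directly: estimate the motion direction point as the least-squares intersection of the lines carried by the flow vectors, then flag features whose flow direction disagrees with the ray from that point. Both the estimator and the threshold below are simplifications, not the paper's implementation:

```python
# Simplified MDP estimation and angle-based dynamic-feature test.
import numpy as np

def estimate_mdp(pts, flows):
    """Least-squares intersection of the lines through each feature along
    its (nonzero) flow vector. pts, flows: (N, 2)."""
    d = flows / np.linalg.norm(flows, axis=1, keepdims=True)
    n = np.stack([-d[:, 1], d[:, 0]], axis=1)       # normals to flow lines
    mdp, *_ = np.linalg.lstsq(n, np.sum(n * pts, axis=1), rcond=None)
    return mdp

def is_dynamic(pt, flow, mdp, max_angle_deg=15.0):
    """Flag a feature whose flow deviates from the ray out of the MDP."""
    ray = pt - mdp
    c = np.dot(ray, flow) / (np.linalg.norm(ray) * np.linalg.norm(flow))
    return np.degrees(np.arccos(np.clip(abs(c), 0.0, 1.0))) > max_angle_deg
```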

24 pages, 7208 KB  
Article
Dynamic SLAM by Combining Rigid Feature Point Set Modeling and YOLO
by Pengchao Ding, Weidong Wang, Xian Wu, Kangle Xu, Dongmei Wu and Zhijiang Du
Sensors 2026, 26(1), 235; https://doi.org/10.3390/s26010235 - 30 Dec 2025
Abstract
To obtain accurate location information in dynamic environments, we propose a dynamic visual–inertial SLAM algorithm that operates in real time. We combine the YOLO-V5 algorithm with a depth-threshold extraction algorithm to achieve real-time pixel-level segmentation of objects. To handle cases where dynamic targets are occluded by other objects, we design an object depth extraction method based on K-means clustering. We also design a factor graph optimization that distinguishes rigid from non-rigid dynamic objects based on object category, in order to better exploit the motion information of dynamic objects, and we use a Kalman filter to achieve object matching and tracking. To recover as many rigid targets as possible, we design an adaptive rigid point set modeling algorithm that further supplements the rigid objects. Finally, we evaluate the algorithm on public and self-built datasets, verifying its ability to handle dynamic environments.
(This article belongs to the Section Sensing and Imaging)
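Depth extraction by K-means, as used above for occlusion handling, can be illustrated in a few lines; the cluster count and the rule for picking the object's cluster are assumptions, not the paper's procedure:

```python
# Cluster depths inside a YOLO box and pick the dominant depth mode, so an
# occluder at a different depth does not corrupt the object's estimate.
import numpy as np
from sklearn.cluster import KMeans

def object_depth(depth_roi, k=3, min_frac=0.2):
    """depth_roi: 2D array of depths (m) inside a detection box."""
    d = depth_roi[np.isfinite(depth_roi) & (depth_roi > 0)].reshape(-1, 1)
    if d.shape[0] < k:
        return float("nan")
    km = KMeans(n_clusters=k, n_init=10).fit(d)
    sizes = np.bincount(km.labels_, minlength=k)
    big = [i for i in range(k) if sizes[i] >= min_frac * sizes.sum()]
    best = max(big or range(k), key=lambda i: sizes[i])
    return float(km.cluster_centers_[best, 0])
```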
