Search Results (953)

Search Parameters:
Keywords = indoor-navigation

24 pages, 7868 KB  
Article
An Indoor UAV Localization Framework with ESKF Tightly-Coupled Fusion and Multi-Epoch UWB Outlier Rejection
by Jianmin Zhao, Zhongliang Deng, Enwen Hu, Wenju Su, Boyang Lou and Yanxu Liu
Sensors 2025, 25(24), 7673; https://doi.org/10.3390/s25247673 - 18 Dec 2025
Abstract
Unmanned aerial vehicles (UAVs) are increasingly used indoors for inspection, security, and emergency tasks. Achieving accurate and robust localization under Global Navigation Satellite System (GNSS) unavailability and obstacle occlusions is therefore a critical challenge. Due to their inherent physical limitations, Inertial Measurement Unit (IMU)–based localization errors accumulate over time; Ultra-Wideband (UWB) measurements suffer from systematic biases in Non-Line-of-Sight (NLOS) environments; and Visual–Inertial Odometry (VIO) depends heavily on environmental features, making it susceptible to long-term drift. We propose a tightly coupled fusion framework based on the Error-State Kalman Filter (ESKF). Using an IMU motion model for prediction, the method incorporates raw UWB ranges, VIO relative poses, and TFmini altitude in the update step. To suppress abnormal UWB measurements, a multi-epoch outlier rejection method constrained by VIO is developed, which robustly eliminates NLOS range measurements and mitigates the influence of outliers on observation updates. This framework improves both observation quality and fusion stability. We validate the proposed method on a real-world platform in an underground parking garage. Experimental results demonstrate that, in complex indoor environments, the proposed approach exhibits significant advantages over existing algorithms, achieving higher localization accuracy and robustness while effectively suppressing UWB NLOS errors as well as IMU and VIO drift.
(This article belongs to the Section Navigation and Positioning)
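As a reader aid (not from the paper): the core of multi-epoch UWB outlier rejection is deciding, before each filter update, which raw ranges are trustworthy. A minimal Python sketch of innovation gating against the predicted position follows; the anchor layout, noise level, and 3-sigma gate are illustrative assumptions, and the paper's VIO-constrained multi-epoch logic is not reproduced here.

```python
import numpy as np

# Illustrative anchor layout and noise values (assumptions, not the paper's).
ANCHORS = np.array([[0.0, 0.0, 2.5], [10.0, 0.0, 2.5],
                    [0.0, 8.0, 2.5], [10.0, 8.0, 2.5]])
SIGMA_UWB = 0.1   # assumed ranging std-dev (m)
GATE = 3.0        # assumed gate width in sigmas

def gate_uwb_ranges(pos_pred, ranges):
    """Keep only ranges whose innovation w.r.t. the predicted position is small."""
    pred = np.linalg.norm(ANCHORS - pos_pred, axis=1)  # predicted anchor distances
    innovation = ranges - pred
    keep = np.abs(innovation) < GATE * SIGMA_UWB       # flag likely NLOS outliers
    return keep, innovation

# Toy usage: a 0.9 m NLOS bias on anchor 2 gets rejected before the update.
pos_pred = np.array([4.0, 3.0, 1.0])
true_ranges = np.linalg.norm(ANCHORS - pos_pred, axis=1)
measured = true_ranges + np.array([0.02, -0.03, 0.90, 0.01])
keep, _ = gate_uwb_ranges(pos_pred, measured)
print(keep)   # -> [ True  True False  True]
```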

20 pages, 4309 KB  
Article
Targetless Radar–Camera Calibration via Trajectory Alignment
by Ozan Durmaz and Hakan Cevikalp
Sensors 2025, 25(24), 7574; https://doi.org/10.3390/s25247574 - 13 Dec 2025
Viewed by 311
Abstract
Accurate extrinsic calibration between radar and camera sensors is essential for reliable multi-modal perception in robotics and autonomous navigation. Traditional calibration methods often rely on artificial targets such as checkerboards or corner reflectors, which can be impractical in dynamic or large-scale environments. This study presents a fully targetless calibration framework that estimates the rigid spatial transformation between radar and camera coordinate frames by aligning the trajectories of a moving object as observed by each sensor. The proposed method integrates You Only Look Once version 5 (YOLOv5)-based 3D object localization for the camera stream with Density-Based Spatial Clustering of Applications with Noise (DBSCAN) and Random Sample Consensus (RANSAC) filtering for sparse and noisy radar measurements. A passive temporal synchronization technique, based on Root Mean Square Error (RMSE) minimization, corrects timestamp offsets without requiring hardware triggers. Rigid transformation parameters are computed using the Kabsch and Umeyama algorithms, ensuring robust alignment even under millimeter-wave (mmWave) radar sparsity and measurement bias. The framework is experimentally validated in an indoor OptiTrack-equipped laboratory using a Skydio 2 drone as the dynamic target. Results demonstrate sub-degree rotational accuracy and decimeter-level translational error (approximately 0.12–0.27 m depending on the metric), with successful generalization to unseen motion trajectories. The findings highlight the method's applicability for real-world autonomous systems requiring practical, markerless multi-sensor calibration.
(This article belongs to the Section Radar Sensors)
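For readers unfamiliar with the Kabsch/Umeyama step, the sketch below shows how a rigid rotation and translation can be recovered from two synchronized trajectories via SVD of their cross-covariance; the synthetic trajectories and tolerances are illustrative assumptions, not the paper's data.

```python
import numpy as np

def rigid_align(src, dst):
    """Kabsch/Umeyama-style least squares: find R, t with dst ~ R @ src + t."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)                 # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))            # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Synthetic stand-in for the two sensors' observed target trajectories.
rng = np.random.default_rng(0)
traj_radar = rng.uniform(-2.0, 2.0, size=(100, 3))
a = np.deg2rad(30.0)
R_true = np.array([[np.cos(a), -np.sin(a), 0.0],
                   [np.sin(a),  np.cos(a), 0.0],
                   [0.0,        0.0,       1.0]])
t_true = np.array([0.5, -0.2, 1.0])
traj_cam = traj_radar @ R_true.T + t_true

R_est, t_est = rigid_align(traj_radar, traj_cam)
print(np.allclose(R_est, R_true), np.round(t_est, 3))   # True [ 0.5 -0.2  1. ]
```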

30 pages, 22912 KB  
Article
HV-LIOM: Adaptive Hash-Voxel LiDAR–Inertial SLAM with Multi-Resolution Relocalization and Reinforcement Learning for Autonomous Exploration
by Shicheng Fan, Xiaopeng Chen, Weimin Zhang, Peng Xu, Zhengqing Zuo, Xinyan Tan, Xiaohai He, Chandan Sheikder, Meijun Guo and Chengxiang Li
Sensors 2025, 25(24), 7558; https://doi.org/10.3390/s25247558 - 12 Dec 2025
Viewed by 323
Abstract
This paper presents HV-LIOM (Adaptive Hash-Voxel LiDAR–Inertial Odometry and Mapping), a unified LiDAR–inertial SLAM and autonomous exploration framework for real-time 3D mapping in dynamic, GNSS-denied environments. We propose an adaptive hash-voxel mapping scheme that improves memory efficiency and real-time state estimation by subdividing voxels according to local geometric complexity and point density. To enhance robustness to poor initialization, we introduce a multi-resolution relocalization strategy that enables reliable localization against a prior map under large initial pose errors. A learning-based loop-closure module further detects revisited places and injects global constraints, while global pose-graph optimization maintains long-term map consistency. For autonomous exploration, we integrate a Soft Actor–Critic (SAC) policy that selects informative navigation targets online, improving exploration efficiency in unknown scenes. We evaluate HV-LIOM on public datasets (Hilti and NCLT) and a custom mobile robot platform. Results show that HV-LIOM improves absolute pose accuracy by up to 15.2% over FAST-LIO2 in indoor settings and by 7.6% in large-scale outdoor scenarios. The learned exploration policy achieves comparable or superior area coverage with reduced travel distance and exploration time relative to sampling-based and learning-based baselines.
(This article belongs to the Section Radar Sensors)
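To make the hash-voxel idea concrete, here is a minimal sketch in which points are bucketed by a spatial hash and a voxel is subdivided once its point count (a crude stand-in for the paper's geometric-complexity and density criteria) exceeds a cap; voxel sizes and the threshold are illustrative assumptions.

```python
import numpy as np
from collections import defaultdict

COARSE = 1.0     # assumed coarse voxel edge (m)
FINE = 0.25      # assumed fine voxel edge after subdivision
SPLIT_AT = 20    # assumed point-count threshold triggering subdivision

class HashVoxelMap:
    """Points bucketed by a spatial hash; dense voxels spill into finer ones."""
    def __init__(self):
        self.coarse = defaultdict(list)
        self.fine = defaultdict(list)

    @staticmethod
    def _key(p, size):
        return tuple(np.floor(p / size).astype(int))   # hashable voxel index

    def insert(self, p):
        cell = self.coarse[self._key(p, COARSE)]
        cell.append(p)
        if len(cell) > SPLIT_AT:                       # dense region detected:
            for q in cell:                             # re-bucket at fine scale
                self.fine[self._key(q, FINE)].append(q)
            cell.clear()

m = HashVoxelMap()
for p in np.random.default_rng(1).uniform(0.0, 2.0, size=(200, 3)):
    m.insert(p)
print(len(m.coarse), "coarse cells,", len(m.fine), "fine cells")
```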

15 pages, 1140 KB  
Article
Skyglow-Induced Luminance Gradients Influence Orientation in a Migratory Moth
by Yi Ji, Yibo Ma, Zhangsu Wen, Boya Gao, James J. Foster, Daihong Yu, Yan Wu, Guijun Wan and Gao Hu
Insects 2025, 16(12), 1252; https://doi.org/10.3390/insects16121252 - 10 Dec 2025
Viewed by 388
Abstract
Artificial light at night (ALAN) is altering nocturnal ecosystems. While the effects of direct light sources on insect behavior are well studied, the influence of large-scale skyglow on migratory orientation remains unclear. Here, we tested how skyglow-induced luminance gradients influence the flight orientation of the fall armyworm, Spodoptera frugiperda, a globally invasive nocturnal migrant that performs seasonal migration in China, using controlled indoor simulations and field assays. Surprisingly, individuals consistently oriented toward darker regions, suggesting that luminance gradients may influence their heading away from the expected seasonal migratory direction. This response was highly consistent across both settings, indicating that skyglow-generated luminance gradients can function as directional cues and potentially interfere with seasonal orientation processes. Such gradients may thus function as ecological traps and represent an underrecognized factor in nocturnal insect navigation. Our findings point to a previously overlooked pathway through which skyglow may affect long-distance orientation in nocturnal migrants, underscoring the need for further work to evaluate its ecological significance within light-polluted environments.
(This article belongs to the Section Insect Pest and Vector Management)

22 pages, 4020 KB  
Article
From Simulation to Reality: Comparative Performance Analysis of SLAM Toolbox and Cartographer in ROS 2
by İbrahim İnce, Derya Yiltas-Kaplan and Fatih Keleş
Electronics 2025, 14(24), 4822; https://doi.org/10.3390/electronics14244822 - 8 Dec 2025
Viewed by 552
Abstract
This paper presents a comparative analysis of SLAM Toolbox and Cartographer mapping performance in both simulated and real-world environments using ROS 2. The aim of the study is to evaluate the effectiveness, accuracy, and resource utilization of each Simultaneous Localization and Mapping (SLAM) tool under identical conditions. The experiments were conducted using the Humble Hawksbill distribution of ROS 2, with mapping tasks performed in indoor environments via Gazebo simulation and physical robot tests. Results show that SLAM Toolbox demonstrated slightly more consistent map generation in environments that included human movement and small object relocations. It achieved an Absolute Trajectory Error (ATE) of 0.13 m, compared to 0.21 m for Cartographer under identical test conditions. However, Toolbox required approximately 70% CPU usage, 293 MB RAM, and a startup time of 5.2 s, reflecting higher computational demand and configuration complexity. In contrast, Cartographer exhibited slower map generation and slightly higher RAM usage (299 MB) in simulation, while requiring higher CPU load (80%) and showing greater sensitivity to parameter tuning, which contributed to less accurate localization in noise-free simulations. This study highlights the advantages and limitations of both SLAM technologies and provides practical guidance for selecting appropriate SLAM solutions in robotic mapping and autonomous navigation tasks, particularly for systems deployed on resource-constrained platforms.
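For context on the headline numbers, ATE figures such as those quoted above are typically computed as the RMSE of per-pose translational error between an estimated trajectory and ground truth. A minimal sketch with synthetic trajectories (assumed to be already aligned in a common frame) follows.

```python
import numpy as np

def ate_rmse(est, gt):
    """RMSE of per-pose translational error between aligned trajectories."""
    err = np.linalg.norm(est - gt, axis=1)
    return float(np.sqrt(np.mean(err ** 2)))

# Synthetic ground truth and a noisy estimate (values are illustrative).
gt = np.cumsum(np.full((200, 2), 0.05), axis=0)
est = gt + np.random.default_rng(2).normal(0.0, 0.1, gt.shape)
print(f"ATE = {ate_rmse(est, gt):.2f} m")
```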

24 pages, 29138 KB  
Article
FloorTag: A Hybrid Indoor Localization System Based on Floor-Deployed Visual Markers and Pedometer Integration
by Gaetano Carmelo La Delfa, Marta Plaza-Hernandez, Javier Prieto, Albano Carrera and Salvatore Monteleone
Electronics 2025, 14(24), 4819; https://doi.org/10.3390/electronics14244819 - 7 Dec 2025
Viewed by 262
Abstract
With the widespread adoption of smartphones and wearable devices, localization systems have become increasingly important in modern society. While Global Positioning System (GPS) technology is widely accepted as a standard outdoors, accurately determining user location indoors remains a significant challenge despite extensive research efforts. Indoor positioning systems (IPSs) play a critical role in various sectors, including retail, tourism, transportation, healthcare, and emergency services. However, existing solutions require costly infrastructure deployments or complex area mapping, or they offer suboptimal user experiences without achieving satisfactory accuracy. This paper introduces FloorTag, a scalable, low-cost, and minimally invasive hybrid IPS designed specifically for smartphone platforms. FloorTag combines 2D visual markers placed on floor surfaces at key locations with inertial sensor data from mobile devices. Each marker is associated with a unique identifier and precise spatial coordinates, enabling an immediate reset of accumulated localization error upon detection. Between markers, a pedometer-based dead reckoning module maintains continuous location tracking. The localization process is designed to be seamless and unobtrusive to the user: when activated by the app during navigation, the phone's rear camera, naturally angled toward the floor during walking, captures markers. This solution avoids explicit user scans while preserving the performance benefits of visual positioning. To model the indoor environment, FloorTag introduces the concept of Path-Points, which discretize the walkable space, and Informative Layers, which add semantic context to the navigation experience. This paper details the proposed methodology and the client–server system architecture and presents experimental results obtained from a prototype deployed in an academic building at the University of Catania, Italy. The findings demonstrate reliable localization at approximately 2 m spatial granularity and near-real-time performance across varying lighting conditions, confirming the feasibility of the approach and the effectiveness of the system.
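A minimal sketch of the hybrid scheme described above, assuming a hypothetical marker table and a fixed stride length: pedometer dead reckoning advances the position between markers, and a decoded floor marker snaps the position back to known coordinates, discarding accumulated drift.

```python
import math

STEP_LEN = 0.7                        # assumed stride length (m)
MARKERS = {"tag_17": (12.0, 4.5)}     # hypothetical marker id -> coordinates

def update(pos, heading_rad=None, marker_id=None):
    """One dead-reckoning step, or an absolute reset on marker detection."""
    if marker_id is not None:
        return MARKERS[marker_id]      # drift discarded at the marker
    x, y = pos
    return (x + STEP_LEN * math.cos(heading_rad),
            y + STEP_LEN * math.sin(heading_rad))

pos = (10.0, 4.0)
for heading in (0.0, 0.1, 0.05):       # three pedometer steps between markers
    pos = update(pos, heading_rad=heading)
pos = update(pos, marker_id="tag_17")  # floor marker decoded: snap to (12, 4.5)
print(pos)
```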

24 pages, 5142 KB  
Article
A Method for Extracting Indoor Structural Landmarks Based on Indoor Fire Protection Plan Images of Buildings
by Yueyong Pang, Heng Xu, Lizhi Miao and Jieying Zheng
Buildings 2025, 15(24), 4411; https://doi.org/10.3390/buildings15244411 - 6 Dec 2025
Viewed by 234
Abstract
Indoor landmarks play a crucial role in indoor positioning and route planning for pedestrians and unmanned devices. Indoor structural landmarks, one type of indoor landmark, can provide rich steering and semantic descriptions for indoor navigation services. However, most traditional indoor landmark extraction methods rely on indoor points of interest and indoor vector map data, which are difficult to acquire, and these methods overlook indoor structural landmarks. Therefore, this paper proposes a method for extracting indoor structural landmarks from commonly available indoor fire protection plan images. First, the HSV model is employed to eliminate noise from the original image, and vector data of indoor components are obtained using a Canny operator. Subsequently, visibility is calculated over a grid-based segmentation of the indoor space. Finally, indoor structural landmarks are identified and extracted through grid visibility classification, directional clustering analysis, and spatial proximity verification. This approach opens up new ideas for indoor landmark extraction. The experimental results show that the proposed method can effectively extract indoor structural landmarks with an accuracy of over 90%, verifying the feasibility of using indoor fire protection plan data for landmark extraction and expanding the available data sources.
(This article belongs to the Section Building Structures)
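To illustrate the front half of the extraction pipeline, the sketch below removes colored annotation noise via an HSV saturation mask and then extracts structural edges with a Canny operator on a synthetic stand-in image; the HSV bounds and Canny thresholds are illustrative assumptions, not the paper's tuned values.

```python
import cv2
import numpy as np

# Synthetic stand-in for a fire protection plan: black walls, a red symbol.
plan = np.full((400, 600, 3), 255, np.uint8)
cv2.rectangle(plan, (50, 50), (550, 350), (0, 0, 0), 3)     # wall outline
cv2.circle(plan, (300, 200), 20, (0, 0, 255), -1)           # colored "noise"

# HSV saturation mask keeps near-grayscale structure, drops colored symbols.
hsv = cv2.cvtColor(plan, cv2.COLOR_BGR2HSV)
low_sat = cv2.inRange(hsv, (0, 0, 0), (180, 60, 255))       # assumed bounds
cleaned = plan.copy()
cleaned[low_sat == 0] = 255                                  # paint noise white

# Canny edges on the cleaned image approximate structural component outlines.
edges = cv2.Canny(cv2.cvtColor(cleaned, cv2.COLOR_BGR2GRAY), 50, 150)
print("edge pixels:", int((edges > 0).sum()))
```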

28 pages, 2836 KB  
Article
MA-EVIO: A Motion-Aware Approach to Event-Based Visual–Inertial Odometry
by Mohsen Shahraki, Ahmed Elamin and Ahmed El-Rabbany
Sensors 2025, 25(23), 7381; https://doi.org/10.3390/s25237381 - 4 Dec 2025
Viewed by 426
Abstract
Indoor localization remains a challenging task due to the unavailability of reliable global navigation satellite system (GNSS) signals in most indoor environments. One way to overcome this challenge is through visual–inertial odometry (VIO), which enables real-time pose estimation by fusing camera and inertial measurements. However, VIO suffers from performance degradation under high-speed motion and in poorly lit environments. In such scenarios, motion blur, sensor noise, and low temporal resolution reduce the accuracy and robustness of the estimated trajectory. To address these limitations, we propose a motion-aware event-based VIO (MA-EVIO) system that adaptively fuses asynchronous event data, frame-based imagery, and inertial measurements for robust and accurate pose estimation. MA-EVIO employs a hybrid tracking strategy combining sparse feature matching and direct photometric alignment. A key innovation is its motion-aware keyframe selection, which dynamically adjusts tracking parameters based on real-time motion classification and feature quality. This motion awareness also enables adaptive sensor fusion: during fast motion, the system prioritizes event data, while under slow or stable motion, it relies more on RGB frames and feature-based tracking. Experimental results on the DAVIS240c and VECtor benchmarks demonstrate that MA-EVIO outperforms state-of-the-art methods, achieving a lower mean position error (MPE) of 0.19 on DAVIS240c compared to 0.21 (EVI-SAM) and 0.24 (PL-EVIO), and superior performance on VECtor with MPE/mean rotation error (MRE) of 1.19%/1.28 deg/m versus 1.27%/1.42 deg/m (EVI-SAM) and 1.93%/1.56 deg/m (PL-EVIO). These results validate the effectiveness of MA-EVIO in challenging dynamic indoor environments.
(This article belongs to the Special Issue Multi-Sensor Integration for Mobile and UAS Mapping)
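The motion-aware fusion idea can be illustrated with a toy policy: a crude motion class derived from gyro and accelerometer magnitudes selects how strongly the fused estimate weights the event stream versus RGB frames. All thresholds and weights below are illustrative assumptions, not MA-EVIO's actual parameters.

```python
# Assumed fusion weights (event stream, frame stream) per motion class.
WEIGHTS = {"fast": (0.8, 0.2), "moderate": (0.5, 0.5), "slow": (0.2, 0.8)}

def classify_motion(angular_rate, lin_accel):
    """Crude motion class from gyro (rad/s) and accel (m/s^2) magnitudes."""
    if angular_rate > 2.0 or lin_accel > 8.0:
        return "fast"
    if angular_rate > 0.5 or lin_accel > 2.0:
        return "moderate"
    return "slow"

def fuse(pose_event, pose_frame, angular_rate, lin_accel):
    """Blend the two pose estimates according to the current motion class."""
    w_e, w_f = WEIGHTS[classify_motion(angular_rate, lin_accel)]
    return tuple(w_e * e + w_f * f for e, f in zip(pose_event, pose_frame))

# Fast motion: the event-based estimate dominates the fused pose.
print(fuse((1.00, 2.00), (1.10, 1.95), angular_rate=3.1, lin_accel=1.0))
```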

17 pages, 5641 KB  
Article
A Novel Smartphone PDR Framework Based on Map-Aided Adaptive Particle Filter with a Reduced State Space
by Mengchi Ai, Ilyar Asl Sabbaghian Hokmabadi and Xuan Zhao
ISPRS Int. J. Geo-Inf. 2025, 14(12), 476; https://doi.org/10.3390/ijgi14120476 - 2 Dec 2025
Viewed by 385
Abstract
Accurate, reliable and infrastructure-free indoor positioning using a smartphone is considered an essential topic for applications such as indoor emergency response and indoor path planning. While inertial measurement units (IMUs) offer continuous and high-frequency motion data, pedestrian dead reckoning (PDR) based on IMU data suffers from significant and accumulative errors. Map-aided particle filters (PFs) are important pose estimation frameworks that have shown the ability to eliminate drift by incorporating additional constraints from a pre-built floor map, without relying on other wireless or perception-based infrastructure. However, despite recent approaches, a key challenge remains: existing map-aided PF-PDR solutions are computationally demanding, as they typically rely on a large number of particles and require map boundaries to eliminate non-matching particles. This process introduces substantial computational overhead, limiting efficiency and real-time performance on resource-constrained platforms such as smartphones. To address this issue, this work proposes a novel map-aided PF-PDR framework that leverages a smartphone's IMU data and a pre-built vectorized floor plan map. The proposed method introduces an adaptive PF-PDR solution that detects particle convergence using a cross-entropy distance between the particles and a Gaussian distribution. The number of particles is reduced significantly after convergence is detected. Further, to reduce the computational cost, only the heading is included in particle attitude sampling. The heading is estimated accurately by levelling gyroscope measurements to a virtual plane parallel to the ground. Experiments are performed using a dataset collected on a smartphone, and the results demonstrate improved performance, especially in drift reduction, achieving a mean position error of 0.9 m and a processing rate of 37.0 Hz.
(This article belongs to the Special Issue Indoor Mobile Mapping and Location-Based Knowledge Services)
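A minimal sketch of the adaptive particle-count idea: once the particle cloud is compact enough to be summarized by a single Gaussian (a simple spread check below stands in for the paper's cross-entropy criterion), most particles can be dropped, cutting the per-update cost. Counts and thresholds are illustrative assumptions.

```python
import numpy as np

def maybe_downsample(particles, keep=100, max_std=0.5):
    """Drop most particles once the cloud's spread indicates convergence."""
    if particles.std(axis=0).max() < max_std and len(particles) > keep:
        idx = np.random.default_rng(3).choice(len(particles), keep, replace=False)
        return particles[idx]
    return particles

# A converged 2000-particle cloud collapses to 100 particles.
cloud = np.random.default_rng(4).normal([5.0, 2.0], 0.2, size=(2000, 2))
cloud = maybe_downsample(cloud)
print(len(cloud))   # -> 100; every later update is ~20x cheaper
```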

21 pages, 3387 KB  
Article
Development of an Autonomous and Interactive Robot Guide for Industrial Museum Environments Using IoT and AI Technologies
by Andrés Arteaga-Vargas, David Velásquez, Juan Pablo Giraldo-Pérez and Daniel Sanin-Villa
Sci 2025, 7(4), 175; https://doi.org/10.3390/sci7040175 - 1 Dec 2025
Viewed by 470
Abstract
This paper presents the design of an autonomous robot guide for a museum-like environment in a motorcycle assembly plant. The system integrates Industry 4.0 technologies such as artificial vision, indoor positioning, generative artificial intelligence, and cloud connectivity to enhance the visitor experience. The development follows the Design Inclusive Research (DIR) methodology and the VDI 2206 standard to ensure a structured scientific and engineering process. A key innovation is the integration of mmWave sensors alongside LiDAR and RGB-D cameras, enabling reliable human detection and improved navigation safety in reflective indoor environments, as well as the deployment of an open-source large language model for natural, on-device interaction with visitors. The current results include the complete mechanical, electronic, and software architecture; simulation validation; and a preliminary implementation in the real museum environment, where the system demonstrated consistent autonomous navigation, stable performance, and effective user interaction.
(This article belongs to the Section Computer Sciences, Mathematics and AI)

26 pages, 2310 KB  
Systematic Review
A Systematic Review of Intelligent Navigation in Smart Warehouses Using PRISMA: Integrating AI, SLAM, and Sensor Fusion for Mobile Robots
by Domagoj Zimmer, Mladen Jurišić, Ivan Plaščak, Željko Barač, Hrvoje Glavaš, Dorijan Radočaj and Robert Benković
Eng 2025, 6(12), 339; https://doi.org/10.3390/eng6120339 - 1 Dec 2025
Viewed by 544
Abstract
This systematic review focuses on intelligent navigation as a core enabler of autonomy in smart warehouses, where mobile robots must dynamically perceive, reason, and act in complex, human-shared environments. By synthesizing advancements in AI-driven decision-making, SLAM, and multi-sensor fusion, the study highlights how intelligent navigation architectures reduce operational uncertainty and enhance task efficiency in logistics automation. Smart warehouses, powered by mobile robots and AGVs and integrated with AI and algorithms, are enabling more efficient storage with less human labour. This review followed the PRISMA 2020 guidelines to identify, screen, and synthesize evidence from 106 peer-reviewed scientific articles (including primary studies, technical papers, and reviews) published between 2020 and 2025, sourced from Web of Science. Thematic synthesis was conducted across eight domains: AI, SLAM, sensor fusion, safety, network, path planning, implementation, and design. The transition to smart warehouses requires modern technologies to automate tasks and optimize resources. This article examines how intelligent systems can be integrated with mathematical models to improve navigation accuracy, reduce costs, and prioritize human safety. Real-time data management with precise information for AMRs and AGVs is crucial for low-risk operation. The article studies AI, the IoT, LiDAR, machine learning (ML), SLAM, and other new technologies for the successful implementation of mobile robots in smart warehouses. Modern technologies such as reinforcement learning optimize the routes and tasks of mobile robots. Data and sensor fusion methods integrate information from various sources to provide a more precise understanding of the indoor environment and inventory. Semantic mapping enables mobile robots to navigate and interact with complex warehouse environments with high accuracy in real time. The article also analyses how virtual reality (VR) can improve the spatial orientation of mobile robots by supporting the development of sophisticated navigation solutions that reduce time and financial costs.
(This article belongs to the Special Issue Interdisciplinary Insights in Engineering Research)

28 pages, 3541 KB  
Article
Hybrid Boustrophedon and Direction-Biased Region Transitions for Mobile Robot Coverage Path Planning: A Region-Based Multi-Cost Framework
by Suat Karakaya and Mehmet Zeki Konyar
Appl. Sci. 2025, 15(23), 12666; https://doi.org/10.3390/app152312666 - 29 Nov 2025
Viewed by 195
Abstract
Achieving efficient Coverage Path Planning (CPP) in indoor and semi-structured settings necessitates both organized area segmentation and dependable transitions between coverage zones. This research introduces an improved region-guided CPP framework that incorporates rectangular region expansion, Boustrophedon-based coverage within regions, and an obstacle-aware planner for transitioning between regions. In contrast to conventional methods that depend solely on A*-based routing, the suggested transition module utilizes a multi-weighted cost model that integrates Euclidean distance, obstacle density, and heading changes to create smoother, more context-sensitive links between regions. The approach is assessed on five representative grid maps inspired by the layouts of building corridors and greenhouse-like strip structures. Performance indicators—including intra-region coverage distance, inter-region transition cost, overall path distance, coverage ratio, and computation duration—illustrate the method's efficiency. Experimental findings indicate consistent coverage rates ranging from 96% to 99%, with total computation times between 312 and 844 ms. When compared to traditional global Boustrophedon and spiral scanning methods, the proposed system attains noticeably shorter transition paths and enhanced navigation efficiency, particularly in narrow corridors and cluttered environments. In summary, the framework provides a modular, computationally efficient, and obstacle-aware solution that is well-suited for autonomous mobile robot coverage path planning tasks.
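A minimal sketch of a multi-weighted transition cost of the kind described above, scoring a candidate inter-region link by Euclidean distance, obstacle density along the link, and the heading change it forces; the weights and candidate data are illustrative assumptions.

```python
import math

W_DIST, W_OBS, W_HEAD = 1.0, 4.0, 0.5   # assumed term weights

def transition_cost(p, q, heading, obstacle_density):
    """Score a candidate link from p to q given the current heading (rad)."""
    dist = math.hypot(q[0] - p[0], q[1] - p[1])
    new_heading = math.atan2(q[1] - p[1], q[0] - p[0])
    turn = abs(math.atan2(math.sin(new_heading - heading),
                          math.cos(new_heading - heading)))  # wrapped to [0, pi]
    return W_DIST * dist + W_OBS * obstacle_density + W_HEAD * turn

# Pick the cheaper of two candidate entry points into the next region.
candidates = [((6.0, 1.0), 0.10), ((5.0, 4.0), 0.45)]  # (point, density on link)
best = min(candidates, key=lambda c: transition_cost((0.0, 0.0), c[0], 0.0, c[1]))
print(best[0])   # -> (6.0, 1.0): nearer, clearer, and straighter ahead
```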

16 pages, 8229 KB  
Article
MVL-Loc: Leveraging Vision-Language Model for Generalizable Multi-Scene Camera Relocalization
by Zhendong Xiao, Shan Yang, Shujie Ji, Jun Yin, Ziling Wen and Wu Wei
Appl. Sci. 2025, 15(23), 12642; https://doi.org/10.3390/app152312642 - 28 Nov 2025
Viewed by 258
Abstract
Camera relocalization, a cornerstone capability of modern computer vision, accurately determines a camera's position and orientation from images and is essential for applications in augmented reality, mixed reality, autonomous driving, delivery drones, and robotic navigation. Traditional deep learning-based methods regress camera pose from images in a single scene and therefore lack generalization and robustness in diverse environments. We propose MVL-Loc, a novel end-to-end multi-scene six-degrees-of-freedom camera relocalization framework. MVL-Loc leverages pretrained world knowledge from vision-language models and incorporates multimodal data to generalize across both indoor and outdoor settings. Furthermore, natural language is employed as a directive tool to guide the multi-scene learning process, facilitating semantic understanding of complex scenes and capturing spatial relationships among objects. Extensive experiments on the 7Scenes and Cambridge Landmarks datasets demonstrate MVL-Loc's robustness and state-of-the-art performance in real-world multi-scene camera relocalization, with improved accuracy in both positional and orientational estimates.

24 pages, 15285 KB  
Article
An Efficient and Accurate UAV State Estimation Method with Multi-LiDAR–IMU–Camera Fusion
by Junfeng Ding, Pei An, Kun Yu, Tao Ma, Bin Fang and Jie Ma
Drones 2025, 9(12), 823; https://doi.org/10.3390/drones9120823 - 27 Nov 2025
Viewed by 373
Abstract
State estimation plays a vital role in UAV navigation and control. With the continuous decrease in sensor cost and size, UAVs equipped with multiple LiDARs, Inertial Measurement Units (IMUs), and cameras have attracted increasing attention. Such systems can acquire rich environmental and motion information from multiple perspectives, thereby enabling more precise navigation and mapping in complex environments. However, efficiently utilizing multi-sensor data for state estimation remains challenging, as there is a complex coupling relationship between IMU biases and the UAV state. To address these challenges, this paper proposes an efficient and accurate UAV state estimation method tailored for multi-LiDAR–IMU–camera systems. Specifically, we first construct an efficient distributed state estimation model. It decomposes the multi-LiDAR–IMU–camera system into a series of single LiDAR–IMU–camera subsystems, reformulating the complex coupling problem as an efficient distributed state estimation problem. Then, we derive an accurate feedback function to constrain and optimize the UAV state using the estimated subsystem states, thus enhancing overall estimation accuracy. Based on this model, we design an efficient distributed state estimation algorithm with multi-LiDAR–IMU–camera fusion, termed DLIC. DLIC achieves robust multi-sensor data fusion via shared feature maps, effectively improving both estimation robustness and accuracy. In addition, we design an accelerated image-to-point cloud registration module (A-I2P) to provide reliable visual measurements, further boosting state estimation efficiency. Extensive experiments are conducted on 18 real-world indoor and outdoor scenarios from the public NTU VIRAL dataset. The results demonstrate that DLIC consistently outperforms existing multi-sensor methods across key evaluation metrics, including RMSE, MAE, SD, and SSE. More importantly, our method runs in real time on a resource-constrained embedded device equipped with only an 8-core CPU, while maintaining low memory consumption.
(This article belongs to the Special Issue Advances in Guidance, Navigation, and Control)
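The distributed-estimation idea can be sketched as covariance-weighted (information-form) fusion of per-subsystem estimates, which is a generic stand-in for the paper's derived feedback function; the states and covariances below are illustrative assumptions.

```python
import numpy as np

def fuse_subsystems(states, covs):
    """Information-weighted fusion of independent subsystem estimates."""
    infos = [np.linalg.inv(P) for P in covs]            # information matrices
    P_fused = np.linalg.inv(sum(infos))
    x_fused = P_fused @ sum(I @ x for I, x in zip(infos, states))
    return x_fused, P_fused

# Two LiDAR-IMU-camera subsystems report slightly different states.
x1, P1 = np.array([1.00, 2.00, 0.50]), np.diag([0.04, 0.04, 0.09])
x2, P2 = np.array([1.05, 1.96, 0.48]), np.diag([0.01, 0.01, 0.04])
x, _ = fuse_subsystems([x1, x2], [P1, P2])
print(np.round(x, 3))   # pulled toward the more confident subsystem
```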

24 pages, 15361 KB  
Article
UAV Sensor Data Fusion for Localization Using Adaptive Multiscale Feature Matching Mechanisms Under GPS-Deprived Environment
by Yu-Shun Wang and Chia-Hao Chang
Aerospace 2025, 12(12), 1048; https://doi.org/10.3390/aerospace12121048 - 25 Nov 2025
Viewed by 306
Abstract
The application of unmanned vehicles in civilian and military fields is increasingly widespread. Traditionally, unmanned vehicles rely primarily on the Global Positioning System (GPS) for positioning; however, GPS signals can be limited or completely lost under conditions such as building obstruction, indoor environments, or electronic interference. In addition, countries are actively developing GPS jamming and deception technologies for military applications, making precise positioning and navigation of unmanned vehicles in GPS-denied or constrained environments a critical issue. In this work, the authors propose a method based on Visual–Inertial Odometry (VIO), integrating the extended Kalman filter (EKF), an Inertial Measurement Unit (IMU), optical flow, and feature matching to achieve drone localization in GPS-denied environments. The proposed method uses the heading angle and acceleration data obtained from the IMU as the state prediction for the EKF and estimates relative displacement using optical flow. It further corrects optical flow calculation errors through IMU rotation compensation, enhancing the robustness of the visual odometry. Additionally, when re-selecting feature points for optical flow, it applies a KAZE feature matching technique for global position correction, reducing drift errors caused by long-duration flight. The authors also employ an adaptive noise adjustment strategy that dynamically adjusts the internal state and measurement noise matrices of the EKF based on the rate of change in heading angle and on feature matching reliability, allowing the drone to maintain stable positioning in various flight conditions. According to the simulation results, the proposed method effectively estimates the flight trajectory of drones without GPS. Compared with results that rely solely on optical flow or feature matching, it significantly reduces cumulative errors. This makes it suitable for urban environments, forest areas, and military applications where GPS signals are limited, providing a reliable solution for autonomous navigation and positioning of drones.
(This article belongs to the Section Aeronautics)
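A minimal sketch of the adaptive noise strategy, assuming a hypothetical scaling rule: the EKF measurement noise R is inflated when feature matching reliability drops or the heading changes quickly, so the filter temporarily trusts the IMU prediction more.

```python
import numpy as np

R_BASE = np.diag([0.05, 0.05])   # assumed nominal optical-flow noise covariance

def adapt_R(match_ratio, yaw_rate):
    """Inflate R for poor feature matches (ratio in (0,1]) or rapid turns."""
    scale = 1.0 / max(match_ratio, 0.1)      # fewer reliable matches -> less trust
    scale *= 1.0 + 2.0 * abs(yaw_rate)       # fast heading change -> less trust
    return R_BASE * scale

print(np.diag(adapt_R(match_ratio=0.9, yaw_rate=0.05)))  # near nominal
print(np.diag(adapt_R(match_ratio=0.2, yaw_rate=1.2)))   # heavily inflated
```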
