Search Results (169)

Search Parameters:
Keywords = GPS-denied

44 pages, 680 KB  
Article
Stochastically Optimal Hierarchical Control for Long-Endurance UAVs Under Communication Degradation: Theory and Validation
by Mosab Alrashed, Ali Fenjan, Humoud Aldaihani and Mohammad Alqattan
Drones 2026, 10(5), 371; https://doi.org/10.3390/drones10050371 - 13 May 2026
Viewed by 275
Abstract
This paper establishes a theoretical framework for treating communication quality as a navigable resource in long-endurance unmanned aerial vehicle (UAV) control under stochastic degradation. We prove that a hierarchical architecture integrating communication-aware model predictive control (MPC) achieves ε-optimality with respect to the intractable stochastic dynamic programming formulation while maintaining exponential stability guarantees under switched system dynamics governed by continuous-time Markov chains. Three primary theoretical contributions are made: (1) a stochastic optimality theorem showing that sigmoid penalty function approximation yields bounded suboptimality of η ≤ 0.12 under mild ergodicity conditions; (2) a formal stability result for hysteresis-based mode switching, established using multiple Lyapunov functions and showing exponential convergence with a decay rate of λ = 0.23; and (3) a bifurcation analysis identifying a critical time threshold of 72 h at which thermally induced gyro drift causes navigation error dynamics to transition from linear to catastrophic nonlinear growth. Validation through 2430 Monte Carlo missions spanning 54,686 flight hours showed an average 243% increase in endurance (18.2 days versus 5.3 days) while keeping the circular error probable (CEP) at approximately 8.7 m and achieving 82% mission success under extreme communication degradation (q_comm < 0.3). The statistical results confirm a strong positive relationship between the Resilience Quotient (RQ) and successful mission duration (R² = 0.89, p < 0.001), supporting the theoretical model with empirical evidence.
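To make the sigmoid penalty idea concrete, here is a minimal Python sketch of how link quality can enter an MPC stage cost smoothly. The parameter values (`q_min`, `k`, `w`) are illustrative assumptions, not the paper's gains:

```python
import math

def comm_penalty(q, q_min=0.3, k=12.0, w=50.0):
    # Smooth sigmoid penalty on link quality q in [0, 1]: near zero for
    # healthy links (q >> q_min), approaching w as q falls toward 0.
    # q_min, k, and w are made-up values for illustration only.
    return w / (1.0 + math.exp(k * (q - q_min)))
```

Added to an MPC stage cost, a term like this steers planned trajectories away from regions of poor predicted link quality without introducing the discontinuities a hard constraint would.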

19 pages, 906 KB  
Article
Cooperative UAV Swarm Communication Networks for Rapid Disaster Assessment in GPS-Denied Environments
by Pinglu Wang, Jiahao Li, Jiahua Wei, Lei Shi, Bei Hou and Fei Xie
Drones 2026, 10(5), 355; https://doi.org/10.3390/drones10050355 - 7 May 2026
Viewed by 217
Abstract
Timely situational awareness is essential in disaster management, but normal unmanned aerial vehicle (UAV) flight cannot take place when Global Positioning System (GPS) signals are blocked or jammed. This paper addresses swarm cohesion and localization under these hostile conditions. We present the Cooperative Swarm-Mesh Network (CSMN), a hybrid architecture that alternates between an implicit Silent Mode and an explicit Leader–Follower Mode, built on distributed Extended Kalman Filters (DEKFs), in the face of communication failures. The system exploits convex polygon decomposition to optimize area coverage. Simulation studies with NS-3 and ROS show that the proposed framework retains sub-meter localization error (RMSE < 0.9 m) in GPS-denied environments and provides 92% area coverage, 35% higher than baseline approaches. Within the simulated conditions evaluated using Gazebo/NS-3, the CSMN framework effectively addresses sensor drift and network vulnerability. These simulation-based results offer a promising blueprint for autonomous disaster assessment, pending hardware-in-the-loop and field validation. Validation is conducted across two qualitatively distinct simulated environments, dense urban rubble and a sparse open field; performance advantages generalize beyond a single test configuration, with mean localization RMSE remaining below 0.85 m in both scenarios.
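Coverage optimization via convex polygon decomposition rests on computing sub-region areas. A minimal Python sketch of the standard shoelace formula (the decomposition itself and any CSMN-specific assignment logic are omitted):

```python
def polygon_area(vertices):
    # Shoelace area of a simple polygon given its (x, y) vertices in order.
    # Each convex cell produced by a decomposition can be sized this way
    # before assigning it to a UAV.
    n = len(vertices)
    s = 0.0
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0
```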
22 pages, 55205 KB  
Article
A Distributed and Reconfigurable Architecture for Unified Multimodal Indoor Localization of a Mobile Edge Node in a Cyber-Physical Context
by Theodoros Papafotiou, Emmanouil Tsardoulias and Andreas Symeonidis
Robotics 2026, 15(5), 91; https://doi.org/10.3390/robotics15050091 - 30 Apr 2026
Viewed by 228
Abstract
Precise 3D positioning in GPS-denied environments is a critical enabler of autonomous robotics, industrial automation, and smart logistics within the emerging cyber-physical landscape. This paper presents a distributed and reconfigurable architecture designed to benchmark and provide unified multimodal indoor localization for mobile edge nodes. Unlike rigid commercial solutions, our architecture employs a distributed, reconfigurable framework that allows the rapid interchange of Absolute Localization Methods (UWB, External RGB-D Vision) and Relative Localization Methods (Inertial Odometry, Visual Odometry). We evaluate these modalities individually and in hybrid configurations using a custom low-cost mobile edge node. Experimental results in a controlled environment demonstrate that while all-optical systems offer high precision, a cost-effective fusion of Ultra-Wideband (UWB) and Inertial Measurement Unit (IMU) data provides a robust balance of accuracy and reliability. Conversely, we identify significant limitations in monocular visual odometry within feature-poor indoor spaces. The developed platform serves as a reproducible foundation for researchers to prototype hybrid localization algorithms and assess the trade-offs between hardware cost and operational accuracy within complex cyber-physical ecosystems.
(This article belongs to the Special Issue Localization and 3D Mapping of Intelligent Robotics)
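The UWB+IMU fusion evaluated above can be illustrated, in its simplest 1-D form, as a complementary filter that dead-reckons with IMU increments and pulls toward absolute UWB fixes. The blending gain `alpha` is an illustrative assumption, not a parameter from the paper:

```python
def fuse_step(prev_est, imu_delta, uwb_fix, alpha=0.9):
    # Predict with the relative (IMU) increment, then correct toward the
    # absolute (UWB) fix. alpha weights trust in the IMU prediction;
    # 1 - alpha bounds drift by anchoring to UWB.
    pred = prev_est + imu_delta
    return alpha * pred + (1.0 - alpha) * uwb_fix
```

Even with a stationary sensor and zero IMU motion, repeated UWB fixes drive the estimate toward the true position geometrically, which is the essence of why the hybrid configuration resists drift.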

24 pages, 22374 KB  
Article
A Hybrid Drone SINS/GNSS Information Fusion Method Based on Attention-Augmented TCN in GNSS-Denied Environments
by Chuan Xu, Shuai Chen, Daxiang Zhao, Zhikuan Hou and Changhui Jiang
Remote Sens. 2026, 18(9), 1379; https://doi.org/10.3390/rs18091379 - 29 Apr 2026
Viewed by 316
Abstract
In drone navigation, an integrated strapdown inertial navigation system (SINS)/global navigation satellite system (GNSS) can provide a high-precision positioning solution. But when satellite signals are interfered with or blocked by tall buildings, SINS errors diverge rapidly due to complex aerodynamic and mechanical vibrations, leading to serious degradation of navigation accuracy. To enhance positioning performance in this situation, this paper proposes a hybrid information fusion method based on an attention-augmented temporal convolutional network (TCN) for a drone SINS/GNSS navigation system. A feature integration and prediction model is constructed to provide a pseudo-positioning reference for the integrated navigation filter during GNSS-denied periods, in which the TCN establishes a predictive positioning error correction model from inertial measurements and SINS data, while a self-attention model is incorporated to extract complex global drone motion features. The performance of the proposed method is experimentally verified using Global Positioning System (GPS) and SINS data collected from real drone flight tests. Comparison among the proposed model, SINS with TCN, SINS with a convergent Kalman filter (KF) prediction section, and SINS-only indicates that the proposed method effectively improves drone positioning accuracy in the GNSS-denied environments considered.
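The TCN's core building block is the causal dilated 1-D convolution, in which each output depends only on current and past inputs. A minimal Python sketch (hand-rolled for clarity; a real TCN stacks many such layers with learned weights, residual connections, and nonlinearities):

```python
def causal_dilated_conv(x, weights, dilation=1):
    # Causal dilated 1-D convolution: out[t] combines x[t], x[t - d],
    # x[t - 2d], ... only, never future samples. Taps before the start
    # of the sequence are treated as zero (implicit left padding).
    k = len(weights)
    out = []
    for t in range(len(x)):
        acc = 0.0
        for j, w in enumerate(weights):
            idx = t - dilation * (k - 1 - j)
            if idx >= 0:
                acc += w * x[idx]
        out.append(acc)
    return out
```

Stacking layers with dilations 1, 2, 4, ... grows the receptive field exponentially, which is what lets a TCN model long inertial-error histories at low cost.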

28 pages, 33079 KB  
Article
Pedestrian Localization Using Smartphone LiDAR in Indoor Environments
by Kwangjae Sung and Jaehun Kim
Electronics 2026, 15(9), 1810; https://doi.org/10.3390/electronics15091810 - 24 Apr 2026
Viewed by 270
Abstract
Many place recognition approaches, which identify previously visited places by matching current sensory data such as 2D RGB images and 3D point clouds, have been proposed to achieve accurate and robust localization and loop closure detection in global positioning system (GPS)-denied environments. Visual place recognition (VPR) methods that rely on images captured by camera sensors are highly sensitive to variations in appearance, including changes in lighting, surface color, and shadows, which can lead to poor place recognition accuracy. In contrast, light detection and ranging (LiDAR)-based place recognition (LPR) approaches, which operate on 3D point cloud data capturing the shape and geometric structure of the environment, are robust to changes in place appearance and can therefore provide more reliable results than VPR methods. This work presents an indoor LPR method called PointNetVLAD-based indoor pedestrian localization (PIPL), a deep network model that uses PointNetVLAD to learn global descriptors from 3D LiDAR point clouds. Whereas PointNetVLAD performs place recognition for vehicles using high-cost LiDAR, GPS, and inertial measurement unit (IMU) sensors in large-scale outdoor areas, PIPL recognizes places previously visited by a pedestrian using point clouds captured by a low-cost smartphone LiDAR sensor in small-scale indoor environments. For place recognition on 3D point cloud reference maps generated from LiDAR scans, PointNetVLAD exploits the Universal Transverse Mercator (UTM) coordinate system based on GPS and IMU measurements, whereas PIPL uses a virtual coordinate system designed in this study due to the unavailability of GPS indoors. In experiments conducted in campus buildings, PIPL shows significant advantages over NetVLAD, a convolutional neural network (CNN)-based VPR method. Particularly in indoor environments with repetitive scenes, where geometric structures are preserved and image-based appearance features are sparse or unclear, PIPL achieved 39% higher top-1 accuracy and 10% higher top-3 accuracy than NetVLAD. Furthermore, PIPL achieved place recognition accuracy comparable to NetVLAD even with a small number of points per cloud, and outperformed NetVLAD even with a smaller training dataset. The experimental results also indicate that PIPL requires over 76% less place retrieval time than NetVLAD while maintaining robust place classification performance.
(This article belongs to the Special Issue Advanced Indoor Localization Technologies: From Theory to Application)
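The top-1/top-3 accuracies above come from the retrieval stage common to descriptor-based place recognition: rank reference-map descriptors by distance to the query descriptor. A minimal Python sketch (brute-force nearest neighbor; real systems use learned descriptors and approximate search):

```python
import math

def top_k(query, database, k=3):
    # Rank database global descriptors by Euclidean distance to the
    # query descriptor and return the indices of the k best matches.
    order = sorted(range(len(database)),
                   key=lambda i: math.dist(query, database[i]))
    return order[:k]
```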

35 pages, 6276 KB  
Article
AI-Enhanced Thermal–Visual–Inertial Odometry and Autonomous Planning for GPS-Denied Search-and-Rescue Robotics
by Islam T. Almalkawi, Sabya Shtaiwi, Alaa Alhowaide and Manel Guerrero Zapata
Sensors 2026, 26(8), 2462; https://doi.org/10.3390/s26082462 - 16 Apr 2026
Viewed by 563
Abstract
Search and rescue (SAR) missions in collapsed or underground environments remain challenging due to GPS unavailability, which hinders localization and autonomous navigation. Systems that rely on single-sensor inputs or structured settings often degrade under smoke, dust, or dynamic clutter. This paper presents an autonomous ground robot for GPS-denied SAR that integrates low-cost thermal, visual, inertial, and acoustic cues within a unified, computation-efficient architecture. The stack combines Thermal–Visual Odometry (TV–VO) with Zero-Velocity Updates (ZUPT) for drift-resistant localization, RescueGraph for multimodal survivor detection, and a Proximal Policy Optimization (PPO) planner for adaptive navigation under uncertainty. Across simulated disaster scenarios and benchmark corridor runs, the system shows embedded-feasible runtime behavior and supports return to base without external beacons under the evaluated conditions. Quantitatively, TV–VO+ZUPT reduces drift in short internal evaluations, while RescueGraph attains an F1-score of 0.6923 and an area under the ROC curve (AUC) of 0.976 for survivor detection. At the system level, the integrated navigation stack achieves full mission completion in the reported SAR-style trials, while the separate A*/PPO comparison highlights a trade-off between completion rate, traversal time, and collisions. Overall, the results support the practical promise of a low-cost sensor-fusion and learning-assisted navigation framework for GPS-denied SAR robotics.
(This article belongs to the Section Sensors and Robotics)
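The ZUPT idea referenced above hinges on detecting stationary intervals from inertial data so velocity error can be reset before it integrates into position drift. A minimal Python sketch of a variance-based stance detector (the threshold is an illustrative value, not the paper's):

```python
def is_stationary(accel_norms, threshold=0.05):
    # Declare a zero-velocity interval when the variance of accelerometer
    # magnitudes over the window is tiny. On detection, a ZUPT filter
    # would reset the estimated velocity to zero.
    m = sum(accel_norms) / len(accel_norms)
    var = sum((a - m) ** 2 for a in accel_norms) / len(accel_norms)
    return var < threshold
```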

25 pages, 1418 KB  
Article
Artificial Intelligence-Based Decision Support System for UAV Control in a Simulated Environment
by Przemysław Sujecki and Damian Frąszczak
Sensors 2026, 26(8), 2436; https://doi.org/10.3390/s26082436 - 15 Apr 2026
Cited by 1 | Viewed by 356
Abstract
Unmanned aerial vehicles (UAVs) are increasingly deployed in missions that require high autonomy and reliable decision-making; however, many operational concepts still assume access to GNSS and stable communication with a human operator. In contested environments, this assumption may no longer hold because GNSS degradation, radio-frequency interference, and intentional jamming can disrupt positioning and communication, thereby reducing mission effectiveness and safety. Recent surveys show that operation in GNSS-denied environments remains a major challenge and often requires alternative perception, localization, and control strategies. In response, this article investigates a reinforcement learning (RL)-based decision-support system for the autonomous control of a quadrotor UAV in a three-dimensional simulated environment. Rather than following pre-programmed waypoints, the UAV learns a control policy through interaction with the environment and reward-driven adaptation. The proposed system is designed for mission execution under uncertainty, limited external guidance, and partial observability. Two policy-gradient approaches are implemented and compared: classical REINFORCE and Proximal Policy Optimization (PPO) with an Actor–Critic architecture. The study presents the simulation environment, state and action representation, reward formulation, staged training procedure, and comparative evaluation. The results indicate that the PPO-based configuration achieved higher mission effectiveness than REINFORCE in the final unseen test scenario, supporting the practical relevance of structured deep reinforcement learning for UAV operation in GPS-denied and communication-constrained environments.
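The key difference between REINFORCE and PPO is PPO's clipped surrogate objective, which caps how far a single update can move the policy. A minimal per-sample Python sketch of the standard clipped term:

```python
def ppo_clip(ratio, advantage, eps=0.2):
    # PPO clipped surrogate for one sample:
    #   min(r * A, clip(r, 1 - eps, 1 + eps) * A)
    # where r is the new/old policy probability ratio and A the advantage.
    # Clipping removes the incentive to push r outside [1 - eps, 1 + eps].
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps)) * advantage
    return min(ratio * advantage, clipped)
```

REINFORCE, by contrast, scales the log-probability gradient by the raw return with no such trust-region effect, which is one common explanation for PPO's more stable training.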

25 pages, 4371 KB  
Article
GTS-SLAM: A Tightly-Coupled GICP and 3D Gaussian Splatting Framework for Robust Dense SLAM in Underground Mines
by Yi Liu, Changxin Li and Meng Jiang
Vehicles 2026, 8(4), 79; https://doi.org/10.3390/vehicles8040079 - 3 Apr 2026
Viewed by 806
Abstract
To address unstable localization and sparse mapping for autonomous vehicles operating in GPS-denied and low-visibility environments, this paper proposes GTS-SLAM, a tightly coupled dense visual SLAM framework integrating Generalized Iterative Closest Point (GICP) and 3D Gaussian Splatting (3DGS). The system is designed for intelligent driving platforms such as underground mining vehicles, inspection robots, and tunnel autonomous navigation systems. The front-end performs covariance-aware point-cloud registration using GICP to achieve robust pose estimation under low texture, dust interference, and dynamic disturbances. The back-end employs probabilistic dense mapping based on 3DGS, combined with scale regularization, scale alignment, and keyframe factor-graph optimization, enabling synchronized optimization of localization and mapping. A Compact-3DGS compression strategy further reduces memory usage while maintaining real-time performance. Experiments on public datasets and real underground-like scenarios demonstrate centimeter-level trajectory accuracy, high-quality dense reconstruction, and real-time rendering. The system provides reliable perception capability for vehicle autonomous navigation, obstacle avoidance, and path planning in confined and weak-light environments. Overall, the proposed framework offers a deployable solution for autonomous driving and mobile robots requiring accurate localization and dense environmental understanding in challenging conditions.
(This article belongs to the Special Issue AI-Empowered Assisted and Autonomous Driving)
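GICP generalizes classic ICP registration by weighting correspondences with local covariances. In the simplest instance, with correspondences known and rotation fixed to the identity, the least-squares alignment reduces to centroid matching; a 2-D Python sketch of that degenerate case (not the paper's solver):

```python
def centroid(points):
    # Mean of a 2-D point set.
    n = len(points)
    return (sum(p[0] for p in points) / n, sum(p[1] for p in points) / n)

def icp_translation(src, dst):
    # With known correspondences and no rotation, the least-squares
    # translation aligning src onto dst is the difference of centroids.
    # GICP layers per-point covariance weighting on top of this idea.
    cs, cd = centroid(src), centroid(dst)
    return (cd[0] - cs[0], cd[1] - cs[1])
```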

24 pages, 3134 KB  
Article
Towards Ubiquitous Sensing and Navigation: A Lightweight Resilient Framework for UAVs Exploiting Unknown SOPs
by Zhiang Bian, Hu Lu, Chunlei Pang, Zhisen Wang and Xin He
Drones 2026, 10(4), 246; https://doi.org/10.3390/drones10040246 - 29 Mar 2026
Viewed by 446
Abstract
GNSS-based navigation can become unreliable when signals are blocked or deliberately interfered with. For small UAV platforms operating in complex environments, this limitation motivates alternative positioning strategies such as opportunistic navigation (OpNav). Achieving reliable high-precision positioning in a fully non-cooperative setting remains difficult in practice, where no infrastructure information is available. This mode is defined by three key constraints: unknown transmitter locations, unknown environmental topology, and strictly asynchronous clocks. To address this limitation, we develop a lightweight sensing and navigation framework designed for UAV platforms operating under strict hardware constraints. We model static scattering centers as environmental anchors, proving that these features restore system observability even with a single unknown emitter. To ensure real-time performance on lightweight flight controllers, a hierarchical two-stage solver is designed: Stage I derives a robust closed-form initial estimate via an algebraic differencing method that is agnostic to reflection orders; Stage II performs manifold refinement using a Clock-Null Projection (CNP) to attain the Cramér–Rao lower bound (CRLB). The framework is validated through experiments in urban areas using commercial LTE signals. The results show that it can map unknown RF topologies with meter-level accuracy and keep navigating without prior infrastructure, offering a strong solution for UAV autonomy in environments where GNSS is unavailable.
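The reason differencing helps with strictly asynchronous clocks can be shown in a few lines: subtracting two pseudoranges taken from the same emitter cancels that emitter's unknown clock term. A Python sketch of this elimination (a generic single-difference illustration, not the paper's Stage I solver):

```python
import math

def pseudorange(tx, rx, clock_bias):
    # Geometric range plus an unknown emitter clock term (meters).
    return math.dist(tx, rx) + clock_bias

def single_difference(tx, rx_a, rx_b, clock_bias):
    # Differencing two pseudoranges from the same emitter removes the
    # emitter clock bias, leaving a purely geometric observable.
    return pseudorange(tx, rx_a, clock_bias) - pseudorange(tx, rx_b, clock_bias)
```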

29 pages, 16603 KB  
Article
Hierarchical Neural-Guided Navigation with Vortex Artificial Potential Field for Robust Path Planning in Complex Environments
by Boyi Xiao, Lujun Wan, Jiwei Tian, Yuqin Zhou, Sibo Hou and Haowen Zhang
Drones 2026, 10(4), 240; https://doi.org/10.3390/drones10040240 - 26 Mar 2026
Viewed by 502
Abstract
Existing autonomous navigation systems for Unmanned Aerial Vehicles (UAVs) face the dual challenges of local minima entrapment and computational complexity that scales with environmental density. This paper proposes a hierarchical navigation architecture integrating deep representation learning with an improved Vortex Artificial Potential Field (APF). At the decision layer, a Convolutional Neural Network (CNN) encodes the environment as a fixed-dimensional tensor and generates global waypoints with constant-time inference, independent of obstacle count. At the control layer, a Vortex APF resolves the Goal Non-Reachable with Obstacles Nearby (GNRON) problem and limit-cycle oscillations through tangential rotational potentials, achieving significant improvement in trajectory smoothness compared to traditional APF methods. A closed-loop replanning mechanism further ensures robust performance under execution drift. Experiments across varying obstacle densities demonstrate that the combined system achieves high navigation success rates in dense environments with substantially reduced computation time compared to sampling-based planners such as Rapidly exploring Random Tree star (RRT*), while maintaining superior trajectory quality. This architecture provides a computationally efficient solution for resource-constrained UAV platforms operating in GPS-denied or obstacle-rich environments such as warehouses, forests, and disaster sites.
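The tangential rotational potential can be sketched directly: rotate the usual radial repulsive direction by 90°, so the field guides motion around an obstacle rather than pushing straight back, which is how vortex fields reduce local-minimum traps. A minimal 2-D Python sketch with illustrative gains (not the paper's formulation):

```python
import math

def vortex_force(pos, obstacle, radius=1.0, gain=1.0):
    # Classic repulsive magnitude (stronger closer to the obstacle),
    # but applied along the tangential direction: the radial unit vector
    # (dx, dy)/d rotated 90 degrees to (-dy, dx)/d.
    dx, dy = pos[0] - obstacle[0], pos[1] - obstacle[1]
    d = math.hypot(dx, dy)
    if d == 0.0 or d >= radius:
        return (0.0, 0.0)          # outside the influence radius
    mag = gain * (1.0 / d - 1.0 / radius)
    return (-dy / d * mag, dx / d * mag)
```

Note the returned force is always perpendicular to the robot-obstacle direction, so it circulates flow around the obstacle instead of creating an equilibrium that cancels the attractive goal force.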

27 pages, 10311 KB  
Article
UAV-Based QR Code Scanning and Inventory Synchronization System with Safe Trajectory Planning
by Eknath Pore, Bhumeshwar K. Patle and Sandeep Thorat
Symmetry 2026, 18(4), 548; https://doi.org/10.3390/sym18040548 - 24 Mar 2026
Viewed by 598
Abstract
Modern urban warehouses face rapidly growing inventories and tight spaces, requiring fast, accurate, and safe stocktaking in narrow aisles in GPS-denied environments. This paper proposes a complete UAV-enabled framework performing real-time QR code scanning with inventory synchronization through safety-aware trajectory generation for collision-free motion. A novel hybrid workflow integrating MATLAB/Simulink R2024b and Unreal Engine is used for dynamics and photorealistic rendering, alongside a real-time warehouse setup using drone cameras and 3D LiDAR coupled with a ground control station and live dashboard. The system was evaluated with single- and multi-UAV models across high-fidelity simulations and experiments. Results demonstrate simulated QR accuracy of approximately 95 to 96%, with experimental validation achieving between 86 and 90.5% due to real-world environmental factors. In experimental and simulation analysis, mean end-to-end latency remained under half a second, trajectory errors ranged between 8 and 10 cm, and safety margins were consistently maintained throughout the tests. Multi-UAV coordination halved mission time compared to single-drone tests while keeping duplicate reads negligible, indicating a scalable and safe pipeline for industrial application.
(This article belongs to the Special Issue Symmetry/Asymmetry in Fuzzy Control)
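Keeping duplicate reads negligible during multi-UAV scanning reduces, at its core, to deduplicating decoded payloads during inventory synchronization. A minimal Python sketch of first-read-wins deduplication (an assumed mechanism for illustration; the paper does not specify its dedup logic):

```python
def dedup_reads(reads):
    # Keep the first occurrence of each QR payload across all UAV scan
    # streams, preserving scan order; repeats from overlapping passes drop.
    seen, unique = set(), []
    for payload in reads:
        if payload not in seen:
            seen.add(payload)
            unique.append(payload)
    return unique
```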

23 pages, 51743 KB  
Article
Debiased Multiplex Tokenization Using Mamba-Based Pointers for Efficient and Versatile Map-Free Visual Relocalization
by Wenshuai Wang, Hong Liu, Shengquan Li, Peifeng Jiang, Dandan Che and Runwei Ding
Mach. Learn. Knowl. Extr. 2026, 8(3), 83; https://doi.org/10.3390/make8030083 - 23 Mar 2026
Viewed by 471
Abstract
Visual localization plays a critical role in enabling mobile robots to estimate their position and orientation in GPS-denied environments. However, its efficiency, robustness, and generalization are fundamentally undermined by severe viewpoint changes and dramatic appearance variations, which present persistent challenges for image-based feature representation and pose estimation under real-world conditions. Recently, map-free visual relocalization (MFVR) has emerged as a promising paradigm for lightweight deployment and privacy isolation on edge devices, yet learning compact, invariant image tokens without relying on structural 3D maps remains a core problem, particularly in highly dynamic or long-term scenarios. In this paper, we propose the Debiased Multiplex Tokenizer (DMT-Loc), a novel method for efficient and versatile MFVR that addresses these issues. DMT-Loc is built upon a pretrained vision Mamba encoder and integrates three key modules for relative pose regression: first, Multiplex Interactive Tokenization yields robust image tokens with non-local affinities and cross-domain descriptions; second, Debiased Anchor Registration facilitates anchor token matching through proximity graph retrieval and autoregressive pointer attribution; third, Geometry-Informed Pose Regression empowers multi-layer perceptrons with a symmetric swap gating mechanism operating inside each decoupled regression head to support accurate and flexible pose prediction in both pair-wise and multi-view modes. Extensive evaluations across seven public datasets demonstrate that DMT-Loc substantially outperforms existing baselines and ablation variants in diverse indoor and outdoor environments.
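Relative pose regression ultimately produces a pose to compose with a retrieved anchor pose. In 2-D (SE(2)) that composition is a short computation; a Python sketch (the paper works with full 6-DoF poses, but the structure is the same):

```python
import math

def compose(anchor, rel):
    # Apply a predicted relative pose rel = (dx, dy, dtheta), expressed in
    # the anchor's frame, to the anchor pose (x, y, theta) to obtain the
    # query's global pose: rotate the offset by theta, then add.
    x, y, th = anchor
    dx, dy, dth = rel
    return (x + math.cos(th) * dx - math.sin(th) * dy,
            y + math.sin(th) * dx + math.cos(th) * dy,
            th + dth)
```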

19 pages, 3195 KB  
Article
UMLoc: Uncertainty-Aware Map-Constrained Inertial Localization with Quantified Bounds
by Mohammed S. Alharbi and Shinkyu Park
Sensors 2026, 26(6), 1904; https://doi.org/10.3390/s26061904 - 18 Mar 2026
Viewed by 325
Abstract
Inertial localization is particularly valuable in GPS-denied environments such as indoors. However, localization using only Inertial Measurement Units (IMUs) suffers from drift caused by motion-process noise and sensor biases. This paper introduces Uncertainty-aware Map-constrained Inertial Localization (UMLoc), an end-to-end framework that jointly models IMU uncertainty and map constraints to achieve drift-resilient positioning. UMLoc integrates two coupled modules: (1) a Long Short-Term Memory (LSTM) quantile regressor, which estimates the specific quantiles needed to define 68%, 90%, and 95% prediction intervals serving as a measure of localization uncertainty, and (2) a Conditioned Generative Adversarial Network (CGAN) with cross-attention that fuses IMU dynamic data with distance-based floor-plan maps to generate geometrically feasible trajectories. The modules are trained jointly, allowing uncertainty estimates to propagate through the CGAN during trajectory generation. UMLoc was evaluated on three datasets, including a newly collected 2 h indoor benchmark with time-aligned IMU data, ground-truth poses, and floor-plan maps. Results show that the method achieves a mean drift ratio of 5.9% over a 70 m travel distance and an average Absolute Trajectory Error (ATE) of 1.36 m, while maintaining calibrated prediction bounds.
(This article belongs to the Section Navigation and Positioning)
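Quantile regressors like UMLoc's LSTM module are typically trained with the pinball (quantile) loss, whose asymmetry is what makes the output estimate a chosen quantile rather than the mean. A per-sample Python sketch of the standard loss (the paper's exact training objective may differ):

```python
def pinball_loss(y_true, y_pred, tau):
    # Quantile (pinball) loss: under-prediction is penalized by tau,
    # over-prediction by (1 - tau). Minimizing it over data drives
    # y_pred toward the tau-quantile of y_true, e.g. tau = 0.95 for the
    # upper edge of a 90% prediction interval.
    e = y_true - y_pred
    return max(tau * e, (tau - 1.0) * e)
```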

23 pages, 6668 KB  
Article
Development of a Visual SLAM-Based Autonomous UAV System for Greenhouse Plant Monitoring
by Jing-Heng Lin and Ta-Te Lin
Drones 2026, 10(3), 205; https://doi.org/10.3390/drones10030205 - 15 Mar 2026
Viewed by 1469
Abstract
Autonomous monitoring is essential for precision agriculture in greenhouses, yet deploying unmanned aerial vehicles (UAVs) in confined, GPS-denied environments remains limited by payload, power, and cost constraints. This study developed and validated an autonomous UAV system for reliable, low-cost operation in such conditions. The proposed system employs a dual-link edge-computing architecture: a lightweight onboard controller handles flight control and sensor acquisition, while visual simultaneous localization and mapping (V-SLAM) is offloaded to an edge computer via the FPV video link. Phenotyping (flower detection and tracking/counting) is performed offline from the side-view RGB stream and does not participate in the flight control loop. Using muskmelon (Cucumis melo L.) flower development as a case study, the UAV autonomously executed daily missions for 27 days in a commercial greenhouse, performing flower detection and tracking to monitor phenological dynamics. Localization and control accuracy were evaluated against a validated UWB reference system, achieving 5.4–8.0 cm 2D RMSE for trajectory tracking and 12.7 cm translation RMSE for greenhouse mapping. This work demonstrates a practical architecture for autonomous monitoring in GPS-denied agricultural environments, with operational boundaries characterized through the sustained field deployment. The system’s design principles may extend to other indoor or communication-limited scenarios requiring lightweight, intelligent robotic operation.
(This article belongs to the Section Drones in Agriculture and Forestry)
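The 2D RMSE figures above come from comparing the estimated trajectory against a time-aligned UWB reference; the metric itself is a short computation. A Python sketch:

```python
import math

def rmse_2d(est, ref):
    # Root-mean-square 2-D position error between an estimated trajectory
    # and a time-aligned reference trajectory (e.g., UWB ground truth).
    sq = [(a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2 for a, b in zip(est, ref)]
    return math.sqrt(sum(sq) / len(sq))
```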

25 pages, 13812 KB  
Article
Robust and Cost-Effective Vision-Based Indoor UAV Localization with RWA-YOLO
by Feifei Wang, Kun Sun and Yuanqing Wang
Sensors 2026, 26(5), 1469; https://doi.org/10.3390/s26051469 - 26 Feb 2026
Viewed by 474
Abstract
Accurate indoor localization for unmanned aerial vehicles (UAVs) remains challenging in GPS-denied environments, especially for small-object detection and under low-light conditions. We propose Robust Wavelet-Aware YOLO (RWA-YOLO), a vision-based detection framework that integrates a wavelet-aware attention fusion module with a dual multi-path aggregation mechanism to enhance small-object detection and multi-scale feature representation. UAV-mounted LEDs are utilized to ensure robust visual perception in low-light indoor scenarios. The UAV’s three-dimensional position is estimated through multi-view geometric triangulation without relying on external beacons or artificial markers. Beyond static localization, the system is validated under dynamic flight conditions, demonstrating smooth and temporally coherent trajectory reconstruction suitable for real-time control loops (update rate: 25 FPS). Extensive experiments in real indoor environments achieve centimeter-level localization accuracy (root mean square error: 9.9 mm; 95th percentile error: 13.5 mm), outperforming state-of-the-art vision-based methods and achieving accuracy comparable to or better than representative hybrid ultra-wideband–vision systems reported in the literature. These results confirm the effectiveness, robustness, and real-time capability of RWA-YOLO for indoor UAV navigation in constrained environments.
(This article belongs to the Section Navigation and Positioning)
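Multi-view geometric triangulation reduces, in the two-view planar case, to intersecting two bearing rays from known camera centers. A minimal Python sketch solving the 2×2 linear system (the paper's system works in 3-D with more views, but the principle is the same):

```python
def triangulate(c1, b1, c2, b2):
    # Intersect rays c1 + t1*b1 and c2 + t2*b2 in the plane by solving
    #   [b1x -b2x] [t1]   [c2x - c1x]
    #   [b1y -b2y] [t2] = [c2y - c1y]
    # with Cramer's rule. Assumes the bearings are not parallel.
    det = b1[0] * (-b2[1]) - (-b2[0]) * b1[1]
    rx, ry = c2[0] - c1[0], c2[1] - c1[1]
    t1 = (rx * (-b2[1]) - (-b2[0]) * ry) / det
    return (c1[0] + t1 * b1[0], c1[1] + t1 * b1[1])
```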
