Search Results (1,008)

Search Parameters:
Keywords = indoor-navigation

22 pages, 4788 KB  
Article
Enhanced Indoor Mobile Robot Localization via Lie-Group IMU–UWB Fusion and Dual-Stage Kalman Filtering
by Zhengyang He, Xiaojie Tang, Muzi Li and Fengyun Zhang
Sensors 2026, 26(9), 2686; https://doi.org/10.3390/s26092686 - 26 Apr 2026
Abstract
Indoor mobile robots often experience degraded localization accuracy and robustness when relying on a single positioning modality. In addition, conventional pose computation based on Euler-parameterized transformations can be computationally involved and susceptible to singularities, while practical fusion schemes may not adequately suppress measurement errors. This paper proposes an indoor robot localization method, termed IMU_UWB_ESKF, which tightly fuses inertial and UWB measurements using a Lie-group state representation. IMU- and UWB-derived quantities are formulated on the associated Lie algebra, enabling numerically stable pose propagation and measurement updates. To mitigate sensor noise and reduce drift, a dual-stage Kalman filtering strategy is adopted: an EKF-based measurement correction is first performed, followed by a multi-dimensional error-state Kalman filter for refined fusion. The proposed pipeline is implemented on a wheeled-robot platform under ROS, integrating real-time IMU/UWB parameter extraction, pose transformation, and online state estimation. Experimental results demonstrate stable real-time localization with improved robustness and accuracy under dynamic motion, indicating the method’s applicability to indoor navigation tasks.
(This article belongs to the Section Sensors and Robotics)
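The dual-stage filtering above is only named, not specified; as a rough illustration of the error-state stage, the sketch below runs a conventional EKF range update for UWB anchors over a planar constant-velocity model. It omits the Lie-group state representation, and all names and noise values are assumptions, not the paper's design.

```python
import numpy as np

# Illustrative EKF/error-state stand-in for IMU prediction + UWB range update
# on a planar [x, y, vx, vy] state; not the paper's Lie-group formulation.
def predict(x, P, acc, dt, Q):
    """Propagate with planar IMU acceleration; inflate covariance by Q."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt
    u = np.array([0.5 * acc[0] * dt**2, 0.5 * acc[1] * dt**2,
                  acc[0] * dt, acc[1] * dt])
    return F @ x + u, F @ P @ F.T + Q

def update_range(x, P, anchor, z, r_var):
    """Correct the state with one UWB range measurement to a known anchor."""
    d = np.hypot(x[0] - anchor[0], x[1] - anchor[1])
    H = np.zeros((1, 4))
    H[0, 0] = (x[0] - anchor[0]) / d       # Jacobian of range w.r.t. position
    H[0, 1] = (x[1] - anchor[1]) / d
    S = (H @ P @ H.T)[0, 0] + r_var        # innovation variance
    K = (P @ H.T) / S                      # Kalman gain, shape (4, 1)
    x = x + K[:, 0] * (z - d)
    P = (np.eye(4) - K @ H) @ P
    return x, P
```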
23 pages, 3606 KB  
Article
Wireless Communication-Based Indoor Localization with Optical Initialization and Sensor Fusion
by Marcin Leplawy, Piotr Lipiński, Barbara Morawska and Ewa Korzeniewska
Sensors 2026, 26(9), 2653; https://doi.org/10.3390/s26092653 - 24 Apr 2026
Abstract
Indoor localization in GNSS-denied environments remains a significant challenge due to the low sampling frequency and high variability of wireless signal measurements. This paper presents a wireless communication-based indoor localization method that integrates Wi-Fi received signal strength indication (RSSI) measurements with optical initialization and inertial sensor fusion. The proposed approach eliminates the need for labor-intensive fingerprinting and specialized infrastructure by leveraging existing Wi-Fi networks. Optical pose estimation using ArUco markers provides accurate initial position and orientation, enabling alignment between sensor coordinate systems and reducing inertial drift. During tracking, inertial measurements compensate for motion between sparse Wi-Fi observations by virtually translating historical RSSI samples, allowing statistically consistent averaging and improved distance estimation. A simplified factor graph framework is employed to fuse heterogeneous measurements while maintaining computational efficiency suitable for real-time operation on mobile devices. Experimental validation using a robot-based ground-truth reference system demonstrates sub-meter localization accuracy with an average positioning error of approximately 0.40 m. The proposed method provides a low-cost and scalable solution for indoor positioning and navigation applications such as access-controlled environments, exhibitions, and large public venues.
(This article belongs to the Special Issue Positioning and Navigation Techniques Based on Wireless Communication)
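One hedged reading of how "virtually translating historical RSSI samples" might work: invert a standard log-distance path-loss model, then re-reference each past sample to the current position using the IMU-integrated displacement since it was taken. The rssi0 reference and path-loss exponent below are assumed calibration constants, not values from the paper.

```python
import math

# Log-distance path-loss inversion plus virtual translation of past samples.
# rssi0 (RSSI at 1 m) and exponent n are illustrative calibration values.
def rssi_to_distance(rssi_dbm, rssi0=-40.0, n=2.5):
    return 10 ** ((rssi0 - rssi_dbm) / (10.0 * n))

def averaged_distance(samples, displacements, ap_pos, cur_pos):
    """samples: past RSSI readings; displacements: (dx, dy) accumulated
    from each sample time to now; returns an averaged AP distance."""
    dists = []
    for rssi, (dx, dy) in zip(samples, displacements):
        past_pos = (cur_pos[0] - dx, cur_pos[1] - dy)
        # correct each past range by the geometry change toward the AP
        geo_shift = math.dist(cur_pos, ap_pos) - math.dist(past_pos, ap_pos)
        dists.append(rssi_to_distance(rssi) + geo_shift)
    return sum(dists) / len(dists)
```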
9 pages, 4519 KB  
Proceeding Paper
UAV Position Tracking with Ground Cameras
by Andrea Masiero, Paolo Dabove, Vincenzo Di Pietra, Marco Piragnolo, Alberto Guarnieri, Charles Toth, Wioleta Blaszczak-Bak, Jelena Gabela and Kai-Wei Chiang
Eng. Proc. 2026, 126(1), 50; https://doi.org/10.3390/engproc2026126050 - 15 Apr 2026
Abstract
The use of Unmanned Aerial Vehicles (UAVs) has become quite popular in several applications during the last few years. Their spread is motivated by the flexibility of UAVs and by their ability to execute several tasks automatically, mostly thanks to the availability of Global Navigation Satellite Systems (GNSSs), which usually allow reliable outdoor localization of aerial vehicles. However, extending automatic task execution indoors, and to other working conditions that are challenging for GNSS, requires an alternative positioning system able to compensate for the unreliability or unavailability of GNSS in those cases. To this end, additional sensors are usually considered, among which cameras are probably the most popular. The most common case of a vision-based positioning system is a camera mounted on a moving platform and used to determine its ego-motion in a dead-reckoning approach, i.e., visual odometry. Although this solution is affordable and does not require the installation of any infrastructure, it enables absolute positioning of the camera, i.e., of the UAV, only if certain landmarks with known positions are visible in the flying area. In contrast, this work considers the use of external cameras installed in the flying area to track the UAV movements. This approach is similar to the one implemented in motion capture systems, where a set of calibrated static cameras is used to triangulate target positions. Specifically, this work investigates the use of vision and machine learning tools to (i) extract the UAV position from each video frame and (ii) estimate its 3D position. Estimation of the 3D UAV position is performed with a single camera, exploiting machine learning tools to avoid the need for camera calibration. Performance analysis is provided for a dataset collected at the Agripolis campus of the University of Padua.
(This article belongs to the Proceedings of European Navigation Conference 2025)
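The calibration-free, learning-based 3D estimation is described only at a high level; one plausible shape of such a regressor, mapping image-space detections (pixel centre and apparent size) directly to world coordinates, is sketched below, with random arrays standing in for real detections paired with surveyed UAV positions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Hypothetical calibration-free 3D regression: learn (u, v, size) -> (X, Y, Z)
# directly, so no camera intrinsics/extrinsics are needed. The random arrays
# below are stand-ins for real training data, not anything from the paper.
rng = np.random.default_rng(0)
X_train = rng.uniform(0.0, 1.0, size=(500, 3))   # (u, v, apparent size)
y_train = rng.uniform(0.0, 1.0, size=(500, 3))   # (X, Y, Z) ground truth

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
xyz = model.predict([[0.4, 0.6, 0.1]])           # 3D estimate for a detection
```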
36 pages, 2125 KB  
Article
Hybrid Neural Network-Based PDR with Multi-Layer Heading Correction Across Smartphone Carrying Modes
by Junhua Ye, Anzhe Ye, Ahmed Mansour, Shusu Qiu, Zhenzhen Li and Xuanyu Qu
Sensors 2026, 26(8), 2421; https://doi.org/10.3390/s26082421 - 15 Apr 2026
Abstract
Traditional pedestrian dead reckoning (PDR) algorithms usually assume that the smartphone's carrying mode is fixed and horizontal, ignoring the significant impact that dynamic changes in carrying mode have on heading estimation, the core element of PDR. In practice, pedestrians often change how they carry their smart terminal (e.g., to make a call), and each carrying mode calls for a different heading estimation method; mode switches in particular cause sudden heading changes that, if not corrected in time, significantly increase the localization error. Existing carrying-mode recognition methods that rely on traditional machine learning or fixed thresholds have poor robustness and limited generality, are especially weak at detecting abrupt changes, and cannot effectively reduce the heading error. To address these practical problems, this paper proposes a PDR framework that overcomes these limitations. First, four common carrying modes are classified based on practical applications, and a CNN-LSTM hybrid model is designed that recognizes them in near real time with an accuracy of 99.68%. Second, based on the mode recognition results, a multi-layer heading correction strategy is introduced: (1) a versatile quaternion-based filter (VQF) algorithm for accurate initial heading estimation; (2) an algorithm that accurately detects the mode-switching point, together with an adaptive offset correction that dynamically compensates the heading during mode switches to reduce the impact of sudden changes; and (3) a heading optimization method with lateral-displacement constraints, exploiting the fact that pedestrians walking along a straight-line segment exhibit near-zero lateral displacement, to further suppress heading drift caused by slight swaying of the smart terminal. Two validation experiments were carried out in two different environments, an indoor corridor and a tree-sheltered area. The results show that, with the proposed multi-layer heading optimization strategy, the system's average heading error stays below 1.5°, the cumulative positioning error is below 1% of the walking distance, and the root mean square error at the checkpoints is below 2 m, significantly reducing the positioning error and demonstrating the framework's effectiveness in complex environments.
(This article belongs to the Section Navigation and Positioning)
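For contrast with the CNN-LSTM pipeline above, the classical baseline for the step-detection half of PDR is a peak detector over accelerometer magnitude; a minimal sketch follows, with the peak threshold and refractory gap as assumptions.

```python
import numpy as np

# Classical peak-based step counter over accelerometer magnitude; a far
# simpler baseline than the paper's hybrid neural pipeline. The threshold
# (m/s^2, gravity included) and minimum peak spacing are assumptions.
def count_steps(acc_xyz, fs=100.0, min_peak=10.5, min_gap_s=0.3):
    mag = np.linalg.norm(acc_xyz, axis=1)   # magnitude includes gravity (~9.81)
    gap = int(min_gap_s * fs)
    steps, last = 0, -gap
    for i in range(1, len(mag) - 1):
        is_peak = mag[i] >= mag[i - 1] and mag[i] > mag[i + 1]
        if is_peak and mag[i] > min_peak and i - last >= gap:
            steps, last = steps + 1, i
    return steps
```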
22 pages, 3840 KB  
Article
An Integrated Vision–Mobile Fusion Framework for Real-Time Smart Parking Navigation
by Oleksandr Laptiev, Ananthakrishnan Thuruthel Murali, Nathalie Saab, Nihad Soltanov and Agnė Paulauskaitė-Tarasevičienė
Logistics 2026, 10(4), 84; https://doi.org/10.3390/logistics10040084 - 9 Apr 2026
Abstract
Background: Efficient parking navigation in large and dynamic parking areas requires systems that can adapt to real-time conditions and provide precise vehicle localization. Methods: This paper presents a smart car parking navigation module that integrates camera-based vehicle perception, homography-based ground-plane localization, mobile GNSS positioning, and dynamic route planning into a unified framework. Instance segmentation (YOLOv8n-seg) is used to detect vehicles and extract ground-contact regions, which are associated with parking slots defined in a GeoJSON-based site model. Mobile GNSS data are fused with visual observations via spatio-temporal proximity scoring to enable robust user–vehicle matching without optical identification. An A* routing algorithm dynamically computes and updates navigation paths, adapting to lane obstructions and slot availability in real time. Results: Experimental evaluation on a real six-camera parking facility shows that the proposed segmentation-based localization reduces mean error from 0.732 m to 0.283 m (a 61.3% improvement), with the 95th-percentile error dropping from 1.892 m to 0.908 m, outperforming the bounding-box baseline in 85.3% of detections. Conclusions: These results demonstrate that sub-meter vehicle localization and reliable user–vehicle association are achievable using standard surveillance cameras without specialized infrastructure, offering a scalable and cost-effective solution for intelligent parking navigation.
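The abstract names A* routing with dynamic updates; a textbook A* over a 4-connected occupancy grid, as a stand-in for the parking lot's lane graph, is sketched below. The grid representation and Manhattan heuristic are illustrative assumptions.

```python
import heapq

# Textbook A* on a 4-connected occupancy grid (0 = free, 1 = blocked).
# Re-running it whenever slot availability or lane obstructions change
# gives the dynamic-replanning behaviour described above.
def astar(grid, start, goal):
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan
    frontier = [(h(start), 0, start, None)]
    parents, seen = {}, set()
    while frontier:
        _, g, cur, parent = heapq.heappop(frontier)
        if cur in seen:
            continue
        seen.add(cur)
        parents[cur] = parent
        if cur == goal:                  # reconstruct by walking parents
            path = []
            while cur is not None:
                path.append(cur)
                cur = parents[cur]
            return path[::-1]
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dx, cur[1] + dy)
            if (0 <= nxt[0] < len(grid) and 0 <= nxt[1] < len(grid[0])
                    and grid[nxt[0]][nxt[1]] == 0 and nxt not in seen):
                heapq.heappush(frontier, (g + 1 + h(nxt), g + 1, nxt, cur))
    return None                          # goal unreachable
```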
10 pages, 1085 KB  
Proceeding Paper
Active Reconfigurable Intelligent Surface (ARIS)-Empowered Satellite Positioning Approach for Indoor Environments
by Yu Zhang, Xin Sun, Tianwei Hou, Anna Li, Sofie Pollin, Yuanwei Liu and Arumugam Nallanathan
Eng. Proc. 2026, 126(1), 45; https://doi.org/10.3390/engproc2026126045 - 7 Apr 2026
Abstract
To mitigate the loss of satellite navigation signals in indoor environments, we propose an active reconfigurable intelligent surface (ARIS)-empowered satellite positioning approach. Deployed on building structures, ARIS reflects navigation signals to indoor receivers to bypass obstructions, providing high-precision positioning services to receivers in non-line-of-sight (NLoS) areas. The path between ARIS and the receiver is defined as the extended line-of-sight (ELoS) path, and an improved carrier phase observation equation is derived to accommodate this path. The receiver compensates for its clock bias through network time synchronization, corrects the actual satellite–ARIS–receiver signal path to the satellite–receiver distance through a distance correction algorithm, and determines the position using the least squares (LS) method. Simulation results show that the proposed method provides positioning services with errors not exceeding 4 m in indoor environments, with time synchronization accuracy within an error range of 10 ns.
(This article belongs to the Proceedings of European Navigation Conference 2025)
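The final position fix uses least squares over corrected ranges; a standard Gauss-Newton solver for that step, assuming the ARIS path-length and clock corrections have already been applied to the ranges, might look like this.

```python
import numpy as np

# Standard Gauss-Newton least-squares position fix from ranges, as in classic
# GNSS processing; not the paper's exact solver, and the corrections described
# in the abstract are assumed already applied to `ranges`.
def ls_position(sat_pos, ranges, x0, iters=10):
    """sat_pos: (N, 3) satellite positions; ranges: (N,) corrected distances."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        pred = np.linalg.norm(sat_pos - x, axis=1)    # predicted distances
        H = (x - sat_pos) / pred[:, None]             # line-of-sight Jacobian
        dx, *_ = np.linalg.lstsq(H, ranges - pred, rcond=None)
        x = x + dx
    return x
```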
38 pages, 3132 KB  
Article
Lightweight Semantic-Aware Route Planning on Edge Hardware for Indoor Mobile Robots: Monocular Camera–2D LiDAR Fusion with Penalty-Weighted Nav2 Route Server Replanning
by Bogdan Felician Abaza, Andrei-Alexandru Staicu and Cristian Vasile Doicin
Sensors 2026, 26(7), 2232; https://doi.org/10.3390/s26072232 - 4 Apr 2026
Abstract
The paper introduces a computationally efficient semantic-aware route planning framework for indoor mobile robots, designed for real-time execution on resource-constrained edge hardware (Raspberry Pi 5, CPU-only). The proposed architecture fuses monocular object detection with 2D LiDAR-based range estimation and integrates the resulting semantic annotations into the Nav2 Route Server for penalty-weighted route selection. Object localization in the map frame is achieved through the Angular Sector Fusion (ASF) pipeline, a deterministic geometric method requiring no parameter tuning. The ASF projects YOLO bounding boxes onto LiDAR angular sectors and estimates the object range using a 25th-percentile distance statistic, providing robustness to sparse returns and partial occlusions. All intrinsic and extrinsic sensor parameters are resolved at runtime via ROS 2 topic introspection and the URDF transform tree, enabling platform-agnostic deployment. Detected entities are classified according to mobility semantics (dynamic, static, and minor) and persistently encoded in a GeoJSON-based semantic map, with these annotations subsequently propagated to navigation graph edges as additive penalties and velocity constraints. Route computation is performed by the Nav2 Route Server through the minimization of a composite cost functional combining geometric path length with semantic penalties. A reactive replanning module monitors semantic cost updates during execution and triggers route invalidation and re-computation when threshold violations occur. Experimental evaluation over 115 navigation segments (legs) on three heterogeneous robotic platforms (two single-board RPi5 configurations and one dual-board setup with inference offloading) yielded an overall success rate of 97% (baseline: 100%, adaptive: 94%), with 42 replanning events observed in 57% of adaptive trials. Navigation time distributions exhibited statistically significant departures from normality (Shapiro–Wilk, p < 0.005). While central tendency differences between the baseline and adaptive modes were not significant (Mann–Whitney U, p = 0.157), the adaptive planner reduced temporal variance substantially (σ = 11.0 s vs. 31.1 s; Levene’s test W = 3.14, p = 0.082), primarily by mitigating AMCL recovery-induced outliers. On-device YOLO26n inference, executed via the NCNN backend, achieved 5.5 ± 0.7 FPS (167 ± 21 ms latency), and distributed inference reduced the average system CPU load from 85% to 48%. The study further reports deployment-level observations relevant to the Nav2 ecosystem, including GeoJSON metadata persistence constraints, graph discontinuity (“path-gap”) artifacts, and practical Route Server configuration patterns for semantic cost integration.
(This article belongs to the Special Issue Advances in Sensing, Control and Path Planning for Robotic Systems)
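Two pieces of this pipeline lend themselves to small sketches: the ASF's 25th-percentile range statistic and the penalty-weighted edge cost with a reactive replanning trigger. The penalty classes, weights, and threshold below are invented for illustration, not Nav2 Route Server configuration.

```python
import numpy as np

# Illustrative only: penalty weights and the replanning ratio are assumptions.
PENALTY = {"dynamic": 5.0, "static": 2.0, "minor": 0.5}

def sector_range(lidar_ranges):
    """ASF-style robust range estimate for one detection: the 25th percentile
    of LiDAR returns inside the bounding box's angular sector, which tolerates
    sparse returns and partial occlusion."""
    return float(np.percentile(lidar_ranges, 25))

def edge_cost(length_m, annotations, w_sem=1.0):
    """Composite cost: geometric length plus additive semantic penalties."""
    return length_m + w_sem * sum(PENALTY[a] for a in annotations)

def needs_replan(old_cost, new_cost, ratio=1.5):
    """Reactive trigger: invalidate the route when semantic cost grows."""
    return new_cost > ratio * old_cost
```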
20 pages, 3255 KB  
Article
Seamless Indoor and Outdoor Navigation Using IMU-GNSS Sensor Data Fusion
by Bismark Kweku Asiedu Asante and Hiroki Imamura
Sensors 2026, 26(7), 2215; https://doi.org/10.3390/s26072215 - 3 Apr 2026
Abstract
Seamless localization across indoor and outdoor environments remains a fundamental challenge for wearable navigation systems, particularly those intended to assist visually impaired individuals. This challenge arises from the unreliability of GNSS signals in indoor and transitional spaces and the cumulative drift inherent to IMU-based dead reckoning. To address these limitations, this paper proposes a physics-informed GNSS–IMU sensor fusion framework that enables robust, real-time wearable navigation across heterogeneous environments. The proposed system dynamically adapts to environmental context, employing GNSS-dominant localization in outdoor settings and PINN-enhanced IMU-based dead reckoning during GNSS-denied indoor operation. At the core of the framework is a tightly coupled Physics-Informed Neural Network (PINN) and Extended Kalman Filter (EKF), where the PINN embeds kinematic motion constraints to correct inertial drift and suppress sensor noise, while the EKF performs probabilistic state estimation and sensor fusion. The framework is implemented on a compact, energy-efficient wearable platform and evaluated using real-world indoor–outdoor pedestrian trajectories. Experimental results demonstrate improved localization accuracy, significantly reduced drift during indoor navigation, and stable indoor–outdoor transitions compared to conventional GNSS–IMU fusion methods. The proposed approach offers a practical and reliable solution for wearable assistive navigation and has broader applicability in smart mobility and autonomous wearable systems.
(This article belongs to the Topic AI Sensors and Transducers)
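Setting aside the PINN and EKF internals, the environmental context switching the abstract describes reduces to gating on fix quality; a minimal sketch follows, with the satellite-count and HDOP thresholds as assumptions rather than the paper's criteria.

```python
# Minimal context switch between GNSS-dominant localization and IMU dead
# reckoning; the num_sats/hdop thresholds are illustrative assumptions.
def select_position(gnss_pos, imu_pos, num_sats, hdop):
    gnss_ok = gnss_pos is not None and num_sats >= 6 and hdop < 2.0
    if gnss_ok:
        return gnss_pos, "gnss"   # outdoors: trust the satellite fix
    return imu_pos, "imu"         # indoors/transition: dead reckoning
```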
20 pages, 5717 KB  
Article
An Improved YOLOv10 and DeepSORT Algorithm for Pedestrian Detection and Tracking in Crowd Navigation
by Shihang Hu and Changyong Li
Algorithms 2026, 19(4), 274; https://doi.org/10.3390/a19040274 - 1 Apr 2026
Abstract
In indoor crowd navigation, quickly and accurately acquiring the kinematic data of pedestrians within a robot’s field of view is a crucial factor determining success. Existing indoor pedestrian tracking methods have limitations in accuracy and real-time performance. To address these issues, a lightweight pedestrian tracking method based on an improved YOLOv10s and DeepSORT is proposed. In the detection stage, a CPNGhostNetV2 module incorporating Ghost Convolution and attention mechanisms is first designed to replace the original C2f module in YOLOv10s. This achieves a lightweight design while effectively preserving global feature information. Second, the GSConv module is introduced to further reduce computational load and model parameters. Finally, the Focal Loss function is adopted to enhance the detection capability of the YOLOv10s model in dense scenes. In the tracking stage, a novel trajectory management mechanism is proposed to reduce the ID-switching problem under occlusion conditions. The experimental results show that the improved YOLOv10s reduces computational complexity by 33.9% and parameters by 17.4% compared to the original model. It also improves mAP@50 by 0.6%. The improved DeepSORT algorithm achieves a 7.0% increase in MOTA, a 1.4% increase in MOTP, and a 24.8% reduction in ID-switch counts compared to the original YOLOv10-DeepSORT. It outperforms traditional algorithms in terms of accuracy, real-time performance, and computational efficiency, demonstrating promising application prospects.
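The Focal Loss cited for dense scenes is the standard formulation of Lin et al.; a numpy version with the common default alpha/gamma (not necessarily the paper's tuned settings) follows. Hard, misclassified examples (small pt) dominate the loss, which is what helps in crowded frames.

```python
import numpy as np

# Standard focal loss for binary detection confidences; alpha/gamma are the
# usual defaults from the original paper, not this article's settings.
def focal_loss(p, y, alpha=0.25, gamma=2.0, eps=1e-7):
    """p: predicted probabilities; y: binary labels (1 = pedestrian)."""
    p = np.clip(np.asarray(p, dtype=float), eps, 1.0 - eps)
    pt = np.where(y == 1, p, 1.0 - p)        # probability of the true class
    a = np.where(y == 1, alpha, 1.0 - alpha)
    return float(np.mean(-a * (1.0 - pt) ** gamma * np.log(pt)))
```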
31 pages, 7864 KB  
Article
Development of a General-Purpose AI-Powered Robotic Platform for Strawberry Harvesting
by Muhammad Tufail, Jamshed Iqbal and Rafiq Ahmad
Agriculture 2026, 16(7), 769; https://doi.org/10.3390/agriculture16070769 - 31 Mar 2026
Abstract
The integration of emerging technologies such as robotics and artificial intelligence (AI) has the potential to transform agricultural harvesting by improving efficiency, reducing waste, lowering labor dependency, and enhancing produce quality. This paper presents the development of an intelligent robotic berry harvesting system that combines deep learning–based perception with autonomous robotic manipulation for real-time strawberry harvesting. A computer vision pipeline based on the YOLOv11 segmentation model was developed and integrated into a Smart Mobile Manipulator (SMM) equipped with autonomous navigation, a 6-degree-of-freedom (6-DoF) xArm 6 robotic arm, and ROS middleware to enable real-time operation. Using a publicly available strawberry dataset comprising 2,800 images collected under ridge-planted cultivation conditions, the proposed YOLOv11-small segmentation model achieved 84.41% mAP@0.5, outperforming YOLOv11 object detection, Faster R-CNN, and RT-DETR in segmentation quality while maintaining real-time performance at 10 FPS on an NVIDIA Jetson Orin Nano edge GPU. A PCA-based fruit orientation and geometric analysis method achieved 86.5% localization accuracy on 200 test images. Controlled indoor harvesting experiments using synthetic strawberries demonstrated an overall harvesting success rate of 72% across 50 trials. The proposed system provides a general-purpose platform for berry harvesting in controlled environments, offering a scalable and efficient solution for autonomous harvesting.
(This article belongs to the Special Issue Advances in Robotic Systems for Precision Orchard Operations)
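The PCA-based orientation step admits a compact generic sketch: take the principal eigenvector of the mask-pixel covariance as the fruit's long axis. This illustrates the named technique, not the paper's exact geometric analysis.

```python
import numpy as np

# Generic PCA orientation from a binary segmentation mask: the principal
# eigenvector of the pixel covariance gives the fruit's long axis in degrees.
def fruit_orientation_deg(mask):
    ys, xs = np.nonzero(mask)                 # fruit pixel coordinates
    pts = np.stack([xs, ys], axis=1).astype(float)
    pts -= pts.mean(axis=0)                   # centre the point cloud
    w, v = np.linalg.eigh(np.cov(pts.T))      # eigenvalues in ascending order
    major = v[:, -1]                          # axis of largest variance
    return float(np.degrees(np.arctan2(major[1], major[0])))
```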
20 pages, 927 KB  
Systematic Review
Towards Continuous Swim Leg Analytics in Olympic Triathlon: A Systematic Review of Sensor-Based Assessment Approaches in Open-Water Sports Contexts
by Jannik Seelhöfer, Jürgen Wick and Maren Witt
Sensors 2026, 26(7), 2151; https://doi.org/10.3390/s26072151 - 31 Mar 2026
Abstract
Global Navigation Satellite Systems (GNSS) offer precise movement analyses based on distance and speed in open-water sports. Despite the influence of swimming in triathlon, its performance analysis remains underdeveloped due to methodological limitations in capturing continuous data in aquatic environments. This review aimed to: (1) systematically analyse and compare the sensor-based technologies applied to open-water movement analysis, and (2) propose a framework for continuous GNSS-based assessment of triathlon swim performance. A systematic search was conducted prior to 14 August 2025 across four databases (Web of Science, SPORTDiscus, PubMed, and SPONET). Studies were eligible if they analysed open-water sports using GNSS-based technologies for continuous movement or performance analysis. Studies limited to indoor swimming, inertial sensors, or non-sporting applications were excluded. Methodological quality and potential sources of bias were evaluated using a custom scheme based on GNSS reporting guidelines, as methodological heterogeneity precluded the application of standardised tools. Following screening and eligibility assessment, articles were analysed qualitatively. In total, 20 articles were included, focusing on surfing, sailing, water skiing, windsurfing, kitesurfing, stand-up paddling (SUP), and swimming. Most studies focused on board- and sail-based sports, employed sampling frequencies between 1 and 15 Hz, and demonstrated substantial variability in device specifications and reporting quality. Different sensors and GNSS-derived variables were central to discipline-specific performance analysis. The strength of evidence is limited by the heterogeneous methodologies and variable reporting quality. The proposed framework provides methodological guidance for implementing high-resolution GNSS-based monitoring in triathlon swimming to improve pacing analysis and race strategy development.
(This article belongs to the Special Issue Wearable Sensors in Biomechanics and Human Motion)
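The GNSS-derived distance and speed variables central to these analyses reduce to differencing consecutive fixes; a minimal haversine-based version, assuming a 1 Hz fix rate, is shown below.

```python
import math

# Distance/speed from consecutive GNSS fixes via the haversine formula, the
# basic operation behind the open-water analyses surveyed here.
def haversine_m(lat1, lon1, lat2, lon2, r=6371000.0):
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = p2 - p1, math.radians(lon2 - lon1)
    a = math.sin(dp / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2)**2
    return 2 * r * math.asin(math.sqrt(a))

def speeds_mps(fixes, dt=1.0):
    """fixes: [(lat, lon), ...] sampled every dt seconds (1 Hz assumed)."""
    return [haversine_m(*a, *b) / dt for a, b in zip(fixes, fixes[1:])]
```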
30 pages, 22493 KB  
Article
H-CoRE: A Cooperative Framework for Heterogeneous Multi-Robot Exploration and Inspection
by Simone D’Angelo, Francesca Pagano, Riccardo Caccavale, Vincenzo Scognamiglio, Alessandro De Crescenzo, Pasquale Merone, Stefano Ciaravino, Alberto Finzi and Vincenzo Lippiello
Drones 2026, 10(4), 232; https://doi.org/10.3390/drones10040232 - 25 Mar 2026
Abstract
This paper presents the H-CoRE (Heterogeneous Cooperative Multi-Robot Execution) framework, designed to enable autonomous multi-robot operations in GNSS-denied environments. Built on a ROS 2-based architecture, H-CoRE enables collaborative, structured task execution through standardized software stacks. Each robot’s stack combines a high-level executive system with an agent-specific motion layer and leverages multi-sensor fusion for localization and mapping. The framework is inherently reconfigurable, allowing individual agents to operate autonomously or as part of a multi-robot team for collaborative missions. In the considered scenario, the system integrates aerial and ground vehicles, a fixed pan–tilt–zoom camera, and a human supervisory interface within a unified, modular infrastructure. The proposed system has been deployed in indoor, GNSS-denied environments, demonstrating autonomous navigation, cooperative area coverage, and real-time information sharing across multiple agents. Experimental results confirm the effectiveness of H-CoRE in maintaining general awareness and mission continuity, paving the way for future applications in search-and-rescue, inspection, and exploration tasks.
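The standardized per-robot stack described above (a common high-level executive over an agent-specific motion layer) can be schematised in a few lines; the class and method names are invented for illustration, not H-CoRE APIs.

```python
# Hypothetical schematic of the reconfigurable per-robot stack: a shared
# executive queues team tasks and delegates motion to an agent-specific layer.
class MotionLayer:
    """Agent-specific motion (aerial, ground, or pan-tilt-zoom camera)."""
    def goto(self, waypoint):
        raise NotImplementedError

class Executive:
    """High-level task execution shared across heterogeneous agents."""
    def __init__(self, motion: MotionLayer):
        self.motion, self.queue = motion, []

    def assign(self, waypoint):          # tasks arrive from the team layer
        self.queue.append(waypoint)

    def step(self):                      # execute the next queued task
        if self.queue:
            self.motion.goto(self.queue.pop(0))
```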
22 pages, 2044 KB  
Article
Vertex: A Semantic Graph-Based Indoor Navigation System with Vision-Language Landmark Verification
by Isabel Ferri-Molla, Dena Bazazian, Marius N. Varga, Jordi Linares-Pellicer and Joan Albert Silvestre-Cerdà
Sensors 2026, 26(7), 2031; https://doi.org/10.3390/s26072031 - 24 Mar 2026
Abstract
Older adults often need guidance when visiting new buildings for the first time. However, indoor navigation remains challenging due to the lack of Global Positioning System (GPS) availability, visually repetitive corridors, and frequent localization failures. This article presents a multimodal indoor navigation assistant that combines graph-based route planning with visual landmark verification to provide step-by-step guidance. The environment is modelled as a directed graph whose nodes are annotated with semantic landmarks, and the graph is constructed primarily from a video of the building, reducing the need for 3D scanners, beacons, or other specialised instruments. Routes are calculated using Dijkstra’s shortest-path algorithm over the semantic graph. During navigation, camera frames are analysed using a restricted vision-language recognition strategy that only considers candidate landmarks from the current and next nodes, reducing false detections and improving interpretability. To increase robustness, a temporal voting mechanism was introduced to confirm node transitions, along with a hierarchical redirection strategy with local and global recovery. The system is implemented in two modes: a handheld mode with visual cues (augmented-reality arrows and a mini-map) plus voice instructions, and a hands-free mode using the front camera with voice instructions and keywords. Evaluation involved preliminary technical testing in the United Kingdom followed by formal user validation in Spain. During these trials, participants reported high usability, strong confidence and safety, and increased perceived independence.
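One way to realise the transition-confirmation vote described above: accept a node change only when a majority of recent frames recognise the next node's landmark. The window size and majority threshold below are assumptions.

```python
from collections import deque

# Sketch of a temporal vote for confirming node transitions; k and the
# majority threshold are illustrative, not the paper's parameters.
class TransitionVoter:
    def __init__(self, k=5, needed=3):
        self.window = deque(maxlen=k)
        self.needed = needed

    def observe(self, matched_next_landmark):
        """Feed one frame's recognition result; True confirms the transition."""
        self.window.append(bool(matched_next_landmark))
        return sum(self.window) >= self.needed
```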
29 pages, 6656 KB  
Article
Improvements to the FLOAM Algorithm: GICP Registration and SOR Filtering in Mobile Robots with Pure Laser Configuration and Enhanced SLAM Performance
by Shichen Fu, Tianbao Zhao, Junkai Zhang, Guangming Guo and Weixiong Zheng
Appl. Sci. 2026, 16(7), 3141; https://doi.org/10.3390/app16073141 - 24 Mar 2026
Abstract
Laser SLAM is a key enabling technology for the autonomous navigation of intelligent mobile robots. The standard FLOAM algorithm suffers from low positioning accuracy, weak anti-interference performance, and error accumulation in pure-LiDAR scenarios, making it difficult to meet practical engineering requirements. Numerous studies have therefore focused on improved, highly robust pure laser SLAM algorithms. This study applies an enhanced FLOAM algorithm with GICP registration and SOR filtering: SOR filtering processes the laser point cloud to remove outlier noise, and GICP registration replaces the classic method with an optimized matching cost function. Experiments on the Gazebo simulation platform, using a mobile robot with a Leishen C16 LiDAR, simulate real-life tests in an indoor corridor and an outdoor plaza. Quantitative evaluation with the EVO tool indicates that the indoor mean absolute error and RMSE were reduced by 46.67% and 41.67% compared with FLOAM, while the outdoor mean and maximum errors were reduced by 46.00% and 70.00%, respectively. The proposed scheme achieves centimeter-level positioning accuracy and strong robustness in pure laser configurations without auxiliary sensors such as IMUs or odometers, providing a reliable technical solution for the engineering application of mobile robots in sensor-constrained scenarios.
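SOR filtering is a standard point-cloud step: discard points whose mean k-nearest-neighbour distance is anomalously large. A brute-force numpy sketch follows (real pipelines use a KD-tree; k and the ratio are assumptions, not the paper's settings).

```python
import numpy as np

# Statistical outlier removal (SOR): drop points whose mean k-NN distance
# exceeds the global mean by std_ratio standard deviations. O(n^2) brute
# force for clarity only.
def sor_filter(points, k=16, std_ratio=1.0):
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=2)
    knn_mean = np.sort(d, axis=1)[:, 1:k + 1].mean(axis=1)  # skip self (0.0)
    keep = knn_mean < knn_mean.mean() + std_ratio * knn_mean.std()
    return points[keep]
```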
23 pages, 2536 KB  
Article
Axes Mapping and Sensor Fusion for Attitude-Unconstrained Pedestrian Dead Reckoning
by Constantina Isaia, Lingming Yu, Wenyu Cai and Michalis P. Michaelides
Sensors 2026, 26(6), 1968; https://doi.org/10.3390/s26061968 - 21 Mar 2026
Abstract
Localization and navigation techniques have become fundamental to modern life, yet achieving accurate results indoors remains a significant challenge. The widespread adoption of smart devices, and especially smartphones, has increased the need for accurate and robust pedestrian dead reckoning systems that operate in infrastructure-less environments. Pedestrian dead reckoning’s primary challenge is maintaining accuracy despite varying smartphone placements (attitudes) and noisy, low-cost inertial measurement units. In this work, a comprehensive pedestrian dead reckoning framework is presented that integrates advanced step counting and heading estimation techniques. For step detection and counting, we propose a robust step counting algorithm that utilizes the optimum fusion of the raw IMU readings, i.e., accelerometer, linear accelerometer, gyroscope, and magnetometer readings, each broken down into three degrees of freedom, for different body placements and walking speeds. Furthermore, to address the critical issue of heading estimation, we propose the heading estimation axis mapping (HEAT-MAP) algorithm, which dynamically adjusts the sensor axes in response to the smartphone’s orientation, ensuring a consistent coordinate frame and reducing heading drift. Moreover, to eliminate cumulative pedestrian dead reckoning errors, the system incorporates an adaptive weighted fusion mechanism with Wi-Fi fingerprinting. Experimental results demonstrate that this integrated system significantly improves the overall trajectory accuracy, providing a high-precision, attitude-unconstrained solution for real-time indoor pedestrian navigation.
(This article belongs to the Special Issue Indoor Localization Techniques Based on Wireless Communication)
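The adaptive weighted fusion with Wi-Fi fingerprinting is not detailed in the abstract; one simple inverse-variance form, with the variance bookkeeping assumed rather than taken from the paper, looks like this.

```python
# Simple adaptive weighted fusion: combine PDR and Wi-Fi position estimates
# in inverse proportion to their current error variances (bookkeeping assumed).
def fuse_position(pdr_xy, wifi_xy, pdr_var, wifi_var):
    w = wifi_var / (pdr_var + wifi_var)   # low PDR variance -> high PDR weight
    return (w * pdr_xy[0] + (1 - w) * wifi_xy[0],
            w * pdr_xy[1] + (1 - w) * wifi_xy[1])
```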