Search Results (749)

Search Parameters:
Keywords = real-world navigation

32 pages, 410 KB  
Article
Embedding AI Ethics in Technical Training: A Multi-Stakeholder Pilot Module Emphasizing Co-Design and Interdisciplinary Collaboration at Rome Technopole
by Giuseppe Esposito, Massimo Sanchez, Federica Fratini, Egidio Iorio, Lucia Bertuccini, Serena Cecchetti, Valentina Tirelli and Daniele Giansanti
Educ. Sci. 2025, 15(10), 1416; https://doi.org/10.3390/educsci15101416 - 21 Oct 2025
Viewed by 88
Abstract
Higher technical education plays a strategic role in equipping the workforce to navigate rapid technological advancements and evolving labor market demands. Within the Rome Technopole framework, Spoke 4 targets ITS Academies, promoting the development of flexible, modular programs that integrate advanced technical skills with ethical, legal, and societal perspectives. This study reports on a pilot training initiative on Artificial Intelligence (AI) co-designed by the Istituto Superiore di Sanità (ISS), aimed at exploring the ethical, practical, and educational relevance of AI in higher technical education. The module was developed and tested through a multi-stakeholder collaboration involving educators, institutional actors, and learners. A four-phase approach was adopted: (1) initial stakeholder consultation to identify needs and content directions, (2) collaborative design of the training module, (3) online delivery and engagement using a CAWI-based focus group, and (4) mixed-method evaluation, combining quantitative assessments and open-ended qualitative feedback. This design facilitated asynchronous participation and encouraged critical reflection on the real-world implications of AI. Through the four-phase approach, the pilot module was developed, delivered, and assessed with 37 participants. Quantitative analysis revealed high ratings for clarity, relevance, and perceived utility in terms of employability. Qualitative feedback highlighted the interdisciplinary design, the integration of ethical reasoning, and the module’s broad applicability across sectors—particularly Healthcare and Industry. Participants suggested including more real-world case studies and collaborative learning activities to enhance engagement. The findings support the feasibility and added value of embedding ethically informed, interdisciplinary AI education in professional technical training pathways. Developed within the Rome Technopole ecosystem, the pilot module offers a promising approach to fostering critical digital literacy and preparing learners for responsible engagement with emerging technologies.
(This article belongs to the Special Issue AI Literacy: An Essential 21st Century Competence)
23 pages, 11502 KB  
Article
Enhanced Full-Section Pavement Rutting Detection via Structured Light and Texture-Aware Point-Cloud Registration
by Huayong Zhu, Yishun Li, Feng Li, Difei Wu, Yuchuan Du and Ziyue Gao
Appl. Sci. 2025, 15(20), 11283; https://doi.org/10.3390/app152011283 - 21 Oct 2025
Viewed by 91
Abstract
Rutting is a critical form of pavement distress that compromises driving safety and long-term structural integrity. Traditional detection methods predominantly rely on cross-sectional measurements and high-cost inertial navigation-assisted laser scanning, which limits their applicability for large-scale, full-section evaluation. To address these limitations, this study proposes a framework for full-section rutting detection leveraging an area-array structured light camera for efficient 3D data acquisition. A multi-scale texture enhancement strategy based on 2D wavelet transform is introduced to extract latent surface features, enabling robust and accurate point-cloud registration without the need for artificial markers. Additionally, an improved Random Sample Consensus—Density-Based Spatial Clustering of Applications with Noise (RANSAC-DBSCAN) algorithm is designed to enhance the precision and robustness of rutting region segmentation under real-world pavement conditions. The proposed method is experimentally validated using 102 multi-frame pavement point clouds. Compared to Fast Point Feature Histograms (FPFH) and Deep Closest Point (DCP), the registration approach achieves a 71.31% and 80.64% reduction in point-to-plane error, respectively. For rutting segmentation, the enhanced clustering method attains an average F1-score of 90.5%, outperforming baseline methods by over 15%. The proposed workflow can be seamlessly integrated into vehicle-mounted structured-light inspection systems, offering a low-cost and scalable solution for near real-time, full-lane rutting detection in routine pavement monitoring.
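As general background to the segmentation step, the sketch below pairs a plain RANSAC plane fit with DBSCAN clustering of below-plane points on a synthetic pavement patch. It illustrates only the generic RANSAC-plus-DBSCAN idea, not the authors' improved variant; the point counts, thresholds, and synthetic rut geometry are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)

# Synthetic pavement: a flat plane with two sunken "rut" strips.
xy = rng.uniform(0, 4, size=(5000, 2))
z = 0.002 * rng.standard_normal(5000)               # plane + sensor noise (m)
in_rut = (np.abs(xy[:, 0] - 1.0) < 0.15) | (np.abs(xy[:, 0] - 3.0) < 0.15)
z[in_rut] -= 0.012                                  # 12 mm rut depth (made up)
pts = np.column_stack([xy, z])

def ransac_plane(pts, n_iter=200, tol=0.003):
    """Fit z = ax + by + c by RANSAC; return coefficients and inlier mask."""
    best_coef, best_inliers = None, None
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        A = np.column_stack([sample[:, :2], np.ones(3)])
        try:
            coef = np.linalg.solve(A, sample[:, 2])
        except np.linalg.LinAlgError:
            continue  # degenerate (collinear) sample
        resid = pts[:, 2] - (pts[:, :2] @ coef[:2] + coef[2])
        inliers = np.abs(resid) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_coef, best_inliers = coef, inliers
    return best_coef, best_inliers

coef, inliers = ransac_plane(pts)
resid = pts[:, 2] - (pts[:, :2] @ coef[:2] + coef[2])

# Points well below the fitted road plane are rut candidates; DBSCAN
# groups them into contiguous rutting regions.
candidates = pts[resid < -0.006]
labels = DBSCAN(eps=0.08, min_samples=20).fit_predict(candidates[:, :2])
print(f"rut clusters found: {labels.max() + 1}")
```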

15 pages, 1323 KB  
Article
The Impact of Patient Navigators on Overactive Bladder Care: Real-World Practice Patterns from a US National Database
by Ekene Enemchukwu, Jennifer Miles-Thomas, Nitya Abraham, Diane K. Newman, Marc Schwartz, Kimberly Becker Ifantides, Mariana Nelson and Raveen Syan
Soc. Int. Urol. J. 2025, 6(5), 60; https://doi.org/10.3390/siuj6050060 - 20 Oct 2025
Viewed by 98
Abstract
Background/Objectives: Here we describe the impact of navigated care on utilization patterns of pharmacologic and minimally invasive overactive bladder therapies. Methods: This retrospective observational cohort study used electronic medical record data from the Precision Point Specialty Analytics Portal in the United States. Eligible patients were adults (≥18 years) newly diagnosed and treated for non-neurogenic overactive bladder (1 January 2015 to 31 December 2019). Categorical endpoints were analyzed by chi-square test or Fisher exact test. Of 170,000 eligible patients, 8982 (≈5%) were randomly selected and stratified by navigation status (navigated: 1150 [12.8%]; non-navigated: 7832 [87.2%]). Results: Overall, 60.0% of patients were female, 69.9% were White, and 42.7% had Medicare coverage. Navigated care was more common among women, Black patients, and those covered by Medicaid/Medicare. Initial pharmacologic treatment rates were similar between navigated and non-navigated groups (anticholinergic: 57.0% vs. 57.4%; beta-3 agonist: 43.0% vs. 42.6%). Greater percentages of navigated versus non-navigated patients received minimally invasive therapy (23.8% vs. 10.8%, respectively; p < 0.0001). Discontinuation rates were lower for navigated versus non-navigated patients undergoing pharmacologic treatment (62.5% vs. 71.3%; p < 0.0001). Conclusions: Patient navigation for overactive bladder may help increase access to minimally invasive therapies and may be a tool to address treatment disparities.
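The categorical comparison reported above (minimally invasive therapy in 23.8% of 1150 navigated vs. 10.8% of 7832 non-navigated patients) can be approximately reproduced with a standard chi-square test. The counts below are reconstructed from the rounded percentages, so the statistic is only indicative:

```python
from scipy.stats import chi2_contingency

# 2x2 table: rows = navigated / non-navigated,
# cols = received minimally invasive therapy / did not.
# Counts reconstructed from the reported rates (23.8% of 1150, 10.8% of 7832).
navigated_yes = round(0.238 * 1150)      # ~274
non_navigated_yes = round(0.108 * 7832)  # ~846
table = [
    [navigated_yes, 1150 - navigated_yes],
    [non_navigated_yes, 7832 - non_navigated_yes],
]

chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p << 0.0001, as reported
```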

22 pages, 59687 KB  
Article
Multi-View Omnidirectional Vision and Structured Light for High-Precision Mapping and Reconstruction
by Qihui Guo, Maksim A. Grigorev, Zihan Zhang, Ivan Kholodilin and Bing Li
Sensors 2025, 25(20), 6485; https://doi.org/10.3390/s25206485 - 20 Oct 2025
Viewed by 494
Abstract
Omnidirectional vision systems enable panoramic perception for autonomous navigation and large-scale mapping, but physical testbeds are costly, resource-intensive, and carry operational risks. We develop a virtual simulation platform for multi-view omnidirectional vision that supports flexible camera configuration and cross-platform data streaming for efficient processing. Building on this platform, we propose and validate a reconstruction and ranging method that fuses multi-view omnidirectional images with structured-light projection. The method achieves high-precision obstacle contour reconstruction and distance estimation without extensive physical calibration or rigid hardware setups. Experiments in simulation and the real world demonstrate distance errors within 8 mm and robust performance across diverse camera configurations, highlighting the practicality of the platform for omnidirectional vision research.
(This article belongs to the Section Navigation and Positioning)
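For readers unfamiliar with structured-light ranging, the sketch below shows the textbook single-camera, laser-plane triangulation geometry that underlies such systems. It is a simplified stand-in, not the paper's multi-view omnidirectional pipeline, and the focal length, baseline, and tilt angle are illustrative assumptions:

```python
import numpy as np

def laser_triangulate(u_px, f_px, baseline_m, laser_angle_rad):
    """
    Classic camera/laser-plane triangulation.

    A laser plane is offset from the camera by `baseline_m` and tilted by
    `laser_angle_rad`; the stripe appears at horizontal pixel `u_px` in an
    image with focal length `f_px` (pixels). Returns depth Z in meters.
    """
    # Viewing angle of the observed stripe pixel.
    cam_angle = np.arctan2(u_px, f_px)
    # Intersect the camera ray with the laser plane (top-down 2-D geometry):
    # Z * tan(cam_angle) = baseline - Z * tan(laser_angle)
    return baseline_m / (np.tan(cam_angle) + np.tan(laser_angle_rad))

# Example: 600 px focal length, 10 cm baseline, laser tilted 5 degrees.
for u in (40.0, 80.0, 120.0):
    z = laser_triangulate(u, 600.0, 0.10, np.deg2rad(5.0))
    print(f"stripe at u={u:5.1f}px -> depth ~{z:.3f} m")
```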

17 pages, 940 KB  
Article
ON-NSW: Accelerating High-Dimensional Vector Search on Edge Devices with GPU-Optimized NSW
by Taeyoon Park, Haena Lee, Yedam Na and Wook-Hee Kim
Sensors 2025, 25(20), 6461; https://doi.org/10.3390/s25206461 - 19 Oct 2025
Viewed by 315
Abstract
The Industrial Internet of Things (IIoT) increasingly relies on vector embeddings for analytics and AI-driven applications such as anomaly detection, predictive maintenance, and sensor fusion. Efficient approximate nearest neighbor search (ANNS) is essential for these workloads. Graph-based methods are among the most representative methods for ANNS. However, most existing graph-based methods, such as Hierarchical Navigable Small World (HNSW), are designed for CPU execution on high-end servers and give little consideration to the unique characteristics of edge devices. In this work, we present ON-NSW, a GPU-optimized design of HNSW tailored to edge devices. ON-NSW employs a flat graph structure derived from HNSW to fully exploit GPU parallelism. In addition, it carefully places HNSW components in the unified memory architecture of the NVIDIA Jetson Orin Nano. ON-NSW also introduces warp-level parallel neighbor exploration and lightweight synchronization to reduce search latency. Our experimental results on real-world high-dimensional datasets show that ON-NSW achieves up to 1.44× higher throughput than the original HNSW on the NVIDIA Jetson device while maintaining comparable recall. These results demonstrate that ON-NSW provides an effective design for enabling efficient and high-throughput vector search on embedded edge platforms.
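ON-NSW flattens HNSW's hierarchy into a single navigable-small-world layer and searches it with GPU warps. The serial sketch below shows the underlying greedy best-first graph search (the ef-bounded search used at HNSW's base layer) on a toy brute-force graph; it is a CPU illustration under assumed parameters, not the paper's warp-level implementation:

```python
import heapq
import numpy as np

rng = np.random.default_rng(1)
vecs = rng.standard_normal((500, 64)).astype(np.float32)

# Toy NSW-style graph: link each vector to its 8 nearest neighbors
# (brute force here; real systems build the graph incrementally).
norms = (vecs ** 2).sum(axis=1)
d2 = norms[:, None] + norms[None, :] - 2.0 * (vecs @ vecs.T)
neighbors = np.argsort(d2, axis=1)[:, 1:9]     # column 0 is the point itself

def nsw_search(query, entry=0, k=5, ef=32):
    """Greedy best-first search over the neighbor graph (HNSW layer-0 style)."""
    dist = lambda i: float(((vecs[i] - query) ** 2).sum())
    visited = {entry}
    frontier = [(dist(entry), entry)]          # min-heap of nodes to expand
    best = [(-dist(entry), entry)]             # max-heap of current top-ef
    while frontier:
        d, node = heapq.heappop(frontier)
        if d > -best[0][0]:                    # frontier worse than worst kept
            break
        for nb in map(int, neighbors[node]):
            if nb in visited:
                continue
            visited.add(nb)
            dn = dist(nb)
            if len(best) < ef or dn < -best[0][0]:
                heapq.heappush(frontier, (dn, nb))
                heapq.heappush(best, (-dn, nb))
                if len(best) > ef:
                    heapq.heappop(best)        # evict the worst candidate
    return sorted((-nd, i) for nd, i in best)[:k]

query = rng.standard_normal(64).astype(np.float32)
print(nsw_search(query))                       # [(distance, index), ...]
```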

27 pages, 7611 KB  
Article
4D BIM-Based Enriched Voxel Map for UAV Path Planning in Dynamic Construction Environments
by Ashkan Golpour, Moslem Sheikhkhoshkar, Mostafa Khanzadi, Morteza Rahbar and Saeed Banihashemi
Systems 2025, 13(10), 917; https://doi.org/10.3390/systems13100917 - 18 Oct 2025
Viewed by 195
Abstract
Unmanned Aerial Vehicles (UAVs) are increasingly integral to construction site management, supporting monitoring, inspection, and data collection tasks. Effective UAV path planning is essential for maximizing operational efficiency, particularly in complex and dynamic construction environments. While previous BIM-based approaches have explored representation models such as space graphs, grid patterns, and voxel models, each has limitations. Space graphs, though common, rely on predefined spatial spaces, making them less suitable for projects still under construction. Voxel-based methods, considered well-suited for 3D indoor navigation, suffer from three key challenges: (1) a disconnect between the BIM and voxel models, limiting data integration; (2) the computational cost and time required for voxelization, hindering real-time application; and (3) inadequate support for 4D BIM integration during active construction phases. This research introduces a novel framework that bridges the BIM–voxel gap via an enriched voxel map, eliminates the need for repeated voxelization, and incorporates 4D BIM and additional model data such as defined workspaces and safety buffers around fragile components. The framework’s effectiveness is demonstrated through path planning simulations on BIM models from two real-world construction projects under varying scenarios. Results indicate that the enriched voxel map successfully links the BIM and voxel models, covering every project timestamp and element attribute during path planning without requiring additional voxel map creation.
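As a minimal illustration of planning over a time-aware voxel map, the sketch below runs A* on a toy 3-D occupancy grid whose obstacles depend on a schedule timestamp. The occupancy function is a hypothetical stand-in for the enriched BIM-derived map, and the grid size and obstacle layout are invented:

```python
import heapq

SIZE = (20, 20, 10)  # toy site volume in voxels

def occupied(voxel, t):
    """Toy stand-in for the enriched voxel map: a wall that exists only
    from schedule time 5 onward, plus a permanent safety-buffer column."""
    x, y, z = voxel
    wall = (t >= 5) and (x == 10) and (z < 6)
    buffer = (8 <= x <= 9) and (8 <= y <= 9)
    return wall or buffer

def astar(start, goal, t):
    """A* over the voxel grid at schedule time t (6-connected moves)."""
    moves = [(1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)]
    h = lambda v: sum(abs(a - b) for a, b in zip(v, goal))  # Manhattan
    open_set = [(h(start), 0, start, None)]
    came, g_best = {}, {start: 0}
    while open_set:
        _, g, v, parent = heapq.heappop(open_set)
        if v in came:
            continue                        # already expanded with better g
        came[v] = parent
        if v == goal:                       # reconstruct path back to start
            path = [v]
            while came[path[-1]] is not None:
                path.append(came[path[-1]])
            return path[::-1]
        for dx, dy, dz in moves:
            n = (v[0]+dx, v[1]+dy, v[2]+dz)
            if not all(0 <= c < s for c, s in zip(n, SIZE)) or occupied(n, t):
                continue
            if g + 1 < g_best.get(n, float("inf")):
                g_best[n] = g + 1
                heapq.heappush(open_set, (g + 1 + h(n), g + 1, n, v))
    return None

# The same query at two schedule times yields different routes.
for t in (0, 6):
    path = astar((2, 2, 2), (18, 18, 2), t)
    print(f"t={t}: path length {len(path)}")
```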

13 pages, 879 KB  
Article
Heuristic Approaches for Coordinating Collaborative Heterogeneous Robotic Systems in Harvesting Automation with Size Constraints
by Hyeseon Lee, Jungyun Bae, Abhishek Patil, Myoungkuk Park and Vinh Nguyen
Sensors 2025, 25(20), 6443; https://doi.org/10.3390/s25206443 - 18 Oct 2025
Viewed by 338
Abstract
Multi-agent coordination with task allocation, routing, and scheduling presents critical challenges when deploying heterogeneous robotic systems in constrained agricultural environments. These systems perform real-time sensing with various sensors during operation, and quickly updating coordination based on sensed data is critical. This paper addresses the specific requirements of harvesting automation through three heuristic approaches: (1) primal–dual workload balancing inspired by combinatorial optimization techniques, (2) greedy task assignment with iterative local optimization, and (3) LLM-based constraint processing through prompt engineering. Our agricultural application scenario incorporates robot size constraints for navigating narrow crop rows while optimizing task completion time. The greedy heuristic employs rapid initial task allocation based on proximity and capability matching, followed by iterative route refinement. The primal–dual approach adapts combinatorial optimization principles from recent multi-depot routing solutions, dynamically redistributing workloads between robots through dual variable adjustments to minimize maximum completion time. The LLM-based method utilizes structured prompt engineering to encode spatial constraints and robot capabilities, generating feasible solutions through successive refinement cycles. We implemented and compared these approaches through extensive simulations. Preliminary results demonstrate that all three approaches produce feasible solutions of reasonable quality. The methods can be quickly adapted to variations of the problem, demonstrating their potential for real-world applications and offering valuable insights into solving complex coordination problems with heterogeneous multi-robot systems.
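Of the three heuristics, the greedy initial assignment is the simplest to illustrate: each task goes to the capable robot (one narrow enough for its crop row) with the earliest estimated completion. The sketch below shows only that first allocation pass with made-up task and robot data; the iterative local refinement and the other two heuristics are omitted:

```python
import math

# (x, y) task locations and the widths of the rows they sit in (m) -- toy data.
tasks = [((2, 8), 0.6), ((5, 3), 0.9), ((9, 7), 0.6), ((4, 9), 0.9),
         ((7, 2), 0.6), ((1, 4), 0.9)]
# Heterogeneous robots: start position, body width, speed (m/s).
robots = [{"pos": (0, 0), "width": 0.5, "speed": 1.0},
          {"pos": (10, 0), "width": 0.8, "speed": 1.5}]

def assign_greedy(tasks, robots):
    """Greedy allocation: each task goes to the capable robot that would
    finish it earliest, given the tasks already on that robot's route."""
    routes = [[] for _ in robots]
    clock = [0.0] * len(robots)          # per-robot accumulated travel time
    pos = [r["pos"] for r in robots]
    for loc, row_width in tasks:
        best = None
        for i, r in enumerate(robots):
            if r["width"] > row_width:   # robot too wide for this crop row
                continue
            eta = clock[i] + math.dist(pos[i], loc) / r["speed"]
            if best is None or eta < best[0]:
                best = (eta, i)
        eta, i = best                    # assumes at least one robot fits
        routes[i].append(loc)
        clock[i], pos[i] = eta, loc
    return routes, max(clock)            # makespan = slowest robot's finish

routes, makespan = assign_greedy(tasks, robots)
print(routes, f"makespan ~ {makespan:.1f}s")
```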

42 pages, 104137 KB  
Article
A Hierarchical Absolute Visual Localization System for Low-Altitude Drones in GNSS-Denied Environments
by Qing Zhou, Haochen Tang, Zhaoxiang Zhang, Yuelei Xu, Feng Xiao and Yulong Jia
Remote Sens. 2025, 17(20), 3470; https://doi.org/10.3390/rs17203470 - 17 Oct 2025
Viewed by 534
Abstract
Current drone navigation systems primarily rely on Global Navigation Satellite Systems (GNSSs), but their signals are susceptible to interference, spoofing, or suppression in complex environments, leading to degraded positioning performance or even failure. To enhance the positioning accuracy and robustness of low-altitude drones in satellite-denied environments, this paper investigates an absolute visual localization solution. This method achieves precise localization by matching real-time images with reference images that have absolute position information. To address the issue of insufficient feature generalization capability due to the complex and variable nature of ground scenes, a visual-based image retrieval algorithm is proposed, which utilizes a fusion of shallow spatial features and deep semantic features, combined with generalized average pooling to enhance feature representation capabilities. To tackle the registration errors caused by differences in perspective and scale between images, an image registration algorithm based on cyclic consistency matching is designed, incorporating a reprojection error loss function, a multi-scale feature fusion mechanism, and a structural reparameterization strategy to improve matching accuracy and inference efficiency. Based on the above methods, a hierarchical absolute visual localization system is constructed, achieving coarse localization through image retrieval and fine localization through image registration, while also integrating IMU prior correction and a sliding window update strategy to mitigate the effects of scale and rotation differences. The system is implemented on the ROS platform and experimentally validated in a real-world environment. The results show that the localization success rates for the h, s, v, and w trajectories are 95.02%, 64.50%, 64.84%, and 91.09%, respectively. Compared to similar algorithms, it demonstrates higher accuracy and better adaptability to complex scenarios. These results indicate that the proposed technology can achieve high-precision and robust absolute visual localization without the need for initial conditions, highlighting its potential for application in GNSS-denied environments.
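The retrieval stage's generalized average (GeM) pooling is compact enough to show directly. Below is a PyTorch sketch of the standard GeM definition (eps-clamped activations raised to a power p, average-pooled, then rooted); the feature-map shape and p value are illustrative, not the paper's configuration:

```python
import torch
import torch.nn.functional as F

def gem_pool(x, p=3.0, eps=1e-6):
    """Generalized mean (GeM) pooling over a CNN feature map.

    x: (batch, channels, H, W). p = 1 recovers average pooling and p -> inf
    approaches max pooling, so a tuned/learned p interpolates between them.
    """
    x = x.clamp(min=eps).pow(p)
    x = F.avg_pool2d(x, kernel_size=(x.size(-2), x.size(-1)))
    return x.pow(1.0 / p).flatten(1)       # (batch, channels) global descriptor

feat = torch.rand(2, 512, 7, 7)            # stand-in for a backbone feature map
desc = F.normalize(gem_pool(feat), dim=1)  # L2-normalize for cosine retrieval
print(desc.shape)                          # torch.Size([2, 512])
```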

18 pages, 3754 KB  
Article
Hardware Implementation of Improved Oriented FAST and Rotated BRIEF-Simultaneous Localization and Mapping Version 2
by Ji-Long He, Ying-Hua Chen, Wenny Ramadha Putri, Chung-I. Huang, Ming-Hsiang Su, Kuo-Chen Li, Jian-Hong Wang, Shih-Lun Chen, Yung-Hui Li and Jia-Ching Wang
Sensors 2025, 25(20), 6404; https://doi.org/10.3390/s25206404 - 17 Oct 2025
Viewed by 480
Abstract
The field of autonomous driving has seen continuous advances, yet achieving higher levels of automation in real-world applications remains challenging. A critical requirement for autonomous navigation is accurate map construction, particularly in novel and unstructured environments. In recent years, Simultaneous Localization and Mapping (SLAM) has evolved to support diverse sensor modalities, with some implementations incorporating machine learning to improve performance. However, these approaches often demand substantial computational resources. The key challenge lies in achieving efficiency within resource-constrained environments while minimizing errors that could degrade downstream tasks. This paper presents an enhanced ORB-SLAM2 (Oriented FAST and Rotated BRIEF Simultaneous Localization and Mapping, version 2) algorithm implemented on a Raspberry Pi 3 (ARM A53 CPU) to improve mapping performance under limited computational resources. ORB-SLAM2 comprises four main stages: Tracking, Local Mapping, Loop Closing, and Full Bundle Adjustment (BA). The proposed improvements include employing a more efficient feature descriptor to increase stereo feature-matching rates and optimizing loop-closing parameters to reduce accumulated errors. Experimental results demonstrate that the proposed system achieves notable improvements on the Raspberry Pi 3 platform. For monocular SLAM, RMSE is reduced by 18.11%, mean error by 22.97%, median error by 29.41%, and maximum error by 17.18%. For stereo SLAM, RMSE decreases by 0.30% and mean error by 0.38%. Furthermore, the ROS topic frequency stabilizes at 10 Hz, with quad-core CPU utilization averaging approximately 90%. These results indicate that the system satisfies real-time requirements while maintaining a balanced trade-off between accuracy and computational efficiency under resource constraints.
(This article belongs to the Section Intelligent Sensors)
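For context, the sketch below shows baseline ORB feature matching with OpenCV: Hamming-distance matching of binary descriptors plus a Lowe ratio test, run on a synthetic shifted image pair. It illustrates the kind of stereo feature matching the paper accelerates, not the improved descriptor itself, and the synthetic images are a stand-in for real frames:

```python
import cv2
import numpy as np

# Synthetic sanity-check pair: random texture and a horizontally shifted copy.
img1 = np.random.randint(0, 256, (480, 640), dtype=np.uint8)
img2 = np.roll(img1, shift=15, axis=1)          # crude stand-in for camera motion

orb = cv2.ORB_create(nfeatures=1000)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming distance suits ORB's binary descriptors; the Lowe ratio test prunes
# ambiguous matches before they would reach tracking / pose estimation.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING)
good = [m for m, n in matcher.knnMatch(des1, des2, k=2)
        if m.distance < 0.75 * n.distance]
print(f"{len(good)} good matches from {len(kp1)} keypoints")
```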

22 pages, 8968 KB  
Article
A Comparative Study of Authoring Performances Between In-Situ Mobile and Desktop Tools for Outdoor Location-Based Augmented Reality
by Komang Candra Brata, Nobuo Funabiki, Htoo Htoo Sandi Kyaw, Prismahardi Aji Riyantoko, Noprianto and Mustika Mentari
Information 2025, 16(10), 908; https://doi.org/10.3390/info16100908 - 16 Oct 2025
Viewed by 155
Abstract
In recent years, Location-Based Augmented Reality (LAR) systems have been increasingly implemented in various applications for tourism, navigation, education, and entertainment. Unfortunately, LAR content creation using conventional desktop-based authoring tools has become a bottleneck, as it requires time-consuming and skilled work. Previously, we proposed an in-situ mobile authoring tool as an efficient solution to this problem, offering direct authoring interactions in real-world environments using a smartphone. However, comparative evaluations between the proposed tool and conventional ones have so far been insufficient to establish its superiority, particularly in terms of interaction, authoring performance, and cognitive workload; our tool uses 6DoF device movement for spatial input, while desktop tools rely on mouse pointing. In this paper, we present a comparative study of authoring performance between the tools across three authoring phases: (1) Point of Interest (POI) location acquisition, (2) AR object creation, and (3) AR object registration. For the conventional tool, we adopt Unity and the ARCore SDK. As a real-world application, we target LAR content creation for pedestrian landmark annotation across campus environments at Okayama University, Japan, and Brawijaya University, Indonesia, and identify task-level bottlenecks in both tools. In our experiments, we asked 20 participants aged 22 to 35 with different LAR development experiences to complete equivalent authoring tasks in an outdoor campus environment, creating various LAR contents. We measured task completion time, phase-wise contribution, and cognitive workload using NASA-TLX. The results show that our tool enabled faster content creation with 60% lower cognitive load, whereas the desktop tool demanded greater mental effort for manual data input and object verification.
(This article belongs to the Section Information Applications)

28 pages, 3013 KB  
Article
Dynamic Robot Navigation in Confined Indoor Environment: Unleashing the Perceptron-Q Learning Fusion
by M. Denesh Babu, C. Maheswari and B. Meenakshi Priya
Sensors 2025, 25(20), 6384; https://doi.org/10.3390/s25206384 - 16 Oct 2025
Viewed by 290
Abstract
Robot navigation in confined spaces has gained popularity in recent years, but offline planning assumes static obstacles, which limits its applicability to online path-planning. Several methods have been introduced to perform efficient robot navigation. However, many existing methods depend on pre-defined maps and struggle in dynamic environments. Reducing moving costs and detour percentages is also important in real-world robot navigation scenarios. Thus, this study proposes a novel perceptron-Q learning fusion (PQLF) model for robot navigation to address these difficulties. The proposed model combines perceptron learning and Q-learning to enhance the robot navigation process. The robot uses its sensors to dynamically determine the distances of nearby, intermediate, and distant obstacles during local path-planning. These readings are sent to the robot’s PQLF model-based navigation controller, which acts as an agent in a Markov Decision Process (MDP) and makes effective decisions. Dynamic robot navigation in a confined indoor environment can thus be expressed as an MDP. The simulation results show that the proposed work outperforms other existing methods, attaining a reduced moving cost of 1.1 and a detour percentage of 7.8%. This demonstrates the superiority of the proposed model in robot navigation systems.
(This article belongs to the Section Navigation and Positioning)
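The abstract's fusion of a perceptron with Q-learning can be read as linear Q-value approximation over obstacle-distance features, updated by temporal-difference learning. The sketch below shows that generic combination with a made-up state, reward, and interaction loop; it is not the paper's PQLF controller:

```python
import numpy as np

rng = np.random.default_rng(0)
ACTIONS = ["forward", "left", "right"]
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.2      # illustrative hyperparameters

# Linear (perceptron-style) Q approximation: one weight vector per action
# over features built from near / intermediate / far obstacle distances.
weights = np.zeros((len(ACTIONS), 4))

def features(near, mid, far):
    return np.array([1.0, near, mid, far])   # bias + normalized distances

def q_values(phi):
    return weights @ phi

def td_update(phi, a, reward, phi_next):
    """TD(0) update of the linear Q function: w_a += alpha * delta * phi."""
    target = reward + GAMMA * q_values(phi_next).max()
    delta = target - q_values(phi)[a]
    weights[a] += ALPHA * delta * phi

# Tiny simulated interaction loop with random sensor readings.
phi = features(*rng.uniform(0, 1, 3))
for _ in range(1000):
    a = rng.integers(len(ACTIONS)) if rng.random() < EPS \
        else int(q_values(phi).argmax())
    nxt = features(*rng.uniform(0, 1, 3))
    reward = 1.0 if nxt[1] > 0.5 else -1.0    # toy: reward keeping clearance
    td_update(phi, a, reward, nxt)
    phi = nxt
print(np.round(weights, 2))
```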

27 pages, 4875 KB  
Article
A Comprehensive Radar-Based Berthing-Aid Dataset (R-BAD) and Onboard System for Safe Vessel Docking
by Fotios G. Papadopoulos, Antonios-Periklis Michalopoulos, Efstratios N. Paliodimos, Ioannis K. Christopoulos, Charalampos Z. Patrikakis, Alexandros Simopoulos and Stylianos A. Mytilinaios
Electronics 2025, 14(20), 4065; https://doi.org/10.3390/electronics14204065 - 16 Oct 2025
Viewed by 224
Abstract
Ship berthing operations are inherently challenging for maritime vessels, particularly within restricted port areas and under unfavorable weather conditions. In contrast to autonomous open-sea navigation, autonomous ship berthing remains a significant technological challenge for the maritime industry. Lidar and optical camera systems have been deployed as auxiliary tools to support informed berthing decisions; however, these sensing modalities are severely affected by weather and light conditions, respectively, while cameras in particular are inherently incapable of providing direct range measurements. In this paper, we introduce a comprehensive Radar-Based Berthing-Aid Dataset (R-BAD), aimed at fostering the development of safe berthing systems onboard ships. The proposed R-BAD dataset includes a large collection of Frequency-Modulated Continuous Wave (FMCW) radar data in point cloud format alongside timestamped and synced video footage. The dataset comprises more than 69 h of recorded ship operations and is freely accessible. We also propose an onboard support system for radar-aided vessel docking, which enables obstacle detection, clustering, tracking and classification during ferry berthing maneuvers. The proposed dataset covers all docking/undocking scenarios (arrivals, departures, port idle, and cruising operations) and was used to train various machine/deep learning models with substantial performance, showcasing its validity for further autonomous navigation systems development. The berthing-aid system was tested in real-world conditions onboard an operational Ro-Ro/Passenger Ship and demonstrated weather-resilient, repeatable and robust performance in detection, tracking and classification tasks, confirming its technology readiness for integration into future autonomous berthing-aid systems.

18 pages, 1711 KB  
Article
Head and Eye Movements During Pedestrian Crossing in Patients with Visual Impairment: A Virtual Reality Eye Tracking Study
by Mark Mervic, Ema Grašič, Polona Jaki Mekjavić, Nataša Vidovič Valentinčič and Ana Fakin
J. Eye Mov. Res. 2025, 18(5), 55; https://doi.org/10.3390/jemr18050055 - 15 Oct 2025
Viewed by 161
Abstract
Real-world navigation depends on coordinated head–eye behaviour that standard tests of visual function miss. We investigated how visual impairment affects traffic navigation, whether behaviour differs by visual impairment type, and whether this functional grouping better explains performance than WHO categorisation. Using a virtual reality (VR) headset with integrated head and eye tracking, we evaluated detection of moving cars and safe road-crossing opportunities in 40 patients with central, peripheral, or combined visual impairment and 19 controls. Only two patients, with a combination of very low visual acuity and severely constricted visual fields, failed both visual tasks. Overall, patients identified safe-crossing intervals 1.3–1.5 s later than controls (p ≤ 0.01). Head–eye movement profiles diverged by impairment type: patients with central impairment showed shorter, more frequent saccades (p < 0.05); patients with peripheral impairment showed exploratory behaviour similar to controls; while patients with combined impairment executed fewer microsaccades (p < 0.05), reduced total macrosaccade amplitude (p < 0.05), and fewer head turns (p < 0.05). Classification by impairment type explained behaviour better than WHO categorisation. These findings challenge acuity/field-based classifications and support integrating functional metrics into risk stratification and targeted rehabilitation, with VR providing a safe, scalable assessment tool.
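Saccade counts and amplitudes such as those reported here are commonly extracted from gaze traces with a velocity-threshold (I-VT) detector. The sketch below applies that generic step to a synthetic 1-D gaze trace; the paper does not specify its detection algorithm, so the sampling rate and the 30°/s threshold are assumptions:

```python
import numpy as np

FS = 120.0           # eye-tracker sampling rate (Hz), illustrative
VEL_THRESH = 30.0    # deg/s; a common I-VT saccade threshold

# Synthetic 1-D gaze angle trace: fixations with two step-like saccades.
t = np.arange(0, 3, 1 / FS)
gaze = np.where(t < 1.0, 0.0, np.where(t < 2.0, 8.0, 3.0))
gaze = gaze + 0.05 * np.random.default_rng(0).standard_normal(t.size)

vel = np.abs(np.gradient(gaze, 1 / FS))        # angular velocity (deg/s)
is_sacc = vel > VEL_THRESH

# Group consecutive above-threshold samples into saccade events.
edges = np.flatnonzero(np.diff(is_sacc.astype(int)))
starts, ends = edges[::2] + 1, edges[1::2] + 1
amplitudes = [abs(gaze[e] - gaze[s]) for s, e in zip(starts, ends)]
print(f"{len(starts)} saccades, amplitudes ~ {np.round(amplitudes, 1)} deg")
```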

10 pages, 1067 KB  
Proceeding Paper
Adaptive Q-Learning in Noisy Environments: A Study on Sensor Noise Influence
by Mouna El Wafi, My Abdelkader Youssefi, Rachid Dakir, Mohamed Bakir and Younes El Koudia
Eng. Proc. 2025, 112(1), 21; https://doi.org/10.3390/engproc2025112021 - 14 Oct 2025
Viewed by 181
Abstract
Reinforcement learning, particularly Q-learning, has demonstrated significant potential in autonomous navigation applications. However, real-world environments introduce sensor noise, which can impact learning efficiency and decision-making. This study examines the influence of sensor noise on Q-learning performance by simulating an agent navigating a noisy environment. We compare two learning strategies: one with fixed hyperparameters and another with dynamically adjusted hyperparameters. Our results show that high sensor noise degrades learning performance, increasing convergence time and causing sub-optimal decision-making. However, adapting hyperparameters over time improves resilience to noise by optimizing the balance between exploration and exploitation. These findings highlight the importance of robust learning strategies for autonomous systems under uncertain conditions.
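A minimal sketch of the comparison's adaptive arm: tabular Q-learning on a noisy-sensor corridor task where the exploration rate and learning rate decay over episodes. The environment, noise model, and decay constants are illustrative, not those of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 8                                  # 1-D corridor: start at 0, goal at N-1
Q = np.zeros((N, 2))                   # actions: 0 = left, 1 = right
SENSOR_NOISE = 0.3                     # chance the position reading is off by 1

def noisy_obs(s):
    """Toy sensor model: occasionally report a neighboring cell."""
    if rng.random() < SENSOR_NOISE:
        s = min(N - 1, max(0, s + rng.choice([-1, 1])))
    return s

for episode in range(500):
    # Adaptive schedule: decay exploration and learning rate over episodes.
    eps = max(0.05, 0.99 ** episode)
    alpha = max(0.05, 0.5 * 0.99 ** episode)
    s = 0
    for _ in range(50):
        obs = noisy_obs(s)                         # agent sees a noisy state
        a = rng.integers(2) if rng.random() < eps else int(Q[obs].argmax())
        s_next = min(N - 1, max(0, s + (1 if a else -1)))
        r = 1.0 if s_next == N - 1 else -0.01
        obs_next = noisy_obs(s_next)
        # Q-learning update on the *observed* states, as a real agent must.
        Q[obs, a] += alpha * (r + 0.95 * Q[obs_next].max() - Q[obs, a])
        s = s_next
        if s == N - 1:
            break
print(np.round(Q, 2))
```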

21 pages, 8957 KB  
Article
Autonomous Navigation of Unmanned Ground Vehicles Based on Micro-Shell Resonator Gyroscope Rotary INS Aided by LDV
by Hangbin Cao, Yuxuan Wu, Longkang Chang, Yunlong Kong, Hongfu Sun, Wenqi Wu, Jiangkun Sun, Yongmeng Zhang, Xiang Xi and Tongqiao Miao
Drones 2025, 9(10), 706; https://doi.org/10.3390/drones9100706 - 13 Oct 2025
Viewed by 248
Abstract
Micro-Shell Resonator Gyroscopes have obvious SWaP (Size, Weight and Power) advantages and applicable accuracy for the autonomous navigation of Unmanned Ground Vehicles (UGVs), especially under GNSS-denied environments. When the Micro-Shell Resonator Gyroscope Rotary Inertial Navigation System (MSRG–RINS) operates in the whole-angle mode, its bias varies as an even-harmonic function of the pattern angle, which makes the bias difficult to estimate and compensate during attitude measurement. In this paper, an attitude measurement method based on virtual rotation self-calibration and rotary modulation is proposed for the MSRG–RINS to address this problem. The method utilizes the characteristics of the two operating modes of the MSRG, the force-rebalanced mode and the whole-angle mode, to perform virtual rotation self-calibration, thereby eliminating the characteristic bias of the MSRG. In addition, the reciprocating rotary modulation method is used to suppress the residual bias of the MSRG. Furthermore, magnetometer-aided initial alignment of the MSRG–RINS is carried out, and the state-transformation extended Kalman filter is adopted to solve the large misalignment-angle problem under magnetometer assistance, so as to enhance the rapidity and accuracy of initial attitude acquisition. Results from real-world experiments substantiated that the proposed method can effectively suppress the influence of the MSRG’s bias on attitude measurement, thereby achieving high-precision autonomous navigation in GNSS-denied environments. In the 1 h, 3.7 km, long-range in-vehicle autonomous navigation experiments, the MSRG–RINS, integrated with Laser Doppler Velocimetry (LDV), attained a heading accuracy of 0.35° (RMS), a horizontal positioning error of 4.9 m (RMS), and a distance-traveled accuracy of 0.24% D.
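The benefit of rotary modulation can be seen in one dimension: rotating the sensor periodically reverses how a constant gyro bias projects onto the navigation frame, so the integrated attitude error stays bounded instead of growing linearly. A toy numeric illustration of that general principle follows (not the MSRG-specific virtual-rotation self-calibration; the bias magnitude and carousel rate are invented):

```python
import numpy as np

DT, T = 0.1, 3600.0
t = np.arange(0, T, DT)
bias = np.deg2rad(10.0) / 3600.0            # constant 10 deg/h gyro bias (rad/s)

# Navigation-frame attitude error from integrating the projected bias.
static_err = np.cumsum(np.full(t.size, bias)) * DT        # no modulation

omega = np.deg2rad(6.0)                                   # 6 deg/s carousel rate
modulated_err = np.cumsum(bias * np.cos(omega * t)) * DT  # projection rotates

print(f"static drift after 1 h:    {np.degrees(static_err[-1]):.3f} deg")
print(f"modulated drift after 1 h: {np.degrees(modulated_err[-1]):.5f} deg")
```

The modulated error is bounded by bias/omega, which is why even a modest rotation rate suppresses a constant bias by orders of magnitude.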